PyBlosxom |
/SW/business/KnowledgeTree:
Accessing the KnowledgeTree API
KnowledgeTree[1] is a very popular server-based Open Source document management system. Something that some users (like me, or rather my clients) need to do is allow certain people to add or manipulate documents in KnowledgeTree without having to have a login ID and knowledge of the KnowledgeTree user interface. Enter the API, and a little custom PHP scripting....
Oddly enough, I found at least three different documents on the wiki[2] that seemed to talk about three different approaches to using the API. Oddly (should I say suspiciously?) because the level of detail was just enough to be interesting, but just short of being useful. Ie. for two of them, I just could not figure it out. I even saw a post on the KnowledgeTree forum asking for more detail / a concrete example (me too! me too!) and the only reply was a curt link to one of the near-useless wiki pages that I have already mentioned. And needless to say, my own post was ignored. What's up? (Some conspiratorial possibilities come to mind....)
The only API approach that I have been able to get working is the "REST web service framework"[3], which, for better or worse, only works as of the currently bleeding edge KnowledgeTree version 3.6.0 (will NOT work with current stable 3.5.4a). [3] is also sorely lacking in detail, but in combination with a little code surfing in
knowledgetree/ktwebservice/webservice.php
I was able to divine what was needed to get it working. Here I will hopefully provide some missing detail for Google to find....
One can of course play with the KnowledgeTree REST web service through a browser, as the means of communication with the server is via POST parameters attached to the server URL. This is also a good way to see the exact format of the XML response the server gives back.
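For example, a login request typed straight into the browser's address bar might look like this (assuming the endpoint lives at the webservice.php location mentioned above; adjust host and path to your install):
http://your.server/knowledgetree/ktwebservice/webservice.php?method=login&password=123456&username=admin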
To achieve the same result with PHP one must use libcurl through the PHP curl extension[4]. Since [4] is also a little skimpy on detail, [5] is a very useful supplement. To cut to the chase, I created a function as follows:
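A minimal sketch of such a function, using the standard PHP curl calls (the original may differ in details):

function curlPost($site, $fields)
{
    // Open a curl session pointed at the KnowledgeTree REST URL.
    $ch = curl_init($site);
    // POST the parameter string rather than issuing a GET.
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
    // Hand the response back as a string instead of printing it.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}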
$site is the REST URL of the KnowledgeTree server, and $fields are the POST parameters that are to go along with it. This function simply POSTs these parameters to the URL (exactly the same as entering $site?$fields into your web browser).
Here is a concrete and currently working example of how to get the contents of the KnowledgeTree root directory:
<?php
// REST endpoint of the KnowledgeTree server -- adjust to your install.
$url = "http://your.server/knowledgetree/ktwebservice/webservice.php";

// ***********************************
// Login
// ***********************************
$postfields = "method=login&password=123456&username=admin";
$response = curlPost($url, $postfields);
$xml = new SimpleXMLElement($response);
if( $xml->status_code != 0 ){
    echo 'Error - authentication failed: ' . $xml->message;
} else {
    $session_id = $xml->results;
    echo "Login successful, session ID = " . $session_id . "\n";
}

// ***********************************
// List contents of root folder (id=1)
// ***********************************
$postfields = "method=get_folder_contents&session_id=$session_id&folder_id=1";
$response = curlPost($url, $postfields);
$xml = new SimpleXMLElement($response);
echo "Get root folder contents:\n";
if( $xml->status_code != 0 ){
    echo 'Error - get_folder_contents failed: ' . $xml->message;
} else {
    // print_r($xml); // to see data structure
    echo "folder ID = " . $xml->results->folder_id . "\n";
    echo "folder name = " . $xml->results->folder_name . "\n";
    echo "folder path = " . $xml->results->full_path . "\n";
    foreach ($xml->results->items->item as $value) {
        echo "item type = " . $value->item_type . " ";
        echo "item ID = " . $value->id . " ";
        echo "item name = " . $value->filename . "\n";
    }
}

// ***********************************
// Logout
// ***********************************
$postfields = "method=logout&session_id=$session_id";
$response = curlPost($url, $postfields);
$xml = new SimpleXMLElement($response);
echo "Logging out....\n";
if( $xml->status_code != 0 ){
    echo 'Error - logout failed: ' . $xml->message;
} else {
    echo 'successful!';
}
?>
The key point is that there were three operations in the above script, with three corresponding POST strings:
Operation       POST string
Login           method=login&password=123456&username=admin
List Directory  method=get_folder_contents&session_id=$session_id&folder_id=1
Logout          method=logout&session_id=$session_id
And something else that is already working -- to add a document to KnowledgeTree, use this POST string:
$document = "bodybg.jpg"; // located in /var/uploads
$postfields = "method=add_document&session_id=$session_id&folder_id=1&title=$document&filename=$document&documenttype=Default&tempfilename=/vol/www/vsc/apps/kt-dms-oss-3.6.0/var/uploads/$document";
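This gets POSTed and checked exactly like the other calls, e.g. (a sketch following the same pattern, assuming add_document returns the usual status_code / message fields):

$response = curlPost($url, $postfields);
$xml = new SimpleXMLElement($response);
if( $xml->status_code != 0 ){
    echo 'Error - add_document failed: ' . $xml->message;
} else {
    echo 'Document added successfully.';
}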
[1] http://www.knowledgetree.com/
[2] http://wiki.knowledgetree.com/
[3] http://wiki.knowledgetree.com/REST_Web_Service
[4] http://php.net/manual/en/book.curl.php
[5] http://devzone.zend.com/article/1081-Using-cURL-and-libcurl-with-PHP
posted at: 01:08 | path: /SW/business/KnowledgeTree | permanent link to this entry
/SW/business/KnowledgeTree:
Upgrading KnowledgeTree
Per this link http://wiki.knowledgetree.com/Upgrading_KnowledgeTree it is exceedingly simple. Here is the way I do it (unpacking into a new directory each time, not unpacking on top of the old version):
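First unpack the new release next to the old one (hypothetical tarball name, matching whatever release you downloaded):
tar -xzf kt-3.6.0.tgz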
Make sure ownership is correct for the new version:
chown -R www-data:www-data kt-3.6.0/
Then bring up the login dialog, and add "setup/upgrade.php" to the end of the browser URL, for example:
https://apps.vancouversolidcomputing.com/knowledgetree/setup/upgrade.php
Click the "Next" button a handful of times and it should just work.
posted at: 08:32 | path: /SW/business/KnowledgeTree | permanent link to this entry
/SW/business/eGroupware:
eGroupware Configuration Hints
I have recently been playing with eGroupware, and so far am liking it a lot. However, the documentation has some holes that make it rather difficult to get certain things (like e-mail) working.
E-mail:
I am still scratching my head over getting e-mail notifications to work[1]....
Limiting the Number of Users:
This would be for the case of providing eGroupware as a commercial service in return for a monthly fee: ie. a certain number of seats for a certain fee.
I am not seeing an elegant solution. The inelegant solution would be for me (the vendor) to retain administrative control, and manually create a certain number of users for a customer. Give the customer the list of userids and passwords. Then the customer can login and modify userids and passwords to his liking. (But cannot create new userids without administrative control.)
[1] http://www.nabble.com/not-possible-SMTP-(smtps)-SSL-on-email---td23186066s3741.html
posted at: 10:48 | path: /SW/business/eGroupware | permanent link to this entry
/Hosting/NearlyFreeSpeech:
Review of Web Hosting at Nearly Free Speech.net[1]:
I have been hosting several websites with nearlyfreespeech.net for over a year now, and believe I have found the perfect host for small sites. They may in fact be perfect for large sites as well, but I don't personally have one to test them with.
Basically they provide a great service at an incredibly cheap price. The cheap price comes from the fact that you pay as you go for both disk storage (US$0.01 / MB / day) and bandwidth (sliding scale[2] starting at $1/G), ie. the bigger the site, or the busier the site, the larger the monthly bill will be. For a small static site without a lot of activity, you could easily pay as little as ten cents per month for hosting. I have several small, modestly active sites, including this blog.
One of the sites uses MySQL, which I believe costs one cent per day. Combined, all of my sites have been costing me about one dollar per month to keep running.
Using Paypal, you can add as little as US$0.25 to your account at a time (they take a service charge, I think six cents). You may reduce the percentage of the service charge by increasing the size of the deposit, ie. there is a thirty cent service charge for a US$5 deposit.
Servers are FreeBSD, and you get full Unix shell account access with your account. Any time I have reached for a standard UNIX utility, it has been there: Midnight Commander, Unison, and nano come to mind. In addition to FTP, there is also SSH access to the account.
The only thing I have wanted and found lacking was the Apache mod_python module. That may be a FreeBSD limitation, I don't know.
The service and website have been so flawless that I literally have not once felt the need to try to contact support.
[1] https://www.nearlyfreespeech.net/
[2] https://www.nearlyfreespeech.net/services/hosting.php#pricing
posted at: 09:31 | path: /Hosting/NearlyFreeSpeech | permanent link to this entry
/SW/email:
archivemail: An Automated Means of Capping Mailbox Size
archivemail automates something that I used to do manually about once a month: remove old and large e-mails from my e-mail client's trash folder. A cron job now does the job for me, once per week:
14 13 * * 1 nice /home/userid/scripts/archivemail.sh
running my archivemail.sh script:
#!/bin/sh
archivemail --days=180 --output-dir=/home/userid/ --suffix='_archive_%y%m%d_%X' /home/ckoen/Mail/trash/
archivemail --size=50000 --days=15 --output-dir=/home/userid/ --suffix='_big_%y%m%d_%X' /home/ckoen/Mail/trash/
find /home/userid/trash_* -mtime +30 -type f -exec ls -al {} \;
find /home/userid/trash_* -mtime +30 -type f -exec rm -rf {} \;
The above syntax should be quite readable, with the first archivemail line deleting e-mails older than 180 days, and the second deleting e-mails bigger than 50k. Removed e-mails are dumped in my home directory in a compressed mbox file (readable by mutt), and kept for 30 days.
archivemail will handle IMAP, mbox, MH and Maildir format mailboxes. Yes, that means it is supposed to be able to pull down big chunks of e-mail from a remote IMAP mailbox, though I have not tested that feature....
Be sure to use the --dry-run option, which makes no actual changes, while setting up and testing.
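For example, to preview what the 180-day rule would remove without touching anything:
archivemail --dry-run --days=180 /home/ckoen/Mail/trash/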
posted at: 01:47 | path: /SW/email | permanent link to this entry
/Coding/php:
XML to Object Conversion
In working with the KnowledgeTree API[2] I found that the responses to my HTTP POSTs to the API came back in the form of XML. I needed to get that XML into PHP-processable form, and quite a bit of googling mostly turned up home-grown solutions, a lot of them referring to themselves as "xmltoarray" functions. Until I found the PHP-native solution, SimpleXMLElement[1]. Hopefully this post will help to push SimpleXMLElement a little higher in Google's search listings....
Suppose I have a big chunk of XML in string form, such as this response to a KnowledgeTree API directory listing (abridged, with element names matching the PHP access paths used below):

<response>
  <status_code>0</status_code>
  <message></message>
  <results>
    <folder_id>1</folder_id>
    <folder_name>Root Folder</folder_name>
    <full_path>/</full_path>
    <items>
      <item>
        <id>2</id>
        <item_type>F</item_type>
        <filename>DroppedDocuments</filename>
      </item>
      <item>
        <id>11</id>
        <item_type>F</item_type>
        <filename>Public</filename>
      </item>
    </items>
  </results>
</response>

in a variable called $response. Converting to a structured object is simply:
$xml = new SimpleXMLElement($response);
And then the object might be processed as follows:
if( $xml->status_code != 0 ){
    echo 'Error - operation failed: ' . $xml->message;
} else {
    // print_r($xml); // to see data structure
    echo "folder ID = " . $xml->results->folder_id . "\n";
    echo "folder name = " . $xml->results->folder_name . "\n";
    echo "folder path = " . $xml->results->full_path . "\n";
    foreach ($xml->results->items->item as $value) {
        echo "item type = " . $value->item_type . " ";
        echo "item ID = " . $value->id . " ";
        echo "item name = " . $value->filename . "\n";
    }
}
[1] http://php.net/manual/en/book.simplexml.php
[2] http://wiki.knowledgetree.com/REST_Web_Service
posted at: 06:27 | path: /Coding/php | permanent link to this entry
/Linux/misc:
101 Things You Can Do On Linux But Not on Microsoft Windows
I might not make it all the way to 101, but I will give it a go:
posted at: 00:54 | path: /Linux/misc | permanent link to this entry
/SW/business/KnowledgeTree:
Adding a Chinese Language Pack Plugin to Knowledgetree
This was really quite unnecessarily hard to find. Turns out they are found on the KnowledgeTree Forge[1], and here[2] is the list. There are actually two Simplified Chinese packs, this one[3] seemed more official so that is what I downloaded[4].
Installation is then quite simple. Move the downloaded tarball into knowledgetree/plugins/i18n and then untar it, ie.
tar -xvf SimplifiedChinese.tgz
You should then see a "SimplifiedChinese" sub-directory appear under i18n, and in my experience the plugin should activate automatically, ie. if you now go to the Knowledgetree login dialog "Simplified Chinese" should appear in the language drop-down menu. If it does not, login as a Knowledgetree admin and go to
Administration » Miscellaneous » Plugins
where you might have to hit the "Reread Plugins" button at the bottom, or click on the "Simplified Chinese Translation" check box to activate it.
[1] http://forge.knowledgetree.com/
[2] http://forge.knowledgetree.com/gf/project/?action=ProjectTroveBrowse&_trove_category_id=306
[3] http://forge.knowledgetree.com/gf/project/zhcn/
[4] http://forge.knowledgetree.com/gf/project/zhcn/frs/
posted at: 06:23 | path: /SW/business/KnowledgeTree | permanent link to this entry
/SW/business/WebERP:
Installing WebERP Language Packs
From http://www.weberp.org/ download the desired language pack zip files and unzip them in your WebERP instance's "locale" directory. Then change ownership of all files to www-data.
Make sure the server has a locale for each of the languages you want to use[1]:
dpkg-reconfigure locales
Note that there must exist a system locale that is EXACTLY THE SAME as the WebERP locale, ie. zh_CN is NOT the same as zh_CN.utf8.
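A quick way to check what locales the system actually provides:
locale -a | grep zh
The exact WebERP locale (e.g. zh_CN) must appear in that list; if it does not, rerun dpkg-reconfigure locales.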
Now you can login to WebERP as admin and create users with language defaults for any of the supported languages (which will be apparent in the language drop-down list in the user profile editor).
Note that I had to install the ttf-arphic-gkai00mp package to get simplified Chinese (zh_CN) working, but Firefox and Opera do not seem to autodetect the GB2312 encoding well, and it must be selected manually from the browser menu.
[1] http://www.weberp.org/FrequentlyAskedQuestions
posted at: 02:34 | path: /SW/business/WebERP | permanent link to this entry
/SW/business/SugarCRM:
SugarCRM Language Pack Installation
https://www.sugarcrm.com/forums/showthread.php?t=34942 is a very useful post.
Thereafter the language is an option at login. I have found no way to hard code a default language for a particular user (though there is a system-wide default). I have also discovered that a userid in the current SugarCRM, once created, can never be deleted.
posted at: 01:22 | path: /SW/business/SugarCRM | permanent link to this entry
/Linux/router-bridge:
How to Build Your Own Linux Network Router
Gentoo is justifiably held in great esteem for their very good documentation. I am going to give you a simplified version of this guide[1], from a Debian perspective, and also, some of the things I do while building a router are simpler by design. Here are a couple other interesting links for background reading: [2][3]
Why would you want to do this? Cheap commercial routers often do not work very well, choking up on certain kinds of traffic, even locking up regularly so that someone must manually cycle the power to restart them. If you build your own router, you can keep the software up-to-date, which is a big security advantage over the commercial competition. And you can install any software you want on it, like your own web and e-mail server, for instance. This is not meant to be an exhaustive list....
Start with the cheapest, oldest laptop you can find with the capacity for the number of network cards you want to use (two for a wired *or* wireless local network, three for a wired *and* wireless local network). One network card is needed to connect to the outside world (presumably, the internet) and another one for *each* local network that you want to connect to the internet (typically, a wired and / or a wireless network).
Note that a really old laptop, like the Pentium One that I use, has no CD and no USB. The easiest way to install Linux on it is to remove the hard drive and place it temporarily in another computer (or a USB enclosure) for the Linux installation. A minimal install is all that is necessary, just enough to get a terminal command prompt and functioning networking. Note that at least on Debian, standard kernels will work right off the shelf. Then replace the newly installed drive in your soon-to-be router.
Setting up a router for a wired LAN (Local Area Network) is actually a subset of setting up a wireless router, so I will just describe a wireless router here. (Turning a wireless configuration into a wired configuration just requires a minor alteration or two....) You need a wireless card that will talk to the hostap_cs kernel driver, and also supports "Master" mode. These are not easy to find, in my experience. I have stumbled across two, one of which broke, and I am now having quite a hard time replacing it.
The orinoco_cs and hostap_cs drivers support many of the same cards. Best to just blacklist the orinoco_cs driver and take your laptop shopping for cards. You really need to test the card before buying it (easy in the second hand Chinese markets I shop in). If you find a card that the hostap_cs driver recognizes, test for Master mode with the iwconfig command:
iwconfig wlan0 mode Master
If the card does not like Master mode, you will get an error something like:
# iwconfig eth1 mode Master
Error for wireless request "Set Mode" (8B06) :
SET failed on device eth1 ; Invalid argument.
If it works, iwconfig will show, in part:
wlan0 IEEE 802.11b ESSID:"clayton" Nickname:""
Mode:Master Frequency:2.462 GHz
(Note the "Mode:Master" part.)
I will avoid great detail here. The most probable options are these: your "outside world" network card will either connect directly, and probably be called "eth0", or it will connect using PPPOE, which you will probably configure with a very simple and straightforward piece of software called "pppoeconf", resulting in a "ppp0" interface. For routing purposes, all you need to know is what the interface is called, and that it works.
As for the wireless card: give it a static IP and set it to Master mode in /etc/network/interfaces:
auto eth0
iface eth0 inet dhcp
auto wlan0
iface wlan0 inet static
wireless-essid somename
address 192.168.8.1
netmask 255.255.255.0
network 192.168.8.0
broadcast 192.168.8.255
wireless-mode Master
wireless-channel 11
wireless-key somepassword
Note that in the above, eth0 connects to the internet, and therefore in this case I am not using PPPOE. I will address the slightly more complicated case of PPP in /etc/network/interfaces at a later date.
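Turning this into the wired-LAN variant mentioned earlier is mostly a matter of giving the wired card a static address too, minus the wireless options (a sketch, assuming the wired LAN gets its own subnet; adjust the firewall config below to match):
auto eth1
iface eth1 inet static
    address 192.168.9.1
    netmask 255.255.255.0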
We will set up the firewall and NAT (masquerading) at the same time, because the same software does both! Install the "firehol" package. Then create a /etc/firehol/firehol.conf file as follows:
# firehol configuration for a masquerading server
version 5

# The network of our internal LAN.
home_ips="192.168.8.0/24"

# try "mac" to filter on MAC addresses
# blacklist full 192.168.8.101 192.168.8.51 192.168.8.53

# DHCP needs 0.0.0.0/255.255.255.255 access.
interface wlan0 dhcp1
    policy return
    server dhcp accept

# interface eth0 internet src not "${UNROUTABLE_IPS}"
interface eth0 internet
    protection strong 10/sec 10
    server "smtp http icmp ssh" accept
    server donkey2 accept
    server ident reject with tcp-reset
    client all accept
    # reduce noise in the syslog by dropping this stuff silently
    server "dhcp samba" drop

interface wlan0 wlan src "${home_ips}"
    policy reject
    server "http dns ssh icmp" accept
    client all accept
    # server dhcp drop

interface eth1 lan src "${home_ips}"
    policy reject
    server "http dns ssh icmp" accept
    client all accept

router internet2wlan inface eth0 outface wlan0
    masquerade reverse
    client all accept
    server ident reject with tcp-reset

router internet2lan inface eth0 outface eth1
    masquerade reverse
    client all accept
    server ident reject with tcp-reset
There are tutorials out there that will step you through the creation of this file, which is how I started, but if you are careful about the customization process, you should be able to use my config as your starting point.
Some salient points:
Install the dnsmasq package. Add the following line to /etc/dnsmasq.conf:
dhcp-range=192.168.8.50,192.168.8.150,12h
Restart dnsmasq, and your router should now respond to DHCP requests from the wireless network.
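On Debian the restart is simply:
/etc/init.d/dnsmasq restart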
Wasn't that simple? Comments / errata welcome.
[1] http://www.gentoo.org/doc/en/home-router-howto.xml
[2] http://www.bit-tech.net/bits/2008/06/27/build-your-own-router/1
[3] http://thoughtattic.com/security/MakeYourOwnRouter.html
posted at: 03:32 | path: /Linux/router-bridge | permanent link to this entry
/Admin/Cherokee:
Introduction to the Cherokee Web Server
I came across this post[1] singing the praises of the Cherokee web server, and thought I would give it a try. So far, I would say not bad....
Per their own measurements[2] they seem to do well in the speed department, though I have read elsewhere that that advantage is found mainly with smaller files. Rumor has it that relative to Apache, Cherokee has fewer dependencies, uses fewer system resources, and is very stable.
Without a doubt Cherokee has an advantage in the area of configuration, as they have a very nice and seemingly very complete GUI configurator, that puts fairly extensive help right at one's fingertips. That turned out to be a good thing, as I flailed around a bit getting a couple things working, and the GUI made the trial & error process quite a bit faster, I think.
Of course the default Debian install serves up html out of the standard /var/www web root just fine. But I am playing with Cherokee on my backup server so I need to be able to see the backuppc controls, which are Perl-based CGI.
But first, bring up that wonderful GUI configurator.... The thing is not started by default, and after it is started, it only listens on localhost:9090, so ssh to the Cherokee machine thusly:
ssh -L 9090:localhost:9090 remote_IP
which forwards port 9090 on the local host to port 9090 on the Cherokee host. Then run:
cherokee-admin
on the Cherokee host, which will incidentally provide a username and password for login, for when you point a browser at localhost:9090.
The backuppc Debian package configures Apache automatically, but of course this does not work for Cherokee. In the Cherokee admin app click on "Virtual Servers --> default". Add "index.cgi" to the "Directory Indexes" field.
Then click on the "Behavior" tab, at the bottom of which you will find "Add New Rule". Configure as follows:
Add New Rule:
Rule Type: Extensions
Extensions: cgi
Handler: CGI
Security:Validation Mechanism: Fixed List
Security:Methods: Basic
Security:Realm: backuppc
At the bottom of the Security tab, use "Add New Pair" to add a userid / password for access to backuppc.
And now, do not forget to click on the "Save" button at the bottom of the GUI's left column. And create a soft link to the backuppc executable:
cd /var/www
ln -s /usr/share/backuppc/cgi-bin/ backuppc
And now pointing your browser at
http://cherokeeHost/backuppc/
should bring up a password dialog window, followed by the backuppc control application.
Something that is not so well documented is that php5-cgi[5] must be installed for the Cherokee FastCGI PHP handler to work (and php5-mysql if you want PHP to talk to MySQL). Python apparently needs special treatment[4], but I have not attempted to get it working.
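On Debian/Ubuntu that means something like:
apt-get install php5-cgi php5-mysql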
[1] http://www.sourceguru.net/archives/202
[2] http://www.cherokee-project.com/benchmarks.html
[3] http://www.cherokee-project.com/doc/other_faq.html
[4] http://www.rkblog.rk.edu.pl/w/p/django-and-cherokee-server/
[5] http://www.howtoforge.com/how-to-install-and-configure-cherokee-web-server-with-php5-and-mysql5-on-ubuntu-8.10
posted at: 09:32 | path: /Admin/Cherokee | permanent link to this entry
/SW/business/WebERP:
webERP Installation
Download the latest version from http://www.weberp.org/
Create a directory for webERP and copy the downloaded zip file into that directory, and unzip it. (Do not try to unzip from a parent directory, as the last time I tried, webERP unzipped all the files into the current directory, not a subdirectory.)
Correct the ownership of the files:
chown -R www-data:www-data ../webERP_3.10.3/
Have a look at the installation/upgrade notes in webERP_3.10.3/doc/. For new installations, we must create a database and user for the new installation first, manually:
mysql -p
Enter password:
mysql> create database apps_weberp;
mysql> GRANT ALL on apps_weberp.* TO 'apps_weberp'@'localhost' IDENTIFIED BY 'appsPassword';
Then edit weberp-new.sql to add a line at the top:
use apps_weberp;
where apps_weberp is the name of the database you just created, and then from the shell command line (not from the MySQL command line) run:
mysql --user=apps_weberp --password='appsPassword' < /var/www/vsc/apps/webERP_3.10.3/sql/mysql/weberp-new.sql
to import a clean new database (NOT the demo database). A "show tables;" on the new database should now show a lot of tables. Create and edit the config.php:
cp config.distrib.php config.php
vi config.php
Change the following settings in config.php:
$DefaultLanguage ='en_US';
$dbuser = 'apps_weberp';
$dbpassword = 'appsPassword';
Per [1] Q14, webERP supports having one or more companies using the same instance of webERP. Inside the "companies" directory, there is exactly one subdirectory per company, with the name of the subdirectory exactly the same as the corresponding MySQL database. (I am assuming then that there is only one MySQL user per instance of webERP, and that single user controls all the databases / companies associated with that instance.) The default install has a single subdirectory named "weberpdemo" in the "companies" directory, so this must be renamed to agree with a MySQL database:
cd /var/www/vsc/apps/webERP_3.10.3/companies
mv weberpdemo apps_weberp
The subdirectory names under the "companies" directory are apparently what is used to populate the "company" drop-down menu on the login screen. If the company you select does not have a corresponding database, you will get the error:
"The company name entered does not correspond to a database on the database server specified in the config.php configuration file"
which is actually a bit nonsensical, because it has nothing to do with the config.php file.
Now you should be able to login as the admin user with default password of "weberp". Change that password. And in config.php:
$allow_demo_mode = False;
to get rid of the password display on the login screen.
[1] http://www.weberp.org/FrequentlyAskedQuestionsInstallation
posted at: 09:49 | path: /SW/business/WebERP | permanent link to this entry
/Admin/backups/misc:
Semi-Automating My Monthly Backup
Boring repetitive tasks should be scripted. Backups *really* should be automated. So here is a first step down that path for the tarball that I send to my hosted server every month:
#!/bin/sh
cd /path/to/script/directory
echo "My monthly backup:"
echo "First archive mail trash"
./archivemail.sh
echo "Now build the tar file."
FILENAME="Backup`date +%Y%m%d`.tar"
PATHFILE="/scratch/"$FILENAME
echo "Will backup to " $PATHFILE
echo "Archive /home/userid..."
tar -cf $PATHFILE /home/userid
echo "Add /etc..."
tar -rf $PATHFILE /etc
/etc/init.d/apache2 stop
/etc/init.d/mysql stop
echo "add /var/www..."
tar -rf $PATHFILE /var/www
echo "add /var/lib/mysql/"
tar -rf $PATHFILE /var/lib/mysql/
/etc/init.d/apache2 start
/etc/init.d/mysql start
echo "Backup complete, list contents of archive"
tar -tvf $PATHFILE
and then I get an e-mail telling me it's all done, and there is a huge tarball waiting for me in /scratch. I run this script on the 1st of every month from cron. archivemail.sh uses archivemail[1] to clean out my Mail trash folder. I split it out in a separate script because I run it more often (once a week).
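The crontab entry is along these lines (script name hypothetical; use whatever you called yours):
30 2 1 * * nice /path/to/script/directory/monthly_backup.sh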
[1] http://blog.langex.net/index.cgi/SW/email/
posted at: 02:26 | path: /Admin/backups/misc | permanent link to this entry
/SW/business/KnowledgeTree:
How to Move Knowledgetree
Knowledgetree is Document Management software, so not only does it store things in MySQL, but it also stores a lot in the file system. So one cannot move a KnowledgeTree (KT) instance to a different directory and expect it to just continue working. In fact, it normally does not.
So first step: make a good backup both of the web root directory and the MySQL database.
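For example (paths and database name hypothetical; substitute your own):
tar -czf kt-backup.tgz /var/www/knowledgetree
mysqldump -p dms > kt-backup.sql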
Then the trick, tipped off to me courtesy of a forum[1], is to go through the (quite simple) upgrade process by running, for instance
http://ofri.vancouversolidcomputing.com/knowledgetree/setup/upgrade.php
The first prompt is for the admin userid and password of the KT instance (not the MySQL password).
After that, just click through the "Next" button on several screens, and finally you will be presented with a login dialog. All done.
[1] http://forums.knowledgetree.com/viewtopic.php?t=1670
posted at: 01:58 | path: /SW/business/KnowledgeTree | permanent link to this entry
/Admin/Apache/HTTPS-SSL:
SSL Certificates 101
Generally speaking, it would appear that a vanilla single root SSL certificate, self-signed or otherwise, is only good for exactly one domain that corresponds exactly to the "common name" used in creating the certificate.
Some vendors[1] sell something called a "wildcard" certificate, where the common name on the certificate takes the form of "*.domain.com", and can be used to secure multiple sub-domains. Such a "wildcard" certificate, not surprisingly, seems to be considerably more expensive than a single root certificate. Apache even provides a built-in mechanism using a document root wildcard[2] for mapping each sub-domain to a different document root.
Some vendors like Godaddy[3] sell multiple-domain certificates, which seem to offer a discount relative to purchasing the same number of single root certificates.
A good source for a free certificate is cacert.org[4]. cacert.org will sign a certificate for you for a domain if your e-mail address is in the whois record for the domain (this is an automated process on their end, they verify your identity by sending you a link in an e-mail ....) The Apache website[5] has a nice concise explanation of how to create a server key and certificate signing request for cacert.org (or anyone else....)
Basically the process is[7]:
openssl req -nodes -new -keyout try.key -out try.csr
to generate the server key (try.key) and certificate signing request (try.csr), and
openssl req -noout -text -in try.csr
to inspect the resulting request before submitting it.
cacert.org certificates seem to be good for six months. They send you an e-mail in advance of expiry.
For a particular SSL-enabled Apache virtual host, force users to always use https by placing a redirect in the http virtual host, ie.:

<VirtualHost *:80>
    DocumentRoot /var/www/vsc/apps
    ServerName apps.vancouversolidcomputing.com
    ServerAlias apps.vancouversolidcomputing.com
    ServerAdmin ckoeni@gmail.com
    CustomLog /var/log/apache2/access.log combined
    Redirect / https://apps.vancouversolidcomputing.com/
</VirtualHost>
[1] http://www.sslshopper.com/best-ssl-wildcard-certificate.html
[2] http://phaseshiftllc.com/archives/2008/10/27/multiple-secure-subdomains-with-a-wildcard-ssl-certificate
[3] http://www.godaddy.com/gdshop/ssl/ssl.asp?ci=9039
[4] https://www.cacert.org/
[5] http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#realcert
[6] http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#removepassphrase
[7] http://www.cacert.org/help.php?id=6
[8] http://www.cacert.org/help.php?id=4
posted at: 00:57 | path: /Admin/Apache/HTTPS-SSL | permanent link to this entry
/Admin/Apache/HTTPS-SSL:
Multiple SSL Certificates in Apache
As I noted in an earlier post, name-based virtual hosting "seemed" to be working. "Seemed". In fact, the virtual hosts were finding the correct web root and loading the correct site, but browsers were consistently giving an error to the effect that the domain name in the certificate and the domain name the browser was pointed to were not the same.
Someone on the cacert.org e-mail list[1] set me straight:
From: Pete Stephenson
To: cacert-support@lists.cacert.org
Subject: Re: Certificate somehow associated with wrong sub-domain?

Both subdomains share the same IP address. SSL is IP-based, rather than name-based. Specifically, when a client connects to a server, it establishes the SSL connection prior to sending the HTTP Host header, so the server has no idea which specific certificate to send. Depending on the server, it may send the first certificate mentioned in the configuration file or do something else entirely.

You can solve this by adding multiple SubjectAltNames to a certificate (e.g. you'd have a SAN for apps.vancouversolidcomputing.com and another one for vsc.vancouversolidcomputing.com all in a single certificate) and telling your server to use the same certificate for both subdomains.

More details, including a handy shell script which can generate the required CSR (some options, like the RSA key length, are manually configurable in the shell script; it doesn't prompt the user for the keylength), are available here: http://wiki.cacert.org/wiki/VhostTaskForce

Cheers!
-Pete
So what I take from this is:
This page[2] talks about the issue in general, and the various somewhat fuzzy and partially supported options -- "Currently the different browsers, servers and CAs all implement different and incompatible ways to use SSL certificates for several VHosts on the same server" -- this situation has not been entirely standardized yet!
This page[3] seems to recommend the cacert.org way to setup Apache with the right kind of multiple SubjectAltName certificate, complete with a script[4] for generating an appropriate Certificate Request and associated key. I used the script to generate the request, and sure enough:
# openssl req -noout -text -in vancouversolidcomputing_csr.pem
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: CN=www.vancouversolidcomputing.com
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:www.vancouversolidcomputing.com, DNS:vancouversolidcomputing.com, DNS:printshopdemo.vancouversolidcomputing.com, DNS:vsc.vancouversolidcomputing.com, DNS:solid.vancouversolidcomputing.com, DNS:apps.vancouversolidcomputing.com, DNS:ofri.vancouversolidcomputing.com
out comes a Certificate Request with multiple SubjectAltNames.
I then replaced *all* certificates in my Apache virtual hosts with this new certificate, ie.
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/vancouversolidcomputing_crt.pem
SSLCertificateKeyFile /etc/apache2/ssl/vancouversolidcomputing_privatekey.pem
in each virtual host block for each sub-domain / web root.
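After swapping the certificate in, the usual Debian Apache check-and-reload applies:
apache2ctl configtest
/etc/init.d/apache2 reload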
The certificate now works flawlessly in Iceape (which apparently contains the cacert.org Certificate Authority information), while Internet Explorer still complains about an untrusted Certificate Authority. Neither complains about domain names not matching, which was happening before.
[3] contained several other directives in each of the SSL virtual host blocks:
UseCanonicalName On
SSLCipherSuite HIGH
SSLProtocol all -SSLv2
but I have so far found these unnecessary.
[1] https://lists.cacert.org/wws/info/cacert-support
[2] http://wiki.cacert.org/wiki/VhostTaskForce
[3] http://wiki.cacert.org/wiki/CSRGenerator
[4] http://svn.cacert.org/CAcert/Software/CSRGenerator/csr
posted at: 00:30 | path: /Admin/Apache/HTTPS-SSL | permanent link to this entry