Expat-IT Tech Bits

Home

Contact

Links

Search this site:

Categories:

/ (287)
  Admin/ (122)
    Apache/ (10)
      HTTPS-SSL/ (4)
      PHP/ (3)
      performance/ (2)
    Cherokee/ (1)
    LAN/ (4)
    LVM/ (6)
    Monitoring/ (2)
      munin/ (2)
    SSH/ (6)
    SSL/ (1)
    Samba/ (1)
    VPN-options/ (6)
      OpenVPN/ (1)
      SSH-Proxy/ (3)
      Tinc/ (1)
      sshuttle/ (1)
    backups/ (17)
      SpiderOak/ (1)
      backuppc/ (5)
      dirvish/ (1)
      misc/ (6)
      rdiff-backup/ (1)
      rsync/ (1)
      unison/ (2)
    commandLine/ (24)
      files/ (8)
      misc/ (10)
      network/ (6)
    crontab/ (1)
    databases/ (15)
      MSSQL/ (2)
      MySQL/ (8)
      Oracle/ (3)
      PostgreSQL/ (1)
    dynamicDNS/ (2)
    email/ (11)
      Dovecot/ (1)
      deliverability/ (1)
      misc/ (1)
      postfix/ (7)
      puppet/ (1)
    iptables/ (3)
    tripwire/ (1)
    virtualization/ (9)
      VMware/ (1)
      virtualBox/ (8)
  Coding/ (14)
    bash/ (1)
    gdb/ (1)
    git/ (3)
    php/ (5)
    python/ (4)
      Django/ (2)
  Education/ (1)
  Hosting/ (27)
    Amazon/ (18)
      EBS/ (3)
      EC2/ (10)
      S3/ (1)
      commandline/ (4)
    Godaddy/ (2)
    NearlyFreeSpeech/ (3)
    Rackspace/ (1)
    vpslink/ (3)
  Linux/ (30)
    Android/ (1)
    Awesome/ (3)
    CPUfreq/ (1)
    China/ (2)
    Debian/ (8)
      APT/ (3)
      WPA/ (1)
    audio/ (1)
    encryption/ (3)
    fonts/ (1)
    misc/ (6)
    remoteDesktop/ (1)
    router-bridge/ (3)
  SW/ (45)
    Micro$soft/ (1)
    browser/ (2)
      Chrome/ (1)
      Firefox/ (1)
    business/ (28)
      Drupal/ (9)
      KnowledgeTree/ (6)
      Redmine/ (2)
      SugarCRM/ (7)
      WebERP/ (2)
      WordPress/ (1)
      eGroupware/ (1)
    chat/ (1)
    email/ (1)
    fileSharing/ (2)
      btsync/ (1)
      mldonkey/ (1)
    graphics/ (2)
    research/ (2)
    website/ (6)
      blog/ (6)
        blosxom/ (3)
        rss2email/ (1)
        webgen/ (1)
  Security/ (15)
    IMchat/ (2)
    circumvention/ (2)
    cryptoCurrency/ (1)
    e-mail/ (4)
    greatFirewall/ (1)
    hacking/ (1)
    password/ (1)
    privacy/ (2)
    skype/ (1)
  Services/ (1)
    fileSharing/ (1)
  TechWriting/ (1)
  xHW/ (14)
    Lenovo/ (1)
    Motorola_A1200/ (2)
    Thinkpad_600e/ (1)
    Thinkpad_a21m/ (3)
    Thinkpad_i1300/ (1)
    Thinkpad_x24/ (1)
    USB_audio/ (1)
    scanner/ (1)
    wirelessCards/ (2)
  xLife/ (17)
    China/ (9)
      Beijing/ (5)
        OpenSource/ (3)
    Expatriation/ (1)
    Vietnam/ (7)

Archives:

  • 2016/07
  • 2016/05
  • 2016/02
  • 2016/01
  • 2015/12
  • 2015/11
  • 2015/06
  • 2015/01
  • 2014/12
  • 2014/11
  • 2014/10
  • 2014/09
  • 2014/07
  • 2014/04
  • 2014/02
  • 2014/01
  • 2013/12
  • 2013/10
  • 2013/08
  • 2013/07
  • 2013/06
  • 2013/05
  • 2013/04
  • 2013/02
  • 2013/01
  • 2012/12
  • 2012/10
  • 2012/09
  • 2012/08
  • 2012/07
  • 2012/06
  • 2012/05
  • 2012/04
  • 2012/03
  • 2012/01
  • 2011/12
  • 2011/11
  • 2011/10
  • 2011/09
  • 2011/08
  • 2011/07
  • 2011/06
  • 2011/05
  • 2011/04
  • 2011/02
  • 2010/12
  • 2010/11
  • 2010/10
  • 2010/09
  • 2010/08
  • 2010/07
  • 2010/06
  • 2010/05
  • 2010/04
  • 2010/03
  • 2010/02
  • 2010/01
  • 2009/12
  • 2009/11
  • 2009/10
  • 2009/09
  • 2009/08
  • 2009/07
  • 2009/06
  • 2009/05
  • 2009/04
  • 2009/03
  • 2009/02
  • 2009/01
  • 2008/12
  • 2008/11
  • 2008/10
  • 2008/09
  • Subscribe XML RSS Feed

    Creative Commons License
    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
    PyBlosxom

    This site has no ads. To help with hosting, crypto donations are accepted:
    Bitcoin: 1JErV8ga9UY7wE8Bbf1KYsA5bkdh8n1Bxc
    Zcash: zcLYqtXYFEWHFtEfM6wg5eCV8frxWtZYkT8WyxvevzNC6SBgmqPS3tkg6nBarmzRzWYAurgs4ThkpkD5QgiSwxqoB7xrCxs

    Wed, 29 Apr 2009


    /Coding/php: Accessing the KnowledgeTree API

    KnowledgeTree[1] is a very popular server-based Open Source document management system. Something that some users (like me, or rather my clients) need to do is allow certain people to add or manipulate documents in KnowledgeTree without having to have a login ID and knowledge of the KnowledgeTree user interface. Enter the API, and a little custom PHP scripting....

    Oddly enough, I found at least three different documents on the wiki[2] that seemed to describe three different approaches to using the API. Oddly (should I say suspiciously?) because the level of detail was just enough to be interesting, but just short of being useful. I.e., for two of them, I simply could not figure it out. I even saw a post on the KnowledgeTree forum asking for more detail / a concrete example (me too! me too!), and the only reply was a curt link to one of the near-useless wiki pages I have already mentioned. And needless to say, my own post was ignored. What's up? (Some conspiratorial possibilities come to mind....)

    The only API approach that I have been able to get working is the "REST web service framework"[3], which, for better or worse, only works as of the currently bleeding-edge KnowledgeTree version 3.6.0 (it will NOT work with the current stable 3.5.4a). [3] is also sorely lacking in detail, but in combination with a little code surfing in

    knowledgetree/ktwebservice/webservice.php

    I was able to divine what was needed to get it working. Here I will hopefully provide some missing detail for Google to find....

    One can of course play with the KnowledgeTree REST web service through a browser, since the server also accepts its parameters as a query string attached to the server URL. This is also a good way to see the exact format of the XML response the server gives back.

    To achieve the same result from PHP, one can use libcurl through the PHP curl extension[4]. Since [4] is also a little skimpy on detail, [5] is a very useful supplement. To cut to the chase, I created a function as follows:

    <?php
    function curlPost($site, $fields) {
        $ch = curl_init();                           // initialize curl handle
        curl_setopt($ch, CURLOPT_URL, $site);        // set URL to POST to
        curl_setopt($ch, CURLOPT_FAILONERROR, 1);    // fail on HTTP errors
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // allow redirects
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return response as a string
        curl_setopt($ch, CURLOPT_TIMEOUT, 9);        // time out after 9 seconds
        curl_setopt($ch, CURLOPT_POST, 1);           // use the POST method
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
        $answer = curl_exec($ch);                    // run the whole process
        // print_r(curl_getinfo($ch));
        // echo "\n\ncURL error number:" . curl_errno($ch);
        // echo "\n\ncURL error:" . curl_error($ch);
        curl_close($ch);
        return $answer;
    }
    ?>

    $site is the REST URL of the KnowledgeTree server, and $fields are the POST parameters that are to go along with it. This function simply POSTs these parameters to the URL (exactly the same as entering $site?$fields into your web browser).
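    Note that the examples here build $fields by hand. If any value might contain characters like & or =, PHP's built-in http_build_query() will URL-encode it safely. A small sketch; the method and credentials mirror the login call below, and the password is of course a placeholder:

```php
<?php
// Build the POST string for the KnowledgeTree "login" method.
// http_build_query() URL-encodes each value, so credentials that
// contain characters like '&' or '=' survive intact.
$fields = http_build_query(array(
    'method'   => 'login',
    'username' => 'admin',
    'password' => 'p&ss=word',   // placeholder credential
));
echo $fields;   // method=login&username=admin&password=p%26ss%3Dword
?>
```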

    Here is a concrete and currently working example of how to get the contents of the KnowledgeTree root directory:

    <?php
    $url = 'https://www.server.com/kt-dms-oss-3.6.0/ktwebservice/KTWebService.php';
    require_once('funCurlPost.php');

    // **************************
    // Login
    // **************************
    $postfields = "method=login&password=123456&username=admin";
    $response = curlPost($url, $postfields);
    $xml = new SimpleXMLElement($response);
    if( $xml->status_code != 0 ){
        echo 'Error - authentication failed: ' . $xml->message;
    } else {
        $session_id = $xml->results;
        echo "Login successful, session ID = " . $session_id;
    }

    // ***********************************
    // List contents of root folder (id=1)
    // ***********************************
    $postfields = "method=get_folder_contents&session_id=$session_id&folder_id=1";
    $response = curlPost($url, $postfields);
    $xml = new SimpleXMLElement($response);
    echo "<p>Get root folder contents:<br>";
    if( $xml->status_code != 0 ){
        echo 'Error - get_folder_contents failed: ' . $xml->message;
    } else {
        // print_r($xml); // to see data structure
        echo "<p>folder ID = " . $xml->results->folder_id . "<br>";
        echo "folder name = " . $xml->results->folder_name . "<br>";
        echo "folder path = " . $xml->results->full_path . "<p>";
        foreach ($xml->results->items->item as $value) {
            echo "item type = " . $value->item_type . " ";
            echo "item ID = " . $value->id . " ";
            echo "item name = " . $value->filename . "<br>";
        }
    }

    // ***********************************
    // Logout
    // ***********************************
    $postfields = "method=logout&session_id=$session_id";
    $response = curlPost($url, $postfields);
    $xml = new SimpleXMLElement($response);
    echo "<p>Logging out....<br>";
    if( $xml->status_code != 0 ){
        echo 'Error - logout failed: ' . $xml->message;
    } else {
        echo 'successful!';
    }
    ?>

    The key point is that there were three operations in the above script, with three corresponding POST strings:

    Operation        POST string
    Login            method=login&password=123456&username=admin
    List Directory   method=get_folder_contents&session_id=$session_id&folder_id=1
    Logout           method=logout&session_id=$session_id

    Something else that is already working: to add a document to KnowledgeTree, use a POST string like this:

    $document = "bodybg.jpg"; // located in /var/uploads

    $postfields = "method=add_document&session_id=$session_id&folder_id=1&title=$document&filename=$document&documenttype=Default&tempfilename=/vol/www/vsc/apps/kt-dms-oss-3.6.0/var/uploads/$document";
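    For longer parameter lists like this one, the same string can be assembled from an array with http_build_query(), which also URL-encodes the path. A sketch using the same illustrative values; 'SESSION' stands in for a real session ID returned by a prior login call:

```php
<?php
// Assemble the add_document POST string from an array.
// All values here are illustrative, matching the example above;
// 'SESSION' stands in for a real session ID from a login call.
$document   = "bodybg.jpg";
$session_id = "SESSION";
$postfields = http_build_query(array(
    'method'       => 'add_document',
    'session_id'   => $session_id,
    'folder_id'    => 1,
    'title'        => $document,
    'filename'     => $document,
    'documenttype' => 'Default',
    'tempfilename' => "/vol/www/vsc/apps/kt-dms-oss-3.6.0/var/uploads/$document",
));
echo $postfields;   // slashes in tempfilename come out URL-encoded as %2F
?>
```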

    [1] http://www.knowledgetree.com/
    [2] http://wiki.knowledgetree.com/
    [3] http://wiki.knowledgetree.com/REST_Web_Service
    [4] http://php.net/manual/en/book.curl.php
    [5] http://devzone.zend.com/article/1081-Using-cURL-and-libcurl-with-PHP

    posted at: 01:08 | path: /Coding/php | permanent link to this entry



    Tue, 28 Apr 2009


    /SW/business/KnowledgeTree: Upgrading KnowledgeTree

    Per this link http://wiki.knowledgetree.com/Upgrading_KnowledgeTree it is exceedingly simple. Here is the way I do it (unpacking into a new directory each time, not unpacking over top of the old version):

    Make sure ownership is correct for the new version:

    chown -R www-data:www-data kt-3.6.0/

    Then bring up the login dialog, and add "setup/upgrade.php" to the end of the browser URL, for example:

    https://apps.vancouversolidcomputing.com/knowledgetree/setup/upgrade.php

    Click the "Next" button a handful of times and it should just work.

    posted at: 08:32 | path: /SW/business/KnowledgeTree | permanent link to this entry

    Fri, 24 Apr 2009


    /SW/business/eGroupware: eGroupware Configuration Hints

    I have recently been playing with eGroupware, and so far am liking it a lot. However, the documentation has some holes that make getting certain things (like e-mail) working rather difficult.

    E-mail:

    I am still scratching my head over getting e-mail notifications to work....

    Limiting the Number of Users:

    This would be for the case of providing eGroupware as a commercial service in return for a monthly fee: ie. a certain number of seats for a certain fee.

    I am not seeing an elegant solution. The inelegant solution would be for me (the vendor) to retain administrative control, and manually create a certain number of users for a customer. Give the customer the list of userids and passwords. Then the customer can login and modify userids and passwords to his liking. (But cannot create new userids without administrative control.)


    posted at: 10:48 | path: /SW/business/eGroupware | permanent link to this entry

    Thu, 23 Apr 2009


    /Hosting/NearlyFreeSpeech: Review of Web Hosting at Nearly Free Speech.net[1]:

    I have been hosting several websites with nearlyfreespeech.net for over a year now, and believe I have found the perfect host for small sites. They may in fact be perfect for large sites as well, but I don't personally have a large site to test them with.

    Basically they provide a great service at an incredibly cheap price. The cheap price comes from the fact that you pay as you go for both disk storage (US$0.01 / MB / day) and bandwidth (sliding scale[2] starting at $1/GB), i.e. the bigger or busier the site, the larger the monthly bill will be. For a small static site without a lot of activity, you could easily pay as little as ten cents per month for hosting. I have several small, modestly active sites, including this blog.

    One of the sites uses MySQL, which I believe costs one cent per day. Combined, all of my sites have been costing me about one dollar per month to keep running.

    Using Paypal, you can add as little as US$0.25 to your account at a time (they take a service charge, I think six cents). You can reduce the service charge percentage by increasing the size of the deposit, e.g. there is a thirty-cent service charge for a US$5 deposit.

    Servers are FreeBSD, and you get full Unix shell account access with your account. Any time I have reached for a standard UNIX utility, it has been there: Midnight Commander, Unison, and nano come to mind. In addition to FTP, there is also SSH access to the account.

    The only thing I have wanted and found lacking was the Apache mod_python module. That may be a FreeBSD limitation, I don't know.

    The service and website have been so flawless that I literally have not once felt the need to try to contact support.

    [1] https://www.nearlyfreespeech.net/
    [2] https://www.nearlyfreespeech.net/services/hosting.php#pricing

    posted at: 09:31 | path: /Hosting/NearlyFreeSpeech | permanent link to this entry

    Wed, 22 Apr 2009


    /SW/email: archivemail: An Automated Means of Capping Mailbox Size

    archivemail automates something that I used to do manually about once a month: remove old and large e-mails from my e-mail client's trash folder. A cron job now does the job for me, once per week:

    14 13 * * 1 nice /home/userid/scripts/archivemail.sh
    which runs my archivemail.sh script:

    #!/bin/sh

    archivemail --days=180 --output-dir=/home/userid/ \
        --suffix='_archive_%y%m%d_%X' /home/userid/Mail/trash/

    archivemail --size=50000 --days=15 --output-dir=/home/userid/ \
        --suffix='_big_%y%m%d_%X' /home/userid/Mail/trash/

    find /home/userid/trash_* -mtime +30 -type f -exec ls -al {} \;
    find /home/userid/trash_* -mtime +30 -type f -exec rm -rf {} \;


    The above syntax should be quite readable: the first archivemail command deletes e-mails older than 180 days, and the second deletes e-mails bigger than 50k. Removed e-mails are dumped in my home directory in a compressed mbox file (readable by mutt) and kept for 30 days.

    archivemail will handle IMAP, mbox, MH and Maildir format mailboxes. Yes, that means it is supposed to be able to pull down big chunks of e-mail from a remote IMAP mailbox, though I have not tested that feature....

    Be sure to use the --dry-run option, which makes no actual changes, while setting up and testing.

    posted at: 01:47 | path: /SW/email | permanent link to this entry

    Tue, 21 Apr 2009


    /Coding/php: XML to Object Conversion

    In working with the KnowledgeTree API[2] I found that the responses to my HTTP POSTs to the API came back in the form of XML. I needed to get that XML into PHP-processable form, and quite a bit of googling mostly turned up home-grown solutions, a lot of them referring to themselves as "xmltoarray" functions. Until I found the PHP-native solution, SimpleXMLElement[1]. Hopefully this post will help push SimpleXMLElement a little higher in Google's search listings....

    Suppose I have a big chunk of XML in string form, such as this response to a KnowledgeTree API directory listing:

    <response>
      <status_code>0</status_code>
      <message/>
      <results>
        <folder_id>1</folder_id>
        <folder_name>Root Folder</folder_name>
        <full_path>/</full_path>
        <items>
          <item>
            <id>2</id>
            <item_type>F</item_type>
            <custom_document_no>n/a</custom_document_no>
            <oem_document_no>n/a</oem_document_no>
            <title>DroppedDocuments</title>
            <document_type>n/a</document_type>
            <filename>DroppedDocuments</filename>
            <filesize>n/a</filesize>
            <created_by>Administrator</created_by>
            <created_date>n/a</created_date>
            <checked_out_by>n/a</checked_out_by>
            <checked_out_date>n/a</checked_out_date>
            <modified_by>n/a</modified_by>
            <modified_date>n/a</modified_date>
            <owned_by>n/a</owned_by>
            <version>n/a</version>
            <is_immutable>n/a</is_immutable>
            <permissions>RWA</permissions>
            <workflow>n/a</workflow>
            <workflow_state>n/a</workflow_state>
            <mime_type>folder</mime_type>
            <mime_icon_path>folder</mime_icon_path>
            <mime_display>Folder</mime_display>
            <storage_path>n/a</storage_path>
            <items/>
          </item>
          <item>
            <id>11</id>
            <item_type>F</item_type>
            <custom_document_no>n/a</custom_document_no>
            <oem_document_no>n/a</oem_document_no>
            <title>Public</title>
            <document_type>n/a</document_type>
            <filename>Public</filename>
            <filesize>n/a</filesize>
            <created_by>Administrator</created_by>
            <created_date>n/a</created_date>
            <checked_out_by>n/a</checked_out_by>
            <checked_out_date>n/a</checked_out_date>
            <modified_by>n/a</modified_by>
            <modified_date>n/a</modified_date>
            <owned_by>n/a</owned_by>
            <version>n/a</version>
            <is_immutable>n/a</is_immutable>
            <permissions>RWA</permissions>
            <workflow>n/a</workflow>
            <workflow_state>n/a</workflow_state>
            <mime_type>folder</mime_type>
            <mime_icon_path>folder</mime_icon_path>
            <mime_display>Folder</mime_display>
            <storage_path>n/a</storage_path>
            <items/>
          </item>
        </items>
      </results>
    </response>

    in a variable called $response. Converting to a structured object is simply:

    $xml = new SimpleXMLElement($response);

    And then the object might be processed as follows:

    if( $xml->status_code != 0 ){
        echo 'Error - operation failed: ' . $xml->message;
    } else {
        // print_r($xml); // to see data structure
        echo "<p>folder ID = " . $xml->results->folder_id . "<br>";
        echo "folder name = " . $xml->results->folder_name . "<br>";
        echo "folder path = " . $xml->results->full_path . "<p>";
        foreach ($xml->results->items->item as $value) {
            echo "item type = " . $value->item_type . " ";
            echo "item ID = " . $value->id . " ";
            echo "item name = " . $value->filename . "<br>";
        }
    }
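    To see the whole pattern work without a running server, SimpleXMLElement can be fed a small hand-written string. A self-contained sketch; the XML here is a stripped-down imitation of the real get_folder_contents response above:

```php
<?php
// A cut-down imitation of a KnowledgeTree get_folder_contents response.
$response = '<response>
  <status_code>0</status_code>
  <message/>
  <results>
    <folder_id>1</folder_id>
    <folder_name>Root Folder</folder_name>
    <items>
      <item><id>2</id><item_type>F</item_type><filename>DroppedDocuments</filename></item>
      <item><id>11</id><item_type>F</item_type><filename>Public</filename></item>
    </items>
  </results>
</response>';

$xml = new SimpleXMLElement($response);

// Elements are reached by name; casting to string/int gets the text content.
echo (int)$xml->status_code, "\n";              // 0
echo (string)$xml->results->folder_name, "\n";  // Root Folder
foreach ($xml->results->items->item as $item) {
    echo $item->id, " ", $item->filename, "\n"; // 2 DroppedDocuments / 11 Public
}
?>
```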

    [1] http://php.net/manual/en/book.simplexml.php
    [2] http://wiki.knowledgetree.com/REST_Web_Service

    posted at: 06:27 | path: /Coding/php | permanent link to this entry

    Thu, 16 Apr 2009


    /Linux/misc: 101 Things You Can Do On Linux But Not on Microsoft Windows

    I might not make it all the way to 101, but I will give it a go:

    1. You can update almost all system software (except for the kernel) without rebooting.

    2. In fact, Linux can be kept running for months through many updates, without a single shutdown or reboot or system crash. Server administrators literally do this all the time.

    3. Go for years without having to re-install your computer. "Bit rot" does not exist in Linux. It will keep booting and working without deterioration through an endless succession of minor and major software updates, until your hard drive finally fails (don't forget to make periodic backups!!).

    4. Take no specific precautions against viruses / trojans / worms / malware, and go for years without seeing one infect your computer. (I have gone ten years, most of that a full-time Linux user.)

    5. If your screen is locked up, your system has not necessarily crashed. It might be just the X window server that is hosed. First try restarting the X server with Ctrl-Alt-Backspace. If the keyboard is not responding, try logging in to the machine from another computer with SSH[1] and restarting the window manager (kill the "X" process). Either of these options is better for your hard drive than killing the power.

    6. Not enough memory to run everything you want to run at the same time? Run a piece of software on another (UNIX / Linux) computer and display its window on the computer you are sitting at. Just login to the other computer from a terminal using "ssh -X", start the program from the command line of the terminal that is now talking to the other computer, and its window will pop up right where you are sitting.

    7. Trivially run a web server or e-mail server on your desktop. Most Linux distributions install most servers with defaults that have them running almost instantly, out of the box. Little or no configuration required.

    8. If you are experiencing system problems, see the low-level error logs that your system is producing (and Microsoft Windows invariably hides) in the files contained within the /var/log/ directory.

    9. Trivially get the source code for any software running on your computer, and (non-trivially) fix / change it, if you so desire.

    10. Have a complete functioning computer system that will do most of what most people need, where all installed (Open Source[2]) software is completely free, and legally so.

    11. (For common Linux distributions[3]) Install and update all of the above software, both system AND USER PROGRAMS, from one single unified software archive. (No chasing all over the internet to find software....)

    12. For software that is not available in the free archives, find almost anything else you want, also for free, in other archives that may or not be legal in the jurisdiction where you live. Add these to your list of archives, and updating all installed software continues to be a simple one-step process.

    13. If you have problems with a given piece of software, usually it is easy to find and send a bug report to the programmers who work on it. If the problem you are reporting is serious, or the fix very simple, they will probably give you a quick reply.

    14. Have your main computer be a zippy Linux install that the latest bloated version of Microsoft Windows cannot even be installed on, let alone run on. In 2008, my fastest machine is a Pentium III 1.1 GHz with 256M of memory. I am a power user, so the memory is a bit light; I need to spend a little more money on this machine, which I bought for just over US$200.

    15. Build your own Linux router[4] (wired, wireless, or both, just need to somehow provide the requisite number of network cards) with the latest and greatest up-to-date software using an almost worthless Pentium One laptop. All you need are two PCMCIA card slots so that you can plug in two network cards.

    16. Have multiple IDENTICAL copies of files or directories in different places. Edit one copy and all are changed, because all the copies are POINTING TO THE SAME CONTENT on the disk. In the Unix world, there are actually two slightly different ways to do this: "symbolic" links and "hard" links.
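    The usual tools for this are ln (hard link) and ln -s (symbolic link), but the shared-content behavior is easy to demonstrate from any scripting language. A PHP sketch (file names are arbitrary; note that truncating writes keep the shared inode, while editors that replace the file with a new one would break a hard link):

```php
<?php
// Demonstrate hard links: two directory entries, one shared content.
$dir = sys_get_temp_dir();
$a = "$dir/demo_original.txt";
$b = "$dir/demo_hardlink.txt";
@unlink($a); @unlink($b);

file_put_contents($a, "first draft\n");
link($a, $b);                       // hard link: same inode, same content

file_put_contents($b, "edited\n");  // edit one copy...
echo file_get_contents($a);         // ...and both are changed: prints "edited"

unlink($a); unlink($b);
?>
```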

    17. Choice: choose and install different "kinds" (distributions[5][6][7]) of Linux specializing in special needs: speed, minimum use of disk space, "bleeding edge" vs. stable software, education, etc....

    18. More choice: from within any installed Linux distribution, choose from a long list of different window managers, allowing one to choose between desktops that are radically different in appearance and function.

    [1] http://www.openssh.com/
    [2] http://en.wikipedia.org/wiki/Open_source_software
    [3] http://distrowatch.com/
    [4] http://blog.langex.net/index.cgi/Linux/router-bridge/build-your-own-router.html
    [5] http://en.wikipedia.org/wiki/List_of_Linux_distributions
    [6] http://www.linux.org/dist/list.html
    [7] http://distrowatch.com/

    posted at: 00:54 | path: /Linux/misc | permanent link to this entry

    Mon, 13 Apr 2009


    /SW/business/KnowledgeTree: Adding a Chinese Language Pack Plugin to Knowledgetree

    This was really quite unnecessarily hard to find. It turns out the packs live on the KnowledgeTree Forge[1], and here[2] is the list. There are actually two Simplified Chinese packs; this one[3] seemed more official, so that is what I downloaded[4].

    Installation is then quite simple. Move the downloaded tarball into knowledgetree/plugins/i18n and then untar it, ie.

    tar -xvf SimplifiedChinese.tgz

    You should then see a "SimplifiedChinese" sub-directory appear under i18n, and in my experience the plugin should activate automatically, ie. if you now go to the Knowledgetree login dialog "Simplified Chinese" should appear in the language drop-down menu. If it does not, login as a Knowledgetree admin and go to

    Administration » Miscellaneous » Plugins

    where you might have to hit the "Reread Plugins" button at the bottom, or click on the "Simplified Chinese Translation" check box to activate it.

    [1] http://forge.knowledgetree.com/
    [2] http://forge.knowledgetree.com/gf/project/?action=ProjectTroveBrowse&_trove_category_id=306
    [3] http://forge.knowledgetree.com/gf/project/zhcn/
    [4] http://forge.knowledgetree.com/gf/project/zhcn/frs/

    posted at: 06:23 | path: /SW/business/KnowledgeTree | permanent link to this entry

    Sat, 11 Apr 2009


    /SW/business/WebERP: Installing WebERP Language Packs

    From http://www.weberp.org/ download the desired language pack zip files and unzip them in your WebERP instance's "locale" directory. Then change ownership of all files to www-data.

    Make sure the server has a locale for each of the languages you want to use[1]:

    dpkg-reconfigure locales

    Note that there must exist a system locale that is EXACTLY THE SAME as the WebERP locale, ie. zh_CN is NOT the same as zh_CN.utf8.
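    One way to check from PHP itself whether the exact locale exists: setlocale() returns false when the system lacks it. A sketch; which locales succeed depends entirely on the server:

```php
<?php
// setlocale() returns the locale name on success, or false when the
// system does not provide that exact locale.
var_dump(setlocale(LC_TIME, 'zh_CN'));        // "zh_CN" if installed, false if not
var_dump(setlocale(LC_TIME, 'C') !== false);  // the "C" locale always exists
?>
```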

    Now you can login to WebERP as admin and create users with language defaults for any of the supported languages (which will be apparent in the language drop-down list in the user profile editor).

    Note that I had to install the ttf-arphic-gkai00mp package to get simplified Chinese (zh_CN) working, but Firefox and Opera do not seem to autodetect the GB2312 encoding well, and it must be selected manually from the browser menu.

    [1] http://www.weberp.org/FrequentlyAskedQuestions

    posted at: 02:34 | path: /SW/business/WebERP | permanent link to this entry


    /SW/business/SugarCRM: SugarCRM Language Pack Installation

    https://www.sugarcrm.com/forums/showthread.php?t=34942 is a very useful post.

    Thereafter the language is an option at login. I have found no way to hard code a default language for a particular user (though there is a system-wide default). I have also discovered that a userid in the current SugarCRM, once created, can never be deleted.

    posted at: 01:22 | path: /SW/business/SugarCRM | permanent link to this entry

    Fri, 10 Apr 2009


    /Linux/router-bridge: How to Build Your Own Linux Network Router

    Gentoo is justifiably held in great esteem for its very good documentation. I am going to give you a simplified version of this guide[1] from a Debian perspective; some of the things I do while building a router are also simpler by design. Here are a couple of other interesting links for background reading: [2][3]

    Why would you want to do this? Cheap commercial routers often do not work very well, choking up on certain kinds of traffic, even locking up regularly so that someone must manually cycle the power to restart them. If you build your own router, you can keep the software up-to-date, which is a big security advantage over the commercial competition. And you can install any software you want on it, like your own web and e-mail server, for instance. This is not meant to be an exhaustive list....

    Start with the cheapest, oldest laptop you can find with the capacity for the number of network cards you want to use (two for a wired *or* wireless local network, three for a wired *and* wireless local network). One network card is needed to connect to the outside world (presumably, the internet) and another one for *each* local network that you want to connect to the internet (typically, a wired and / or a wireless network).

    Note that a really old laptop, like the Pentium One that I use, has no CD and no USB. The easiest way to install Linux on it is to remove the hard drive and place it temporarily in another computer (or a USB enclosure) for the Linux installation. A minimal install is all that is necessary, just enough to get a terminal command prompt and functioning networking. Note that at least on Debian, standard kernels will work right off the shelf. Then replace the newly installed drive in your soon-to-be router.

    Get a Wireless Card that Will Work

    Setting up a router for a wired LAN (Local Area Network) is actually a subset of setting up a wireless router, so I will just describe a wireless router here. (Turning a wireless configuration into a wired configuration just requires a minor alteration or two....) You need a wireless card that will talk to the hostap_cs kernel driver and also supports "Master" mode. These are not easy to find, in my experience. I have stumbled across two, one of which broke, and I am now having quite a hard time replacing it.

    The orinoco_cs and hostap_cs drivers support many of the same cards. Best to just blacklist the orinoco_cs driver and take your laptop shopping for cards. You really need to test the card before buying it (easy in the second-hand Chinese markets I shop in). If you find a card that the hostap_cs driver recognizes, test for Master mode with the iwconfig command:

    iwconfig wlan0 mode Master

    If the card does not like Master mode, you will get an error something like:

    # iwconfig eth1 mode Master
    Error for wireless request "Set Mode" (8B06) :
    SET failed on device eth1 ; Invalid argument.

    If it works, iwconfig will show, in part:

    wlan0     IEEE 802.11b  ESSID:"clayton"  Nickname:""
              Mode:Master  Frequency:2.462 GHz

    (Note the "Mode:Master" part.)

    Configure Networking

    I will avoid great detail here. The most probable options: your "outside world" network card will either connect directly, and probably be called "eth0", or it will connect using PPPoE, which you will probably configure with a very simple and straightforward piece of software called "pppoeconf", resulting in a "ppp0" interface. For routing purposes, all you need to know is what the interface is called, and that it works.

    As for the wireless card: give it a static IP and set it to Master mode in /etc/network/interfaces:

    auto eth0
    iface eth0 inet dhcp

    auto wlan0
    iface wlan0 inet static
      wireless-essid somename
      address 192.168.8.1
      netmask 255.255.255.0
      network 192.168.8.0
      broadcast 192.168.8.255
      wireless-mode Master
      wireless-channel 11
      wireless-key somepassword

    Note that in the above, eth0 connects to the internet, and therefore in this case I am not using PPPoE. I will address the slightly more complicated case of PPP in /etc/network/interfaces at a later date.

    Set Up Routing and Firewall

    We will do them at the same time because the same software does both! Install the "firehol" package. Then create a /etc/firehol/firehol.conf file as follows:

    # firehol configuration for a masquerading server
    
    version 5
    
    # The network of our internal LAN.
    home_ips="192.168.8.0/24"
    
    # try "mac  " to filter on MAC addresses
    
    # blacklist full 192.168.8.101 192.168.8.51 192.168.8.53
    
    # DHCP needs 0.0.0.0/255.255.255.255 access.
    interface wlan0 dhcp1
      policy return
      server dhcp accept
    
    # interface eth0 internet src not "${UNROUTABLE_IPS}"
    interface eth0 internet
       protection strong 10/sec 10
       server "smtp http icmp ssh"  accept
       server donkey2 accept
       server ident reject with tcp-reset
       client all   accept
       # reduce noise in the syslog by dropping this stuff silently
       server "dhcp samba" drop
    
    interface wlan0 wlan src "${home_ips}"
       policy reject
       server "http dns ssh icmp" accept
       client all   accept
       # server dhcp drop
    
    interface eth1 lan src "${home_ips}"
       policy reject
       server "http dns ssh icmp" accept
       client all   accept
    
    router internet2wlan inface eth0 outface wlan0
       masquerade reverse
       client all      accept
       server ident    reject with tcp-reset
    
    router internet2lan inface eth0 outface eth1
       masquerade reverse
       client all      accept
       server ident    reject with tcp-reset
    

    There are tutorials out there that will step you through the creation of this file, which is how I started, but if you are careful about the customization process, you should be able to use my config as your starting point.
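    One Debian-specific gotcha before starting the firewall: the init script will not run until it is enabled in /etc/default/firehol (an assumption based on the stock Debian packaging of that era; check your version):

```shell
# /etc/default/firehol -- the init script exits quietly unless this is set
START_FIREHOL=YES
```

    Then start it with /etc/init.d/firehol start. With router statements in the config, firehol should also take care of enabling IPv4 forwarding itself, so a separate /proc/sys/net/ipv4/ip_forward tweak should not be needed.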

    Some salient points:

    DHCP with dnsmasq

    Install the dnsmasq package. Add the following line to /etc/dnsmasq.conf:

    dhcp-range=192.168.8.50,192.168.8.150,12h

    Restart dnsmasq, and your router should now respond to DHCP requests from the wireless network.
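    dnsmasq can also hand out a fixed address to a known machine, which is handy for hosts you want to ssh into; a sketch, with a made-up MAC address:

```
# /etc/dnsmasq.conf -- pin a static lease to one client (hypothetical MAC)
dhcp-host=00:11:22:33:44:55,192.168.8.20
```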

    Wasn't that simple? Comments / errata welcome.

    [1] http://www.gentoo.org/doc/en/home-router-howto.xml
    [2] http://www.bit-tech.net/bits/2008/06/27/build-your-own-router/1
    [3] http://thoughtattic.com/security/MakeYourOwnRouter.html

    posted at: 03:32 | path: /Linux/router-bridge | permanent link to this entry


    /Admin/LAN: How to Build Your Own Linux Network Router

    Gentoo is justifiably held in great esteem for its very good documentation. I am going to give you a simplified version of this guide[1] from a Debian perspective; some of the things I do while building a router are also simpler by design. Here are a couple of other interesting links for background reading: [2][3]

    Why would you want to do this? Cheap commercial routers often do not work very well, choking up on certain kinds of traffic, even locking up regularly so that someone must manually cycle the power to restart them. If you build your own router, you can keep the software up-to-date, which is a big security advantage over the commercial competition. And you can install any software you want on it, like your own web and e-mail server, for instance. This is not meant to be an exhaustive list....

    Start with the cheapest, oldest laptop you can find with the capacity for the number of network cards you want to use (two for a wired *or* wireless local network, three for a wired *and* wireless local network). One network card is needed to connect to the outside world (presumably, the internet) and another one for *each* local network that you want to connect to the internet (typically, a wired and / or a wireless network).

    Note that a really old laptop, like the Pentium One that I use, has no CD and no USB. The easiest way to install Linux on it is to remove the hard drive and place it temporarily in another computer (or a USB enclosure) for the Linux installation. A minimal install is all that is necessary, just enough to get a terminal command prompt and functioning networking. Note that at least on Debian, standard kernels will work right off the shelf. Then replace the newly installed drive in your soon-to-be router.

    Get a Wireless Card that Will Work

    Setting up a router for a wired LAN (Local Area Network) is actually a subset of setting up a wireless router, so I will just describe a wireless router here. (Turning a wireless configuration into a wired configuration just requires a minor alteration or two....) You need a wireless card that will talk to the hostap_cs kernel driver and also supports "Master" mode. These are not easy to find, in my experience. I have stumbled across two, one of which broke, and I am now having quite a hard time replacing it.

    The orinoco_cs and hostap_cs drivers support many of the same cards. Best to just blacklist the orinoco_cs driver and take your laptop shopping for cards. You really need to test the card before buying it (easy in the second-hand Chinese markets I shop in). If you find a card that the hostap_cs driver recognizes, test for Master mode with the iwconfig command:

    iwconfig wlan0 mode Master

    If the card does not like Master mode, you will get an error something like:

    # iwconfig eth1 mode Master
    Error for wireless request "Set Mode" (8B06) :
    SET failed on device eth1 ; Invalid argument.

    If it works, iwconfig will show, in part:

    wlan0     IEEE 802.11b  ESSID:"clayton"  Nickname:""
              Mode:Master  Frequency:2.462 GHz

    (Note the "Mode:Master" part.)

    Configure Networking

    I will avoid great detail here. The most probable options: your "outside world" network card will either connect directly, and probably be called "eth0", or it will connect using PPPoE, which you will probably configure with a very simple and straightforward piece of software called "pppoeconf", resulting in a "ppp0" interface. For routing purposes, all you need to know is what the interface is called, and that it works.

    As for the wireless card: give it a static IP and set it to Master mode in /etc/network/interfaces:

    auto eth0
    iface eth0 inet dhcp

    auto wlan0
    iface wlan0 inet static
      wireless-essid somename
      address 192.168.8.1
      netmask 255.255.255.0
      network 192.168.8.0
      broadcast 192.168.8.255
      wireless-mode Master
      wireless-channel 11
      wireless-key somepassword

    Note that in the above, eth0 connects to the internet, and therefore in this case I am not using PPPoE. I will address the slightly more complicated case of PPP in /etc/network/interfaces at a later date.

    Set Up Routing and Firewall

    We will do them at the same time because the same software does both! Install the "firehol" package. Then create a /etc/firehol/firehol.conf file as follows:

    # firehol configuration for a masquerading server
    
    version 5
    
    # The network of our internal LAN.
    home_ips="192.168.8.0/24"
    
    # try "mac  " to filter on MAC addresses
    
    # blacklist full 192.168.8.101 192.168.8.51 192.168.8.53
    
    # DHCP needs 0.0.0.0/255.255.255.255 access.
    interface wlan0 dhcp1
      policy return
      server dhcp accept
    
    # interface eth0 internet src not "${UNROUTABLE_IPS}"
    interface eth0 internet
       protection strong 10/sec 10
       server "smtp http icmp ssh"  accept
       server donkey2 accept
       server ident reject with tcp-reset
       client all   accept
       # reduce noise in the syslog by dropping this stuff silently
       server "dhcp samba" drop
    
    interface wlan0 wlan src "${home_ips}"
       policy reject
       server "http dns ssh icmp" accept
       client all   accept
       # server dhcp drop
    
    interface eth1 lan src "${home_ips}"
       policy reject
       server "http dns ssh icmp" accept
       client all   accept
    
    router internet2wlan inface eth0 outface wlan0
       masquerade reverse
       client all      accept
       server ident    reject with tcp-reset
    
    router internet2lan inface eth0 outface eth1
       masquerade reverse
       client all      accept
       server ident    reject with tcp-reset
    

    There are tutorials out there that will step you through the creation of this file, which is how I started, but if you are careful about the customization process, you should be able to use my config as your starting point.
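    One Debian-specific gotcha before starting the firewall: the init script will not run until it is enabled in /etc/default/firehol (an assumption based on the stock Debian packaging of that era; check your version):

```shell
# /etc/default/firehol -- the init script exits quietly unless this is set
START_FIREHOL=YES
```

    Then start it with /etc/init.d/firehol start. With router statements in the config, firehol should also take care of enabling IPv4 forwarding itself, so a separate /proc/sys/net/ipv4/ip_forward tweak should not be needed.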

    Some salient points:

    DHCP with dnsmasq

    Install the dnsmasq package. Add the following line to /etc/dnsmasq.conf:

    dhcp-range=192.168.8.50,192.168.8.150,12h

    Restart dnsmasq, and your router should now respond to DHCP requests from the wireless network.
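    dnsmasq can also hand out a fixed address to a known machine, which is handy for hosts you want to ssh into; a sketch, with a made-up MAC address:

```
# /etc/dnsmasq.conf -- pin a static lease to one client (hypothetical MAC)
dhcp-host=00:11:22:33:44:55,192.168.8.20
```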

    Wasn't that simple? Comments / errata welcome.

    [1] http://www.gentoo.org/doc/en/home-router-howto.xml
    [2] http://www.bit-tech.net/bits/2008/06/27/build-your-own-router/1
    [3] http://thoughtattic.com/security/MakeYourOwnRouter.html

    posted at: 03:32 | path: /Admin/LAN | permanent link to this entry

    Mon, 06 Apr 2009


    /Admin/Cherokee: Introduction to the Cherokee Web Server

    I came across this post[1] singing the praises of the Cherokee web server, and thought I would give it a try. So far, I would say not bad....

    Per their own measurements[2] they seem to do well in the speed department, though I have read elsewhere that that advantage is found mainly in the area of smaller files. Rumor has it that relative to Apache, Cherokee has fewer dependencies, uses fewer system resources, and is very stable.

    Without a doubt Cherokee has an advantage in the area of configuration, as they have a very nice and seemingly very complete GUI configurator, that puts fairly extensive help right at one's fingertips. That turned out to be a good thing, as I flailed around a bit getting a couple things working, and the GUI made the trial & error process quite a bit faster, I think.

    Of course the default Debian install serves up html out of the standard /var/www web root just fine. But I am playing with Cherokee on my backup server so I need to be able to see the backuppc controls, which are Perl-based CGI.

    But first, bring up that wonderful GUI configurator.... The thing is not started by default, and after it is started, it only listens on localhost:9090, so ssh to the Cherokee machine thusly:

    ssh -L 9090:localhost:9090 remote_IP

    which forwards port 9090 on the local host to port 9090 on the Cherokee host. Then run:

    cherokee-admin

    on the Cherokee host, which will incidentally print a username and password to use when you point a browser at localhost:9090.

    The backuppc Debian package configures Apache automatically, but of course this does not work for Cherokee. In the Cherokee admin app click on "Virtual Servers --> default". Add "index.cgi" to the "Directory Indexes" field.

    Then click on the "Behavior" tab, at the bottom of which you will find "Add New Rule". Configure as follows:

    Add New Rule:
    Rule Type: Extensions
    Extensions: cgi
    Handler: CGI
    Security: Validation Mechanism: Fixed List
    Security: Methods: Basic
    Security: Realm: backuppc

    At the bottom of the Security tab, use "Add New Pair" to add a userid / password for access to backuppc.

    And now, do not forget to click on the "Save" button at the bottom of the GUI's left column. And create a soft link to the backuppc executable:

    cd /var/www
    ln -s /usr/share/backuppc/cgi-bin/ backuppc

    And now pointing your browser at

    http://cherokeeHost/backuppc/

    should bring up a password dialog window, followed by the backuppc control application.

    Something that is not so well documented is that php5-cgi[5] must be installed for the Cherokee FastCGI PHP handler to work (and php5-mysql if you want PHP to talk to MySQL). Python apparently needs special treatment[4], but I have not attempted to get it working.

    [1] http://www.sourceguru.net/archives/202
    [2] http://www.cherokee-project.com/benchmarks.html
    [3] http://www.cherokee-project.com/doc/other_faq.html
    [4] http://www.rkblog.rk.edu.pl/w/p/django-and-cherokee-server/
    [5] http://www.howtoforge.com/how-to-install-and-configure-cherokee-web-server-with-php5-and-mysql5-on-ubuntu-8.10

    posted at: 09:32 | path: /Admin/Cherokee | permanent link to this entry

    Sun, 05 Apr 2009


    /SW/business/WebERP: webERP Installation

    Download the latest version from http://www.weberp.org/

    Create a directory for webERP and copy the downloaded zip file into that directory, and unzip it. (Do not try to unzip from a parent directory, as the last time I tried, webERP unzipped all the files into the current directory, not a subdirectory.)

    Correct the ownership of the files:

    chown -R www-data:www-data ../webERP_3.10.3/

    Have a look at the installation/upgrade notes in webERP_3.10.3/doc/. For new installations, we must create a database and user for the new installation first, manually:

    mysql -p
    Enter password:
    mysql> create database apps_weberp;
    mysql> GRANT ALL on apps_weberp.* TO 'apps_weberp'@'localhost' IDENTIFIED BY 'appsPassword';

    Then edit weberp-new.sql to add a line at the top:

    use apps_weberp;

    where apps_weberp is the name of the database you just created, and then from the shell command line (not from the MySQL command line) run:

    mysql --user=apps_weberp --password='appsPassword' < /var/www/vsc/apps/webERP_3.10.3/sql/mysql/weberp-new.sql

    to import a clean new database (NOT the demo database). A "show tables;" on the new database should now show a lot of tables. Create and edit the config.php:

    cp config.distrib.php config.php
    vi config.php

    Change the following settings in config.php:

    $DefaultLanguage ='en_US';
    $dbuser = 'apps_weberp';
    $dbpassword = 'appsPassword';

    Per [1] Q14, webERP supports having one or more companies using the same instance of webERP. Inside the "companies" directory, there is exactly one subdirectory per company, with the name of the subdirectory exactly the same as the corresponding MySQL database. (I am assuming then that there is only one MySQL user per instance of webERP, and that single user controls all the databases / companies associated with that instance.) The default install has a single subdirectory named "weberpdemo" in the "companies" directory, so this must be renamed to agree with a MySQL database:

    cd /var/www/vsc/apps/webERP_3.10.3/companies
    mv weberpdemo apps_weberp

    The subdirectory names under the "companies" directory are apparently what is used to populate the "company" drop-down menu on the login screen. If the company you select does not have a corresponding database, you will get the error:

    "The company name entered does not correspond to a database on the database server specified in the config.php configuration file"

    which is actually a bit nonsensical, because the problem has nothing to do with the config.php file.

    Now you should be able to login as the admin user with default password of "weberp". Change that password. And in config.php:

    $allow_demo_mode = False;

    to get rid of the password display on the login screen.

    [1] http://www.weberp.org/FrequentlyAskedQuestionsInstallation

    posted at: 09:49 | path: /SW/business/WebERP | permanent link to this entry


    /Admin/backups/misc: Semi-Automating My Monthly Backup

    Boring repetitive tasks should be scripted. Backups *really* should be automated. So here is a first step down that path for the tarball that I send to my hosted server every month:

    #!/bin/sh
    cd /path/to/script/directory
    echo "My monthly backup:"
    echo "First archive mail trash"
    ./archivemail.sh
    echo "Now build the tar file."
    FILENAME="Backup`date +%Y%m%d`.tar"
    PATHFILE="/scratch/"$FILENAME
    echo "Will backup to " $PATHFILE
    echo "Archive /home/userid..."
    tar -cf $PATHFILE /home/userid
    echo "Add /etc..."
    tar -rf $PATHFILE /etc
    /etc/init.d/apache2 stop
    /etc/init.d/mysql stop
    echo "add /var/www..."
    tar -rf $PATHFILE /var/www
    echo "add /var/lib/mysql/"
    tar -rf $PATHFILE /var/lib/mysql/
    /etc/init.d/apache2 start
    /etc/init.d/mysql start
    echo "Backup complete, list contents of archive"
    tar -tvf $PATHFILE

    and then I get an e-mail telling me it's all done, and there is a huge tarball waiting for me in /scratch. I run this script on the 1st of every month from cron. archivemail.sh uses archivemail[1] to clean out my Mail trash folder. I split it out into a separate script because I run it more often (once a week).
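    The date-stamped filename is the only mildly tricky line; in isolation it behaves like this (just an illustration of the same backtick substitution the script uses):

```shell
# Same construction as in the script: Backup + YYYYMMDD + .tar
FILENAME="Backup`date +%Y%m%d`.tar"
PATHFILE="/scratch/"$FILENAME

# Sanity-check the shape of the name (eight digits between the fixed parts)
echo "$FILENAME" | grep -Eq '^Backup[0-9]{8}\.tar$' && echo "filename looks right"
```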

    [1] http://blog.langex.net/index.cgi/SW/email/

    posted at: 02:26 | path: /Admin/backups/misc | permanent link to this entry

    Fri, 03 Apr 2009


    /SW/business/KnowledgeTree: How to Move Knowledgetree

    Knowledgetree is Document Management software, so not only does it store things in MySQL, but it also stores a lot in the file system. So one cannot move a KnowledgeTree (KT) instance to a different directory and expect it to just continue working. In fact, it normally does not.

    So first step: make a good backup both of the web root directory and the MySQL database.

    Then the trick, tipped-off to me courtesy of a forum[1], is to go through the (quite simple) upgrade process by running, for instance

    http://ofri.vancouversolidcomputing.com/knowledgetree/setup/upgrade.php

    The first prompt is for the admin userid and password of the KT instance (not the MySQL password).

    After that, just click through the "Next" button on several screens, and finally you will be presented with a login dialog. All done.

    [1] http://forums.knowledgetree.com/viewtopic.php?t=1670

    posted at: 01:58 | path: /SW/business/KnowledgeTree | permanent link to this entry

    Thu, 02 Apr 2009


    /Admin/Apache/HTTPS-SSL: SSL Certificates 101

    Generally speaking, it would appear that a vanilla single root SSL certificate, self-signed or otherwise, is only good for exactly one domain that corresponds exactly to the "common name" used in creating the certificate.

    Some vendors[1] sell something called a "wildcard" certificate, where the common name on the certificate takes the form "*.domain.com", and can be used to secure multiple sub-domains. Such a "wildcard" certificate, not surprisingly, seems to be considerably more expensive than a single root certificate. Apache even provides a built-in mechanism using a document root wildcard[2] for mapping each sub-domain to a different document root.

    Some vendors like Godaddy[3] sell multiple-domain certificates, which seem to offer a discount compared to purchasing the same number of single root certificates.

    A good source for a free certificate is cacert.org[4]. cacert.org will sign a certificate for a domain if your e-mail address is in the whois record for the domain (this is an automated process on their end; they verify your identity by sending you a link in an e-mail....) The Apache website[5] has a nice concise explanation of how to create a server key and certificate signing request for cacert.org (or anyone else....)

    Basically the process is[7]:
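    Reference [7] has the full walkthrough; stripped to its core, with hypothetical filenames and a placeholder common name, it looks something like:

```shell
# Hypothetical filenames throughout. Generate an RSA private key
# (add -des3 if you want it passphrase-protected; see [6] on removing that):
openssl genrsa -out server.key 2048

# Generate the certificate signing request (CSR) to paste into the CA's web form;
# the common name here is a placeholder -- use your real domain:
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"

# Double-check the request before submitting it:
openssl req -noout -subject -in server.csr
```

    cacert.org then sends back a signed certificate, which together with server.key goes into the Apache SSLCertificateFile / SSLCertificateKeyFile directives.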

    cacert.org certificates seem to be good for six months. They send you an e-mail in advance of expiry.

    For a particular SSL-enabled Apache virtual host, force users to always use https by placing a redirect in the http virtual host, e.g.:

    <VirtualHost *:80>
        DocumentRoot /var/www/vsc/apps
        ServerName apps.vancouversolidcomputing.com
        ServerAlias apps.vancouversolidcomputing.com
        ServerAdmin ckoeni@gmail.com
        CustomLog /var/log/apache2/access.log combined
        Redirect / https://apps.vancouversolidcomputing.com/
    </VirtualHost>

    [1] http://www.sslshopper.com/best-ssl-wildcard-certificate.html
    [2] http://phaseshiftllc.com/archives/2008/10/27/multiple-secure-subdomains-with-a-wildcard-ssl-certificate
    [3] http://www.godaddy.com/gdshop/ssl/ssl.asp?ci=9039
    [4] https://www.cacert.org/
    [5] http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#realcert
    [6] http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#removepassphrase
    [7] http://www.cacert.org/help.php?id=6
    [8] http://www.cacert.org/help.php?id=4

    posted at: 00:57 | path: /Admin/Apache/HTTPS-SSL | permanent link to this entry


    /Admin/Apache/HTTPS-SSL: Multiple SSL Certificates in Apache

    As I noted in an earlier post, name-based virtual hosting "seemed" to be working. "Seemed". In fact, the virtual hosts were finding the correct web root and loading the correct site, but browsers were consistently giving an error to the effect that the domain name in the certificate and the domain name the browser was pointed to were not the same.

    Someone on the cacert.org e-mail list[1] set me straight:

    From: Pete Stephenson
    To: cacert-support@lists.cacert.org
    Subject: Re: Certificate somehow associated with wrong sub-domain?
    
    Both subdomains share the same IP address.
    
    SSL is IP-based, rather than name-based. Specifically, when a client
    connects to a server, it establishes the SSL connection prior to
    sending the HTTP Host header, so the server has no idea which specific
    certificate to send. Depending on the server, it may send the first
    certificate mentioned in the configuration file or do something else
    entirely.
    
    You can solve this by adding multiple SubjectAltNames to a certificate
    (e.g. you'd have a SAN for apps.vancouversolidcomputing.com and
    another one for vsc.vancouversolidcomputing.com all in a single
    certificate) and telling your server to use the same certificate for
    both subdomains.
    
    More details, including a handy shell script which can generate the
    required CSR (some options, like the RSA key length are manually
    configurable in the shell script; it doesn't prompt the user for the
    keylength), are available here:
    http://wiki.cacert.org/wiki/VhostTaskForce
    
    Cheers!
    -Pete
    

    So what I take from this is:

    This page[2] talks about the issue in general, and the various somewhat fuzzy and partially supported options -- "Currently the different browsers, servers and CAs all implement different and incompatible ways to use SSL certificates for several VHosts on the same server" -- this situation has not been entirely standardized yet!

    This page[3] seems to recommend the cacert.org way to setup Apache with the right kind of multiple SubjectAltName certificate, complete with a script[4] for generating an appropriate Certificate Request and associated key. I used the script to generate the request, and sure enough:

    # openssl req -noout -text -in vancouversolidcomputing_csr.pem
    Certificate Request:
        Data:
            Version: 0 (0x0)
            Subject: CN=www.vancouversolidcomputing.com
            <snip>
            Requested Extensions:
                X509v3 Subject Alternative Name:
                    DNS:www.vancouversolidcomputing.com, DNS:vancouversolidcomputing.com, DNS:printshopdemo.vancouversolidcomputing.com, DNS:vsc.vancouversolidcomputing.com, DNS:solid.vancouversolidcomputing.com, DNS:apps.vancouversolidcomputing.com, DNS:ofri.vancouversolidcomputing.com
            <snip>

    out comes a Certificate Request with multiple SubjectAltNames.
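    Under the hood, the script is essentially feeding openssl a req configuration with a subjectAltName extension. A hand-rolled sketch of the same thing (the file names, the key, and the two SANs listed here are just examples):

```
# san.cnf -- used as: openssl req -new -key server.key -config san.cnf -out san.csr
[ req ]
distinguished_name = dn
req_extensions     = v3_req
prompt             = no

[ dn ]
CN = www.vancouversolidcomputing.com

[ v3_req ]
subjectAltName = DNS:www.vancouversolidcomputing.com, DNS:apps.vancouversolidcomputing.com
```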

    I then replaced *all* certificates in my Apache virtual hosts with this new certificate, ie.

    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/vancouversolidcomputing_crt.pem
    SSLCertificateKeyFile /etc/apache2/ssl/vancouversolidcomputing_privatekey.pem

    in each virtual host block for each sub-domain / web root.

    The certificate now works flawlessly in Iceape (which apparently contains the cacert.org Certificate Authority information), while Internet Explorer still complains about an untrusted Certificate Authority. Neither complains about domain names not matching, which was happening before.

    [3] contained several other directives in each of the SSL virtual host blocks:

    UseCanonicalName On
    SSLCipherSuite HIGH
    SSLProtocol all -SSLv2

    but I have so far found these unnecessary.

    [1] https://lists.cacert.org/wws/info/cacert-support
    [2] http://wiki.cacert.org/wiki/VhostTaskForce
    [3] http://wiki.cacert.org/wiki/CSRGenerator
    [4] http://svn.cacert.org/CAcert/Software/CSRGenerator/csr

    posted at: 00:30 | path: /Admin/Apache/HTTPS-SSL | permanent link to this entry