Expat-IT Tech Bits





    Fri, 04 Jul 2014


    /Hosting/Amazon/S3: Uploading Static Website Content to S3 with s3cmd

    Hosting a static website on S3 is fairly straightforward[1], unless you have a big site containing one or more nested subdirectories with hundreds or thousands of small files; the Amazon console's upload function cannot handle this elegantly. s3cmd[2] to the rescue:

    s3cmd --acl-public --guess-mime-type -r put * s3://your-bucket-name

    The --guess-mime-type option was particularly obscure: if it is not set, each file gets the default MIME type of "binary", and the browser just wants to download anything you point it to on the site.
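
    For anyone who would rather script the upload, here is a minimal sketch in python-boto (the library used elsewhere on this blog). The bucket name, credentials, and the local "site" directory are placeholders, and the explicit Content-Type header mirrors what --guess-mime-type does:

    #!/usr/bin/env python
    import mimetypes
    import os
    from boto.s3.connection import S3Connection

    conn = S3Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    bucket = conn.get_bucket('your-bucket-name')

    # walk the local site directory and upload every file, publicly readable
    for dirpath, _, filenames in os.walk('site'):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            key = bucket.new_key(os.path.relpath(path, 'site'))
            # guess the MIME type from the file extension; without this the
            # browser is offered a download instead of a rendered page
            content_type = mimetypes.guess_type(path)[0] or 'binary/octet-stream'
            key.set_contents_from_filename(path,
                                           headers={'Content-Type': content_type},
                                           policy='public-read')  # like --acl-public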

    [1] http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html
    [2] http://s3tools.org/s3cmd

    posted at: 03:53 | path: /Hosting/Amazon/S3 | permanent link to this entry

    Fri, 20 Jul 2012


    /Hosting/Amazon/commandline: An Easy Way to Snapshot Your AWS Server

    ec2-create-image --region ap-northeast-1 <instanceID> -n "name" -d "description" --no-reboot

    This basically does the same thing as the "Create Image" menu option in the Amazon console, with the additional option of NOT taking the server down while the image is being taken. This is something I previously used a big honking Python script to do.
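
    The same thing can be scripted with boto; a minimal sketch, assuming placeholder instance ID, name, and description, with credentials coming from the usual boto configuration:

    from boto.ec2 import connect_to_region

    conn = connect_to_region('ap-northeast-1')
    # no_reboot=True is the equivalent of --no-reboot: image the server live
    ami_id = conn.create_image('i-xxxxxxxx', 'name', description='description',
                               no_reboot=True)
    print 'New AMI:', ami_id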

    posted at: 09:43 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Thu, 19 Jul 2012


    /Hosting/Amazon/commandline: Changing the Size/Type of an AWS Instance

    First stop the instance. Then run this command:

    ec2-modify-instance-attribute --region ap-northeast-1 <instanceID> -t m1.large

    This will change the type of instanceID from whatever it was to m1.large. Then start the server and observe the new size.
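
    And a boto sketch of the same resize, with a placeholder instance ID:

    from boto.ec2 import connect_to_region

    conn = connect_to_region('ap-northeast-1')
    conn.stop_instances(instance_ids=['i-xxxxxxxx'])   # must be stopped first
    conn.modify_instance_attribute('i-xxxxxxxx', 'instanceType', 'm1.large')
    conn.start_instances(instance_ids=['i-xxxxxxxx'])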

    posted at: 21:41 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Thu, 12 Jul 2012


    /Hosting/Amazon/commandline: Adding Ephemeral Storage to your EBS-backed EC2 AWS Machine

    As of this writing, most of the available EBS-backed images do not have ephemeral storage already baked in, but it can be added[1].

    Take an image of your current machine, the one to which you wish to add ephemeral storage; let's call it <new-image-id>. Then start up a new replacement machine as follows:

    ec2-run-instances --region <regionid> <new-image-id> -k <keyid> -g <groupid> -t <image-type> -b '/dev/sdb=ephemeral0'

    Note that "ephemeral0" is the important part, you can assign it to whatever mount point (/dev/sdb) you wish. There is apparently also an "ephemeral1" for swap for those who feel a need for that. At least if you are using an Ubuntu image, you should find this in the /etc/fstab of your new machine:

    /dev/sdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 0

    and a whole bunch of extra space under /mnt. In Ubuntu at least, this is preserved through reboots and new AMIs.
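
    For the scripting-inclined, a boto sketch of the same launch, with all the IDs as placeholders just like in the command above:

    from boto.ec2 import connect_to_region
    from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

    conn = connect_to_region('<regionid>')
    block_map = BlockDeviceMapping()
    # 'ephemeral0' is what matters; /dev/sdb is just our chosen device name
    block_map['/dev/sdb'] = BlockDeviceType(ephemeral_name='ephemeral0')
    conn.run_instances('<new-image-id>', key_name='<keyid>',
                       security_groups=['<groupid>'], instance_type='<image-type>',
                       block_device_map=block_map)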

    [1] http://theagileadmin.com/2010/03/23/amazon-ec2-ebs-instances-and-ephemeral-storage/

    posted at: 03:43 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Sun, 06 Nov 2011


    /Hosting/Amazon/EC2: Increase EBS Root Volume Size of an EC2 Server (Complicated)

    Complicated because this is a RightScale image with multiple partitions, so resize2fs did not work on the first try per [1].

    Pick the image you want to resize (this one has an 8G root volume):

    $ ec2-describe-images ami-e00df089
    IMAGE ami-e00df089 944964708905/rightimage_debian_6.0.1_amd64_20110406.1_ebs 944964708905 available public x86_64 machine aki-4e7d9527 ebs paravirtual xen
    BLOCKDEVICEMAPPING  /dev/sda   snap-b62f31da   8
    

    Start a server up with a 25G volume instead:

    $ ec2-run-instances -t t1.micro --key clayton --block-device-mapping /dev/sda=:25 ami-e00df089

    Log in and see (in part):

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda2            5.0G  1.2G  3.6G  24% /
    

    Here is where things get funky: "resize2fs /dev/xvda2" will not work per [1], because there are two other partitions (xvda1 is /boot and xvda3 is swap). [2] to the rescue. Hoping that it was only the swap partition that was in the way, I got rid of it and rebuilt the partition table as follows: delete partitions 2 and 3, create a new partition 2 (accepting the defaults), and then write the new partition table to disk (w):

    # fdisk /dev/xvda
    d 2      # delete partition 2 (root)
    d 3      # delete partition 3 (swap)
    n p 2    # new primary partition 2, accepting the default start and end
    w        # write the new partition table to disk and exit
    

    Do not forget to remove the swap line from /etc/fstab, and reboot. Now:

    # resize2fs /dev/xvda2

    works! And all is well. Now, as we have done before, create a new image to save our work:

    ec2-create-snapshot vol-e41af089
    SNAPSHOT snap-697ad00b vol-e41af089 pending

    Once the snapshot is finished:

    ec2-register -a x86_64 -b '/dev/sda=snap-697ad00b:25:false' -n 'Squeeze_64' -d '64 bit Squeeze' --kernel="aki-4e7d9527"
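
    The snapshot-and-register dance can also be done with boto, much like the backup script elsewhere on this blog; a sketch using the IDs from this transcript (the region is an assumption, adjust to taste):

    from boto.ec2 import connect_to_region
    from boto.ec2.blockdevicemapping import BlockDeviceMapping, EBSBlockDeviceType

    conn = connect_to_region('us-east-1')   # assumption: use your own region
    block_map = BlockDeviceMapping()
    block_map['/dev/sda'] = EBSBlockDeviceType(snapshot_id='snap-697ad00b',
                                               size=25,
                                               delete_on_termination=False)
    conn.register_image(name='Squeeze_64', description='64 bit Squeeze',
                        architecture='x86_64', kernel_id='aki-4e7d9527',
                        root_device_name='/dev/sda', block_device_map=block_map)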

    [1] http://alestic.com/2009/12/ec2-ebs-boot-resize
    [2] http://bioteam.net/2010/07/how-to-resize-an-amazon-ec2-ami-when-boot-disk-is-on-ebs/

    posted at: 01:10 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 02 Nov 2011


    /Hosting/Amazon/EC2: Sources of Official Amazon Images

    I seem to always be looking this stuff up. Time to write it down.

    Debian:

    http://wiki.debian.org/Cloud/AmazonEC2Image
    http://support.rightscale.com/21-Community/RightScale_OSS

    Ubuntu:

    http://uec-images.ubuntu.com/
    http://support.rightscale.com/21-Community/RightScale_OSS

    CentOS:

    http://support.rightscale.com/21-Community/RightScale_OSS

    posted at: 09:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry


    /Hosting/Amazon/commandline: Accidental Server Termination Protection

    Especially if you work at the command line a lot, it seems frighteningly easy to accidentally terminate the wrong server. Not any more:

    For termination prevention[1]:

    ec2-modify-instance-attribute i-57e64936 --disable-api-termination true

    And to re-enable termination:

    ec2-modify-instance-attribute i-57e64936 --disable-api-termination false

    Note that a termination-protected server can still be stopped and started; it is just the totally destructive "termination" that is locked out.
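
    If boto is more your speed, a sketch of the same toggle (the region is an assumption):

    from boto.ec2 import connect_to_region

    conn = connect_to_region('us-east-1')
    # 'disableApiTermination' is the attribute behind --disable-api-termination
    conn.modify_instance_attribute('i-57e64936', 'disableApiTermination', True)
    # pass False instead to re-enable termination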

    [1] http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?Using_ChangingDisableAPITermination.html

    posted at: 07:49 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Sun, 02 Oct 2011


    /Hosting/Rackspace: Use Your Own Kernel on Rackspace

    Rackspace recently announced[1] that they have made it possible for Rackspace cloud server users to manage their own kernel, rather than being stuck with the Rackspace-provided kernel. I.e., it is now possible to upgrade your Rackspace cloud server, just like on Amazon AWS!

    The plan is apparently to phase in this new capability for new servers, but existing servers can be retrofitted as well. However, this process did not turn out to be quite as straightforward as I had hoped, or as these documents[2][3] led me to believe. I will try to recall the steps necessary to get from Lucid to Maverick.

    I had an Ubuntu Lucid machine. Before turning it into a Maverick machine, the first step is to MAKE A BACKUP OF YOUR SERVER. This process is highly error-prone, and the result of any small error is a server that will not boot. In fact, it is advisable to make a new server backup after each of the major steps below.

    The next step is to switch over to pv-grub and get that working:

    mkdir /boot/grub
    cp /etc/init/tty1.conf /etc/init/hvc0.conf    # pv-grub's console is hvc0, not tty1
    sed -i 's/tty1/hvc0/' /etc/init/hvc0.conf
    sed -i 's/sda/xvda/' /etc/fstab               # Xen block devices appear as xvda*
    apt-get install linux-virtual
    

    And create a /boot/grub/menu.lst that, for Lucid, looks something like this:

    default=0
    timeout=5
     
    title=DISTRO-NAME "Ubuntu 2.6.32-33-server"
        root (hd0)
        kernel /boot/vmlinuz-2.6.32-33-server ro console=hvc0 root=/dev/xvda1
        initrd /boot/initrd.img-2.6.32-33-server
    

    Now open a ticket with Rackspace and ask them to switch your server over to pv-grub. Actually, it would probably be better to do this via chat, because they will reboot the server when they make the switch, and if something is wrong with your configuration the server will not come back. If and when it does, your still-Lucid server will be running with the Ubuntu repository-provided kernel, rather than the kernel provided by the Rackspace virtualization environment.

    The next step is the more-or-less standard Ubuntu upgrade from Lucid to Maverick. At the end, DO NOT REMOVE THE LUCID KERNEL, because after you reboot at the end of the upgrade you will find that your newly upgraded Maverick server is still running the Lucid kernel. Here is where it hit the fan for me. Up to this point, despite the less than perfect documentation, Rackspace support was very helpful. But I got stuck here and they basically cut me loose, telling me they do not debug OS issues.

    ********* UPDATED SECTION **********************

    Do not linger at Maverick. Upgrade directly from Lucid to Maverick and then to Natty, WITHOUT REBOOTING. The Lucid kernel will work with the Natty userspace. If you do as I describe in the section marked "DON'T DO THE BELOW", yes, you will get a functioning Maverick server, but your Rackspace snapshot backups will not work (will not boot), as the partition label disappears from the partition in the backup. Going directly to Natty is much cleaner, easier, and safer, as only the Maverick kernel is problematic and needs special handling. Once you have upgraded the userland to Natty, BACKUP YOUR SERVER. Then, again, modify /boot/grub/menu.lst:

    default=0
    timeout=5
     
    title=DISTRO-NAME "Ubuntu 2.6.38-11"
        root (hd0)
        kernel /boot/vmlinuz-2.6.38-11-virtual ro console=hvc0 root=/dev/xvda1
        initrd /boot/initrd.img-2.6.38-11-virtual
    

    to point to the Natty kernel, and reboot.

    (Note that I tried the Ubuntu grub-legacy-ec2 package to achieve automatic updates of /boot/grub/menu.lst, but the result was a non-booting server.)

    ********* DON'T DO THE BELOW *******************

    My next stop was to call on Canonical[4] for some help, and they were helpful as well. The obvious thing to do here is just edit /boot/grub/menu.lst to point to the Maverick kernel and initrd instead of Lucid's. This did not work, due to a quirk in the Maverick kernel. To get around it, label your root partition:

    e2label /dev/xvda1 MYROOT

    Now make the following change to /boot/grub/menu.lst:

    default=0
    timeout=5
     
    title=DISTRO-NAME "Ubuntu 2.6.35-30-virtual"
        root (hd0)
        kernel /boot/vmlinuz-2.6.35-30-virtual ro console=hvc0 root=LABEL=MYROOT
        initrd /boot/initrd.img-2.6.35-30-virtual
    

    Note the "root=LABEL=MYROOT" part. And to /etc/fstab:

    proc            /proc       proc    defaults    0 0
    LABEL=MYROOT  /     ext3  defaults,errors=remount-ro,noatime    0 1
    UUID=b9f62618-9dff-4bb4-9f58-3c2c5a95625d  none  swap  sw          0 0
    

    (you can get the UUID value for swap from /dev/disk/by-uuid/)

    And now when you reboot, it should all work: a Maverick server running the Maverick kernel.

    [1] http://www.rackspace.com/cloud/blog/2011/07/13/new-feature-manage-your-own-kernel-in-linux-cloud-servers/
    [2] http://www.rackspace.com/knowledge_center/linux_kernel_management
    [3] http://www.rackspace.com/knowledge_center/index.php/Using_a_Custom_Kernel_with_pv-grub
    [4] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/821466

    posted at: 09:38 | path: /Hosting/Rackspace | permanent link to this entry

    Mon, 29 Aug 2011


    /Hosting/Amazon/EC2: Debian: Testing a Lenny to Squeeze Upgrade in Amazon AWS

    One of the things that is really lovely about cloud computing is the ability to relatively quickly and easily test something new before diving in and breaking a "real" server. In this case, an upgrade from Debian Lenny to Debian Squeeze. With about two years between Debian releases on average, there is much opportunity for something to break when trying to jump such a large computing chasm.

    Unfortunately, Debian is not so well supported (and definitely not formally supported) in the Amazon AWS environment. Here is how I went about it....

    There are actually a handful (3?) of Debian Lenny EBS images in the AWS us-east region at the moment. I had a look at them all and, as I recall, selected public ami-9a6b9af3 for my test. It has a 10G EBS root file system and a very basic Debian install onboard. (Note that using an informal public AMI like this is a security risk, as there is no way to be sure nothing malicious has been installed in the image.)

    To kick off an instance and have a look at it:

    ec2-run-instances -k clayton -t t1.micro ami-9a6b9af3

    Login to the instance and make any desired modifications (apt-get upgrade, install software, etc.). Then stop the instance:

    ec2-stop-instances i-27d7d246

    Snapshot its volume and create a new private image (you can get the volume name from ec2-describe-instances):

    ec2-create-snapshot vol-ac46eac6
    SNAPSHOT snap-fa4ce59a vol-ac46eac6 pending

    Now register a new, private Debian Lenny AMI to work with going forward:

    ec2-register -a x86_64 -b '/dev/sda1=snap-fa4ce59a:10:false' -n 'Lenny_64_Lenny_kernel' -d '64 bit Lenny with Lenny kernel' --kernel="aki-68bb5901" --ramdisk="ari-6cbb5905"
    IMAGE ami-db13d3b2

    Now start up a new instance to make sure the image we made actually works:

    ec2-run-instances -k clayton -t t1.micro ami-db13d3b2
    INSTANCE i-35acaf54 ami-db13d3b2 pending

    Login:

    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.24-10-xen #1 SMP Tue Sep 8 18:30:05 UTC 2009 x86_64 GNU/Linux

    I went ahead at this point and verified that the above Lenny kernel WOULD NOT work for an upgrade to Squeeze (because of the new udev in Squeeze).

    So now create a new image based upon the previous image, this time with a Squeeze kernel, which I will borrow from ami-80e915e9: aki-427d952b

    ec2-stop-instances i-35acaf54
    INSTANCE        i-35acaf54      running stopping
    
    ec2-create-snapshot vol-4ab61e20
    SNAPSHOT        snap-5e17be3e   vol-4ab61e20    pending
    
    ec2-register -a x86_64 -b '/dev/sda1=snap-5e17be3e:10:false' -n 'Lenny_64_Squeeze_kernel' -d '64 bit Lenny with Squeeze kernel' --kernel="aki-427d952b"
    IMAGE   ami-ab13d3c2
    
    ec2-run-instances -k clayton -t t1.micro ami-ab13d3c2
    INSTANCE        i-45b9ba24      ami-ab13d3c2                    pending
    
    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.26-2-xen-amd64 #1 SMP Mon Jun 13 18:44:16 UTC 2011 x86_64 GNU/Linux
    

    And it worked: kernel aki-427d952b is compatible with both Lenny and Squeeze.
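
    Incidentally, checking which kernel an AMI is registered with is a one-liner in boto, which beats parsing ec2-describe-images output; a sketch with placeholder credentials:

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    image = conn.get_all_images(image_ids=['ami-ab13d3c2'])[0]
    print image.name, image.kernel_id   # expect Lenny_64_Squeeze_kernel aki-427d952b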

    posted at: 05:04 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 04 Oct 2010


    /Hosting/vpslink: vpslink is too slow....

    I have had a couple of serious downtime incidents with vpslink in the last year or two. The last one was almost two days. I do not clearly recall the first one, but I believe it was more than one day, and it was perhaps more forgivable because it was caused by a catastrophic total power failure in their data center.

    However, this last incident was just (as far as I am aware) my little server, and it took them more than ten hours to even acknowledge my trouble ticket, and a total of more than forty hours to get the server running again. That is just unacceptable, even on a weekend.

    I believe that in the past they were faster in responding to trouble tickets. Maybe it has something to do with a change in management; I do not know. Under the circumstances, they agreed to cancel my account without penalty, and this week I moved my server to Rackspace.

    posted at: 06:26 | path: /Hosting/vpslink | permanent link to this entry

    Fri, 20 Aug 2010


    /Hosting/Amazon/EBS: Use python-boto to Snapshot Amazon EBS Volume

    The Amazon EC2 toolkit[1] works great, but it is a Java app, which makes it an incredibly bloated way to gain access to a couple of command-line tools on a server. python-boto[2] takes up all of about 1M, and its minimal Python dependencies were already installed on my Amazon instance. The necessary Python script is very simple:

    #!/usr/bin/env python

    thisVolume = 'vol-xxxxxxxx'
    waitSnapshot = 10 # wait increment (in seconds) while waiting for snapshot to complete

    print 'Logging into Amazon AWS....'
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')

    print ''
    print 'Stopping MySQL....'
    import os
    os.system("/etc/init.d/mysql stop")

    print ''
    print 'Beginning backup of ' + thisVolume
    snapshot = conn.create_snapshot(thisVolume)
    newSnapshot = snapshot.id
    print 'Created new volume snapshot:', newSnapshot

    import time
    waitSnapshotTotal = waitSnapshot
    snapshot = conn.get_all_snapshots(str(newSnapshot))
    print ''
    while snapshot[0].status != 'completed':
        print 'Snapshot status is ' + snapshot[0].status + ', ' \
              'wait ', waitSnapshotTotal, ' secs for the snapshot to complete before re-starting MySQL.'
        time.sleep(waitSnapshot)
        waitSnapshotTotal = waitSnapshotTotal + waitSnapshot
        snapshot = conn.get_all_snapshots(str(newSnapshot))
    print snapshot

    print ''
    print 'Restarting MySQL....'
    os.system("/etc/init.d/mysql start")

    The only gotcha was that the stale version of python-boto in Debian stable did not seem to have a "create_snapshot" function, so I just grabbed a newer version from Debian testing.

    And of course, just call this script from cron daily to make backups automatic.

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88
    [2] http://code.google.com/p/boto/

    posted at: 09:27 | path: /Hosting/Amazon/EBS | permanent link to this entry

    Fri, 25 Jun 2010


    /Hosting/Amazon/EC2: How to Backup an Amazon EBS-Boot Server

    This is a script I wrote to back up several Amazon AWS EBS-boot servers, using the Python boto library[1]. Basically it takes a snapshot of the server's root EBS volume (defined in the volumeIDs list) and then builds a new bootable Amazon Machine Image (AMI) for the server based upon that snapshot.

    #!/usr/bin/env python
    
    ##########################################################
    # Installation:
    # This script requires boto ('pip install boto') to communicate with Amazon AWS API,
    # and at least version 1.9a to handle an EBS boot instance
    # Script inspired by http://www.elastician.com/2009/12/creating-ebs-backed-ami-from-s3-backed.html
    #
    # As of early June, we need svn version of boto:
    #   pip install -e svn+http://boto.googlecode.com/svn/trunk/@1428#egg=boto
    # until next release, which should be soon per this forum post:
    #   http://groups.google.com/group/boto-users/browse_thread/thread/21dc3482ed7e49da
    ##########################################################
    
    ##########################################################
    # This script is for the backup of Amazon AWS EBS Boot servers, and will perform one backup
    # of one server, per script invocation. Note that there is nothing machine environment-specific
    # in this script at the moment, so it can be run on any machine with the correct environment,
    # to backup any other machine.
    # Usage:
    # * adjust the "Constants" section to reflect you current account & server environment.
    # * Choose which server you would like to backup in the "which server" section.
    # * Install the ElasticFox plugin in Firefox to observer the results of the backup process.
    # * Run the script to perform a backup of the chosen server.
    ##########################################################
    
    ##########################################################
    # Constants
    ##########################################################
    instances = {"master":"i-xxxxxxxx",   "staging":"i-xxxxxxxx",   "production":"i-xxxxxxxx"}
    volumeIDs = {"master":"vol-xxxxxxxx", "staging":"vol-xxxxxxxx", "production":"vol-xxxxxxxx"}
    ownerID = 'xxxxxxxxxxxx' # I got this from ElasticFox, should be AWS account specific
    waitSnapshot = 10 # wait increment (in seconds) while waiting for snapshot to complete
    
    ##########################################################
    # Which server is this script backing up?
    ##########################################################
    # thisServer = "master"
    thisServer = "staging"
    # thisServer = "production"
    
    thisInstance = instances[thisServer]
    thisVolume = volumeIDs[thisServer]
    
    ##########################################################
    print ''
    print '##########################################################'
    print 'Backup of ' + thisServer + ' server:'
    print 'Logging into Amazon AWS....'
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('xxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    
    ##########################################################
    from datetime import datetime
    timeStamp = str(datetime.today())
    
    print ''
    print timeStamp
    print 'Snapshotting the ' + thisServer + " server's boot volume...."
    comment = thisServer + ' ' + timeStamp
    name = comment.replace(":",".") # AWS does not like ":" in AMI names
    name = name.replace(" ","_") # make life easier when deleting stale images
    snapshot = conn.create_snapshot(thisVolume, comment)
    newSnapshot = snapshot.id
    print 'Created new volume snapshot:', newSnapshot
    
    ##########################################################
    import time
    waitSnapshotTotal = waitSnapshot
    snapshot = conn.get_all_snapshots(str(newSnapshot))
    print ''
    while snapshot[0].status != 'completed':
        print 'Snapshot status is ' + snapshot[0].status + ', ' \
              'wait ', waitSnapshotTotal, ' secs for the snapshot to complete before building the AMI.'
        time.sleep(waitSnapshot)
        waitSnapshotTotal = waitSnapshotTotal + waitSnapshot
        snapshot = conn.get_all_snapshots(str(newSnapshot))
    
    ##########################################################
    print ''
    print 'Building a bootable AMI based upon this snapshot....'
    
    # set up the block device mapping for building an EBS boot AMI
    from boto.ec2.blockdevicemapping import EBSBlockDeviceType, BlockDeviceMapping
    ebs = EBSBlockDeviceType()
    ebs.snapshot_id = newSnapshot
    block_map = BlockDeviceMapping()
    block_map['/dev/sda1'] = ebs
    
    # use the same kernel & ramdisk from running server in the new AMI:
    attribute = conn.get_instance_attribute(thisInstance, 'kernel')
    kernelID = attribute['kernel']
    print 'kernel ID = ', kernelID
    attribute = conn.get_instance_attribute(thisInstance, 'ramdisk')
    ramdiskID = attribute['ramdisk']
    print 'ramdisk ID = ', ramdiskID
    
    # create the new AMI:
    result = conn.register_image(name=name,
        description=timeStamp,
        architecture='i386',
        kernel_id=kernelID,
        ramdisk_id=ramdiskID,
        root_device_name='/dev/sda1',
        block_device_map=block_map)
    print 'The new AMI ID = ', result
    

    [1] http://code.google.com/p/boto/

    posted at: 04:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 15 Feb 2010


    /Hosting/Amazon/EC2: Amazon AWS: Information You Need to Give Your System Administrator

    Amazon AWS is designed to let you give someone else the privileges necessary to control your Amazon servers, without giving up the password of your Amazon AWS account. Here are a couple of very thorough treatments of the subject of Amazon AWS credentials: [5][6].

    There are two sets of keys your System Administrator is probably going to need in order to broadly manage your account and to access and control your servers and data stores:

    1. AWS Access Key / Secret Access Key
    2. X.509 Certificate and Private Key

    These two methods of authentication are also explained in the "Authentication" section of [1], and both sets of keys can be obtained from "Your Account" --> "Access identifiers" in your Amazon AWS account.

    The "Access Key / Secret Access Key" is comprised of two long strings, much longer then what one typically thinks of as a "password". This is what a System Administrator needs most of the time for most Amazon AWS management tasks. The ElasticFox Firefox Extension[4], for instance, uses these for authentication. Following are examples of what these keys look like:

    Access key: AKIAJQXQL474IJIOJATA
    Secret Access Key: XQbln80m5ms8a4xUSxPd7xmyF/7IM9hM24bv9aez
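
    These two strings are, for example, exactly what the python-boto scripts elsewhere on this blog pass when opening a connection (using the sample values above, which are not live credentials):

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('AKIAJQXQL474IJIOJATA',
                         'XQbln80m5ms8a4xUSxPd7xmyF/7IM9hM24bv9aez')
    print conn.get_all_instances()   # any API call now runs with these credentials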

    The "X.509 certificate" is a pair of encryption keys (each of them much longer then either elements of the "Access Key / Secret Access Key") primarily used by the Java-based Amazon EC2 API Tools[2], as explained here[3].

    The certificate looks like this:

    -----BEGIN CERTIFICATE-----
    MIICdzCCAeCgAwIBAgIGAOfo0EVXMA0GCSqGSIb3DQEBBQUAMFMxCzAJBgNVBAYT
    AlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMQwwCgYDVQQLEwNBV1MxITAfBgNVBAMT
    GEFXUyBMaW1pdGVkLUFzc3VyYW5jZSBDQTAeFw0wODA5MjcyMzU3MDdaFw0wOTA5
    MjcyMzU3MDdaMFIxCzAJBgNVBAYTAlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMRcw
    FQYDVQQLEw5BV1MtRGV2ZWxvcGVyczEVMBMGA1UEAxMMdWx3MTFzaTFjYzhrMIGf
    MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCmtXexIvZGTtVvRaulv5ibeJR04W9L
    r1ET/hmfQDMrhojGURI+7HYWUtZwxBEUfU/L7JkSEgvtgpCpB4ulLAtzpNcd/aJ0
    lL7gF6B0szIx3LSNX/uidt9JkFUNeCyJygMbGMQsK/V496KqHIbwaHKvB4gqGM5r
    Tpxuqv1Tu6SvQwIDAQABo1cwVTAOBgNVHQ8BAf8EBAMCBaAwFgYDVR0lAQH/BAww
    CgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUPWGfgV0fN+glJXzs
    VPxSI3IcI4UwDQYJKoZIhvcNAQEFBQADgYEAcC6rIJiRSwSSx4+pDo/xcXsqX6jD
    /w9gnE/BnAvAtPyR5sH5x3ksGgmH0Z3VFtFk0Zika/EYACCFVpA76dRQeszYamPJ
    gaPwAZo6g7DK4YhWWX9b3p2waTWASUxzbb0ivRiL1bC5zLwin2MfAzMcwI4oYx1B
    BCvS2d6fGxuuXrQ=
    -----END CERTIFICATE-----
    

    And the private key looks like this:

    -----BEGIN PRIVATE KEY-----
    MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMaAtxIVZslDohGnIIXJ/V8HTvzm
    w7/wROrIDIAN7QIGW4G14y7Sy3IHM56Y89pCFuvtzOwX7dAKjAIho8SE1IWiG4XxojGrXkA4Y8HS
    5rxUtj3DrAV+y60QEnwLQzICYPnSqG7w239J1TpPDBnCprec+qziUNu2iAhXMbbJCei9AgMBAAEC
    gYBrivykDXg8finmCneyRDbDL0B5/8P5zwBneq5bCjBnsm4NHi/RBF84jfJHcHJcwwWMGK+3EVfE
    KJKl7Pe+1oAUWd423ARd1AsPfjQhBZ/RXXhNpXovPz7PTFLOnzQbOmtkl59xPo67bIs2gWlu/0jj
    6MXqGLpEp1JI1Z2mnFI6OQJBAOfDLRdUGekgBz5ZKpu8skzSvnVGxL/YGRpXOPKm08RuTMqRPvhW
    cn39nQZcjb9UYzdq2Av6cqwXFdMjcXBZw4MCQQDbQxndNYWmwH9ATH8Bg/D8/U0ciDO22NMj/Yti
    ToLLC0xStt6KXWFjyD/aAwz+3dmVSyvJK1s6stE0xUKiuq6/AkEAmdiF5iZ9zLLmHA00q4znDvgW
    VeNUV8UrZMDhnLIBgTN25kDkfBVmixv/UGm/7nImKnNSVyE5XeM1KaMtelcb4QJAE1xyfTkLqzTW
    R7w5fs3CyuQnGfzg7CVrR4NM+opKPFmsDKW/MuKaBfCZyst4K001uFwh6qqcbKt7k7hTcQEhCwJA
    EdAIyKc80eU5KpkWNwbEL3AqK4MYdihXN2/qAt+KVNNUYROzudpDuW1K96p28CaoavV0n81BWX7p
    UvidCsHK+g==
    -----END PRIVATE KEY-----
    

    [1] http://clouddb.info/2009/05/17/using-and-managing-aws-part-3-aws-security/
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351
    [3] http://developer.amazonwebservices.com/connect/entry!default.jspa?categoryID=100&externalID=1791&printable=true
    [4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609
    [5] http://alestic.com/2009/11/ec2-credentials
    [6] http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html

    posted at: 06:31 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 11 Nov 2009


    /Hosting/Amazon/EC2: Transferring an Amazon EC2 Image to Another Account

    This[1] would be the document of record from Amazon....

    First, configure ElasticFox so that there is one window connected to each of the two accounts (sending and receiving). Also set up two terminals, each one configured with the necessary environment variables to use the standard EC2 command line tools[2] for one of the accounts.

    Second, a logistical observation.... As I am not the owner of the Amazon accounts I am using, but rather a consultant operating the accounts on behalf of the owners, I do not have access to the "AWS Management Console". I "only" have the privileges conferred by possessing the account keys (which is actually quite a lot). Using these privileges I can transfer a server AMI, but there appears to be no way to transfer an EBS store between accounts. So the first step is to log in to the sending account, mount any desired EBS stores, copy their contents into the volatile area of the server, and create a new AMI that contains both the server and the data.

    One can get the list of AMIs belonging to the sending account thusly:

    ec2-describe-images -o self

    Now (briefly) make the just-created AMI containing the data public. (One can make the AMI visible only to one account, but I am under time pressure and I do not at the moment know the AWS account number of the receiving account):

    ec2-modify-image-attribute ami-1573907c -launch-permission --add all

    Now in the receiving account start the AMI (point and click with ElasticFox), then back at the command line open SSH in the firewall (necessary if this is a new account):

    ec2-authorize default -p 22

    In a terminal, log in to your running server in the receiving account, using the keypair you used in ElasticFox when the server was started and the "public DNS" for the server reported by ElasticFox:

    ssh -i ~/ec2/id-keypair root@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com

    Go back to the sending account and reset the transferred AMI to the private state:

    ec2-reset-image-attribute ami-1573907c -l

    Double-check the permissions on an AMI with:

    ec2-describe-image-attribute ami-1573907c -l

    (It would appear to return nothing when there are no non-private permissions set....)
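
    For reference, a boto sketch of those three permission commands; with the receiving account number in hand, passing user_ids=['<account-number>'] instead of groups=['all'] would share the AMI privately:

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    conn.modify_image_attribute('ami-1573907c', attribute='launchPermission',
                                operation='add', groups=['all'])    # make public
    print conn.get_image_attribute('ami-1573907c', attribute='launchPermission')
    conn.reset_image_attribute('ami-1573907c', attribute='launchPermission')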

    Back to the receiving account: use ElasticFox to create a new EBS store, format it, then move the data from the server's volatile storage into the EBS. Also create an ElasticIP and associate it with the new server. (This means that the server's "public DNS" will change, and the terminal login must be redone.) Open the http port (80).

    On the new server, edit the Apache configuration. Delete unneeded MySQL databases. Start Apache and MySQL. Test.

    Change the keys in the EBS snapshot script and test.

    Change the keys in the server snapshot script and make a bundle of this just-started AMI for the new account.

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?entryID=530
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88

    posted at: 08:05 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Thu, 15 Oct 2009


    /Hosting/Godaddy: Goodbye Godaddy!

    For a while now I have been somewhat unhappy with Godaddy. Living in China, I find their website bloated and very slow. Their user interface is bloated and complicated. And on top of that, in the slow process of transferring my domains out over a period of months, I have found their domain transfer-out process to be extremely complicated, and seemingly designed to baffle and confuse casual users (read: discourage domain transfers....)

    May I recommend nearlyfreespeech.net[1], with a very fast, clean website, comparable domain name prices, and much cheaper website hosting prices.

    [1] http://blog.langex.net/index.cgi/Hosting/NearlyFreeSpeech/

    posted at: 00:04 | path: /Hosting/Godaddy | permanent link to this entry