Expat-IT Tech Bits


    Creative Commons License
    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
    PyBlosxom

    This site has no ads. To help with hosting, crypto donations are accepted:
    Bitcoin: 1JErV8ga9UY7wE8Bbf1KYsA5bkdh8n1Bxc
    Zcash: zcLYqtXYFEWHFtEfM6wg5eCV8frxWtZYkT8WyxvevzNC6SBgmqPS3tkg6nBarmzRzWYAurgs4ThkpkD5QgiSwxqoB7xrCxs

    Fri, 04 Jul 2014


    /Hosting/Amazon/S3: Uploading Static Website Content to S3 with s3cmd

    Hosting a static website on S3 is fairly straightforward[1], unless you have a big site containing one or more nested subdirectories with hundreds or thousands of small files. The Amazon console's upload function cannot handle this elegantly. s3cmd[2] to the rescue:

    s3cmd --acl-public --guess-mime-type -r put * s3://your-bucket-name

    The --guess-mime-type option is particularly obscure: if it is not set for each file, then the default MIME type is "binary", and the browser just wants to download anything you point it to on the site.
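    The guessing that s3cmd does is, in spirit, the same extension-based lookup as Python's standard mimetypes module. A quick sketch (the filenames are hypothetical) of why the option matters:

    ```python
    import mimetypes

    # Without a guessed type, S3 serves a generic binary type and
    # browsers download files instead of rendering them.
    DEFAULT_TYPE = 'application/octet-stream'

    def mime_for(filename):
        # Guess the MIME type from the file extension, falling back
        # to the generic binary type when the extension is unknown.
        guessed, _encoding = mimetypes.guess_type(filename)
        return guessed or DEFAULT_TYPE

    print(mime_for('index.html'))        # text/html
    print(mime_for('style.css'))         # text/css
    print(mime_for('some-random-file'))  # application/octet-stream
    ```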

    [1] http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html
    [2] http://s3tools.org/s3cmd

    posted at: 03:53 | path: /Hosting/Amazon/S3 | permanent link to this entry

    Fri, 20 Jul 2012


    /Hosting/Amazon/commandline: An Easy Way to Snapshot Your AWS Server

    ec2-create-image --region ap-northeast-1 <instanceID> -n "name" -d "description" --no-reboot

    This basically does the same thing as the "Create Image" menu option in the Amazon console, with the additional option of NOT taking the server down while the image is being taken, something I previously used a big honking Python script to do.
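    For comparison, the boto library used elsewhere on this site exposes the same operation as EC2Connection.create_image. A minimal sketch, where the connection, instance ID, name, and description are all placeholders:

    ```python
    # Sketch only: `conn` stands in for a boto EC2Connection; the IDs
    # and names below are hypothetical.

    def snapshot_instance(conn, instance_id, name, description):
        # boto's create_image mirrors ec2-create-image, including the
        # no-reboot behaviour used above.
        return conn.create_image(instance_id, name,
                                 description=description, no_reboot=True)

    # With a real connection:
    #   from boto.ec2.connection import EC2Connection
    #   conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    #   ami_id = snapshot_instance(conn, 'i-xxxxxxxx', 'name', 'description')
    ```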

    posted at: 09:43 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Thu, 19 Jul 2012


    /Hosting/Amazon/commandline: Changing the Size/Type of an AWS Instance

    First stop the instance. Then run this command:

    ec2-modify-instance-attribute --region ap-northeast-1 <instanceID> -t m1.large

    This will change the type of <instanceID> from whatever it was to m1.large. Then start the server, and observe the new size.
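    The same stop/modify/start sequence can be sketched with boto. The connection and IDs are placeholders, and a real script would poll until the instance is actually stopped before modifying it:

    ```python
    # Sketch: resize an instance via a boto EC2Connection (`conn`).
    # The instance ID and type below are hypothetical.

    def resize_instance(conn, instance_id, new_type):
        conn.stop_instances([instance_id])
        # ... in real use, poll conn.get_all_instances() here until the
        # instance state is 'stopped' before continuing.
        conn.modify_instance_attribute(instance_id, 'instanceType', new_type)
        conn.start_instances([instance_id])
    ```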

    posted at: 21:41 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Thu, 12 Jul 2012


    /Hosting/Amazon/commandline: Adding Ephemeral Storage to your EBS-backed EC2 AWS Machine

    As of this writing, most EBS-backed images available do not have ephemeral storage already baked in. It can be done[1].

    Take an image of your current machine, to which you wish to add ephemeral storage; let's call it <new-image-id>. Then start up a new replacement machine, as follows:

    ec2-run-instances --region <regionid> <new-image-id> -k <keyid> -g <groupid> -t <image-type> -b '/dev/sdb=ephemeral0'

    Note that "ephemeral0" is the important part; you can assign it to whatever device name (/dev/sdb here) you wish. There is apparently also an "ephemeral1" for swap, for those who feel a need for that. If you are using an Ubuntu image, at least, you should find this in the /etc/fstab of your new machine:

    /dev/sdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 0

    and a whole bunch of extra space under /mnt. In Ubuntu at least, this is preserved through reboots and new AMIs.

    [1] http://theagileadmin.com/2010/03/23/amazon-ec2-ebs-instances-and-ephemeral-storage/

    posted at: 03:43 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Sun, 06 Nov 2011


    /Hosting/Amazon/EC2: Increase EBS Root Volume Size of an EC2 Server (Complicated)

    Complicated because this is a RightScale image with multiple partitions, so resize2fs did not work the first time per [1].

    Pick the image you want to resize (this one has an 8G root volume):

    $ ec2-describe-images ami-e00df089
    IMAGE ami-e00df089 944964708905/rightimage_debian_6.0.1_amd64_20110406.1_ebs 944964708905 available public x86_64 machine aki-4e7d9527 ebs paravirtual xen
    BLOCKDEVICEMAPPING  /dev/sda   snap-b62f31da   8
    

    Start a server up with a 25G volume instead:

    $ ec2-run-instances -t t1.micro --key clayton --block-device-mapping /dev/sda=:25 ami-e00df089

    Log in and see (in part):

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda2            5.0G  1.2G  3.6G  24% /
    

    Here is where things get funky, as "resize2fs /dev/xvda2" will not work per [1] because there are two other partitions: xvda1 is /boot and xvda3 is swap. [2] to the rescue. Hoping that it was the swap partition that was in the way, I got rid of it and rebuilt the partition table as follows: deleting partitions 2 and 3, creating a new partition 2 (accepting the defaults), and then writing the new partition table to disk (w):

    # fdisk /dev/xvda
    d 2
    d 3
    n p 2
    w
    

    Do not forget to remove the swap line from /etc/fstab, and reboot. Now:

    # resize2fs /dev/xvda2

    works! And all is well. Now, as we have done before, create a new image to save our work:

    ec2-create-snapshot vol-e41af089
    SNAPSHOT snap-697ad00b vol-e41af089 pending

    Once the snapshot is finished:

    ec2-register -a x86_64 -b '/dev/sda=snap-697ad00b:25:false' -n 'Squeeze_64' -d '64 bit Squeeze' --kernel="aki-4e7d9527"

    [1] http://alestic.com/2009/12/ec2-ebs-boot-resize
    [2] http://bioteam.net/2010/07/how-to-resize-an-amazon-ec2-ami-when-boot-disk-is-on-ebs/

    posted at: 01:10 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 02 Nov 2011


    /Hosting/Amazon/EC2: Sources of Official Amazon Images

    I seem to always be looking this stuff up. Time to write it down.

    Debian:

    http://wiki.debian.org/Cloud/AmazonEC2Image
    http://support.rightscale.com/21-Community/RightScale_OSS

    Ubuntu:

    http://uec-images.ubuntu.com/
    http://support.rightscale.com/21-Community/RightScale_OSS

    CentOS:

    http://support.rightscale.com/21-Community/RightScale_OSS

    posted at: 09:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry


    /Hosting/Amazon/commandline: Accidental Server Termination Protection

    Especially if you work at the command line a lot, it seems frighteningly easy to accidentally terminate the wrong server. Not anymore:

    For termination prevention[1]:

    ec2-modify-instance-attribute i-57e64936 --disable-api-termination true

    And to re-enable termination:

    ec2-modify-instance-attribute i-57e64936 --disable-api-termination false

    Note that a termination-protected server can still be stopped and started; it is just the totally destructive "termination" that is locked out.
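    For those scripting this, the equivalent boto call is EC2Connection.modify_instance_attribute. A sketch, with a placeholder connection and instance ID:

    ```python
    # Sketch only: `conn` stands in for a boto EC2Connection, and the
    # instance ID passed in is hypothetical.

    def set_termination_protection(conn, instance_id, protected):
        # 'disableApiTermination' is the EC2 attribute behind the
        # --disable-api-termination flag used above.
        return conn.modify_instance_attribute(instance_id,
                                              'disableApiTermination',
                                              protected)
    ```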

    [1] http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?Using_ChangingDisableAPITermination.html

    posted at: 07:49 | path: /Hosting/Amazon/commandline | permanent link to this entry

    Mon, 29 Aug 2011


    /Hosting/Amazon/EC2: Debian: Testing a Lenny to Squeeze Upgrade in Amazon AWS

    One of the things that is really lovely about cloud computing is the ability to relatively quickly and easily test something new, before diving in and breaking a "real" server. In this case, an upgrade from Debian Lenny to Debian Squeeze. With about two years between Debian releases, there is much opportunity for something to break when trying to jump such a large computing chasm.

    Unfortunately, Debian is not well supported, and definitely not formally supported, in the Amazon AWS environment. Here is how I went about it....

    There are actually a handful (3?) of Debian Lenny EBS images in AWS us-east region at the moment. I had a look at them all and, as I recall, selected public ami-9a6b9af3 for my test. It has a 10G EBS root file system, and a very basic Debian install onboard. (Note that using an informal public AMI like this is a security risk, as there is no way to be sure nothing malicious has been installed in the image.)

    To kick off an instance and have a look at it:

    ec2-run-instances -k clayton -t t1.micro ami-9a6b9af3
    Log in to the instance and make any desired modifications (apt-get upgrade, install software, etc.). Then stop the instance:
    ec2-stop-instances i-27d7d246

    Snapshot its volume and create a new private image (you can get the volume name from ec2-describe-instances):

    ec2-create-snapshot vol-ac46eac6
    SNAPSHOT snap-fa4ce59a vol-ac46eac6 pending

    Now register a new, private Debian Lenny AMI to work with going forward:

    ec2-register -a x86_64 -b '/dev/sda1=snap-fa4ce59a:10:false' -n 'Lenny_64_Lenny_kernel' -d '64 bit Lenny with Lenny kernel' --kernel="aki-68bb5901" --ramdisk="ari-6cbb5905"
    IMAGE ami-db13d3b2

    Now start up a new instance to make sure the image we made actually works:

    ec2-run-instances -k clayton -t t1.micro ami-db13d3b2
    INSTANCE i-35acaf54 ami-db13d3b2 pending

    Log in:

    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.24-10-xen #1 SMP Tue Sep 8 18:30:05 UTC 2009 x86_64 GNU/Linux

    I went ahead at this point and verified that the above Lenny kernel WOULD NOT work for an upgrade to Squeeze (because of the new udev in Squeeze).

    So now create a new image based upon the previous one, this time with a Squeeze kernel, aki-427d952b, which I will borrow from ami-80e915e9:

    ec2-stop-instances i-35acaf54
    INSTANCE        i-35acaf54      running stopping
    
    ec2-create-snapshot vol-4ab61e20
    SNAPSHOT        snap-5e17be3e   vol-4ab61e20    pending
    
    ec2-register -a x86_64 -b '/dev/sda1=snap-5e17be3e:10:false' -n 'Lenny_64_Squeeze_kernel' -d '64 bit Lenny with Squeeze kernel' --kernel="aki-427d952b"
    IMAGE   ami-ab13d3c2
    
    ec2-run-instances -k clayton -t t1.micro ami-ab13d3c2
    INSTANCE        i-45b9ba24      ami-ab13d3c2                    pending
    
    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.26-2-xen-amd64 #1 SMP Mon Jun 13 18:44:16 UTC 2011 x86_64 GNU/Linux
    

    And it worked: kernel aki-427d952b is compatible with both Lenny and Squeeze.

    posted at: 05:04 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Fri, 20 Aug 2010


    /Hosting/Amazon/EBS: Use python-boto to Snapshot Amazon EBS Volume

    The Amazon EC2 toolkit[1] works great, but it is a Java app, which makes it an incredibly bloated way to gain access to a couple of command-line tools on a server. python-boto[2] takes up all of about 1M, and its minimal Python dependencies were already installed on my Amazon instance. The necessary Python script is very simple:

    #!/usr/bin/env python
    thisVolume = 'vol-xxxxxxxx'
    waitSnapshot = 10 # wait increment (in seconds) while waiting for snapshot to complete
    
    print 'Logging into Amazon AWS....'
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    
    print ''
    print 'Stopping MySQL....'
    import os
    os.system("/etc/init.d/mysql stop")
    
    print ''
    print 'Beginning backup of ' + thisVolume
    snapshot = conn.create_snapshot(thisVolume)
    newSnapshot = snapshot.id
    print 'Created new volume snapshot:', newSnapshot
    
    import time
    waitSnapshotTotal = waitSnapshot
    snapshot = conn.get_all_snapshots(str(newSnapshot))
    print ''
    while snapshot[0].status != 'completed':
        print 'Snapshot status is ' + snapshot[0].status + ', ' \
              'wait ', waitSnapshotTotal, ' secs for the snapshot to complete before re-starting MySQL.'
        time.sleep(waitSnapshot)
        waitSnapshotTotal = waitSnapshotTotal + waitSnapshot
        snapshot = conn.get_all_snapshots(str(newSnapshot))
    print snapshot
    
    print ''
    print 'Restarting MySQL....'
    os.system("/etc/init.d/mysql start")
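
    The busy-wait in the middle of the script is a pattern worth factoring out. Here is a small sketch with the status lookup and sleep function injected, so it can be exercised without touching AWS:

    ```python
    import time

    def wait_until_complete(get_status, interval=10, sleep=time.sleep):
        # Poll get_status() until it returns 'completed', sleeping
        # `interval` seconds between polls; returns total seconds waited.
        waited = 0
        while get_status() != 'completed':
            sleep(interval)
            waited += interval
        return waited

    # In the script above, get_status would be something like:
    #   lambda: conn.get_all_snapshots(str(newSnapshot))[0].status
    ```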

    The only gotcha was that the stale version of python-boto in Debian stable did not seem to have a "create_snapshot" function, so I just grabbed a newer version from Debian testing.

    And of course, just call this script from cron daily to make backups automatic.
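    For instance, a root crontab entry along these lines would do it (the script path and hour are hypothetical):

    ```
    0 3 * * * /root/bin/ebs-snapshot.py >> /var/log/ebs-snapshot.log 2>&1
    ```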

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88
    [2] http://code.google.com/p/boto/

    posted at: 09:27 | path: /Hosting/Amazon/EBS | permanent link to this entry

    Fri, 25 Jun 2010


    /Hosting/Amazon/EC2: How to Backup an Amazon EBS-Boot Server

    This is a script I wrote to backup several Amazon AWS EBS-boot servers, using the Python boto library[1]. Basically it takes a snapshot of the server's root EBS volume (defined in volumeIDs list) and then builds a new bootable Amazon Machine Image (AMI) for the server based upon that snapshot.

    #!/usr/bin/env python
    
    ##########################################################
    # Installation:
    # This script requires boto ('pip install boto') to communicate with Amazon AWS API,
    # and at least version 1.9a to handle an EBS boot instance
    # Script inspired by http://www.elastician.com/2009/12/creating-ebs-backed-ami-from-s3-backed.html
    #
    # As of early June, we need svn version of boto:
    #   pip install -e svn+http://boto.googlecode.com/svn/trunk/@1428#egg=boto
    # until next release, which should be soon per this forum post:
    #   http://groups.google.com/group/boto-users/browse_thread/thread/21dc3482ed7e49da
    ##########################################################
    
    ##########################################################
    # This script is for the backup of Amazon AWS EBS Boot servers, and will perform one backup
    # of one server, per script invocation. Note that there is nothing machine environment-specific
    # in this script at the moment, so it can be run on any machine with the correct environment,
    # to backup any other machine.
    # Usage:
    # * Adjust the "Constants" section to reflect your current account & server environment.
    # * Choose which server you would like to back up in the "which server" section.
    # * Install the ElasticFox plugin in Firefox to observe the results of the backup process.
    # * Run the script to perform a backup of the chosen server.
    ##########################################################
    
    ##########################################################
    # Constants
    ##########################################################
    instances = {"master":"i-xxxxxxxx",   "staging":"i-xxxxxxxx",   "production":"i-xxxxxxxx"}
    volumeIDs = {"master":"vol-xxxxxxxx", "staging":"vol-xxxxxxxx", "production":"vol-xxxxxxxx"}
    ownerID = 'xxxxxxxxxxxx' # I got this from ElasticFox, should be AWS account specific
    waitSnapshot = 10 # wait increment (in seconds) while waiting for snapshot to complete
    
    ##########################################################
    # Which server is this script backing up?
    ##########################################################
    # thisServer = "master"
    thisServer = "staging"
    # thisServer = "production"
    
    thisInstance = instances[thisServer]
    thisVolume = volumeIDs[thisServer]
    
    ##########################################################
    print ''
    print '##########################################################'
    print 'Backup of ' + thisServer + ' server:'
    print 'Logging into Amazon AWS....'
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('xxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    
    ##########################################################
    from datetime import datetime
    timeStamp = str(datetime.today())
    
    print ''
    print timeStamp
    print 'Snapshotting the boot volume of the ' + thisServer + ' server....'
    comment = thisServer + ' ' + timeStamp
    name = comment.replace(":",".") # AWS does not like ":" in AMI names
    name = name.replace(" ","_") # make life easier when deleting stale images
    snapshot = conn.create_snapshot(thisVolume, comment)
    newSnapshot = snapshot.id
    print 'Created new volume snapshot:', newSnapshot
    
    ##########################################################
    import time
    waitSnapshotTotal = waitSnapshot
    snapshot = conn.get_all_snapshots(str(newSnapshot))
    print ''
    while snapshot[0].status != 'completed':
        print 'Snapshot status is ' + snapshot[0].status + ', ' \
              'wait ', waitSnapshotTotal, ' secs for the snapshot to complete before building the AMI.'
        time.sleep(waitSnapshot)
        waitSnapshotTotal = waitSnapshotTotal + waitSnapshot
        snapshot = conn.get_all_snapshots(str(newSnapshot))
    
    ##########################################################
    print ''
    print 'Building a bootable AMI based upon this snapshot....'
    
    # setup for building an EBS boot snapshot
    from boto.ec2.blockdevicemapping import EBSBlockDeviceType, BlockDeviceMapping
    ebs = EBSBlockDeviceType()
    ebs.snapshot_id = newSnapshot
    block_map = BlockDeviceMapping()
    block_map['/dev/sda1'] = ebs
    
    # use the same kernel & ramdisk from running server in the new AMI:
    attribute = conn.get_instance_attribute(thisInstance, 'kernel')
    kernelID = attribute['kernel']
    print 'kernel ID = ', kernelID
    attribute = conn.get_instance_attribute(thisInstance, 'ramdisk')
    ramdiskID = attribute['ramdisk']
    print 'ramdisk ID = ', ramdiskID
    
    # create the new AMI:
    result = conn.register_image(name=name,
        description=timeStamp,
        architecture='i386',
        kernel_id=kernelID,
        ramdisk_id=ramdiskID,
        root_device_name='/dev/sda1',
        block_device_map=block_map)
    print 'The new AMI ID = ', result
    

    [1] http://code.google.com/p/boto/

    posted at: 04:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 15 Feb 2010


    /Hosting/Amazon/EC2: Amazon AWS: Information You Need to Give Your System Administrator

    Amazon AWS is designed to be able to give someone else the necessary privileges to control one's Amazon servers, without giving up the password of your Amazon AWS account. Here are a couple of very thorough treatments on the subject of Amazon AWS credentials: [5][6].

    In order to broadly manage your account and its servers, there are two sets of keys your System Administrator is probably going to need to access and control your servers and data stores:

    1. AWS Access Key / Secret Access Key
    2. X.509 Certificate and Private Key

    These two methods of authentication are also explained in the "Authentication" section of [1], and both sets of keys can be obtained from "Your Account" --> "Access identifiers" in your Amazon AWS account.

    The "Access Key / Secret Access Key" consists of two long strings, much longer than what one typically thinks of as a "password". This is what a System Administrator needs most of the time, for most Amazon AWS management tasks. The ElasticFox Firefox Extension[4], for instance, uses these for authentication. Following are examples of what these keys look like:

    Access key: AKIAJQXQL474IJIOJATA
    Secret Access Key: XQbln80m5ms8a4xUSxPd7xmyF/7IM9hM24bv9aez

    The "X.509 certificate" is a pair of encryption keys (each of them much longer than either element of the "Access Key / Secret Access Key") primarily used by the Java-based Amazon EC2 API Tools[2], as explained here[3].

    The certificate looks like this:

    -----BEGIN CERTIFICATE-----
    MIICdzCCAeCgAwIBAgIGAOfo0EVXMA0GCSqGSIb3DQEBBQUAMFMxCzAJBgNVBAYT
    AlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMQwwCgYDVQQLEwNBV1MxITAfBgNVBAMT
    GEFXUyBMaW1pdGVkLUFzc3VyYW5jZSBDQTAeFw0wODA5MjcyMzU3MDdaFw0wOTA5
    MjcyMzU3MDdaMFIxCzAJBgNVBAYTAlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMRcw
    FQYDVQQLEw5BV1MtRGV2ZWxvcGVyczEVMBMGA1UEAxMMdWx3MTFzaTFjYzhrMIGf
    MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCmtXexIvZGTtVvRaulv5ibeJR04W9L
    r1ET/hmfQDMrhojGURI+7HYWUtZwxBEUfU/L7JkSEgvtgpCpB4ulLAtzpNcd/aJ0
    lL7gF6B0szIx3LSNX/uidt9JkFUNeCyJygMbGMQsK/V496KqHIbwaHKvB4gqGM5r
    Tpxuqv1Tu6SvQwIDAQABo1cwVTAOBgNVHQ8BAf8EBAMCBaAwFgYDVR0lAQH/BAww
    CgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUPWGfgV0fN+glJXzs
    VPxSI3IcI4UwDQYJKoZIhvcNAQEFBQADgYEAcC6rIJiRSwSSx4+pDo/xcXsqX6jD
    /w9gnE/BnAvAtPyR5sH5x3ksGgmH0Z3VFtFk0Zika/EYACCFVpA76dRQeszYamPJ
    gaPwAZo6g7DK4YhWWX9b3p2waTWASUxzbb0ivRiL1bC5zLwin2MfAzMcwI4oYx1B
    BCvS2d6fGxuuXrQ=
    -----END CERTIFICATE-----
    

    And the private key looks like this:

    -----BEGIN PRIVATE KEY-----
    MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMaAtxIVZslDohGnIIXJ/V8HTvzm
    w7/wROrIDIAN7QIGW4G14y7Sy3IHM56Y89pCFuvtzOwX7dAKjAIho8SE1IWiG4XxojGrXkA4Y8HS
    5rxUtj3DrAV+y60QEnwLQzICYPnSqG7w239J1TpPDBnCprec+qziUNu2iAhXMbbJCei9AgMBAAEC
    gYBrivykDXg8finmCneyRDbDL0B5/8P5zwBneq5bCjBnsm4NHi/RBF84jfJHcHJcwwWMGK+3EVfE
    KJKl7Pe+1oAUWd423ARd1AsPfjQhBZ/RXXhNpXovPz7PTFLOnzQbOmtkl59xPo67bIs2gWlu/0jj
    6MXqGLpEp1JI1Z2mnFI6OQJBAOfDLRdUGekgBz5ZKpu8skzSvnVGxL/YGRpXOPKm08RuTMqRPvhW
    cn39nQZcjb9UYzdq2Av6cqwXFdMjcXBZw4MCQQDbQxndNYWmwH9ATH8Bg/D8/U0ciDO22NMj/Yti
    ToLLC0xStt6KXWFjyD/aAwz+3dmVSyvJK1s6stE0xUKiuq6/AkEAmdiF5iZ9zLLmHA00q4znDvgW
    VeNUV8UrZMDhnLIBgTN25kDkfBVmixv/UGm/7nImKnNSVyE5XeM1KaMtelcb4QJAE1xyfTkLqzTW
    R7w5fs3CyuQnGfzg7CVrR4NM+opKPFmsDKW/MuKaBfCZyst4K001uFwh6qqcbKt7k7hTcQEhCwJA
    EdAIyKc80eU5KpkWNwbEL3AqK4MYdihXN2/qAt+KVNNUYROzudpDuW1K96p28CaoavV0n81BWX7p
    UvidCsHK+g==
    -----END PRIVATE KEY-----
    

    [1] http://clouddb.info/2009/05/17/using-and-managing-aws-part-3-aws-security/
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351
    [3] http://developer.amazonwebservices.com/connect/entry!default.jspa?categoryID=100&externalID=1791&printable=true
    [4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609
    [5] http://alestic.com/2009/11/ec2-credentials
    [6] http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html

    posted at: 06:31 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 11 Nov 2009


    /Hosting/Amazon/EC2: Transferring an Amazon EC2 Image to Another Account

    This[1] would be the document of record from Amazon....

    First, configure ElasticFox so that there is one window connected to each of the two accounts (sending and receiving). Also, setup two terminals, each one configured with the necessary environment variables to use the standard EC2 command line tools[2] for one of the accounts.

    Second, a logistical observation.... As I am not the owner of the Amazon accounts I am using, but rather a consultant operating the accounts on behalf of the owners, I do not have access to the "AWS Management Console". I "only" (this is actually quite a lot) have the privileges conferred by possessing the account keys. Using these privileges I can transfer a server AMI, but there appears to be no way to transfer an EBS store between accounts. So the first step is to log in to the sending account, mount any EBS stores that are desired, copy their contents into the volatile area of the server, and create a new AMI that will contain both the server and the data.

    One can get the list of AMIs belonging to the sending account thusly:

    ec2-describe-images -o self

    Now (briefly) make the just-created AMI containing the data public. (One can make the AMI visible only to one account, but I am under time pressure and I do not at the moment know the AWS account number of the receiving account):

    ec2-modify-image-attribute ami-1573907c -launch-permission --add all

    Now in the receiving account start the AMI (point and click with ElasticFox), then back at the command line open SSH in the firewall (necessary if this is a new account):

    ec2-authorize default -p 22

    In a terminal login to your running server in the receiving account, using the keypair you used in ElasticFox when the server was started, and the "public DNS" for the server being reported by ElasticFox:

    ssh -i ~/ec2/id-keypair root@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com

    Go back to the sending account and reset the transferred AMI to the private state:

    ec2-reset-image-attribute ami-1573907c -l

    Double-check the permissions on an AMI with:

    ec2-describe-image-attribute ami-1573907c -l

    (It would appear to return nothing when there are no non-private permissions set....)

    Back to the receiving account to: use ElasticFox to create a new EBS store, format it, then move the data from the server's volatile storage into the EBS. Also create and associate an ElasticIP to the new server. (This means that the server's "public DNS" will change, and the terminal login must be redone.) Open the http port (80).

    On the new server, edit the Apache configuration. Delete unneeded MySQL databases. Start Apache and MySQL. Test.

    Change the keys in the EBS snapshot script and test.

    Change the keys in the server snapshot script and make a bundle of this just-started AMI for the new account.

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?entryID=530
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88

    posted at: 08:05 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Fri, 14 Aug 2009


    /Hosting/Amazon/EC2: How to Snapshot an Amazon EC2 Server to S3

    This is actually a script that will do the job automatically, if called periodically from cron:

    #!/usr/bin/env python
    print "Snapshot Amazon server:"
    import time
    bucketName = "server-" + time.strftime("%Y%m%d")
    
    print "First create the bundle:"
    import os
    os.system("ec2-bundle-vol -k /mnt/pk-xxx.pem -c /mnt/cert-xxx.pem -u {Amazon_user_id} -d /mnt --arch i386")
    
    print "Create an S3 bucket named " + bucketName
    command = "s3cmd mb s3://" + bucketName
    os.system(command)
    
    print "Now upload the server image to the bucket"
    command = "ec2-upload-bundle -b " + bucketName + " -m /mnt/image.manifest.xml -a {aws-access-key-id} -s {aws-secret-access-key-id}"
    os.system(command)
    
    # boto required here because "ec2-register" does not exist on the server
    print "Now register the image as an official (private) AMI"
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    manifest = bucketName + "/image.manifest.xml"
    response = conn.register_image(manifest)
    print response
    
    print "cleanup /mnt (otherwise next server snapshot will break)"
    os.system("rm -r /mnt/im*")

    Note that /mnt/pk-xxx.pem & /mnt/cert-xxx.pem are your Amazon account keys, which need to be uploaded to the server. Other keys are embedded in this script. Make your file permissions restrictive. In the end, this is a Python script only because of the

    response = conn.register_image(manifest)

    line. I am using a Debian AMI published by alestic.com[1], and it comes equipped with a small set of Ruby tools capable of bundling a server and uploading it to S3. It is those tools that my script calls, up to the point of the above line. However, these Ruby tools do not have the capability of registering the uploaded bundle as an AMI, which is necessary to have it show up ready for activation in the MyAMIs list of ElasticFox, for instance.

    My script does not as yet weed out old backup images. I will probably continue to do that manually, as I only plan on snapshotting the server once or twice a month (volatile data is all stored in an EBS volume which is backed up daily, as it should be). There is a wee bit of a gotcha in the process of deleting an old AMI: one cannot delete an S3 bucket that is not empty, and the s3cmd delete command used to delete bucket contents does not seem to accept wildcards. The solution is not well documented:

    s3cmd del --recursive --force s3://bucket-name/
    s3cmd rb s3://bucket-name/

    and the AMI named "bucket-name" is gone forever.

    [1] http://alestic.com/

    posted at: 05:09 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Tue, 28 Jul 2009


    /Hosting/Amazon/EC2: My Migration Process: An Old (Crashed) Amazon EC2 Server to a Newer Version

    (Note: these are abbreviated notes with some detail deliberately left out.)

    When one updates the software on an Amazon EC2 server, not everything gets upgraded (the kernel, for one very important example). When one starts an EC2 server for the first time, the original bundle usually comes from someone or some organization that is actively maintaining it, and periodically issuing updates. Therefore it is good practice to occasionally move one's server to a more current bundle ("AMI", in Amazon parlance).

    Amazon's EC2 servers have some other unique characteristics, one of them being that data and the OS are kept in separate "partitions" and backed up by different processes. The server backup is quite a bit more onerous in terms of time and steps if it has not yet been scripted (and I have not....), so it tends not to happen as often, and the current server can become significantly different from the last backup. And when an EC2 server stops or crashes for whatever reason, the delta from the last backup is irrevocably lost.

    All this to say, sometimes when there is a server problem, it is more expedient to create a new server than to restart an old backup:

    Setup a new server:

    On the new server, /vol now contains my MySQL databases and /var/www. Symlink these into the standard locations in the new server's file structure.

    /vol also contains a /vol/etc copy of /etc (no more than 24 hours old). 'cp -a' the backed-up versions of the apache2, php5, and mysql directories into /etc.

    'apt-get install apache2 libapache2-mod-fastcgi mysql-server php5 php5-cli pyblosxom postfix'
    (I prefer postfix over exim.)

    Correct any ownership issues in /vol with the www and mysql directories (they were just mounted on another server).

    Move Elastic IP from old server to new server.

    Install ntpdate and add this to root crontab:
    11 */6 * * * /usr/sbin/ntpdate us.pool.ntp.org

    Install etckeeper:

    Now add this to root crontab:
    15 1 * * * cd /vol/etc && git pull

    Create a user account:

    [1] http://alestic.com/

    posted at: 10:15 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 30 Mar 2009


    /Hosting/Amazon/EBS: Amazon Backups: Snapshotting EBS to S3

    Amazon "Compute Cloud" services are setup such that storage for a running server is kept on an EBS ("Elastic Block Storage") volume mounted on the server. Long-term, offsite (geographic redundancy across multiple data centers) storage is kept in S3. There is a function to snapshot an EBS volume into S3.

    List the volumes associated with your account:

    ec2-describe-volumes

    This will also show what volumes are mounted on which servers. Stop the services running on the server that might be writing to the EBS (databases in particular, if you wish to have a usable backup):

    /etc/init.d/apache2 stop
    /etc/init.d/mysql stop
    Create a snapshot:
    ec2-create-snapshot vol-3159bd58
    Check status of snapshot:
    ec2-describe-snapshots
    SNAPSHOT snap-e309e38a vol-3159bd58 completed
    Restart services:
    /etc/init.d/mysql start
    /etc/init.d/apache2 start

    To restore file(s) from one of the backup snapshots, simply create a new volume from the chosen snapshot. Attach it to the instance of your running server. Mount the volume temporarily on the server and grab your files. Then unmount the volume, detach it, and delete it. The whole process takes only a couple of minutes, especially if you use the extremely convenient Firefox ElasticFox plugin[1][2].

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609
    [2] http://s3.amazonaws.com/ec2-downloads/elasticfox.xpi

    posted at: 02:33 | path: /Hosting/Amazon/EBS | permanent link to this entry