Expat-IT Tech Bits




    Creative Commons License
    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
    PyBlosxom

    This site has no ads. To help with hosting, crypto donations are accepted:
    Bitcoin: 1JErV8ga9UY7wE8Bbf1KYsA5bkdh8n1Bxc
    Zcash: zcLYqtXYFEWHFtEfM6wg5eCV8frxWtZYkT8WyxvevzNC6SBgmqPS3tkg6nBarmzRzWYAurgs4ThkpkD5QgiSwxqoB7xrCxs

    Sun, 06 Nov 2011


    /Hosting/Amazon/EC2: Increase EBS Root Volume Size of an EC2 Server (Complicated)

    Complicated because this is a RightScale image with multiple partitions, so resize2fs did not work the first time, per [1].

    Pick the image you want to resize (this one has an 8G root volume):

    $ ec2-describe-images ami-e00df089
    IMAGE ami-e00df089 944964708905/rightimage_debian_6.0.1_amd64_20110406.1_ebs 944964708905 available public x86_64 machine aki-4e7d9527 ebs paravirtual xen
    BLOCKDEVICEMAPPING  /dev/sda   snap-b62f31da   8
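    The snapshot ID and volume size can be pulled out of that output programmatically. A minimal sketch (the function name is my own, and it assumes the whitespace-separated BLOCKDEVICEMAPPING format shown above):

```python
# Parse an ec2-describe-images BLOCKDEVICEMAPPING line into
# (device, snapshot-id, size-in-GB). Returns None for other line types.
def parse_block_device_line(line):
    fields = line.split()
    if not fields or fields[0] != "BLOCKDEVICEMAPPING":
        return None
    return fields[1], fields[2], int(fields[3])

print(parse_block_device_line("BLOCKDEVICEMAPPING  /dev/sda   snap-b62f31da   8"))
```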
    

    Start a server up with a 25G volume instead:

    $ ec2-run-instances -t t1.micro --key clayton --block-device-mapping /dev/sda=:25 ami-e00df089

    Log in and see (in part):

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda2            5.0G  1.2G  3.6G  24% /
    

    Here is where things get funky: "resize2fs /dev/xvda2" will not work, per [1], because there are two other partitions (xvda1 is /boot and xvda3 is swap). [2] to the rescue. Hoping that it was the swap partition that was in the way, I got rid of it and rebuilt the partition table as follows: delete partitions 2 and 3, create a new partition 2 (accepting the defaults), and then write the new partition table to disk (w):

    # fdisk /dev/xvda
    d 2
    d 3
    n p 2
    w
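    Since partition 3 (swap) is now gone, its /etc/fstab entry has to go as well. That edit can be scripted rather than done by hand; a sketch, assuming the entry's filesystem-type field is literally "swap":

```python
# Drop any fstab entry whose filesystem type (3rd field) is "swap".
def strip_swap_entries(fstab_text):
    kept = []
    for line in fstab_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "swap":
            continue  # skip the swap mount
        kept.append(line)
    return "\n".join(kept) + "\n"

print(strip_swap_entries("/dev/xvda2 / ext3 defaults 0 1\n/dev/xvda3 none swap sw 0 0\n"))
```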
    

    Do not forget to remove the swap line from /etc/fstab, and reboot. Now:

    # resize2fs /dev/xvda2

    works! And all is well. Now, as we have done before, create a new image to save our work:

    ec2-create-snapshot vol-e41af089
    SNAPSHOT snap-697ad00b vol-e41af089 pending

    Once the snapshot is finished:

    ec2-register -a x86_64 -b '/dev/sda=snap-697ad00b:25:false' -n 'Squeeze_64' -d '64 bit Squeeze' --kernel="aki-4e7d9527"

    [1] http://alestic.com/2009/12/ec2-ebs-boot-resize
    [2] http://bioteam.net/2010/07/how-to-resize-an-amazon-ec2-ami-when-boot-disk-is-on-ebs/

    posted at: 01:10 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 02 Nov 2011


    /Hosting/Amazon/EC2: Sources of Official Amazon Images

    I seem to always be looking this stuff up. Time to write it down.

    Debian:

    http://wiki.debian.org/Cloud/AmazonEC2Image
    http://support.rightscale.com/21-Community/RightScale_OSS

    Ubuntu:

    http://uec-images.ubuntu.com/
    http://support.rightscale.com/21-Community/RightScale_OSS

    CentOS:

    http://support.rightscale.com/21-Community/RightScale_OSS

    posted at: 09:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 29 Aug 2011


    /Hosting/Amazon/EC2: Debian: Testing a Lenny to Squeeze Upgrade in Amazon AWS

    One of the things that is really lovely about cloud computing is the ability to relatively quickly and easily test something new before diving in and breaking a "real" server. In this case, an upgrade from Debian Lenny to Debian Squeeze. With about two years between Debian releases, there is much opportunity for something to break when trying to jump such a large computing chasm.

    Unfortunately, Debian is not well supported in the Amazon AWS environment, and definitely not formally supported. Here is how I went about it....

    There are actually a handful (3?) of Debian Lenny EBS images in AWS us-east region at the moment. I had a look at them all and, as I recall, selected public ami-9a6b9af3 for my test. It has a 10G EBS root file system, and a very basic Debian install onboard. (Note that using an informal public AMI like this is a security risk, as there is no way to be sure nothing malicious has been installed in the image.)

    To kick off an instance and have a look at it:

    ec2-run-instances -k clayton -t t1.micro ami-9a6b9af3

    Login to the instance and make any desired modifications (apt-get upgrade, install software, etc.). Then stop the instance:

    ec2-stop-instances i-27d7d246

    Snapshot its volume and create a new private image (you can get the volume name from ec2-describe-instances):

    ec2-create-snapshot vol-ac46eac6
    SNAPSHOT snap-fa4ce59a vol-ac46eac6 pending

    Now register a new, private Debian Lenny AMI to work with going forward:

    ec2-register -a x86_64 -b '/dev/sda1=snap-fa4ce59a:10:false' -n 'Lenny_64_Lenny_kernel' -d '64 bit Lenny with Lenny kernel' --kernel="aki-68bb5901" --ramdisk="ari-6cbb5905"
    IMAGE ami-db13d3b2
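    The -b argument packs device=snapshot:size:delete-on-termination into one string; a small helper to build it (the function name is my own):

```python
# Build the ec2-register -b block-device-mapping argument:
#   device=snapshot-id:size-in-GB:delete-on-termination
def block_device_arg(device, snapshot_id, size_gb, delete_on_termination=False):
    flag = "true" if delete_on_termination else "false"
    return "%s=%s:%d:%s" % (device, snapshot_id, size_gb, flag)

print(block_device_arg("/dev/sda1", "snap-fa4ce59a", 10))
```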

    Now start up a new instance to make sure the image we made actually works:

    ec2-run-instances -k clayton -t t1.micro ami-db13d3b2
    INSTANCE i-35acaf54 ami-db13d3b2 pending

    Login:

    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.24-10-xen #1 SMP Tue Sep 8 18:30:05 UTC 2009 x86_64 GNU/Linux

    I went ahead at this point and verified that the above Lenny kernel WOULD NOT work for an upgrade to Squeeze (because of the new udev in Squeeze).

    So now create a new image based upon the previous image, this time with a Squeeze kernel which I will borrow from ami-80e915e9: aki-427d952b

    ec2-stop-instances i-35acaf54
    INSTANCE        i-35acaf54      running stopping
    
    ec2-create-snapshot vol-4ab61e20
    SNAPSHOT        snap-5e17be3e   vol-4ab61e20    pending
    
    ec2-register -a x86_64 -b '/dev/sda1=snap-5e17be3e:10:false' -n 'Lenny_64_Squeeze_kernel' -d '64 bit Lenny with Squeeze kernel' --kernel="aki-427d952b"
    IMAGE   ami-ab13d3c2
    
    ec2-run-instances -k clayton -t t1.micro ami-ab13d3c2
    INSTANCE        i-45b9ba24      ami-ab13d3c2                    pending
    
    # uname -a
    Linux ip-10-244-177-141.ec2.internal 2.6.26-2-xen-amd64 #1 SMP Mon Jun 13 18:44:16 UTC 2011 x86_64 GNU/Linux
    

    And it worked: kernel aki-427d952b is compatible with both Lenny and Squeeze.

    posted at: 05:04 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Fri, 25 Jun 2010


    /Hosting/Amazon/EC2: How to Backup an Amazon EBS-Boot Server

    This is a script I wrote to backup several Amazon AWS EBS-boot servers, using the Python boto library[1]. Basically it takes a snapshot of the server's root EBS volume (defined in volumeIDs list) and then builds a new bootable Amazon Machine Image (AMI) for the server based upon that snapshot.

    #!/usr/bin/env python
    
    ##########################################################
    # Installation:
    # This script requires boto ('pip install boto') to communicate with Amazon AWS API,
    # and at least version 1.9a to handle an EBS boot instance
    # Script inspired by http://www.elastician.com/2009/12/creating-ebs-backed-ami-from-s3-backed.html
    #
    # As of early June, we need svn version of boto:
    #   pip install -e svn+http://boto.googlecode.com/svn/trunk/@1428#egg=boto
    # until next release, which should be soon per this forum post:
    #   http://groups.google.com/group/boto-users/browse_thread/thread/21dc3482ed7e49da
    ##########################################################
    
    ##########################################################
    # This script is for the backup of Amazon AWS EBS Boot servers, and will perform one backup
    # of one server, per script invocation. Note that there is nothing machine environment-specific
    # in this script at the moment, so it can be run on any machine with the correct environment,
    # to backup any other machine.
    # Usage:
    # * Adjust the "Constants" section to reflect your current account & server environment.
    # * Choose which server you would like to back up in the "which server" section.
    # * Install the ElasticFox plugin in Firefox to observe the results of the backup process.
    # * Run the script to perform a backup of the chosen server.
    ##########################################################
    
    ##########################################################
    # Constants
    ##########################################################
    instances = {"master":"i-xxxxxxxx",   "staging":"i-xxxxxxxx",   "production":"i-xxxxxxxx"}
    volumeIDs = {"master":"vol-xxxxxxxx", "staging":"vol-xxxxxxxx", "production":"vol-xxxxxxxx"}
    ownerID = 'xxxxxxxxxxxx' # I got this from ElasticFox, should be AWS account specific
    waitSnapshot = 10 # wait increment (in seconds) while waiting for snapshot to complete
    
    ##########################################################
    # Which server is this script backing up?
    ##########################################################
    # thisServer = "master"
    thisServer = "staging"
    # thisServer = "production"
    
    thisInstance = instances[thisServer]
    thisVolume = volumeIDs[thisServer]
    
    ##########################################################
    print ''
    print '##########################################################'
    print 'Backup of ' + thisServer + ' server:'
    print 'Logging into Amazon AWS....'
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('xxxxxxxxxxxxxxxxxxxx', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    
    ##########################################################
    from datetime import datetime
    timeStamp = str(datetime.today())
    
    print ''
    print timeStamp
    print 'Snapshotting the ' + thisServer + " server's boot volume...."
    comment = thisServer + ' ' + timeStamp
    name = comment.replace(":",".") # AWS does not like ":" in AMI names
    name = name.replace(" ","_") # make life easier when deleting stale images
    snapshot = conn.create_snapshot(thisVolume, comment)
    newSnapshot = snapshot.id
    print 'Created new volume snapshot:', newSnapshot
    
    ##########################################################
    import time
    waitSnapshotTotal = waitSnapshot
    snapshot = conn.get_all_snapshots([newSnapshot]) # takes a list of snapshot IDs
    print ''
    while snapshot[0].status != 'completed':
        print 'Snapshot status is ' + snapshot[0].status + ', ' \
              'wait ', waitSnapshotTotal, ' secs for the snapshot to complete before building the AMI.'
        time.sleep(waitSnapshot)
        waitSnapshotTotal = waitSnapshotTotal + waitSnapshot
        snapshot = conn.get_all_snapshots([newSnapshot])
    
    ##########################################################
    print ''
    print 'Building a bootable AMI based upon this snapshot....'
    
    # setup for building an EBS boot snapshot
    from boto.ec2.blockdevicemapping import EBSBlockDeviceType, BlockDeviceMapping
    ebs = EBSBlockDeviceType()
    ebs.snapshot_id = newSnapshot
    block_map = BlockDeviceMapping()
    block_map['/dev/sda1'] = ebs
    
    # use the same kernel & ramdisk from running server in the new AMI:
    attribute = conn.get_instance_attribute(thisInstance, 'kernel')
    kernelID = attribute['kernel']
    print 'kernel ID = ', kernelID
    attribute = conn.get_instance_attribute(thisInstance, 'ramdisk')
    ramdiskID = attribute['ramdisk']
    print 'ramdisk ID = ', ramdiskID
    
    # create the new AMI:
    result = conn.register_image(name=name,
        description=timeStamp,
        architecture='i386',
        kernel_id=kernelID,
        ramdisk_id=ramdiskID,
        root_device_name='/dev/sda1',
        block_device_map=block_map)
    print 'The new AMI ID = ', result
    

    [1] http://code.google.com/p/boto/

    posted at: 04:12 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Mon, 15 Feb 2010


    /Hosting/Amazon/EC2: Amazon AWS: Information You Need to Give Your System Administrator

    Amazon AWS is designed to be able to give someone else the necessary privileges to control one's Amazon servers, without giving up the password of your Amazon AWS account. Here are a couple of very thorough treatments on the subject of Amazon AWS credentials: [5][6].

    In order to broadly manage your account and its servers, there are two sets of keys your System Administrator is probably going to need to access and control your servers and data stores:

    1. AWS Access Key / Secret Access Key
    2. X.509 Certificate and Private Key

    These two methods of authentication are also explained in the "Authentication" section of [1], and both sets of keys can be obtained from "Your Account" --> "Access identifiers" in your Amazon AWS account.

    The "Access Key / Secret Access Key" consists of two long strings, much longer than what one typically thinks of as a "password". This is what a System Administrator needs most of the time for most Amazon AWS management tasks. The ElasticFox Firefox Extension[4], for instance, uses these for authentication. Following are examples of what these keys look like:

    Access key: AKIAJQXQL474IJIOJATA
    Secret Access Key: XQbln80m5ms8a4xUSxPd7xmyF/7IM9hM24bv9aez
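    A quick sanity check before handing keys over: every access key I have seen is 20 upper-case alphanumeric characters, and every secret key 40 base64-style characters. A sketch (this reflects observation, not any documented guarantee from Amazon; the function names are my own):

```python
import re

# Rough shape checks for AWS credentials, based on the examples above:
# access keys look like 20 upper-case alphanumerics, secrets like 40
# base64-ish characters.
def looks_like_access_key(s):
    return re.match(r"^[A-Z0-9]{20}$", s) is not None

def looks_like_secret_key(s):
    return re.match(r"^[A-Za-z0-9/+=]{40}$", s) is not None

print(looks_like_access_key("AKIAJQXQL474IJIOJATA"))
print(looks_like_secret_key("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"))
```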

    The "X.509 certificate" is a pair of encryption keys (each of them much longer than either element of the "Access Key / Secret Access Key") primarily used by the Java-based Amazon EC2 API Tools[2], as explained here[3].

    The certificate looks like this:

    -----BEGIN CERTIFICATE-----
    MIICdzCCAeCgAwIBAgIGAOfo0EVXMA0GCSqGSIb3DQEBBQUAMFMxCzAJBgNVBAYT
    AlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMQwwCgYDVQQLEwNBV1MxITAfBgNVBAMT
    GEFXUyBMaW1pdGVkLUFzc3VyYW5jZSBDQTAeFw0wODA5MjcyMzU3MDdaFw0wOTA5
    MjcyMzU3MDdaMFIxCzAJBgNVBAYTAlVTMRMwEQYDVQQKEwpBbWF6b24uY29tMRcw
    FQYDVQQLEw5BV1MtRGV2ZWxvcGVyczEVMBMGA1UEAxMMdWx3MTFzaTFjYzhrMIGf
    MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCmtXexIvZGTtVvRaulv5ibeJR04W9L
    r1ET/hmfQDMrhojGURI+7HYWUtZwxBEUfU/L7JkSEgvtgpCpB4ulLAtzpNcd/aJ0
    lL7gF6B0szIx3LSNX/uidt9JkFUNeCyJygMbGMQsK/V496KqHIbwaHKvB4gqGM5r
    Tpxuqv1Tu6SvQwIDAQABo1cwVTAOBgNVHQ8BAf8EBAMCBaAwFgYDVR0lAQH/BAww
    CgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUPWGfgV0fN+glJXzs
    VPxSI3IcI4UwDQYJKoZIhvcNAQEFBQADgYEAcC6rIJiRSwSSx4+pDo/xcXsqX6jD
    /w9gnE/BnAvAtPyR5sH5x3ksGgmH0Z3VFtFk0Zika/EYACCFVpA76dRQeszYamPJ
    gaPwAZo6g7DK4YhWWX9b3p2waTWASUxzbb0ivRiL1bC5zLwin2MfAzMcwI4oYx1B
    BCvS2d6fGxuuXrQ=
    -----END CERTIFICATE-----
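    Python's standard library can at least confirm that a certificate blob like this is well-formed PEM (correct markers, valid base64); a sketch, with a function name of my own:

```python
import ssl

# ssl.PEM_cert_to_DER_cert strips the BEGIN/END markers and
# base64-decodes the body; it raises ValueError on bad framing.
def pem_is_well_formed(pem_text):
    try:
        return len(ssl.PEM_cert_to_DER_cert(pem_text)) > 0
    except ValueError:
        return False
```

    Note this only checks the framing and encoding, not the signature chain or expiry.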
    

    And the private key looks like this:

    -----BEGIN PRIVATE KEY-----
    MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBAMaAtxIVZslDohGnIIXJ/V8HTvzm
    w7/wROrIDIAN7QIGW4G14y7Sy3IHM56Y89pCFuvtzOwX7dAKjAIho8SE1IWiG4XxojGrXkA4Y8HS
    5rxUtj3DrAV+y60QEnwLQzICYPnSqG7w239J1TpPDBnCprec+qziUNu2iAhXMbbJCei9AgMBAAEC
    gYBrivykDXg8finmCneyRDbDL0B5/8P5zwBneq5bCjBnsm4NHi/RBF84jfJHcHJcwwWMGK+3EVfE
    KJKl7Pe+1oAUWd423ARd1AsPfjQhBZ/RXXhNpXovPz7PTFLOnzQbOmtkl59xPo67bIs2gWlu/0jj
    6MXqGLpEp1JI1Z2mnFI6OQJBAOfDLRdUGekgBz5ZKpu8skzSvnVGxL/YGRpXOPKm08RuTMqRPvhW
    cn39nQZcjb9UYzdq2Av6cqwXFdMjcXBZw4MCQQDbQxndNYWmwH9ATH8Bg/D8/U0ciDO22NMj/Yti
    ToLLC0xStt6KXWFjyD/aAwz+3dmVSyvJK1s6stE0xUKiuq6/AkEAmdiF5iZ9zLLmHA00q4znDvgW
    VeNUV8UrZMDhnLIBgTN25kDkfBVmixv/UGm/7nImKnNSVyE5XeM1KaMtelcb4QJAE1xyfTkLqzTW
    R7w5fs3CyuQnGfzg7CVrR4NM+opKPFmsDKW/MuKaBfCZyst4K001uFwh6qqcbKt7k7hTcQEhCwJA
    EdAIyKc80eU5KpkWNwbEL3AqK4MYdihXN2/qAt+KVNNUYROzudpDuW1K96p28CaoavV0n81BWX7p
    UvidCsHK+g==
    -----END PRIVATE KEY-----
    

    [1] http://clouddb.info/2009/05/17/using-and-managing-aws-part-3-aws-security/
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351
    [3] http://developer.amazonwebservices.com/connect/entry!default.jspa?categoryID=100&externalID=1791&printable=true
    [4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609
    [5] http://alestic.com/2009/11/ec2-credentials
    [6] http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html

    posted at: 06:31 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Wed, 11 Nov 2009


    /Hosting/Amazon/EC2: Transferring an Amazon EC2 Image to Another Account

    This[1] would be the document of record from Amazon....

    First, configure ElasticFox so that there is one window connected to each of the two accounts (sending and receiving). Also, set up two terminals, each one configured with the necessary environment variables to use the standard EC2 command line tools[2] for one of the accounts.

    Second, a logistical observation.... As I am not the owner of the Amazon accounts I am using, but rather a consultant using and operating the accounts on behalf of the owners, I do not have access to the "AWS Management Console". I "only" (this is actually quite a lot) have the privileges conferred by possessing the account keys. Using these privileges I can transfer a server AMI, but there appears to be no way to transfer an EBS store between accounts. So the first step is to login to the sending account, mount any EBS stores that are desired, copy their contents into the volatile area of the server, and create a new AMI that will contain both the server and the data.

    One can get the list of AMIs belonging to the sending account thusly:

    ec2-describe-images -o self

    Now (briefly) make the just-created AMI containing the data public. (One can make the AMI visible only to one account, but I am under time pressure and I do not at the moment know the AWS account number of the receiving account):

    ec2-modify-image-attribute ami-1573907c -launch-permission --add all

    Now in the receiving account start the AMI (point and click with ElasticFox), then back at the command line open SSH in the firewall (necessary if this is a new account):

    ec2-authorize default -p 22

    In a terminal login to your running server in the receiving account, using the keypair you used in ElasticFox when the server was started, and the "public DNS" for the server being reported by ElasticFox:

    ssh -i ~/ec2/id-keypair root@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com

    Go back to the sending account and reset the transferred AMI to the private state:

    ec2-reset-image-attribute ami-1573907c -l

    Double-check the permissions on an AMI with:

    ec2-describe-image-attribute ami-1573907c -l

    (It would appear to return nothing when there are no non-private permissions set....)

    Back in the receiving account: use ElasticFox to create a new EBS store, format it, then move the data from the server's volatile storage into the EBS. Also create and associate an Elastic IP to the new server. (This means that the server's "public DNS" will change, and the terminal login must be redone.) Open the http port (80).

    On the new server, edit the Apache configuration. Delete unneeded MySQL databases. Start Apache and MySQL. Test.

    Change the keys in the EBS snapshot script and test.

    Change the keys in the server snapshot script and make a bundle of this just-started AMI for the new account.

    [1] http://developer.amazonwebservices.com/connect/entry.jspa?entryID=530
    [2] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88

    posted at: 08:05 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Fri, 14 Aug 2009


    /Hosting/Amazon/EC2: How to Snapshot an Amazon EC2 Server to S3

    This is actually a script that will do the job automatically, if called periodically from cron:

    #!/usr/bin/env python
    print "Snapshot Amazon server:"
    import time
    bucketName = "server-" + time.strftime("%Y%m%d")

    print "First create the bundle:"
    import os
    os.system("ec2-bundle-vol -k /mnt/pk-xxx.pem -c /mnt/cert-xxx.pem -u {Amazon_user_id} -d /mnt --arch i386")

    print "Create an S3 bucket named " + bucketName
    command = "s3cmd mb s3://" + bucketName
    os.system(command)

    print "Now upload the server image to the bucket"
    command = "ec2-upload-bundle -b " + bucketName + " -m /mnt/image.manifest.xml -a {aws-access-key-id} -s {aws-secret-access-key-id}"
    os.system(command)

    # boto required here because "ec2-register" does not exist on the server
    print "Now register the image as an official (private) AMI"
    from boto.ec2.connection import EC2Connection
    conn = EC2Connection('<aws_access_key_id>', '<aws_secret_access_key>')
    manifest = bucketName + "/image.manifest.xml"
    response = conn.register_image(manifest)
    print response

    print "cleanup /mnt (otherwise next server snapshot will break)"
    os.system("rm -r /mnt/im*")

    Note that /mnt/pk-xxx.pem & /mnt/cert-xxx.pem are your Amazon account keys, which need to be uploaded to the server. Other keys are embedded in this script. Make your file permissions restrictive. In the end, this is a Python script only because of the

    response = conn.register_image(manifest)

    line. I am using a Debian AMI published by alestic.com[1], and it comes equipped with a small set of Ruby tools capable of bundling a server and uploading it to S3. And it is those tools that my script is calling, up to the point of the above line. However, these Ruby tools do not have the capability of registering the uploaded bundle as an AMI, which is necessary to have it show up ready for activation in the MyAMIs list of ElasticFox, for instance.

    My script does not as yet weed out old backup images. I will probably continue to do that manually, as I only plan on snapshotting the server once or twice a month (volatile data is all stored in an EBS volume which is backed up daily, as it should be). There is a wee bit of a gotcha in the process of deleting an old AMI: one cannot delete an S3 bucket that is not empty, and the s3cmd delete command used to delete bucket contents does not seem to accept wildcards. The solution is not well documented:

    s3cmd del --recursive --force s3://bucket-name/
    s3cmd rb s3://bucket-name/

    and the AMI named "bucket-name" is gone forever.
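    Weeding out old images could key off the date embedded in my bucket names ("server-YYYYMMDD"); a sketch of how stale buckets might be identified (the helper is hypothetical, not part of my script):

```python
from datetime import datetime, timedelta

# Given bucket names like "server-20090814", return those whose embedded
# date is older than max_age_days relative to `today`.
def stale_buckets(names, today, max_age_days=60):
    stale = []
    for name in names:
        try:
            stamp = datetime.strptime(name.split("-", 1)[1], "%Y%m%d")
        except (IndexError, ValueError):
            continue  # not one of our dated backup buckets
        if today - stamp > timedelta(days=max_age_days):
            stale.append(name)
    return stale

print(stale_buckets(["server-20090601", "server-20090814"],
                    datetime(2009, 8, 15)))
```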

    [1] http://alestic.com/

    posted at: 05:09 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Tue, 28 Jul 2009


    /Hosting/Amazon/EC2: My Migration Process: An Old (Crashed) Amazon EC2 Server to a Newer Version

    (Note: these are abbreviated notes with some detail deliberately left out.)

    When one updates the software on an Amazon EC2 server, not everything gets upgraded (the kernel, for one very important example). When one starts an EC2 server for the first time, the original bundle usually comes from someone or some organization that is actively maintaining it, and periodically issuing updates. Therefore it is good practice to occasionally move one's server to a more current bundle ("AMI", in Amazon parlance).

    Amazon's EC2 servers have some other unique characteristics, one of them being that data and the OS are kept in separate "partitions", and backed up by different processes. The server backup is quite a bit more onerous in terms of time and steps, if it has not yet been scripted (and I have not....) so it tends to not happen as often and the current server can become significantly different from the last backup. And when an EC2 server stops or crashes for whatever reason, the delta from the last backup is irrevocably lost.

    All this to say, sometimes when there is a server problem, it is more expedient to create a new server than to restart an old backup:

    Setup a new server:

    On the new server, /vol now contains my MySQL databases and /var/www. Sym link these into the standard locations in the new server's file structure.

    /vol also contains a /vol/etc copy of /etc (no more than 24 hours old). 'cp -a' the backed-up versions of the apache2, php5, and mysql directories into /etc.

    'apt-get install apache2 libapache2-mod-fastcgi mysql-server php5 php5-cli pyblosxom postfix'
    (I prefer postfix over exim.)

    Correct any ownership issues in /vol with the www and mysql directories (they were just mounted on another server).

    Move Elastic IP from old server to new server.

    Install ntpdate and add this to root crontab:
    11 */6 * * * /usr/sbin/ntpdate us.pool.ntp.org

    Install etckeeper:

    Now add this to root crontab:
    15 1 * * * cd /vol/etc && git pull

    Create a user account:

    [1] http://alestic.com/

    posted at: 10:15 | path: /Hosting/Amazon/EC2 | permanent link to this entry

    Fri, 06 Feb 2009


    /Hosting/Amazon/EC2: Managing Amazon EC2 Instances

    The Amazon EC2 Compute Cloud provides some very powerful capabilities, but a fair bit of complexity comes with that power. This article is a brief description of the process of setting up an Amazon account, getting an instance running, making changes to that instance, and then storing the changed instance / server as a new instance. (This new instance could be a backup for an existing server, or the starting point for one or more new servers based upon it....) Here are some good references[1][2][3].

    First register for an Amazon account here[4]. Then install Amazon's (Java-based) EC2 command line tools[5]. (I will write a separate article on this at another time.)

    Choose the instance that you would like to run:

    This is surprisingly not straight-forward. Amazon seems to have a number of more or less official instance images. If you run the command

    ec2-describe-images -o self -o amazon | grep machine

    you will see a list of image candidates which seem to be almost exclusively Fedora- or Windows-based. What about Debian users like myself? Here[6] there seems to be a large collection of user-submitted images. I resorted to Google, and settled upon this[7] Debian Lenny-based image that seems to have an ongoing community around it, complete with a mailing list.

    Start Your Instance / Server:

    By default the root account of a new instance is accessed via SSH using a pre-defined "key pair" rather than the usual root password. First generate this key pair:

    ec2-add-keypair {key_pair_name}

    and copy the output lines into a private key file, which will later be referenced by the -k option in EC2 commands. Run "chmod 600" on this file to make it readable only by its owner, otherwise the EC2 commands will reject it.

    Now start your server:

    ec2-run-instances ami-115db978 -k {path_to_above_key_file}

    where ami-115db978 is the instance ID of my previously chosen Debian image[7]. Note the instance ID of your new running instance: i-c1a320a8. Check on the status of the server:

    ec2-describe-instances
      --or--
    ec2-describe-instances {above_noted_instance_id}

    Once the server's status is "running", you can login. First open the SSH port:

    ec2-authorize default -p 22

    Now login to the server:

    ssh -i {path_to_above_key_file} root@{hostname}

    where the hostname is the long URL output by ec2-describe-instances, of the form ec2-{IP_address}.*.amazonaws.com. (There should be no password prompt....)
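    Pulling that hostname out of the ec2-describe-instances output can be scripted; a sketch (the function name is my own, and the regex reflects the us-east hostname shape described above, so it may need widening for other regions):

```python
import re

# Find the first public-DNS-style hostname in ec2-describe-instances output.
def public_dns(describe_output):
    match = re.search(r"ec2-[\d-]+\.[\w.-]+\.amazonaws\.com", describe_output)
    return match.group(0) if match else None

print(public_dns("INSTANCE i-c1a320a8 ami-115db978 "
                 "ec2-45-105-235-35.compute-1.amazonaws.com running"))
```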

    Save a snapshot of your changed server:

    After making some changes to the server, it is natural to want to save a copy. Note that Amazon EC2 instances by default are not persistent. If you shut down the server, or turn it off with ec2-terminate-instances, all changes you made since it was started from the original instance image will be lost. (This is one striking difference from the behavior of a typical Virtual Private Server.) So one simple way to backup is to create a new instance image based upon your running server[8].

    Since the following operations will be performed on the running server, and not your desktop, the Amazon account keys need to be uploaded to the server, i.e.:

    scp -i ~/.ec2/keys/id_key pk-XXX.pem cert-xxx.pem root@ec2-45-105-235-35.compute-1.amazonaws.com:/mnt/

    Now on the server, create an image of the running server (a new instance):

    ec2-bundle-vol -k /mnt/pk-xxx.pem -c /mnt/cert-xxx.pem -u {Amazon_user_id} -d /mnt

    This will take a while. Note that ec2-bundle-vol will by default sensibly exclude a number of special directories like /sys, /proc, /mnt, etc. Now, using the Debian package s3cmd (more detail forthcoming in another post on the subject of Amazon S3) create a "bucket" in the Amazon S3 service in which to store the generated instance:

    s3cmd mb s3://xxxxx-snapshots

    Now upload the image to the created bucket:

    ec2-upload-bundle -b {bucket_name} -m /mnt/image.manifest.xml -a {aws-access-key-id} -s {aws-secret-access-key-id}

    where aws-access-key-id and aws-secret-access-key-id are your Amazon account's 20- and 40-character access key and secret access key, respectively. This will also take a little while. Now on your desktop machine (ec2-register is not on my server, at least) register the image:

    ec2-register {bucket_name}/image.manifest.xml

    and record the AMI ID generated (ami-20ef0849 in my case). Now look at the bucket contents:

    s3cmd ls s3://xxxxx-snapshots

    and see that all the image.* files in the server's /mnt directory are now in the S3 bucket. I.e., logically there will be one instance snapshot per bucket.
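    The access keys passed to ec2-upload-bundle are easy to mangle when copying and pasting, and a quick length check catches the most common mistakes. The credential values below are the example strings from Amazon's documentation, not real keys:

```shell
# AWS documentation example credentials (not real keys).
AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
# The access key ID should be 20 characters, the secret access key 40.
echo "access key ID length: ${#AWS_ACCESS_KEY_ID}"
echo "secret key length: ${#AWS_SECRET_ACCESS_KEY}"
```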

    Test the Newly Created Instance:

    This would seem extremely prudent, especially the first time through the process:

    ec2-run-instances ami-20ef0849 -k {key_pair_name}

    Note that, if reusing the key_pair_name from the original server we have just backed up, the name specified has to be the same as the one originally passed to the ec2-add-keypair command. This may or may not be the same as the name of the file the key is saved in. On my desktop, for instance, the keypair name is ktdebian, and it is saved in a file called id-rsa-ktdebian.
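    If in doubt about which name to pass to -k, it can be recovered from the saved ec2-add-keypair output, whose first line names the keypair. A sketch, assuming the "KEYPAIR name fingerprint" line format my version of the tools produces (the sample line and fingerprint below are hypothetical):

```shell
# Hypothetical first line of saved ec2-add-keypair output:
# "KEYPAIR <name> <fingerprint>" -- the NAME is what -k expects,
# regardless of the file the private key was saved in.
SAMPLE="$(printf 'KEYPAIR\tktdebian\t1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca')"
KEYPAIR_NAME=$(printf '%s\n' "$SAMPLE" | awk '/^KEYPAIR/ {print $2}')
echo "$KEYPAIR_NAME"
# ec2-run-instances ami-20ef0849 -k "$KEYPAIR_NAME"
```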

    Log in to the test server and verify that it is working, then stop it:

    ec2-terminate-instances {test_server_instance_id}
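    If you have lost track of the test server's instance ID, it can be filtered out of the ec2-describe-instances output. Another sketch, again assuming the tab-separated column layout from my version of the tools (instance ID in field 2, state in field 6; the sample line is hypothetical):

```shell
# Hypothetical sample line; print IDs of instances in the "running" state.
SAMPLE="$(printf 'INSTANCE\ti-2ea64347\tami-20ef0849\tec2-72-44-33-4.compute-1.amazonaws.com\tip-10-251-2-2.ec2.internal\trunning')"
IDS=$(printf '%s\n' "$SAMPLE" | awk -F'\t' '/^INSTANCE/ && $6 == "running" {print $2}')
echo "$IDS"
# ec2-terminate-instances $IDS
```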

    Note that our newly created image should still be private: no one else should be able to use or start it. Making it publicly available requires explicit action of the form:

    ec2-modify-image-attribute {ami_id} --launch-permission -a all

    (Note that I have not tested ec2-modify-image-attribute....)

    [1] http://docs.amazonwebservices.com/AWSEC2/2008-02-01/GettingStartedGuide/
    [2] http://wiki.rpath.com/wiki/rBuilder_Online:Amazon_Elastic_Compute_Cloud
    [3] http://paulstamatiou.com/2008/04/05/how-to-getting-started-with-amazon-ec2
    [4] http://aws.amazon.com/
    [5] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88
    [6] http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=101
    [7] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1615&categoryID=101
    [8] http://afkham.org/2008/10/how-to-create-ec2-ami.html

    posted at: 17:23 | path: /Hosting/Amazon/EC2 | permanent link to this entry


    /Hosting/Amazon/EC2: The Amazon Elastic Compute Cloud (Amazon EC2) web service

    I have been hearing a lot of buzz about "cloud computing"[1] and particularly about Amazon's version[2][6]. Finally, I have been hired by a client to build a server using the Amazon service, and get to try it out.

    I am still in the early stages, but I am not yet seeing how Amazon's "Compute Cloud" is fundamentally different from your average "Virtual Private Server" (VPS) service. They do provide the ability to save snapshots of a machine and then redeploy copies of that snapshot to multiple other machines. I guess that is kind of special....

    The "Compute Cloud" is also not as user-friendly as your average VPS service. With the latter, setup is a matter of button pushing. With the former, you have to download and install a command line application, and jump through quite a few command line hoops to get things running. Not particularly difficult for an experience Linux sysadmin, but GUI addicts definitely will find themselves challenged. This article[3] should provide a good appreciation, and most or all of the details, of getting an Amazon server up and running.

    Note that Amazon seems to really like Fedora, and most of their ready-made images seem to be Fedora-based. As a Debian user, I had to go a little further afield[4] to find a suitable Debian image[5] to install.

    So far so good, it is running as I write, and was really not that hard to figure out.

    [1] http://en.wikipedia.org/wiki/Cloud_computing
    [2] http://docs.amazonwebservices.com/AWSEC2/2008-02-01/GettingStartedGuide/index.html?introduction.html
    [3] http://paulstamatiou.com/2008/04/05/how-to-getting-started-with-amazon-ec2
    [4] http://alestic.com/
    [5] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1615&categoryID=101
    [6] http://www.geekzone.co.nz/foobar/5654

    posted at: 14:08 | path: /Hosting/Amazon/EC2 | permanent link to this entry