Expat-IT Tech Bits







    Creative Commons License
    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    This site has no ads. To help with hosting, crypto donations are accepted:
    Bitcoin: 1JErV8ga9UY7wE8Bbf1KYsA5bkdh8n1Bxc
    Zcash: zcLYqtXYFEWHFtEfM6wg5eCV8frxWtZYkT8WyxvevzNC6SBgmqPS3tkg6nBarmzRzWYAurgs4ThkpkD5QgiSwxqoB7xrCxs

    Mon, 25 Jun 2012

    /Admin/backups/unison: IT infrastructure for us little guys:

    I usually buy a newer used laptop once every year or so, alternating between the big one and the little one. I budget about $300 (thank you, Linux!) for each increment, with each machine a little faster than the last, and get a very satisfying sense of progress.

    When run, Unison[1] will by default copy files that have changed on only one machine to the other machine, and present a list of files that have changed on both machines for resolution (this list can be very, very short if you only use one machine between syncs, which I recommend). For my stuff (in Unix lingo, my "home" directory) a sync usually takes less than five minutes. Sometimes a lot less.

    Unison runs on pretty much all Unix/Linux flavors, Windows, and Macs, so it is also very easy to move from one TYPE of machine to another. (I used to move between Linux and NetBSD, for instance.)
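    A minimal Unison profile turns that sync into a one-word command. This is only a sketch: the machine name (laptop) and paths (/home/me) are hypothetical placeholders, and you will want your own ignore rules.

```
# ~/.unison/default.prf -- hypothetical machine names and paths
root = /home/me
root = ssh://laptop//home/me
# skip churn that is not worth syncing
ignore = Path .cache
```

    With that file in place, running "unison default" walks both roots and proposes the changes to propagate for each path.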


    [1] http://www.cis.upenn.edu/~bcpierce/unison/

    posted at: 05:12 | path: /Admin/backups/unison | permanent link to this entry

    Fri, 23 Mar 2012

    /Admin/backups/dirvish: Simple, Elegant, Small-Enterprise Backups with Dirvish

    I do not like Bacula, Backuppc has some issues, and rdiff-backup is too crude. So where to turn for rock-solid backups? Rsync-based Dirvish[1] is making me very happy at the moment:

    Dirvish's basic operation is much like rdiff-backup's: the first backup rsyncs all non-excluded files and directories. (Obviously, this may be quite a chunk of time and bandwidth if we are talking about a remote server with a large file system.) Thereafter, each succeeding backup is an rsync of just the changes since the last backup. Adjacent backups appear as directories with the entire contents of the backup within, BUT files that are the same are hard-linked, i.e. only the delta takes up actual storage space. This means that pruning old backups simply consists of deleting the directories that contain them, with hard-linking automatically taking care of the house-keeping for the remaining adjacent backups. And it means that maintaining a complete backup of a remote server is reasonable, since after the first backup only incremental changes go over the network.
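    The hard-link behaviour described above can be demonstrated with plain coreutils; this throwaway sketch (paths in /tmp are arbitrary) makes two snapshot copies of an unchanged file and shows that all three names share a single inode, so the snapshots cost almost no extra disk:

```shell
# Two snapshots of an unchanged tree made with "cp -al" (archive + hard-link)
# share inodes with the source: identical files occupy disk space only once.
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/src
echo "hello" > /tmp/hl-demo/src/file
cp -al /tmp/hl-demo/src /tmp/hl-demo/backup.1   # snapshot 1, hard-linked
cp -al /tmp/hl-demo/src /tmp/hl-demo/backup.2   # snapshot 2, hard-linked
stat -c %h /tmp/hl-demo/src/file                # link count is now 3
```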

    What Dirvish adds on top of that rdiff-backup-style operation is a configuration and scheduling framework:

    On Debian-based systems the framework is already set up with a cronjob that will, by default, fire at 22:00 and prune/refresh all backups. Configuration involves creating an /etc/dirvish/master.conf that orchestrates the whole process, and creating a set of directories to house your Dirvish "banks" (groupings of backups) and "vaults" (each "vault" is a directory tree on a specific machine). Then, inside each "vault", add a dirvish/default.conf file that gives specific direction for that particular backup.

    For instance, /etc/dirvish/master.conf:

    bank:
            /home/backups/dirvish-officeServer
            /home/backups/dirvish-ibmProductionServer
    Runall:
            local-root      22:00
            ibmFull         22:00
            officeEtc       22:00
            officeWWW       22:00
            officeMySQL     22:00
    expire-default: +15 days
    expire-rule:
    #       MIN HR    DOM MON       DOW  STRFTIME_FMT
            *   *     *   *         1    +3 months
    #       *   *     1-7 *         1    +1 year
            *   *     1-7 1,4,7,10  1    +1 year
            *   10-20 *   *         *    +4 days
    #       *   *     *   *         2-7  +30 days

    In the above, for instance, officeEtc / officeWWW / officeMySQL are all "vaults" for which I have created subdirectories under /home/backups/dirvish-officeServer. This file defines what is backed up, and a set of defaults for all "vaults". For instance, the above expire rules will keep 15 days of daily backups, three months of weekly backups, and one year of quarterly backups for all vaults, unless local vault rules override this behavior.

    An example vault config file, /home/backups/dirvish-ibmProductionServer/ibmFull/dirvish/default.conf:

    client: prodServer
    tree: /
    xdev: 0
    index: gzip
    log: gzip
    image-default: %Y%m%d
    speed-limit: 100

    "prodServer" must appear in /etc/hosts or resolve via DNS. "tree: /" tells us we are backing up the whole server. "speed-limit: 100" uses the same syntax as rsync, and limits bandwidth to 100 kB/s.

    Normally after setting up all the configuration files, one tests the configuration by setting the "tree" parameter to a relatively small subdirectory in each vault in turn, and running:

    dirvish --vault <vault-name> --init

    for each vault. This causes that tree to be backed up immediately, so you can see and correct any errors. After all is working, reset "tree" to the desired value, and check a couple of days later to see that everything is running as expected.

    [1] http://www.dirvish.org/
    [2] http://www.dirvish.org/docs.html
    [3] http://wiki.edseek.com/howto:dirvish

    posted at: 03:31 | path: /Admin/backups/dirvish | permanent link to this entry

    Thu, 27 May 2010

    /Admin/backups/backuppc: Using rsync-over-SSH and BackupPC

    The goal here, of course, is to protect your login credentials, and the data transferred for backup, with encryption. The downside, however, is that you must give your backup server the right to SSH into the client being backed up without a password (configured thusly[1]). One must carefully consider the actual security of the backup server, and whether the degraded security of the backed-up client is acceptable.
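    One way to narrow that exposure (a sketch, not from the original post): use the options field in the client's authorized_keys to forbid interactive use of the backup server's key. The key material and the command string below are placeholders; the command must match the exact rsync server invocation your backup software issues, or backups will fail.

```
no-pty,no-agent-forwarding,no-port-forwarding,command="/usr/bin/rsync --server --sender ..." ssh-rsa AAAA... backuppc@backupserver
```

    With this in place the key can still drive rsync transfers, but cannot open a shell on the client.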

    Assuming passwordless authentication has been configured, test that everything is set up on both ends to do rsync-over-SSH by running this command on the backup server:

    rsync -avz -e ssh username@client-domain.com:/path/to/testdirectory testing/
    The contents of testdirectory on the client should be copied to testing/ on the server. Note that one of the advantages of rsync-over-SSH is that there *is* no other client-side configuration, other than making sure that SSH and rsync are working on that end, and installing the public key of the backup server to enable passwordless authentication.

    If that worked, go ahead and configure BackupPC. First create your /etc/backuppc/client.pl file (borrowed from [2]):

    $Conf{XferMethod} = 'rsync';
    $Conf{RsyncClientPath} = '/usr/bin/rsync';
    $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath $argList+';
    $Conf{RsyncShareName} = ['/etc', '/home', '/var/www'];
    Assuming your client has been added to /etc/hosts as "clienthost", one now just needs to add clienthost to /etc/backuppc/hosts thusly:
    clienthost 0 backuppc
    and restart backuppc. Now "clienthost" should show up in BackupPC's host list, and you can start the first backup.

    Should the client be using a non-standard SSH port, the easiest solution is to use an SSH alias. I have this working with the following:

    $ cat .ssh/config
    Host olmserver
    Hostname olmserver
    Port 123
    $ cat /etc/hosts | grep olmserver
    olmserver
    $ cat /etc/backuppc/hosts | grep olmserver
    olmserver 0 backuppc
    Note that there is no need to change the backuppc configuration for this to work, nor even to change the port of the client's SSH server. All of the port handling is done in the SSH configuration.

    [1] http://blog.langex.net/index.cgi/Admin/SSH-SSL/passwordless-ssh-authentication.html
    [2] http://www.howtoforge.com/linux_backuppc_p3

    posted at: 06:37 | path: /Admin/backups/backuppc | permanent link to this entry

    Fri, 16 Apr 2010

    /Admin/backups/rsync: Backups with rsync

    rsync[1] is a very efficient and secure means of copying / mirroring a set of files between two different servers.

    Here is an example of how to mirror the contents of a directory on a remote machine to the local machine:

    rsync -avz --delete -e ssh root@remote-server.com:/full/path/ /var/www/directoryname/

    With just a little bit of simple scripting[2] it is easy to add a system of hard-linked (storage-efficient) rotating daily incremental backups to your rsync backup strategy.

    For instance, say I wish to back up a directory /home/user/data on my local server or hosting account to another hosting account, in this example nearlyfreespeech.net[3]. This is the script I run on my local server:

    ssh user_site@ssh.phx.nearlyfreespeech.net "/home/protected/data/rotate.sh"

    rsync -avz --delete -e ssh /home/user/data user_site@ssh.phx.nearlyfreespeech.net:/home/protected/

    The first line executes a script on the REMOTE SERVER which first rotates the backup sets. Then the second line uses rsync to push the current state of the directory from the local server to the remote server. The "rotate.sh" script executed in the first line above is also in the directory being backed up, and contains the following:

    rm -rf /home/protected/data.5
    mv /home/protected/data.4 /home/protected/data.5
    mv /home/protected/data.3 /home/protected/data.4
    mv /home/protected/data.2 /home/protected/data.3
    mv /home/protected/data.1 /home/protected/data.2
    cp -al /home/protected/data /home/protected/data.1

    The key component is "cp -al", which makes a full archival copy of the current directory /home/protected/data into the first incremental backup using HARD LINKS. This makes the whole process very storage-efficient (and therefore cheap on nearlyfreespeech.net, which charges for storage by the megabyte), as only one copy of any given file's contents is kept in the backup set. All identical files are hard links to that copy.

    [1] http://rsync.samba.org/
    [2] http://www.mikerubel.org/computers/rsync_snapshots/
    [3] http://blog.langex.net/index.cgi/Hosting/NearlyFreeSpeech/

    posted at: 01:08 | path: /Admin/backups/rsync | permanent link to this entry

    Sat, 20 Mar 2010

    /Admin/backups/backuppc: Localhost Backup Broken on Ubuntu Desktop Backuppc

    This post[1] was helpful, but not quite enough, to get backuppc working on localhost. (Note that this works out of the box on any Debian install I have ever tried....)

    Below is my /etc/backuppc/localhost.pl to get backup of /etc on localhost working on a Karmic Ubuntu Desktop machine:

    $Conf{XferMethod} = 'tar';
    $Conf{TarShareName} = ['/etc'];
    # with some help from https://help.ubuntu.com/community/BackupPC
    $Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C sudo $tarPath -c -v -f - -C $shareName'
                        . ' --totals';
    $Conf{TarClientRestoreCmd} = '/usr/bin/env LC_ALL=C sudo $tarPath -x -p --numeric-owner --same-owner'
                               . ' -v -f - -C $shareName+';
    # remove extra shell escapes ($fileList+ etc.) that are
    # needed for remote backups but may break local ones
    $Conf{TarFullArgs} = '$fileList';
    $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

    Basically I took the existing TarClientCmd and TarClientRestoreCmd settings and prefixed them with "sudo". Then, after invoking "visudo", I added:

    backuppc ALL=NOPASSWD: ALL

    to the bottom of the sudoers file.

    [1] https://help.ubuntu.com/community/BackupPC

    posted at: 01:38 | path: /Admin/backups/backuppc | permanent link to this entry

    Mon, 08 Jun 2009

    /Admin/backups/backuppc: Backuppc Server

    I have chosen backuppc[1] as my backup server software. It is powerful and flexible, has a web-based GUI, and yes, it does take a little bit of study to get it working. And the documentation seems to be missing our favorite section: the "Quick Start". I will try to provide enough of a tutorial for a "Quick Start" here. Note that a more verbose tutorial exists at www.debianhelp.co.uk/backuppc.htm.

    First, when you install backuppc, make sure that you also install libfile-rsyncp-perl. On my Debian box this lib is only "suggested", so it does not get installed automatically. You will probably need to note the GUI login id (backuppc?) and the password generated during the install. Another item that may also be Debian-specific: it installs by default already set up to back up your localhost /etc directory.

    Once installed, if you are sitting at the machine on which backuppc has been installed, point your web browser at localhost/backuppc/, then enter the userid and password noted above at the prompt. Choose "localhost" from the host drop-down menu, then click "Start Full Backup". A couple of minutes later your /etc should be backed up. Then click the "Browse Backups" link on the upper left. That should give a general idea of usage.

    You can modify your backuppc password by running the following command: "htpasswd /etc/backuppc/htpasswd backuppc"

    To set up backup for another machine, you need to go to /etc/backuppc/. The main config file is config.pl, which I am trying really hard not to change so as to preserve default behavior through future upgrades. That may not work for you if you have a lot of machines to back up and want to do a lot of customization.

    To add another backup machine, first create a name.pl file, where "name" is the name of the machine in your /etc/hosts file. Sample content to use rsyncd to back up /etc and /home on the remote machine might be the following:

    $Conf{XferMethod} = 'rsyncd';
    $Conf{RsyncdUserName} = 'userid';
    $Conf{RsyncdPasswd} = 'password';
    $Conf{RsyncShareName} = ['etc', 'home'];
    Note that you might want to add something like
    $Conf{BackupFilesExclude} = ['/sys', '/proc', '/dev', '/cdrom', '/media', '/floppy', '/mnt', '/var/lib/backuppc', '/lost+found'];

    to the above config if you are backing up the root partition of an entire Linux system, for instance.

    Then edit /etc/backuppc/hosts to contain the following two lines:

    localhost 0 backuppc
    nameOfMachineToBeBackedUp 0 backuppc
    where "nameOfMachineToBeBackedUp" is the same as "name" from name.pl.

    [1] http://backuppc.sourceforge.net/

    posted at: 04:48 | path: /Admin/backups/backuppc | permanent link to this entry

    Sun, 05 Apr 2009

    /Admin/backups/misc: Semi-Automating My Monthly Backup

    Boring repetitive tasks should be scripted. Backups *really* should be automated. So here is a first step down that path for the tarball that I send to my hosted server every month:

    #!/bin/sh
    cd /path/to/script/directory
    echo "My monthly backup:"
    echo "First archive mail trash"
    ./archivemail.sh
    echo "Now build the tar file."
    FILENAME="Backup`date +%Y%m%d`.tar"
    PATHFILE="/scratch/"$FILENAME
    echo "Will backup to " $PATHFILE
    echo "Archive /home/userid..."
    tar -cf $PATHFILE /home/userid
    echo "Add /etc..."
    tar -rf $PATHFILE /etc
    /etc/init.d/apache2 stop
    /etc/init.d/mysql stop
    echo "add /var/www..."
    tar -rf $PATHFILE /var/www
    echo "add /var/lib/mysql/"
    tar -rf $PATHFILE /var/lib/mysql/
    /etc/init.d/apache2 start
    /etc/init.d/mysql start
    echo "Backup complete, list contents of archive"
    tar -tvf $PATHFILE

    and then I get an e-mail telling me it's all done, and there is a huge tarball waiting for me in /scratch. I run this script on the 1st of every month from cron. archivemail.sh uses archivemail[1] to clean out my Mail trash folder; I split it out into a separate script because I run it more often (once a week).
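    The cron side of this is one line. The script path and name below are placeholders for wherever you keep the script above:

```
# root crontab entry: run the monthly backup on the 1st at 02:00
0 2 1 * * /path/to/script/directory/monthly-backup.sh
```

    cron then mails the script's output (the echo lines and the final archive listing) to the crontab's owner, which is where the "e-mail telling me it's all done" comes from.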

    [1] http://blog.langex.net/index.cgi/SW/email/

    posted at: 02:26 | path: /Admin/backups/misc | permanent link to this entry

    Sat, 17 Jan 2009

    /Admin/backups/misc: The Problem with a Hardlink Backup Strategy

    Most of the Open Source backup software I am aware of that does incremental backups (including backuppc, which I use) re-creates the entire directory structure for each increment. Multiple identical copies of a file are then hard-linked together so that there is only one copy on disk, with obvious savings in disk usage.

    This post[1] points out that as the amount of stuff being backed up increases, an fsck on the partition in question can start taking a *very* long time, perhaps even running out of memory.

    I have not even noticed, because I did not place my backuppc archive on the root partition, and it is only the root partition that occasionally experiences an automatic fsck on boot. Generally speaking, for better or worse, I simply never fsck non-root partitions without good reason, and Linux gives me very little such "good reason". I am inclined to say this is more of a hypothetical than a real problem, and until I actually see evidence of breakage I am not going to worry about it.

    [1] http://feeds.feedburner.com/~r/ThoughtsByTed/~3/510209390/

    posted at: 08:19 | path: /Admin/backups/misc | permanent link to this entry

    Sun, 23 Nov 2008

    /Admin/backups/misc: What Works on a Slower Machine

    I have this thing about keeping older machines usable for as long as possible. In other words, I resist bloat-ware that just assumes any computer more than two years old should be placed in a dumpster. So I currently own about half a dozen laptops, and none of them is faster than a late-model Pentium III. And this works fine for me, as long as I make judicious choices about what software should run on what machine, and when.

    Getting backups done painlessly has caused just such a "judicious choice"....

    As it turns out, SpiderOak[1] has a lot going for it, but fast it is not. Unsurprisingly, SpiderOak is a Python app, and Python is also a language that "has a lot going for it, but fast it is not". It is not just that SpiderOak is slow: like its Python sibling Miro[4], it tends to bog down my whole system and reduce responsiveness. For the moment, I will resist the urge to add SpiderOak to my list[2] of open source resource hogs, as I have not yet experimented with running it "niced".

    This brings backuppc[3] back into favor for me. And I have found a partial fix for the fact that backuppc also bogs down the server it runs on: put this in root's crontab:

    1 * * * * /usr/bin/renice 15 -u backuppc > /dev/null 2>&1

    backuppc starts backups right on the hour. This cron job reduces the priority of all running backuppc processes one minute after every hour. Much better. And no operator intervention required, unless I am watching a really CPU-intensive video on that box and need to stop backuppc entirely.

    [1] http://blog.langex.net/index.cgi/Admin/backups/spideroak.html
    [2] http://blog.langex.net/index.cgi/Linux/memory-hogs.html
    [3] http://blog.langex.net/index.cgi/Admin/backups/backuppc/
    [4] http://www.getmiro.com/

    posted at: 09:51 | path: /Admin/backups/misc | permanent link to this entry

    Sun, 26 Oct 2008

    /Admin/backups/rdiff-backup: Using rdiff-backup for Secure Unattended Backups

    rdiff-backup[1] is basically a wrapper around the rsync algorithm and ssh that by default mirrors a specified directory between two machines, as well as keeping incremental backups of files that have been modified.

    Say I have a server and I just want to backup some directory (/etc) on that server to a directory on my desktop machine (/backup):

    rdiff-backup root@server.com::/etc /backup/etc

    Note that rdiff-backup must be installed on both machines, and that in this case, "root" is required on the server because some files in /etc will certainly only be readable by root.

    Backups that are not automated tend not to happen reliably, so we must run this periodically from cron. However, running the above command prompts for the root password on server.com, which will not work under cron.

    We need to set up password-less authentication from the desktop to server.com, using this guide: http://blog.langex.net/index.cgi/Admin/SSH-SSL/passwordless-ssh-authentication.html

    Now add rdiff to crontab by executing "crontab -e" in a terminal, and adding the following line:

    16 16 * * * rdiff-backup root@server.com::/etc /backup/etc && rdiff-backup --remove-older-than 6M /backup/etc

    which will execute the backup from your desktop every day at 4:16pm, and then delete backups older than six months.

    See this excellent summary for more information[2]. Honestly, if I had started using rdiff-backup rather than backuppc first, I might never have gotten around to trying backuppc.

    [1] http://rdiff-backup.nongnu.org/
    [2] http://debaday.debian.net/2008/10/26/rdiff-backup-easy-incremental-backups-from-the-command-line/

    posted at: 05:36 | path: /Admin/backups/rdiff-backup | permanent link to this entry

    Sat, 25 Oct 2008

    /Admin/backups/SpiderOak: Offsite backup killer app: https://spideroak.com/

    SpiderOak's good qualities as an offsite backup service are too numerous to list (please do have a look around their website to catch anything I might have missed).

    The security aspects of this service are what really set them apart. They have set it up so that no one at the company has any way of accessing user files, which are stored encrypted on their servers. You can read the technical details on their site, but the only one on the whole planet who has all the information necessary to decrypt your files is you. A corollary is that if you lose your password, there is no recourse: you lose your files on their server. They cannot "reset" your password.

    There is only one hole in the design of their security: they have not Open Sourced the backup client you run on your own computer, so we must trust them when they say that our passwords are never sent back to the server and that there are no other back doors. (Just like any other closed commercial application, for that matter.... But until an Open Source competitor appears, SpiderOak effectively has no competition.) They have been around for a while, and they have some quite significant endorsements on their website, so I am inclined to believe them and entrust them with my own personal files. This is the first service I have found, ever, for which I can say that.

    I have been running their Linux client for several days now, and it is both highly polished and very stable. And, for server administrators and cron users, the client can be run headless from the command line.

    posted at: 08:57 | path: /Admin/backups/SpiderOak | permanent link to this entry

    Sat, 04 Oct 2008

    /Admin/backups/misc: Backing up a MySQL Database[1]

    Simply making a copy of the files in /var/lib/mysql/ while the database is running is not guaranteed to work, as MySQL *might* complain about corruption and refuse to start with such "hot" copies. Of course, if you can afford to stop MySQL while you are taking a snapshot of /var/lib/mysql/, then it should work fine.... The simplest way to grab a copy of a running database is with 'mysqldump'. I use the following, run from cron a couple of times a week:

    mysqldump --user=**** --password=**** name-of-database | bzip2 > /var/www/name-of-database/db-backup/name-of-database-backup-`date +%Y-%m-%d`.sql.bz2

    backuppc, running on another machine, makes daily backups of the whole /var/www/ directory. If the security of the contents of the database is a concern, do not put the dump in /var/www/.

    To delete backup files that are older than 20 days on a Linux system, add this to your cron:

    find /var/www/name-of-database/db-backup/name-of-database-backup* -mtime +20 -exec rm {} \;
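    The -mtime +20 test matches files last modified more than 20 days ago. A throwaway demonstration (arbitrary /tmp paths and file names) stamps one file as 30 days old, leaves one current, and shows the prune removing only the old one:

```shell
# One file backdated 30 days, one current; "find -mtime +20" removes only
# the file modified more than 20 days ago.
rm -rf /tmp/prune
mkdir -p /tmp/prune
touch -d "30 days ago" /tmp/prune/db-backup-old.sql.bz2
touch /tmp/prune/db-backup-new.sql.bz2
find /tmp/prune -name '*.sql.bz2' -mtime +20 -exec rm {} \;
ls /tmp/prune    # only db-backup-new.sql.bz2 remains
```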

    [1] http://dev.mysql.com/doc/refman/4.1/en/backup.html

    posted at: 09:46 | path: /Admin/backups/misc | permanent link to this entry

    /Admin/backups/backuppc: Prepping an Offsite Backup

    Backuppc has a built-in method (called "archiving") for generating a set of files from the backup archive that are CD/DVD burn-ready. I do something different.

    In the Backuppc GUI, to extract a directory from the backup archive as a tar file, click "Browse backups", select a directory, then click "Restore selected files". On the next page select "Download tar archive". Do this for each directory you want to move offsite, naming the saved gtar files appropriately.

    Rename one of the files to "backup.gtar", then merge each of the other archives into backup.gtar with the command:

    tar -Af backup.gtar www.gtar

    If you then do a

    tar -tvf backup.gtar | less

    you will see that all of the directories from the original tar files are now in the same gtar file.
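    The merge step can be exercised end-to-end with throwaway files (the /tmp paths and file names are arbitrary): create two small archives, append one to the other with -A, and list the result:

```shell
# Build two small tar archives, then merge the second into the first with -A
# (concatenate); the merged archive lists members from both.
rm -rf /tmp/merge
mkdir -p /tmp/merge
cd /tmp/merge
echo home-data > home.txt
echo www-data  > www.txt
tar -cf backup.gtar home.txt
tar -cf www.gtar www.txt
tar -Af backup.gtar www.gtar   # append www.gtar's members to backup.gtar
tar -tf backup.gtar            # lists home.txt and www.txt
```

    Note that -A only works on uncompressed archives, which is why the merge happens before the gpg encryption step below.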

    Now encrypt the file:

    gpg -c backup.gtar

    (You will be prompted for a password.) To decrypt at a later date:

    gpg backup.gtar.gpg

    and then extract the contents of the resulting backup.gtar with

    tar -xvf backup.gtar

    posted at: 05:59 | path: /Admin/backups/backuppc | permanent link to this entry

    /Admin/backups/backuppc: rsyncd on the client to be backed up

    Note that unlike rsync over ssh, transfers using rsyncd are not encrypted, so rsyncd use is recommended only within a secure local network.

    On the machine to be backed up, install rsync and open the rsync port (873) in your firewall.

    Create a /etc/rsyncd.secrets file, readable only by root, with one "user:password" pair per line (placeholder credentials shown):

    yourUserid:yourPassword

    Edit /etc/default/rsync to enable the rsync daemon; on Debian that means something like:

    RSYNC_ENABLE=true
    RSYNC_NICE=10

    (A higher value of RSYNC_NICE reduces the priority of rsync activities if this machine is being used for other things, which is highly probable.)

    Create an /etc/rsyncd.conf file with the following content:

        pid file = /var/run/rsyncd.pid
        transfer logging = no
        timeout = 600
        refuse options = checksum dry-run
        dont compress = *.gz *.tgz *.zip *.z *.rpm *.deb *.iso *.bz2 *.tbz
        use chroot = yes
        lock file = /var/lock/rsyncd
        read only = yes
        list = yes

        [etc]
            comment = /etc directory
            path = /etc
            uid = root
            gid = root
            auth users = yourUserid
            secrets file = /etc/rsyncd.secrets
            strict modes = yes
            ignore errors = no
            ignore nonreadable = yes

        [home]
            comment = /home directory
            path = /home
            uid = root
            gid = root
            auth users = yourUserid
            secrets file = /etc/rsyncd.secrets
            strict modes = yes
            ignore errors = no
            ignore nonreadable = yes

    posted at: 05:53 | path: /Admin/backups/backuppc | permanent link to this entry

    Wed, 24 Sep 2008

    /Admin/backups/misc: Easy Linux Off-site Backups

    Probably the lowest-tech route is to use tar, gpg, and some free file storage service. For instance, at the root prompt (since we will be backing up some privileged files) let's gather all the files up into one tar archive, beginning with the /home directory:

    tar -cvf Backup20080901.tar /home
    Append the /etc directory:
    tar -rvf Backup20080901.tar /etc
    Now encrypt the result with gpg (you will be prompted for a password):
    gpg -c Backup20080901.tar
    Now upload the file to your favorite file storage service.

    Some storage options:

    1. Should you have access to an off-site server:

    scp Backup20080901.tar.gpg www.urltoserver.com:
    This may be a very big file and a very long transfer. If there is an interruption, don't start over again from scratch: we can use rsync to resume an interrupted scp transfer. Just replace "scp" in the last command with "rsync --partial --progress --rsh=ssh", i.e.:
    rsync --partial --progress --rsh=ssh Backup20080901.tar.gpg www.urltoserver.com:

    2. Exchange encrypted backups with a friend:

    Since both ends encrypt, trust is not even an issue. But how do you exchange potentially very large files?

    If both of you have access to a UNIX environment where you can unblock / forward ports, sendfile[1] sounds REALLY cool.

    If one of you has root on a UNIX server, F*EX[2] also looks like an option.

    [1] http://fex.rus.uni-stuttgart.de/saft/sendfile.html
    [2] http://fex.rus.uni-stuttgart.de/

    posted at: 00:47 | path: /Admin/backups/misc | permanent link to this entry