
Updating my backups

So I recently mentioned that I stopped myself from registering all the domains, but I did acquire one.

Today I'll be moving my last few backups over to using it.

In terms of backups I'm pretty well covered; I have dumps of databases, and I have filesystem dumps too. The only niggles are the "random" directories I want to back up from my home desktop.

My home desktop is too large to back up completely, but I do want to archive some of it:

  • ~/Images/ - My archive of photos.
  • ~/Books/ - My calibre library.
  • ~/.bigv/ - My BigV details.
  • etc.

In short, I have a random assortment of directories I want to back up, with pretty much no rhyme or reason to it.

I've scripted the backup for now, using SSH keys + rsync, but it feels like a mess.
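As a rough illustration of what such an ad-hoc "SSH keys + rsync" script tends to look like, here is a minimal sketch; the destination host and the directory list are invented examples, not the actual setup:

```shell
#!/bin/sh
# Minimal sketch of an ad-hoc rsync backup script.
# The destination host and source directories below are illustrative.

do_backup() {
    # $1 = rsync destination (a local directory, or user@host:path);
    # the remaining arguments are source directories.
    dest=$1
    shift
    for dir in "$@"; do
        if [ -d "$dir" ]; then
            # -a preserves metadata, -z compresses in transit,
            # --delete keeps the destination an exact mirror.
            rsync -az --delete "$dir" "$dest/"
        fi
    done
}

# Example invocation (hostname invented):
# do_backup backup@offsite.example.com:backups "$HOME/Images" "$HOME/Books" "$HOME/.bigv"
```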

PS. Proving my inability to concentrate, I couldn't even keep up with a daily blog for more than two days. Oops.

Comments On This Entry

  1. [gravitar] me

    "But it feels like a mess."

    Try rsnapshot. It's soooo easy to use and works like a charm. I've set it up to do hourly backups, which usually don't take more than 15 seconds.
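For reference, the hourly/daily runs are just cron driving rsnapshot's retain levels; the schedule looks roughly like this (times and paths are illustrative, not taken from the comment):

```
# /etc/cron.d/rsnapshot - illustrative schedule
0 *  * * *    root    /usr/bin/rsnapshot hourly
30 3 * * *    root    /usr/bin/rsnapshot daily
0 4  * * 1    root    /usr/bin/rsnapshot weekly
```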

  2. [author] Steve Kemp

    I've configured this to run for myself now, writing to ~/.rsnapshot/, but already I'm annoyed.

    As you say I'm wasting space due to having a second local copy and incremental updates. Agreed this isn't large, but it is annoying.

    Additionally this is really sucky:

    shelob ~ $ rsnapshot -c ~/rsnapshot.conf hourly
    ERROR: snapshot_root $HOME/.rsnapshot/ - snapshot_root must be a full path
    ERROR: lockfile ~/.trash.d/ - lockfile must be a full path
    ERROR: backup ~/Books localhost/ - Source directory "~/Books" doesn't exist

    Basically it doesn't like per-user config, and I cannot use either ~ or $HOME.
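One possible workaround, sketched below: since rsnapshot insists on absolute paths (and TAB-separated fields), generate the per-user config at run time with $HOME already expanded. The retain level and backup lines are examples only, and older rsnapshot versions spell "retain" as "interval":

```shell
#!/bin/sh
# Generate a per-user rsnapshot config with absolute paths baked in.
# rsnapshot requires TAB-separated fields, hence the printf '\t's.

CONF="${CONF:-$HOME/.rsnapshot.conf}"

{
    printf 'config_version\t1.2\n'
    printf 'cmd_rsync\t/usr/bin/rsync\n'
    printf 'snapshot_root\t%s/.rsnapshot/\n' "$HOME"
    printf 'retain\thourly\t6\n'
    printf 'backup\t%s/Books/\tlocalhost/\n' "$HOME"
    printf 'backup\t%s/Images/\tlocalhost/\n' "$HOME"
} > "$CONF"

# then run: rsnapshot -c "$CONF" hourly
```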

    At this point I'd be tempted to just write:

    tar -czvf ~/.backup/$(date +%A).tar.gz ~/Text ~/.bigv ~/Books

    Sure, I'd waste space by not being incremental, but at least that is portable, simple, and understandable.
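The same idea can be wrapped in a tiny function: one gzipped archive per weekday name, giving a rolling week of backups with no rotation logic at all. The archive location and directory names below are just examples:

```shell
#!/bin/sh
# One archive per weekday: Monday's run overwrites last Monday's file,
# so you always keep the last seven days.

weekday_backup() {
    # $1 = directory to hold the archives; the remaining arguments are
    # paths relative to $HOME (so the archives don't embed absolute paths).
    out=$1
    shift
    mkdir -p "$out"
    tar -czf "$out/$(date +%A).tar.gz" -C "$HOME" "$@"
}

# Example (directories from the post):
# weekday_backup "$HOME/.backup" Text .bigv Books
```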

  3. [gravitar] Iñigo

    At home I used to use rsnapshot-like backups (rsync+ssh and rotation of hard links; really just a few lines of shell).

    But finally I changed to using rrsync. It's included with rsync, but not in the path on some distros (/usr/share/doc/rsync/...). Basically, you no longer have to edit the force_command all the time, and the remote root is chrooted to the defined local path.

    Yes, with rrsync you change the backup triggering to pull. The backup machine no longer needs root access to the systems; it just holds a few authorized keys with force_command="rrsync /backups/remotehost".

    You still need the local rotation with cp and hard links, but that is 10 lines of bash to rotate all hosts.
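A sketch of what those few lines of rotation might look like, assuming the usual rsnapshot-style daily.0 .. daily.N layout (the directory names and retention count are illustrative): shuffle the snapshots up by one, then hard-link-copy the newest so the next rsync run only rewrites files that actually changed.

```shell
#!/bin/sh
# Hard-link snapshot rotation, roughly as described in the comment.

rotate_snapshots() {
    root=$1
    keep=$2
    rm -rf "$root/daily.$keep"          # drop the oldest snapshot
    i=$keep
    while [ "$i" -gt 0 ]; do
        prev=$((i - 1))
        if [ -d "$root/daily.$prev" ]; then
            mv "$root/daily.$prev" "$root/daily.$i"
        fi
        i=$prev
    done
    if [ -d "$root/daily.1" ]; then
        # cp -al (GNU cp) creates hard links, not copies, so this is
        # cheap in both space and time.
        cp -al "$root/daily.1" "$root/daily.0"
    fi
}

# Afterwards, refresh the newest snapshot:
# rsync -a --delete source/ "$root/daily.0/"
```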

    I create/delete a /backups/hostname/.backup_did_run file from the remote and local crons (the rotation one). That way I can get alerts if a backup is failing.
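The marker-file idea can be sketched as follows (the path and message are illustrative): the remote cron touches the marker after a successful run, and the local rotation cron consumes it and complains if it was never recreated.

```shell
#!/bin/sh
# Check-and-consume a ".backup_did_run" marker file.

check_backup_ran() {
    marker="$1/.backup_did_run"
    if [ -e "$marker" ]; then
        rm -f "$marker"        # consume it; the next good run recreates it
        return 0
    fi
    echo "WARNING: no backup marker in $1 - did the backup run?" >&2
    return 1
}

# e.g. in the rotation cron (mail address invented):
# check_backup_ran /backups/remotehost || mail -s "backup failed" you@example.com < /dev/null
```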

    A text file giving an overview of all the remote crons is recommended. And NTP :)

    rrsync works for me (better than the model I used previously).

  4. [gravitar] me

    I've got a 2TB drive mounted as /home, another 2TB drive mounted as /backup. This is my rsnapshot.conf (relevant parts):
    snapshot_root /backup/

    exclude .thumbnails
    exclude Recycled
    exclude Trash
    exclude cache
    exclude Cache

    backup /home/ localhost/
    backup /etc/ localhost/
    backup /srv/ localhost/

    # du -sh /home
      547G /home
    # rsnapshot du
      541G /backup/hourly.0/
      57M /backup/hourly.1/
      55M /backup/hourly.2/
      50M /backup/hourly.3/
      46M /backup/hourly.4/
      61M /backup/hourly.5/
      105M /backup/daily.0/
      412M /backup/daily.1/
      763M /backup/daily.2/
      68M /backup/daily.3/
      99M /backup/daily.4/
      1.5G /backup/daily.5/
      1.2G /backup/daily.6/
      1.2G /backup/weekly.0/
      21G /backup/weekly.1/
      1.3G /backup/weekly.2/
      4.7G /backup/weekly.3/
      573G total

    If you ignore the 21G jump at weekly.1 (I restructured some directories), the incremental updates take virtually no extra space. In my case, yes, that's all local space I'm using, but have a look at the default rsnapshot config file; there are examples of how to back up from and to another machine via ssh.
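For the remote case, the relevant config lines look roughly like this (fields must be TAB-separated; the host name is invented):

```
cmd_ssh    /usr/bin/ssh
backup     root@remotehost.example.com:/etc/     remotehost/
backup     root@remotehost.example.com:/home/    remotehost/
```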

    And btw: /home is dying right now; I already bought a replacement drive, and I'm sooooooo glad I don't have to worry about losing data this time.

  5. [author] Steve Kemp

    I have two drives as a RAID array, which I realize is not a backup. Everything I care about on this system is either stored under revision control or mirrored elsewhere.

    For example my Images are stored at ~/Images and they are duplicated on a second system in the flat.

    i.e. I rsync "shelob:~/Images -> precious:~/Images", ditto for ~/Books, etc.

    That gives me a local copy that won't be lost if this host explodes, but doesn't give me a remote copy. That's what the remote backups I'm talking about here are for. I always have one master copy of images/books, those that live on this system. Then I have two backup copies - the one in the flat, and the one elsewhere.

    I'm rambling. I should look at bup, etc, etc. I will do so. For the moment I'm just running:

    shelob ~/Books $ make localbackup remotebackup 

    etc. (Local == within the LAN, remote == offsite.)
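A hypothetical sketch of what those make targets might contain; "precious" is the LAN mirror mentioned above, while the offsite host and directory list are invented placeholders:

```make
# Hypothetical Makefile targets; adjust DIRS and hosts to taste.
DIRS = $(HOME)/Images $(HOME)/Books

localbackup:
	rsync -az --delete $(DIRS) precious:~/

remotebackup:
	rsync -az --delete $(DIRS) backup@offsite.example.com:backups/
```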

  6. [gravitar] Paulo Almeida

    You could use dar instead of tar to do incremental backups, but it is a bit more complex. It has some nice features, though, like selective compression (based on file extension and size), honoring dump flags, and optional slicing.