Yet Another Backup Script

Ya, BS. I keep changing stuff. I chalk it up to not knowing what I'm doing, combined with 4 A.M. thoughts while lying in bed. But this one was so damn critical…I actually did it before getting caffeinated this morning.

And I thought I didn’t know WTF I was doing…

So you can go read a previous post about the backup script, or I can sum it up right here (with a rough sketch of the old script after the list). You choose; I'll just type:

  • make full backup of my sql database into www root
  • tar nginx-conf file to www root
  • tar-gzip www root directory
  • upload tar.gz to home server
  • profit
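
For reference, here's roughly what the old version looked like, reconstructed from that list; the port, host, and paths below are placeholders, not the actual old script:

#!/bin/bash

# old approach: dump, tar everything, then ship the whole tarball every single time
mysqldump -u root --all-databases > /var/www/full.sql
tar -czf /var/www/etc-nginx.tar.gz /etc/nginx
tar -czf /tmp/www.tar.gz /var/www
scp -P <port> /tmp/www.tar.gz user@server:/media/to/backup
rm /tmp/www.tar.gz /var/www/etc-nginx.tar.gz /var/www/full.sql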

It occurred to me that steps three and four were a problem. My WWW directory will get big; in fact it's already quadrupled in size. There are a lot of media files and a lot of junk I put up there. Doing a complete transfer every single time, most of it redundant, is not the way to do it. So why don't I do my hacky backups the right way:


#!/bin/bash

# dump every database and tar up the nginx config so they ride along with the web root
mysqldump -u root --all-databases > /var/www/full.sql
tar -czf /var/www/etc-nginx.tar.gz /etc/nginx

# rsync only sends files that are new or changed, unlike the old full tarball
rsync -arvz -e 'ssh -p <port>' /var/www user@server:/media/to/backup

# clean up the dump and archive once they've been synced
rm /var/www/etc-nginx.tar.gz
rm /var/www/full.sql

Rather than tarring /var/www, it is better in every way to rsync it over SSH. That avoids copying every file every single time…it only copies files that have been added or changed, which is exactly what we need. It also lets me save some bandwidth by copying large files into the backup directory ahead of time, where they should already be; in other words, I can pre-sync large files.
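
As a concrete example of the pre-sync trick (the file name and the uploads path here are made up): when a big video is headed for the site, I can drop a copy straight into the backup tree on the home server first. If the size and modification time match, rsync's quick check skips it on the next run; even if the timestamps don't quite line up, rsync falls back to its delta transfer instead of resending the whole file.

# on the home server: seed the big file into the backup tree ahead of time
# (cp -p keeps the modification time so rsync's quick check can match it)
cp -p ~/videos/big-demo.mp4 /media/to/backup/www/uploads/big-demo.mp4

# later, the usual run on the web server sees the file is already there
rsync -arvz -e 'ssh -p <port>' /var/www user@server:/media/to/backup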

I already ran the initial rsync while debugging the command, so even the initial sync is done. This modified script should net me the same result…except it will continually update the remote backup rather than redundantly transferring the whole thing. If my WWW directory hit something like 10 gigs, tarring and uploading it every time would blow through my transfer allowance in a month.
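
If you want to see what a run would actually transfer before committing to it (same placeholder port and paths as the script above), rsync's dry-run flag is handy; on a tree that's already synced it should list next to nothing:

# -n (--dry-run) reports what would be sent without sending anything
rsync -arvzn -e 'ssh -p <port>' /var/www user@server:/media/to/backup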
