Even if you consistently back up your filesystem, you should also back up your databases separately, using an appropriate tool to maintain their relational integrity and to provide for recovery independent of the filesystem.
MySQL is a pervasive, lightweight RDBMS used with popular applications such as WordPress and Bugzilla. The mysqldump utility, which comes with MySQL, provides a quick and simple way to create database backups.
While some situations may require more elaborate techniques, these scripts, built around mysqldump, should be fine for the most common uses. They run nightly via cron and keep a week’s worth of backups. Note: these are just examples. If you use them, you do so at your own risk. You’ll want to adjust them for your own situation.
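For reference, a crontab entry along these lines runs a backup script nightly; because the scripts name each dump file after the day of the week (date +%a), the files roll over on their own after seven days. The path and time here are hypothetical, so adjust them to your own setup:
# m h dom mon dow command
13 4 * * * $HOME/bin/mysql-backup.sh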
This first script (text) runs on my laptop. Because it accesses the local server with full permissions, and the databases are small, it locks and backs up everything at once.
#!/bin/bash
# @(#) $Id$
BCKDIR=$HOME/backups/mysql
BCKFILE="mysql-backup-`date +%a`.sql"
[ ! -d $BCKDIR ] && mkdir -p $BCKDIR
echo "`date` : starting backup"
mysqldump > $BCKDIR/$BCKFILE \
--opt \
--allow-keywords \
--comments \
--create-options \
--disable-keys \
--flush-logs \
--flush-privileges \
--lock-all-tables \
--all-databases
echo "`date` : completed"
ls -lh $BCKDIR/$BCKFILE
# end
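Restoring from one of these dumps is just a matter of feeding the file back to the mysql client. A minimal sketch, using Friday’s file from the script above (you’ll be prompted for the root password; since the script passes --flush-privileges, the dump itself reloads the grant tables as it’s replayed):
mysql --user=root --password < $HOME/backups/mysql/mysql-backup-Fri.sql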
This second script (text) runs on my blog host; the actual database is on another host. It differs from the previous script in that --flush-logs and --lock-all-tables can’t be used (I don’t have the RELOAD privilege), only a single database is backed up, and a password is required. I’ve also added the uptime command so that I can see how the server is performing when the backups run. Note: I don’t recommend embedding passwords in scripts, but it makes the example simpler.
#!/bin/bash
# @(#) $Id$
DBNAME=cutedbname
DBHOST=mysql.example.com
DBPASS=noway
BCKDIR=$HOME/backups/mysql
BCKFILE="$DBNAME-`date +%a`.sql"
uptime
[ ! -d $BCKDIR ] && mkdir -p $BCKDIR
echo "`date` : starting backup"
# I don't have RELOAD privs, so can't use
# --flush-logs
# --lock-all-tables
mysqldump > $BCKDIR/$BCKFILE \
--host=$DBHOST \
--opt \
--allow-keywords \
--comments \
--create-options \
--disable-keys \
--password=$DBPASS \
$DBNAME
echo "`date` : completed"
chmod u=rw,g=,o= $BCKDIR/$BCKFILE
ls -lh $BCKDIR/$BCKFILE
uptime
# end
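If you’d rather not embed the password at all, mysqldump (like the other MySQL client programs) can read it from an option file instead. Something like this in ~/.my.cnf, protected with the same restrictive permissions as the dump file, lets you drop --password=$DBPASS from the script (the host and password shown are just the placeholders from the example):
[mysqldump]
host     = mysql.example.com
password = noway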
In both cases, cron emails the log file to me. A typical message looks like this:
04:13:02 up 7 days, 19:24, 1 user, load average: 1.68, 2.60, 2.87
Fri Jul 4 04:13:02 CDT 2008 : starting backup
Fri Jul 4 04:13:02 CDT 2008 : completed
-rw------- 1 xuser xgroup 572K 2008-07-04 04:13 /home/xuser/backups/mysql/cutedbname-Fri.sql
04:13:02 up 7 days, 19:24, 1 user, load average: 1.68, 2.60, 2.87
I periodically rsync my entire remote site to local storage (a sketch of the command follows below); having the backups on the same remote host as your live data won’t do you any good if that host loses a disk, leaves you with stale or corrupted backups, or falls off the ’Net. Of course my own filesystem backups take care of keeping long-term, off-the-machine copies of my local stuff. I’ll discuss the filesystem backups in a future post.
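The rsync itself is nothing elaborate; something along these lines, assuming SSH access and with hypothetical user, host, and paths, pulls the whole remote home directory, dump files included, down to local storage:
rsync -az --delete xuser@example.com:/home/xuser/ $HOME/mirror/example.com/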
I hope you find these scripts useful, and welcome comments, critiques, or suggestions for alternative methods.
Update (7/7): added link to filesystem backups