
Snapshot-Style Incremental Backups With Rsync And SSH

Author: Stephan Jau <tutorials [at] roleplayer [dot] org>

Based upon the works of: Falko Timme <ft [at] falkotimme [dot] com> & Mike Rubel <webmaster [at] www [dot] mikerubel [dot] org>

Introduction

As neither humans nor computers are perfect (humans err, computers may fail), it is quite obvious that a good backup system will prevent too much damage when a computer goes down - whether because the hard drive is failing, because of hackers, or because you accidentally deleted something important.

In this tutorial I will show you how to automate backups in an incremental, snapshot-style way using rsync.

1. Setting Up rsync Over SSH

First of all you need a running rsync server and client that can connect to each other without being prompted for a password. It is even better to run the connection through SSH (you might be transferring sensitive data). For this, Falko Timme has already written an excellent howto; you can find it here: Mirror Your Web Site With rsync. Since that howto is already excellent, there is no point in writing another one about the subject. Follow it up to Step 6 (6 Test rsync On mirror.example.com) and test whether your setup works.

As I will use two different methods of making these incremental snapshot-style backups, one method requires that the production server can access the backup server without being prompted for a password, and the other requires it the other way around.

Note: In my case I back up my data on a friend's server and he backs up his data on mine, so I needed to set up both directions anyway.
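By the way, if all you need is the passwordless login itself, it boils down to an SSH key pair without a passphrase. A minimal sketch (the host name backup.example.com and the root user are only examples; use whatever fits your setup):

# On the production server: generate a key pair without a passphrase
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""

# Copy the public key to the backup server (this appends it to authorized_keys there)
ssh-copy-id -i /root/.ssh/id_rsa.pub root@backup.example.com

# Test: this should log you in without asking for a password
ssh -i /root/.ssh/id_rsa root@backup.example.com "echo it works"

For the rotating setup in Step 3 you need the same thing in the opposite direction (key on the backup server, public key on the production server).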

2. Non-Rotating Backups

In this setup I will show you how to keep making backups without ever rotating them - hence nothing is ever deleted. For this setup it is mandatory that the production server can access the backup server without being prompted for a password.

Once you have ensured that your production server can connect to your backup server without being asked for a password, all you need is a small shell script and a cron job to actually accomplish the backup.

backup.sh (backup shell script)

#!/bin/bash
unset PATH

# USER VARIABLES
BACKUPDIR=/backup                          # Folder on the backup server
KEY=/root/.ssh/id_rsa
MYSQLUSER=root
MYSQLPWD=**********************
MYSQLHOST=localhost
MYSQLBACKUPDIR=/mysql_backup
BACKUP_USER=root@backup.example.com        # user@host of the backup server
EXCLUDES=/backup/backup_exclude            # File containing excludes

# PATH VARIABLES
CP=/bin/cp;
MK=/bin/mkdir;
SSH=/usr/bin/ssh;
DATE=/bin/date;
RM=/bin/rm;
GREP=/bin/grep;
MYSQL=/usr/bin/mysql;
MYSQLDUMP=/usr/bin/mysqldump;
RSYNC=/usr/bin/rsync;
TOUCH=/bin/touch;

## ## ## -- DO NOT EDIT BELOW HERE -- ## ## ##

# CREATE CURRENT DATE / TIME
NOW=`$DATE '+%Y-%m-%d_%H:%M'`
MKDIR=$BACKUPDIR/$NOW/

# CREATE MYSQL BACKUP
# Remove existing backup dir
$RM -Rf $MYSQLBACKUPDIR
# Create new backup dir
$MK $MYSQLBACKUPDIR
# Dump new files
for i in $(echo 'SHOW DATABASES;' | $MYSQL -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST | $GREP -v '^Database$'); do
  $MYSQLDUMP \
  -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST \
  -Q -c -C --add-drop-table --add-locks --quick --lock-tables \
  $i > $MYSQLBACKUPDIR/$i.sql;
done;

# CREATE NEW BACKUPDIR
$SSH -i $KEY $BACKUP_USER "$MK $MKDIR"

# RUN RSYNC INTO CURRENT
$RSYNC \
  -avz --delete --delete-excluded \
  --exclude-from="$EXCLUDES" \
  -e "$SSH -i $KEY" \
  / $BACKUP_USER:$BACKUPDIR/current ;

# UPDATE THE MTIME TO REFLECT THE SNAPSHOT TIME
$SSH -i $KEY $BACKUP_USER "$TOUCH $BACKUPDIR/current"

# MAKE HARDLINK COPY
$SSH -i $KEY $BACKUP_USER "$CP -al $BACKUPDIR/current/* $MKDIR"



Explanations:

#!/bin/bash
unset PATH

# USER VARIABLES
BACKUPDIR=/backup                          # Folder on the backup server
KEY=/root/.ssh/id_rsa
MYSQLUSER=root
MYSQLPWD=**********************
MYSQLHOST=localhost
MYSQLBACKUPDIR=/mysql_backup
BACKUP_USER=root@backup.example.com        # user@host of the backup server
EXCLUDES=/backup/backup_exclude            # File containing excludes

# PATH VARIABLES
CP=/bin/cp;
MK=/bin/mkdir;
SSH=/usr/bin/ssh;
DATE=/bin/date;
RM=/bin/rm;
GREP=/bin/grep;
MYSQL=/usr/bin/mysql;
MYSQLDUMP=/usr/bin/mysqldump;
RSYNC=/usr/bin/rsync;
TOUCH=/bin/touch;

Just set the variables above accordingly. Not much explanation is needed here, I think.

# CREATE CURRENT DATE / TIME
NOW=`$DATE '+%Y-%m-%d_%H:%M'`
MKDIR=$BACKUPDIR/$NOW/

[...]

# CREATE NEW BACKUPDIR
$SSH -i $KEY $BACKUP_USER "$MK $MKDIR"

This will create a folder for the current backup named YYYY-MM-DD_HH:MM on the backup server. If you want to, you can alter the format; I just think this one is easy to read.
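For example, the format string used above produces names like the ones below (the exact values obviously depend on when the script runs); if you dislike colons in folder names you can simply drop them from the format:

# Default format used by the script
$DATE '+%Y-%m-%d_%H:%M'      # e.g. 2008-03-15_06:00

# Alternative without the colon
$DATE '+%Y-%m-%d_%H%M'       # e.g. 2008-03-15_0600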

# CREATE MYSQL BACKUP
# Remove existing backup dir
$RM -Rf $MYSQLBACKUPDIR
# Create new backup dir
$MK $MYSQLBACKUPDIR
# Dump new files
for i in $(echo 'SHOW DATABASES;' | $MYSQL -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST | $GREP -v '^Database$'); do
  $MYSQLDUMP \
  -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST \
  -Q -c -C --add-drop-table --add-locks --quick --lock-tables \
  $i > $MYSQLBACKUPDIR/$i.sql;
done;

This will first remove all files in your previous mysql backup dir. Then it will re-create the dir (I chose to do it this way so one does not have to worry about whether the folder already exists). Then it loops (as root) through all the databases and creates a separate .sql file for each database. You may want to adjust the parameters of the database dumps, or you may just want to use mysqldump --all-databases, which is probably quicker than the loop. However, I prefer having a single .sql file per database.
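If you would rather have one big dump than one file per database, the loop could be replaced by something like this (just a sketch; the variables are the ones defined at the top of the script, and the file name all-databases.sql is only an example):

# Dump all databases into a single file instead of looping
$MYSQLDUMP \
  -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST \
  -Q -c -C --add-drop-table --add-locks --quick --lock-tables \
  --all-databases > $MYSQLBACKUPDIR/all-databases.sql;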

# RUN RSYNC INTO CURRENT
$RSYNC \
  -avz --delete --delete-excluded \
  --exclude-from="$EXCLUDES" \
  -e "$SSH -i $KEY" \
  / $BACKUP_USER:$BACKUPDIR/current ;

# UPDATE THE MTIME TO REFLECT THE SNAPSHOT TIME
$SSH -i $KEY $BACKUP_USER "$TOUCH $BACKUPDIR/current"

# MAKE HARDLINK COPY
$SSH -i $KEY $BACKUP_USER "$CP -al $BACKUPDIR/current/* $MKDIR"

This now makes an incremental sync of the files from your production server to the backup server. Everything is stored in the "current" folder; afterwards a hard-link copy of it is made into the previously created "timestamp" folder.
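The hard links are what keep the snapshots small: a file that did not change between runs exists only once on disk, no matter in how many snapshot folders it shows up. You can verify this on the backup server, for example (the snapshot folder name is just an example):

# With ls -li the first column is the inode and the third the link count;
# identical inodes mean both entries point to the same data on disk
ls -li /backup/current/etc/hosts /backup/2008-03-15_06:00/etc/hosts

# du counts hard-linked files only once per run, so every snapshot after
# the first one appears much smaller than a full copy would be
du -sh /backup/*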

--exclude-from="$EXCLUDES"

EXCLUDES=/backup/backup_exclude

This acts as the exclusion list for the backup. Here is the current content of my exclude file:

/backup/
/bin/
/boot/
/dev/
/lib/
/lost+found/
/mnt/
/opt/
/proc/
/sbin/
/sys/
/tmp/
/usr/
/var/log/
/var/spool/
/var/lib/php4/
/var/lib/mysql/



The last thing needed now is a cron job that runs the backup. You can use something like this:

cron.txt (cron control file)

# Make Backups
0 0,6,12,18 * * *          sh /backup/backup.sh

The above would make a backup every 6 hours.

 

3. Rotating Backups

In this setup I will show you how to make rotating backups, so that old ones are eventually deleted. For this setup it is mandatory that the backup server can access the production server without being prompted for a password.

Once you have ensured that your backup server can connect to your production server without being asked for a password, all you need are a couple of small shell scripts and cron jobs to actually accomplish the backups.

In detail, this howto will keep 4 backups per day, 7 backups per week (1 per day) and 4 backups per month (1 per week).
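Put together, the scripts below maintain a directory tree on the backup server that looks roughly like this (hourly.0 is always the most recent snapshot):

/backup/
    hourly.0  hourly.1  hourly.2  hourly.3      # the last 24 hours, one snapshot every 6 hours
    daily.0   daily.1   ...       daily.6       # the last 7 days, one snapshot per day
    weekly.0  weekly.1  weekly.2  weekly.3      # the last 4 weeks, one snapshot per week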

my_backup.sh (mysql backup shell script)

This file needs to be on the remote (production) server that you want to back up from! All other files in this section need to be on the backup server!

I could probably integrate this shell script into the hourly backup script that follows below, but I have not yet worked out how. So instead of including the mysql backup in the hourly backup script, I keep it as a separate shell script and simply call it from the hourly backup script. In the end it is the same thing.

#!/bin/bash
unset PATH

# USER VARIABLES
MYSQLUSER=root
MYSQLPWD=**********************
MYSQLHOST=localhost
MYSQLBACKUPDIR=/mysql_backup

# PATH VARIABLES
MK=/bin/mkdir;
RM=/bin/rm;
GREP=/bin/grep;
MYSQL=/usr/bin/mysql;
MYSQLDUMP=/usr/bin/mysqldump;

# CREATE MYSQL BACKUP
# Remove existing backup dir
$RM -Rf $MYSQLBACKUPDIR
# Create new backup dir
$MK $MYSQLBACKUPDIR
# Dump new files
for i in $(echo 'SHOW DATABASES;' | $MYSQL -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST | $GREP -v '^Database$'); do
  $MYSQLDUMP \
  -u$MYSQLUSER -p$MYSQLPWD -h$MYSQLHOST \
  -Q -c -C --add-drop-table --add-locks --quick --lock-tables \
  $i > $MYSQLBACKUPDIR/$i.sql;
done;

As you can see this is the same procedure as before.

backup_hourly.sh (backup shell script)

This script is mostly based on Mike's handy rotating-filesystem-snapshot utility script.

#!/bin/bash
unset PATH

# USER VARIABLES
BACKUPDIR=/backup                                # Folder where the backups shall be saved to
KEY=/root/.ssh/id_rsa
MYSQL_BACKUPSCRIPT=/backup/my_backup.sh
PRODUCTION_USER=root@production.example.com      # user@host of the production server
EXCLUDES=/backup/backup_exclude                  # File containing excludes

# PATH VARIABLES
CP=/bin/cp;
MK=/bin/mkdir;
SSH=/usr/bin/ssh;
DATE=/bin/date;
RM=/bin/rm;
RSYNC=/usr/bin/rsync;
TOUCH=/bin/touch;
SH=/bin/sh;                                      # Path on the remote server
MV=/bin/mv;

## ## ## -- DO NOT EDIT BELOW HERE -- ## ## ##

# CREATE MYSQL BACKUP
# Run remote mysql backup script
$SSH -i $KEY $PRODUCTION_USER "$SH $MYSQL_BACKUPSCRIPT"

# Rotating the hourly snapshots
# step 1: delete the oldest snapshot, if it exists:
if [ -d $BACKUPDIR/hourly.3 ] ; then \
  $RM -Rf $BACKUPDIR/hourly.3 ; \
fi;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/hourly.2 ] ; then \
  $MV $BACKUPDIR/hourly.2 $BACKUPDIR/hourly.3 ; \
fi;
# step 3: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/hourly.1 ] ; then \
  $MV $BACKUPDIR/hourly.1 $BACKUPDIR/hourly.2 ; \
fi;
# step 4: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d $BACKUPDIR/hourly.0 ] ; then \
  $CP -al $BACKUPDIR/hourly.0 $BACKUPDIR/hourly.1 ; \
fi;
# step 5: rsync from the system
$RSYNC \
  -avz --delete --delete-excluded \
  --exclude-from="$EXCLUDES" \
  -e "$SSH -i $KEY" \
  $PRODUCTION_USER:/ $BACKUPDIR/hourly.0 ;
# step 6: update the mtime of hourly.0 to reflect the snapshot time
$TOUCH $BACKUPDIR/hourly.0 ;



Well, this script does pretty much the same as the first one, except that it rotates folders. However, it is intended for a 6-hour backup cycle (24 divided by 4 = 6), and we want to keep a few more backups than that. So the next thing is a script that rotates the backups on a daily level.

backup_daily.sh (daily rotation shell script)

#!/bin/bash
unset PATH

# USER VARIABLES
BACKUPDIR=/backup          # Folder where the backups shall be saved to

# PATH VARIABLES
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
TOUCH=/bin/touch;

# Rotating the daily snapshots
# step 1: delete the oldest snapshot, if it exists:
if [ -d $BACKUPDIR/daily.6 ] ; then \
  $RM -Rf $BACKUPDIR/daily.6 ; \
fi;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.5 ] ; then \
  $MV $BACKUPDIR/daily.5 $BACKUPDIR/daily.6 ; \
fi;
# step 3: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.4 ] ; then \
  $MV $BACKUPDIR/daily.4 $BACKUPDIR/daily.5 ; \
fi;
# step 4: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.3 ] ; then \
  $MV $BACKUPDIR/daily.3 $BACKUPDIR/daily.4 ; \
fi;
# step 5: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.2 ] ; then \
  $MV $BACKUPDIR/daily.2 $BACKUPDIR/daily.3 ; \
fi;
# step 6: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.1 ] ; then \
  $MV $BACKUPDIR/daily.1 $BACKUPDIR/daily.2 ; \
fi;
# step 7: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/daily.0 ] ; then \
  $MV $BACKUPDIR/daily.0 $BACKUPDIR/daily.1 ; \
fi;
# step 8: make a hard-link-only (except for dirs) copy of the latest hourly snapshot,
# if that exists
if [ -d $BACKUPDIR/hourly.3 ] ; then \
  $CP -al $BACKUPDIR/hourly.3 $BACKUPDIR/daily.0 ; \
fi;
# step 9: update the mtime of daily.0 to reflect the snapshot time
$TOUCH $BACKUPDIR/daily.0 ;

So, now we have a script that does 4 backups a day and 7 backups a week. Now the last one will make 4 backups a month.

backup_weekly.sh (weekly rotation shell script)

#!/bin/bash
unset PATH

# USER VARIABLES
BACKUPDIR=/backup          # Folder where the backups shall be saved to

# PATH VARIABLES
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
TOUCH=/bin/touch;

# Rotating the weekly snapshots
# step 1: delete the oldest snapshot, if it exists:
if [ -d $BACKUPDIR/weekly.3 ] ; then \
  $RM -Rf $BACKUPDIR/weekly.3 ; \
fi;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/weekly.2 ] ; then \
  $MV $BACKUPDIR/weekly.2 $BACKUPDIR/weekly.3 ; \
fi;
# step 3: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/weekly.1 ] ; then \
  $MV $BACKUPDIR/weekly.1 $BACKUPDIR/weekly.2 ; \
fi;
# step 4: shift the middle snapshot(s) back by one, if they exist
if [ -d $BACKUPDIR/weekly.0 ] ; then \
  $MV $BACKUPDIR/weekly.0 $BACKUPDIR/weekly.1 ; \
fi;
# step 5: make a hard-link-only (except for dirs) copy of the latest daily snapshot,
# if that exists
if [ -d $BACKUPDIR/daily.6 ] ; then \
  $CP -al $BACKUPDIR/daily.6 $BACKUPDIR/weekly.0 ; \
fi;
# step 6: update the mtime of weekly.0 to reflect the snapshot time
$TOUCH $BACKUPDIR/weekly.0 ;



So much for the scripts.

cron.txt (crontab control file)

The last thing missing now is a cron job. I use this one:

# Make Backups (replace the mail address with your own)
 0 0          * * Sun      sh /backup/backup_weekly.sh | /usr/bin/mail -s "Weekly Cron" admin@example.com
15 0          * * *        sh /backup/backup_daily.sh  | /usr/bin/mail -s "Daily Cron"  admin@example.com
45 0,6,12,18  * * *        sh /backup/backup_hourly.sh | /usr/bin/mail -s "Hourly Cron" admin@example.com

Well, I run the weekly cron at midnight on Sundays and the daily cron every day at a quarter past midnight. The hourly cron runs at 45 minutes past midnight, 6am, noon and 6pm. I chose these offsets to give each job enough time to complete its transfers if there are new additions. Of course you can set this up however you like.

Don't forget to create the exclusion file (see the explanations in Step 2).
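If you have not created it yet, a quick way to do so on the machine that runs the rsync command (the contents are the same as in Step 2; adjust to taste):

cat > /backup/backup_exclude <<'EOF'
/backup/
/bin/
/boot/
/dev/
/lib/
/lost+found/
/mnt/
/opt/
/proc/
/sbin/
/sys/
/tmp/
/usr/
/var/log/
/var/spool/
/var/lib/php4/
/var/lib/mysql/
EOF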

You can add this cron simply by issuing the following command:

crontab cron.txt

Just make sure first that you have no other cron jobs already installed; if you do, add them to the cron control file as well. To list the cron jobs for the current user:

crontab -l
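If that shows existing entries, one simple way to merge them is to append them to the new control file before installing it:

# Append the existing crontab (if any) to the new control file, then install it
crontab -l >> cron.txt
crontab cron.txt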

Well, now enjoy the backups.
