So you have a cPanel/WHM web server, and you have it set to back up all of its accounts. That’s either costing you a lot of FTP bandwidth to send to a remote server, or you are being less than resilient by only backing up to a local disk. Perhaps you already have remote rsync-over-SSH backups, but haven’t really thought through the implications of running an SSH/rsync ‘push’ system. This howto is your solution. It is not just relevant to cPanel/WHM servers – you can adapt it to any kind of Linux backup you make.
In cPanel/WHM, there are two options – back up to a local disk, or back up via FTP. While FTP sounds like a great idea, as soon as you have a lot of customers and disk space in use, it’s a lot of data to transfer every night. You see, cPanel’s FTP backup option copies all of your backups, in their entirety, every night – highly inefficient and very bandwidth hungry. So if you have 150 gigabytes of customer data, that’s 150 gigabytes you must transfer via FTP every night, which soon eats into your bandwidth allowance and can even end up costing you a lot of money in ‘overages’. By contrast, backing up to a local drive seems like a much cheaper option – but what if your system gets hacked or defaced? What about a drive failure? It’s very probable you will lose all that data, and your customers will be hung out to dry in public for losing everything.
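To put a rough number on that, here’s a back-of-the-envelope sketch using the 150 GB figure above (the 30-day month is my own simplification):

```shell
# Approximate monthly transfer for nightly full FTP copies.
nightly_gb=150          # full backup size, from the example above
days=30                 # assume a 30-day month
monthly_gb=$((nightly_gb * days))
echo "$monthly_gb GB per month"   # prints "4500 GB per month" (~4.4 TB)
```

That’s terabytes of transfer a month just to keep copies of mostly unchanged data.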
So what can I do?
Well, you could buy ‘off-site remote backup’ space from someone. That would likely mean setting up cPanel/WHM (or your other hosting system) to back up to your local disk every night, then afterwards running a cron job that uses rsync (with SSH keys) to do what’s called an ‘incremental’ backup. Every night after your local disk backup, rsync connects to a remote server using an SSH key and sends your backups over there. But it’s a bit clever, so it doesn’t do what cPanel/WHM ‘FTP backup’ does – rsync only sends over the network the data that has changed since your last backup. So while you may have 150 gigs of backups, if only 500 megabytes of data changed since the last backup, you only use 500 megabytes of transfer to keep your remote backup current. This is a good solution, much better than transferring your entire server every night, but it has an inherent weakness: your cPanel/WHM server holds an SSH key that can access the remote backup space automatically every night. If your cPanel/WHM system is compromised, the attacker can log into your backup space using that same key and simply erase all of your backups – leaving you with a dead system, a dead local backup and a dead remote backup. Disaster scenario!
So what’s the solution?
SSH ‘pull’ is the safest bet. You set up your remote backup space (be it your Linux machine at home, or some backup space provider’s machine) to log into your server and ‘pull’ the data every night. This means that even if hackers gain entry to your hosting platform, they cannot delete your backups, as they have no way of getting to them. In this guide, I’m going to walk you through setting it up.
I’m going to assume you have cPanel/WHM backups running already (if you don’t, why the hell not?), that the backups go to /backup/cpbackup on your server(s), and that your server(s) have sshd running. First, make sure you have enough space on the machine the backups are going to (‘df -h’ works great). Second, set up a new backup user, then some SSH keys to give your backup system permission to fetch the backups from your cPanel/WHM server(s).
When the ssh-keygen process asks you for a passphrase, just hit enter. (Warning: using no passphrase is not very secure. Only do this if you are 100% sure your backup system is locked down – preferably with sshd disabled and no other local user able to access the backup account; if that’s the case, doing it this way is not really a huge worry. You can put a passphrase on the new SSH key, but you would then need to run an SSH agent if you want the backups automated – google ssh-agent for more info. Even then, the agent is still running, so if the backup user were compromised it is just as bad as having no key passphrase in the first place.)
$ sudo useradd -d /home/backup -m backup
$ sudo su - backup
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/backup/.ssh/id_rsa):
Created directory '/home/backup/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/backup/.ssh/id_rsa.
Your public key has been saved in /home/backup/.ssh/id_rsa.pub.
The key fingerprint is:
You now need to put the public key onto your server for the root user (or, if you prefer, a user with sudo rights – that’s more secure, though you will need to change your rsync commands to account for it). Append the key rather than scp’ing the file straight over /root/.ssh/authorized_keys, so you don’t clobber any keys that are already there:
$ cat .ssh/id_rsa.pub | ssh firstname.lastname@example.org 'cat >> /root/.ssh/authorized_keys'
Now once that’s done, you can test that the key is working by SSH’ing in. If you don’t get asked for a password, your SSH key is set up:
$ ssh email@example.com
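Before automating anything, it’s worth doing a dry run from the backup user – the -n flag makes rsync list what it would transfer without copying a byte. This is just the nightly command from the next section with -n added (server name and paths are the examples used throughout this guide):

```shell
rsync -avn -e ssh --exclude '*spamass*' firstname.lastname@example.org:/backup/cpbackup /home/backup/server1/
```

If the file list scrolls past without password prompts or errors, you’re ready to put it in cron.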
Configuring the backup
So now you have SSH key access from your backup machine to the cPanel/WHM server(s), it’s just a case of setting up a cron job to grab your data!
$ mkdir /home/backup/server1
$ crontab -e
In crontab, add the following entry (adjust the time the job runs to ensure that your cPanel/WHM server(s) have enough time to finish their backups – for example, I know my cPanel backups finish around 3:30 am, so I set my rsync to run at 4:30 am). You can adjust bwlimit to something you prefer. I set it to 5000 KB/sec (about 40 Mbit/s, well under half of my available bandwidth) to ensure my regular users aren’t inconvenienced by something chewing up all of the server’s bandwidth. I also don’t back up the SpamAssassin bloat. This should all be on one line:
30 4 * * * rsync -av --bwlimit=5000 --progress -e ssh --exclude '*spamass*' firstname.lastname@example.org:/backup/cpbackup /home/backup/server1/ > /home/backup/server1.results.txt 2>&1
That should be all you need. Check back the following day and look in the /home/backup/server1.results.txt file; it should look something like this:
backup@host:~$ tail server1.results.txt
up 8 100% 0.04kB/s 0:00:00 (xfer#2755, to-check=32/437710)
3156258 100% 4.47MB/s 0:00:00 (xfer#2756, to-check=24/437710)
0 100% 0.00kB/s 0:00:00 (xfer#2757, to-check=20/437710)
0 100% 0.00kB/s 0:00:00 (xfer#2758, to-check=19/437710)
sent 3351898 bytes received 329706615 bytes 476137.97 bytes/sec
total size is 34722766009 speedup is 104.25
If it doesn’t look like that, any errors will be in there (that’s what the 2>&1 in the cron job does – it sends STDERR to the log file as well). Once you can see what the errors are, you can fix them. If it does look like that, congratulations – your SSH pull backups are now working!
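If you’d rather not eyeball the log every morning, a tiny check script can do it for you. This is a sketch, assuming the log path from the cron job above; ‘rsync error’ is the prefix rsync puts on its fatal error messages:

```shell
#!/bin/sh
# Check the most recent pull log for trouble.
# Default path matches the cron job above; pass another path as $1 if yours differs.
LOG="${1:-/home/backup/server1.results.txt}"

if [ ! -f "$LOG" ]; then
    echo "no log found - did the cron job run?"
elif grep -q 'rsync error' "$LOG"; then
    echo "backup FAILED - see $LOG"
else
    echo "backup looks OK"
fi
```

You could mail yourself the output from cron, or wire it into whatever monitoring you already have.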
I hope you found this useful.