So you have a cPanel/WHM web server and you have it set to back up all of its accounts. That's either costing you a lot of FTP bandwidth to send backups to a remote server, or you are being less than resilient by only backing up to a local disk. Perhaps you already have remote rsync-over-SSH backups, but haven't really thought through the implications of running an SSH/rsync 'push' system. This howto is your solution. It is not just relevant to cPanel/WHM servers; you can adapt it to any kind of Linux backup you make.

In cPanel/WHM there are two backup options – back up to a local disk, or back up via FTP. While FTP is a great idea, as soon as you have a lot of customers and disk space in use, it becomes a lot of data to transfer every night. cPanel's FTP backup option copies all of your backups, in their entirety, every night – highly inefficient and very bandwidth hungry. So if you have 150 gigabytes of customer data, that's 150 gigabytes you must transfer via FTP every night, which soon eats into your bandwidth allowance and can even end up costing you a lot of money in 'overages'. By contrast, backing up to a local drive seems like a much cheaper option – but what if your system gets hacked or defaced? What about a drive failure? It's very likely you will lose all that data, and your customers will be hung out to dry in public for losing everything.

So what can I do?
Well, you could buy 'off-site remote backup' space from someone. That would typically mean setting up cPanel/WHM (or your other hosting system) to back up to your local disk every night, then afterwards running a cron job that uses rsync (with SSH keys) to do what's called an 'incremental' backup. Every night, after your local disk backup finishes, rsync connects to a remote server using an SSH key and sends your backups over there. But it's a bit clever, so it doesn't do what cPanel/WHM 'FTP backup' does – rsync only sends over the network the data that has changed since your last backup. So while you may have 150 gigs of backups, if only 500 megabytes of data changed since the last backup, you only use 500 megabytes of transfer to keep your remote backup current. This is a good solution, much better than transferring your entire server every night, but it has an inherent weakness: your cPanel/WHM server holds an SSH key that can log into the remote backup space automatically every night. If your cPanel/WHM system is compromised, the attacker can use that same key to log into your backup space and simply erase all of your backups – leaving you with a dead system, a dead local backup and a dead remote backup. Disaster scenario!
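
Just to illustrate the 'push' approach described above (not what we will be setting up here), a crontab entry on the hosting server might look something like this, with the key path and remote hostname as placeholders:

30 4 * * * rsync -a -e "ssh -i /root/.ssh/backup_key" /backup/cpbackup backupuser@remote.backup.example:/backups/server1/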

So what's the solution?
SSH 'pull' is the safest bet. You set up your remote backup space (be it your local Linux machine at home, or a backup space provider's machine) to log into your server and 'pull' the data every night. This means that even if hackers gain entry to your hosting platform, they cannot delete your backups, as they have no way of getting to them. In this guide, I'm going to walk you through setting it up.

I'm going to assume you have cPanel/WHM backups running already (if you don't, why the hell not?), that the backups go to a directory called /backup on your server(s) and that your server(s) have sshd running. The first thing to do is ensure you have enough space on the machine the backups are going to ('df -h' works great). The second thing to do is set up a new backup user, then some SSH keys to give your backup system permission to fetch the backups from your cPanel/WHM server(s).

When the ssh-keygen process asks you for a passphrase, just hit enter. (Warning: using no passphrase is not very secure. Only do this if you are 100% sure your backup system is locked down, preferably with sshd disabled and with no other local user able to access the backup user – if that's the case then doing it this way is not really a huge worry. You can put a passphrase on the new SSH key, but you would then need to run an SSH agent if you want the backups automated – google ssh-agent for more info. Even then, the agent will still be running, so if the backup user were compromised it would be just as bad as having no key passphrase in the first place.)
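
If you do decide to set a passphrase anyway, the general idea with ssh-agent is to load the key once, interactively, and let the agent supply it from then on. Roughly like this, shown only for completeness (the setup below assumes no passphrase):

$ eval "$(ssh-agent -s)"
$ ssh-add /home/backup/.ssh/id_rsa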


$ sudo useradd -d /home/backup -m backup
$ sudo su - backup
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/backup/.ssh/id_rsa):
Created directory '/home/backup/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/backup/.ssh/id_rsa.
Your public key has been saved in /home/backup/.ssh/id_rsa.pub.
The key fingerprint is:
05:8c:df:24:18:a9:9e:22:87:08:49:5b:11:7c:2f:f1 backup@host

You now need to put the public key onto your server for the root user (or if you want, a user with a sudo role – it's more secure, though you will need to change your rsync commands to take account of that).


$ scp .ssh/id_rsa.pub root@your.cpanel.server.com:/root/.ssh/authorized_keys
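
One thing to watch: that scp command will overwrite any authorized_keys file already on the server. If root already has keys in place, appending is safer; something along these lines should do it:

$ cat .ssh/id_rsa.pub | ssh root@your.cpanel.server.com 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'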

Once that's done you can test that the key is working by SSH'ing in. If you don't get asked for a password, your SSH key is set up:


$ ssh root@your.cpanel.server.com
root@your.cpanel.server.com:$

Configuring the backup
Now that you have SSH key access from your backup machine to the cPanel/WHM server(s), it's just a case of setting up a cron job to grab your data!


$ mkdir /home/backup/server1
$ crontab -e

In crontab, add the following entry (adjust the time the job runs to ensure that your cPanel/WHM server(s) have enough time to finish their backups; for example, I know my cPanel backups finish around 3:30 am, so I set my rsync to run at 4:30 am). You can adjust --bwlimit to something you prefer. I set it to 5000 KB/sec (roughly 40 Mbps, well under half of my available bandwidth) to ensure my regular users aren't inconvenienced by something chewing up all of the server's bandwidth. I also don't back up the SpamAssassin bloat. This should all be on one line:


30 4 * * * rsync -av --bwlimit=5000 --progress -e ssh --exclude '*spamass*' root@your.cpanel.server.com:/backup/cpbackup /home/backup/server1/ > /home/backup/server1.results.txt 2>&1

Finishing up
That should be all you need. Check back the following day and look in the /home/backup/server1.results.txt file; it should look something like this:


backup@host:~$ tail server1.results.txt
up 8 100% 0.04kB/s 0:00:00 (xfer#2755, to-check=32/437710)
cpbackup/daily/user/mysql/horde.sql
3156258 100% 4.47MB/s 0:00:00 (xfer#2756, to-check=24/437710)
cpbackup/daily/user/resellerconfig/resellers
0 100% 0.00kB/s 0:00:00 (xfer#2757, to-check=20/437710)
cpbackup/daily/user/resellerconfig/resellers-nameservers
0 100% 0.00kB/s 0:00:00 (xfer#2758, to-check=19/437710)
sent 3351898 bytes received 329706615 bytes 476137.97 bytes/sec
total size is 34722766009 speedup is 104.25

If it doesn't look like that, any errors will be in the file too (that's what the 2>&1 does in the cron job – it sends STDERR to the log file as well). Once you can see what the errors are, you can fix them. If it does look like that, congratulations – your SSH pull backups are now working!
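
If you want a quick way to spot problems without reading the whole log, something like this on the backup machine will usually surface them (the pattern list is just a starting point):

$ grep -iE 'rsync error|failed|permission denied' /home/backup/server1.results.txt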

I hope you found this useful.

Comments

  5. Ivan

    Great tutorial, thanks so much. It's very easy to follow and I like how it focuses on security, but I just want to discuss the sentence below a bit:

    'You now need to put the public key onto your server for the root user (or if you want, a user with a sudo role – it's more secure…'

    Wouldn't you say that, for this particular purpose, having a user with a sudo role wouldn't really be more secure? Since what we are trying to prevent is someone doing damage to the rest of the system if they were to gain access to the backup machine. Correct me if I'm wrong – I'm still a novice – but wouldn't it be better to create a new user and give it access only to the WHM backups folder, and only that? So if someone were to connect from our backup machine and was able to log on to our remote server, they wouldn't be able to do much?

    Happy to know your thoughts about this. Thanks again.

    May 3, 2013
    • admin

      Hi Ivan,
      A user with a sudo role would be more secure – you can limit the commands a sudo user can run. Most often, sudo is used to elevate the user's privilege to UID 0 and run a command, but you can also make it so the user can only elevate to UID 0 for specific commands. So my user could have sudo access to the rsync command, but if they tried to use sudo to run some other command (say, echoing a line into /etc/shadow) it won't work and will be logged.

      Sure, giving a sudo user UID 0 for rsync is still pretty dangerous in itself, but by using a sudo role you avoid handing out a direct root login and you only allow that sudo user to run rsync as root. It's all down to personal preference and the level of security desired.
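
      As a rough sketch (the username and rsync path are just examples), the sudoers entry could look something like the below, and the rsync command on the backup box would then log in as that user and add --rsync-path='sudo rsync' so the remote end runs rsync via sudo. On older systems you may also need to relax requiretty in sudoers for this to work from a non-interactive session.

      # on the cPanel/WHM server, edited via visudo (hypothetical user name)
      backupssh ALL=(root) NOPASSWD: /usr/bin/rsync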

      Giving access to the backups folder to a non-UID-0 user is possible, but it requires a lot of chmod'ing (because backups set up this way are written with their UID/GID and permissions intact). Sure, you could chown/chmod everything, but that makes things a lot harder when restoring a backup.

      May 4, 2013
      • Ivan

        Thanks so much for your response. That makes a lot of sense. I didn’t think of that and your explanation will help a lot in making it more secure.

        Thanks again!

        May 5, 2013
  6. What if, once the rsync is complete, I delete the files on the old server?

    April 24, 2013
  8. Yaniv

    Hi,
    Great article.
    Can you please send the script ?

    April 8, 2012
    • Daniel

      “Great article.
      Can you please send the script ?”

      Really? If you read the article, and I assume you have (I know they are the mother of all…), then you'd not need a script; he/she has told you exactly what to do. (Except create the backup user on the backup machine, or maybe my coffee hadn't kicked in.)

      Thanks this saved me a lot of time :).

      May 25, 2012
  9. Thanks a lot for the tips. You got me past all these scary ideas that I can't back up to another server unless I do full daily backups. Thanks again.

    January 8, 2012
  10. Dragos

    Great Article!
    Thank you very much!

    October 27, 2011
  11. V

    Do you guys have any recommendations for a good backup space provider? Sure, I can google it, but I would love to hear from someone with personal experience.

    January 9, 2011
  12. Alex

    I'm assuming this requires WHM to be set up for incremental backups instead of tar/tar.gz backups to really take advantage of rsync?

    May 7, 2010
    • Alex

      Never mind, I guess the compressed backups are created using gzip's --rsyncable option!

      May 7, 2010
      • admin

        I run them incremental for ease of access, but yep, the --rsyncable option on gzip should sort you out :)

        May 8, 2010
  13. johnsee

    Just thinking about it a little more myself, I assume putting the whole lot in a script and calling that script could take care of recording the start and finish time perhaps?

    April 22, 2010
    • admin

      Thanks for the positive feedback; it's always good to hear I'm helping others out. I'm not sure why you get nothing in your log file – if you copied the command as noted it should work flawlessly, but I could be wrong. The 2>&1 just redirects STDERR; it's usually handy to have because, if something goes wrong with the system, you get the error output in your log too (instead of it going to cron's 'console').

      A hack to mail yourself the log could be as simple as adding this to the end of the cron job (note semicolon before mail):

      ;mail -s "Backup Report" you@you.com < /path/to/logfile.txt

      If you wanted start time, stop time and bytes transferred, a simple bash wrapper should be able to achieve this: write the start time to a variable, run the rsync command and pipe its output to the log file, write the stop time to a variable and then 'tail -1 /path/to/logfile' into a variable. Throw those variables into a /tmp file and then have the script call the mail command as above, piping the /tmp file in as input. The script then cleans up the /tmp file and the job's done.
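
      A rough sketch of that wrapper (the paths and the address are placeholders) might be:

      #!/bin/bash
      # hypothetical wrapper around the rsync pull: record start/stop times, mail a summary
      LOG=/home/backup/server1.results.txt
      START=$(date)
      rsync -a --bwlimit=5000 -e ssh --exclude '*spamass*' root@your.cpanel.server.com:/backup/cpbackup /home/backup/server1/ > "$LOG" 2>&1
      STOP=$(date)
      SUMMARY=$(tail -1 "$LOG")   # the "total size is ... speedup is ..." line
      TMP=$(mktemp)               # temporary report file in /tmp
      echo "Start: $START" > "$TMP"
      echo "Stop:  $STOP" >> "$TMP"
      echo "$SUMMARY" >> "$TMP"
      mail -s "Backup Report" you@you.com < "$TMP"
      rm -f "$TMP"                # clean up the temp file

      The cron entry would then just call this script instead of the raw rsync command.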

      Though there is probably a much easier way of doing it; just like all good things in Linux, you can do it a million ways and still be 'right' as long as it works well for you.

      April 22, 2010
  14. johnsee

    Firstly, a fantastic tutorial. I can’t believe I’m the first to comment.

    I’ve got everything running perfectly and am now wanting to take it to the next level.

    I've removed '2>&1' in order to receive an email when the sync is complete, but all I'm getting through at the moment is 'stdin: is not a tty' in the email body. The sync is completing properly, however, and all the appropriate data is being written to the text file.

    What I’d like is the email to say something along the lines of:
    Time taken: x
    Data transferred: y

    and that’s about it. Any tips?

    April 22, 2010
