Backup & Recovery strategy



Using the snapshot function, you can create an image of the contents of the storage box. This way, you can restore content quickly when you need it.

Using this function, you can also set up a schedule for automatic snapshots.

Please note that when you restore an earlier snapshot, all the newer snapshots will be deleted.





For example, if you create a new snapshot every month and additionally back up your database and server configuration on the 4th and 6th day of every week, you will have a much better chance of recovering your data if the server is damaged.

Snapshots
A snapshot is a complete image of the storage box at a point in time. You can create snapshots in the Robot under "Storage Box". A snapshot requires no space when it is created; it grows as you change or delete files, while newly added files do not consume snapshot space. The space a snapshot uses is taken from the storage capacity of the storage box itself, so each snapshot you make counts against the box's quota.

You can reset the storage box to the state of a snapshot. This restores changed and deleted files and removes files added since the snapshot. It also deletes all snapshots that are newer than the snapshot you restore, while older snapshots are kept. Suppose you have snapshots A, B, C, D, and E, where A is the oldest and E is the newest. If you restore snapshot C, snapshots D and E are deleted, while snapshots A, B, and C remain.
Access to snapshots
If "Show Snapshot Directory" is activated in the Robot, you can access the snapshots through the directory /.zfs/snapshot on the storage box. In this directory, each snapshot has a subfolder that reflects the state of the storage box at the time of the snapshot. You can download individual files or entire directories as usual. You cannot write to the /.zfs directory or its subfolders.
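If you only need a few files back, you can copy them out of the read-only snapshot directory instead of resetting the whole storage box. A short sketch, assuming shell access to the box (the snapshot name and file path are hypothetical):

# List the available snapshots on the storage box
ls /.zfs/snapshot

# Copy a single file from a snapshot back into the live file system
# (snapshot name and paths are placeholders)
cp /.zfs/snapshot/snapshot-2021-05-29/sites/config.php /home/sites/config.php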
Automatic snapshots

You can use the Robot to create snapshots automatically based on a schedule of your choice. Snapshots can be created daily, weekly, or monthly, and you can choose a specific time (UTC time zone), day of the week, and day of the month. When the snapshot limit is reached and the next automatic snapshot is due, the Robot automatically deletes the oldest snapshot to make room for the new one. You can access automatically created snapshots just like normal snapshots.


Use SSHFS to Mount Remote File Systems Over SSH
Fortunately, there is a way to mount the storage box file system on your server so that you can make changes at any time and use the storage box as if it were local storage. In this article, we will show you how to do this and how to back up your system and database.


# Install the SSHFS client
apt-get install sshfs


# Create a mount point for the storage box
mkdir /media/backupbox


# Mount the storage box over SSH (storage boxes use port 23)
sshfs -o allow_other,default_permissions -p 23 u@u.your-storagebox.de:/home/ /media/backupbox/
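After mounting, it is worth verifying that the mount is live and writable before pointing any backup scripts at it (a quick sanity check, not part of the original setup):

# Show the mounted file system and its capacity
df -h /media/backupbox

# Confirm the mount is writable, then remove the test file
touch /media/backupbox/.write-test && rm /media/backupbox/.write-test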


Alternatively, you can use curlftpfs:


# Install the FTP-based FUSE file system
apt install curlftpfs

# Mount the storage box over FTP (replace user, pwd, and host with your own values)
curlftpfs ftp://user:pwd@host.your-backup.de/ /media/backupbox


# Crontab entry to remount the FTP file system at boot
@reboot /usr/bin/curlftpfs ftp://user:pwd@host.your-backup.de/ /media/ftp


Don't forget to remount the file system at boot, either with the @reboot crontab entry shown above or with an entry in /etc/fstab. If you mount via SSHFS from /etc/fstab, make sure the storage box is configured for login via SSH key authorization so that the mount works without an interactive password.
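A minimal /etc/fstab entry for the SSHFS variant could look like the line below (the username u12345 and the key path are placeholders, and it assumes your public key is already installed on the storage box):

u12345@u12345.your-storagebox.de:/home /media/backupbox fuse.sshfs allow_other,default_permissions,port=23,IdentityFile=/root/.ssh/id_rsa,_netdev 0 0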






Then change the path in your backup script to point to this folder:


BACKUP_DIR=/media/backupbox/database/
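For example, a daily dump script can write straight into this directory. The sketch below assumes a PostgreSQL database called vindazo_de (the name that appears in the error logs later in this article); swap in your own dump command as needed:

#!/bin/bash
# Hypothetical daily database dump into the mounted storage box
BACKUP_DIR=/media/backupbox/database/
STAMP=$(date +%F)

mkdir -p "${BACKUP_DIR}${STAMP}-daily"

# Dump and compress the database (database name is an assumption)
pg_dump vindazo_de | gzip > "${BACKUP_DIR}${STAMP}-daily/vindazo_de.sql.gz"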


On a web server, you can use rsync together with daily, weekly, and monthly directories in the file system.

For example:


# Install rsync
apt-get install rsync


# Create the directories for the rotation scheme
mkdir /media/backupbox/daily

mkdir /media/backupbox/weekly

mkdir /media/backupbox/monthly


# Create a directory for your backup scripts
mkdir /home/sites/scripts


An example command for the script file:

# Sync the sites into the daily backup directory
rsync -r --copy-links --update -D /home/sites /media/backupbox/daily
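Building on that command, a small rotation script can pick the target directory from the date. This is only a sketch; the choice of Sunday for the weekly run and the 1st for the monthly run is arbitrary, and it assumes the directories created above:

#!/bin/bash
# Rotating backup sketch: sync daily, plus weekly on Sundays
# and monthly on the first day of the month
SRC=/home/sites
DEST=/media/backupbox

rsync -r --copy-links --update -D "$SRC" "$DEST/daily"

# date +%u prints the day of the week (1 = Monday ... 7 = Sunday)
if [ "$(date +%u)" = "7" ]; then
    rsync -r --copy-links --update -D "$SRC" "$DEST/weekly"
fi

# date +%d prints the day of the month with a leading zero
if [ "$(date +%d)" = "01" ]; then
    rsync -r --copy-links --update -D "$SRC" "$DEST/monthly"
fi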



You may run into problems with this method, for example "operation not supported" errors, or a connection that is slow and does not work properly:


failed: Operation not supported (95)


Then you can try activating Samba support in the Robot and mounting your backup box via SAMBA/CIFS.

Access with SAMBA/CIFS
In my experience, this is the method that works best: rsync and copy operations run noticeably faster, and the connection is not lost during file transfers.


# Install the CIFS mount helpers
apt-get install cifs-utils

# Mount the storage box's backup share (replace <username>, <password>, and the target path)
mount.cifs -o user=<username>,pass=<password> //<username>.your-storagebox.de/backup /PATH/FOLDER
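To make the mount permanent and keep the password out of the command line (and shell history), you can use a credentials file referenced from /etc/fstab. A sketch; the file location is up to you:

# /etc/backup-credentials.txt (protect it with: chmod 600 /etc/backup-credentials.txt)
username=<username>
password=<password>

# /etc/fstab entry
//<username>.your-storagebox.de/backup /media/backupbox cifs credentials=/etc/backup-credentials.txt,_netdev 0 0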



On Debian Wheezy based systems, edit the parameters as follows if you are having problems:

rsize=65536,wsize=130048
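Applied to the mount command above, that looks like this (same placeholders as before):

mount.cifs -o user=<username>,pass=<password>,rsize=65536,wsize=130048 //<username>.your-storagebox.de/backup /PATH/FOLDER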



You should also add the following lines to /etc/rc.local:

modprobe cifs
echo 0 > /proc/fs/cifs/OplockEnabled


Troubleshooting


This mounting method has obvious limitations. It is better to use a full backup solution; see BorgBackup below, for example.



I get I/O errors on your-storagebox.de with files of 41+ GB, while small files I can copy without any problem.

gzip: stdout: Input/output error

[!!ERROR!!] Failed to produce plain backup database


cp -r /media/backup/database/2021-05-29-daily/ /media/backupbox/database/

cp: failed to close '/media/backupbox/database/2021-05-29-daily/vindazo_de.sql.gz': Input/output error




Retrying the transfer with rsync gives the same I/O error on close:

rsync --partial --stats --progress -A -a -r -v --no-perms --update -D /media/backup/database/2021-05-29-daily/ /media/backupbox/database/2021-05-29-daily/

sending incremental file list

./

vindazo_de.sql.gz

44,335,290,384 100% 162.04MB/s 0:04:20 (xfr#1, to-chk=0/3)

rsync: [receiver] close failed on "/media/backupbox/database/2021-05-29-daily/.vindazo_de.sql.gz.a4Mq6h": Input/output error (5)

rsync error: error in file IO (code 11) at receiver.c(868) [receiver=3.2.3]


A second attempt fails immediately, apparently because the mount point has gone away:

rsync --partial --stats --progress -A -a -r -v --no-perms --update -D /media/backup/database/2021-05-29-daily/ /media/backupbox/database/2021-05-29-daily/

rsync: [Receiver] getcwd(): No such file or directory (2)

rsync error: errors selecting input/output files, dirs (code 3) at util.c(1088) [Receiver=3.2.3]




Check the network connection:

# Trace the route and print a report after 1000 probes (replace host)
mtr --report -c 1000 host


Further details: as already stated, and in combination with your network trace and the fact that multiple systems/boxes show this issue, this is most likely not a problem with the box's network infrastructure itself.


You seem to compress the files before transfer. Depending on how you do this, and given that it only fails with larger files, a likely cause is that your client caches the files before the transfer, which fails if there is no more space available on the client.


You might want to use a backup solution that transfers data in chunks, such as Borg. In other words, create the SQL backup in the local file system first and then transfer the finished file to the backup solution.



BorgBackup

BorgBackup (abbreviated: Borg) is a deduplicating backup program that also supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure backup solution. Thanks to deduplication, Borg's backup process is very fast, which makes it very interesting for daily backups. You may notice that Borg is much faster than some other methods, depending on the amount of data you need to back up and the number of changes. With Borg, all data is encrypted on the client side, which makes it a good choice for hosted systems.

# Install BorgBackup
apt install borgbackup
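A minimal workflow could look like the sketch below. It assumes SSH key access to the storage box on port 23; the username u12345, the repository path ./backups, and the archive name are placeholders:

# Initialize an encrypted repository on the storage box (one-time step)
borg init --encryption=repokey ssh://u12345@u12345.your-storagebox.de:23/./backups

# Create a deduplicated, compressed archive of the sites and database dumps
borg create --stats --compression lz4 \
    ssh://u12345@u12345.your-storagebox.de:23/./backups::backup-{now} \
    /home/sites /media/backup/database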

For more information, see the Hetzner community docs:


https://community.hetzner.com/tutorials/install-and-configure-borgbackup
