
Backup & Recovery strategy

The snapshot function lets you create an image of the contents of your storage box, so you can restore the content quickly when you need it.

You can also define a set of schedules for automatic snapshots.

Please note that when you restore an earlier snapshot, all the newer snapshots will be deleted.

For example, if you create a new snapshot every month and additionally back up your database and server configuration on fixed days every week, you have a much better chance of recovering your data if the server is damaged.


A snapshot is a complete image of the storage box. You can create snapshots in the Robot under "Storage Box". A snapshot requires no space at the moment it is created; it grows as you change or delete files, while newly added files do not take up space in the snapshot. The space a snapshot uses comes out of the storage capacity of the storage box, so each snapshot you keep consumes part of the box's quota.

You can reset the storage box to the state of a snapshot. This restores changed and deleted files and removes files added since the snapshot. It also deletes all snapshots that are newer than the one you restored; older snapshots are kept. Suppose you have snapshots A, B, C, D, and E, where A is the oldest and E is the newest. If you restore snapshot C, snapshots D and E are deleted, while A, B, and C remain.

Access to snapshots

If "Show Snapshot Directory" is activated in the Robot, you can access snapshots through the directory /.zfs/snapshot on the storage box. In this directory, each snapshot has a subfolder that reflects the state of the storage box at the time of the snapshot. You can download individual files or entire directories as usual. You cannot write to the /.zfs directory or its subfolders.

Automatic snapshots

You can use Robot to create snapshots automatically on a schedule of your choice: daily, weekly, or monthly, at a specific time (UTC), day of the week, or day of the month. When the snapshot limit is reached and the next automatic snapshot is due, Robot automatically deletes the oldest snapshot to make room for the new one. You can access automatically created snapshots just like normal snapshots.

Use SSHFS to Mount Remote File Systems Over SSH

Fortunately, there is a way to mount the storage box's file system on your server so that you can make changes at any time and use the storage box as if it were local storage. In this article, we will show you how to do this and back up your system and database.

apt-get install sshfs

mkdir /mnt/backupbox

sshfs -o allow_other,default_permissions -p 23 <username>@<host>: /mnt/backupbox/
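If you prefer the mount to come back automatically after a reboot without a cron job, an /etc/fstab entry can be used instead. This is a sketch, assuming the fuse.sshfs helper is available; <username> and <host> are placeholders:

```
<username>@<host>: /mnt/backupbox fuse.sshfs allow_other,default_permissions,port=23,_netdev 0 0
```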

Alternatively, you can use curlftpfs:

apt install curlftpfs

curlftpfs ftp://<username>:<password>@<host> /mnt/backupbox

@reboot /usr/bin/curlftpfs ftp://<username>:<password>@<host> /media/ftp

Don’t forget to add an @reboot entry to your crontab so the file system is mounted again after a reboot.



Make sure the storage box is configured for login via SSH key authorization before you mount it from a script.

Then change the path in your script to point to your folder.


On a web server, you can use rsync with day-of-week based directories in the file system.

For example:

apt-get install rsync

mkdir /mnt/backupbox/daily

mkdir /mnt/backupbox/weekly

mkdir /mnt/backupbox/monthly

mkdir /home/sites/scripts

An example for a script file:

rsync  -r --copy-links --update -D /home/sites /mnt/backupbox/daily 
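The day-of-week idea can be sketched as a small script. This is a minimal sketch, not a complete solution: the pick_target helper and the monthly-on-the-1st / weekly-on-Sunday rules are assumptions you can adapt.

```shell
#!/bin/sh
# Pick a backup target directory based on the date:
# monthly on the 1st of the month, weekly on Sundays, daily otherwise.
pick_target() {
    dom=$1   # day of month, 01-31
    dow=$2   # day of week, 1-7 (Monday=1)
    if [ "$dom" = "01" ]; then
        echo monthly
    elif [ "$dow" = "7" ]; then
        echo weekly
    else
        echo daily
    fi
}

TARGET="/mnt/backupbox/$(pick_target "$(date +%d)" "$(date +%u)")"

# Only sync if the target directory exists (i.e. the box is mounted).
if [ -d "$TARGET" ]; then
    rsync -r --copy-links --update -D /home/sites "$TARGET"
fi
```

Run it from cron once a day; the same script then fills the daily, weekly and monthly directories created above.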

You may run into problems with this method, for example "Operation not supported" errors, or a connection that is slow and does not work properly:

failed: Operation not supported (95) 

Then you can try activating Samba support and mounting your backup box via SAMBA/CIFS.

Access with SAMBA/CIFS

In my experience, this is the method that works best: rsync and copy run much faster, and the connection is not lost during file transfer.

apt-get install cifs-utils

mount.cifs -o user=<username>,pass=<password> //<host>/<share> /PATH/FOLDER
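To mount automatically at boot, you can use an /etc/fstab entry with a credentials file instead of putting the password on the command line. A sketch, with <host> and <share> as placeholders:

```
# /etc/fstab
//<host>/<share> /mnt/backupbox cifs credentials=/etc/backupbox.cred,_netdev 0 0

# /etc/backupbox.cred (restrict with chmod 600)
username=<username>
password=<password>
```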


On Debian Wheezy based systems, you may need to adjust the mount parameters if you are having problems.



You should also add the following lines to /etc/rc.local:
modprobe cifs
echo 0 > /proc/fs/cifs/OplockEnabled


Troubleshooting

This mounting method has obvious limitations. It is better to use a full backup solution, such as BorgBackup.

I get I/O errors with files of 41+ GB; small files I can copy without any problem.

gzip: stdout: Input/output error

[!!ERROR!!] Failed to produce plain backup database  

cp -r /media/backup/database/2021-05-29-daily/  /mnt/backupbox/database/

cp: failed to close '/mnt/backupbox/database/2021-05-29-daily/vindazo_de.sql.gz': Input/output error

rsync --partial --stats --progress -A -a -r -v --no-perms  --update -D /media/backup/database/2021-05-29-daily/ /mnt/backupbox/database/2021-05-29-daily/

sending incremental file list



 44,335,290,384 100%  162.04MB/s    0:04:20 (xfr#1, to-chk=0/3)

rsync: [receiver] close failed on "/mnt/backupbox/database/2021-05-29-daily/.vindazo_de.sql.gz.a4Mq6h": Input/output error (5)

rsync error: error in file IO (code 11) at receiver.c(868) [receiver=3.2.3]

rsync --partial --stats --progress -A -a -r -v --no-perms  --update -D /media/backup/database/2021-05-29-daily/ /mnt/backupbox/database/2021-05-29-daily/

rsync: [Receiver] getcwd(): No such file or directory (2)

rsync error: errors selecting input/output files, dirs (code 3) at util.c(1088) [Receiver=3.2.3]


Check the network connection

mtr --report -c 1000 host
Further details: as already stated, and given your network trace and the fact that you're experiencing this issue with multiple systems/boxes, this is most likely not an issue with the boxes' network infrastructure itself.

You seem to compress the files before transfer. Depending on how you do this, and given that it fails with larger files, a likely cause is that your client caches the files before the transfer, which fails when no more space is available on the client.

You might want to use a backup solution with chunked data transfer, such as Borg, in this case. In other words, write the SQL backup to the local file system first, and then transfer the finished file to the backup solution.
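A minimal sketch of that approach, assuming a PostgreSQL database; the database name (mydb) and the dump_path helper are placeholders of mine, and the dated directory layout mirrors the paths shown in the logs above:

```shell
#!/bin/sh
# Build the dated dump path, then dump and compress locally; the backup
# tool transfers the finished file afterwards.
dump_path() {
    # $1 = base dir, $2 = database name, $3 = date (YYYY-MM-DD)
    echo "$1/$3-daily/$2.sql.gz"
}

OUT=$(dump_path /media/backup/database mydb "$(date +%F)")
mkdir -p "$(dirname "$OUT")" 2>/dev/null || true
# pg_dump mydb | gzip > "$OUT"   # requires a running PostgreSQL server
```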


BorgBackup (abbreviated Borg) is a deduplicating backup program that also supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure backup solution. Thanks to deduplication, Borg's backup process is very fast, which makes it well suited for daily backups. You may notice that Borg is much faster than some other methods, depending on the amount of data you back up and the number of changes. With Borg, all data is encrypted on the client side, which makes it a good choice for hosted systems.

apt install borgbackup 
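A typical workflow looks like the following sketch; <username> and <host> are placeholders, the repository path ./backups is an assumption of mine, and port 23 matches the SSH port used above:

```
# Initialize an encrypted repository once.
borg init --encryption=repokey ssh://<username>@<host>:23/./backups

# Create a compressed, deduplicated archive of the database dumps.
borg create --stats --compression lz4 \
    "ssh://<username>@<host>:23/./backups::db-$(date +%F)" \
    /media/backup/database

# Thin out old archives, keeping 7 daily and 4 weekly ones.
borg prune --keep-daily 7 --keep-weekly 4 ssh://<username>@<host>:23/./backups
```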

For more information, see the Hetzner community docs.

