Cedeus DB backups
>> return to Cedeus_IDE
Contents
- 1 How to set up Automated Backups
- 2 Performed CEDEUS Observatory Backups
- 2.1 Dump of the GeoNode DB - on CedeusDB
- 2.2 Dump of the GeoNode user db - on CedeusGeonode VM (13080)
- 2.3 Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode Vm (13080)
- 2.4 Backup of Elgg miCiudad - on CedeusGeonode VM (15080)
- 2.5 MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080 vs. 21080)
- 2.6 Synchronization of backup files between CedeusGeoNode and CedeusGIS1
- 3 Deletion of old files
- 4 Installation of APC Smart UPS RT3000V
- 5 ToDo List
How to set up Automated Backups
The objective of this exercise is to have an automated backup process for user profiles and user-contributed data, with copies written to a portable medium at least once a week.
General Workflow to Create the Backups
The backup process consists of several steps. Usually these are:
- create a script that contains commands to
- create a database dump =or= tar/zip the files in a particular folder
- copy this dump file or zip archive to another machine from where it can easily be copied to a portable medium, i.e. tape
- create a crontab entry that runs the backup script(s) at some set interval, e.g. each night at 1am
- create a crontab entry that triggers deletion of old backup files
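As a sketch, the whole workflow above could be condensed into one small script plus a crontab entry. All paths, database and host names here are placeholders, not the actual CEDEUS setup:

```shell
# hypothetical minimal backup script illustrating the workflow above;
# all names are placeholders
cat > /tmp/examplebackup.sh <<'EOF'
#!/bin/bash
timeslot=$(date '+%Y%m%d-%H%M')
# step 1: create a database dump (or tar/zip a folder)
pg_dump -U postgres -F c exampledb -f /tmp/exampledb-$timeslot.backup
# step 2: copy the dump to another machine
scp /tmp/exampledb-$timeslot.backup backupuser@backuphost:/home/backupuser/dbbackups/
EOF
chmod 755 /tmp/examplebackup.sh
# step 3: a crontab entry (added via "crontab -e") then runs it nightly at 1am:
#   00 01 * * * sh /tmp/examplebackup.sh
```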
Below are some personal notes on how to set things up:
Notifications
To get notified about the backups via email, the shell scripts may send emails via "mailx", i.e. Nail. => see http://klenwell.com/press/2009/03/ubuntu-email-with-nail/
Postfix may work as well.
=> ToDo: Install mail program
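Once a mail program is installed, the notification step in a backup script might look like the following sketch (address and subject are placeholders, and mailx being available is an assumption per the ToDo above):

```shell
# hypothetical notification snippet; assumes mailx is installed
cat > /tmp/notifyexample.sh <<'EOF'
#!/bin/bash
# send a short status mail after the backup ran (address is a placeholder)
echo "Backup finished on $(date)" | mailx -s "CEDEUS backup report" admin@example.com
EOF
chmod 755 /tmp/notifyexample.sh
```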
Example: cron Job that makes a Dump of the GeoNode DB
General info on how to create a crontab can be found here: https://help.ubuntu.com/community/CronHowto
- create a shell script that contains the pg_dump instructions - see for example /home/ssteinig/pgdbbackup.sh on CedeusDB
- test if the script and its execution actually work. A simple script for testing may be this (/home/ssteinig/touchy.sh):
#!/bin/bash
touch /home/ssteinig/ftw.txt
- create a crontab entry for user ssteinig with "
crontab -e
", then add an entry such as "
00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh
" to run the dump script daily at 1am
- => when using the user "postgres" to do the db dump:
- check if postgres user has a password assigned already (use ALTER... to do so: http://wiki.geosteiniger.cl/mediawiki-1.22.7/index.php/Setting_up_geonode#Some_PostgreSQL_commands )
- create a .pgpass file to provide the password: http://wiki.postgresql.org/wiki/Pgpass
- Note, the .pgpass file must have chmod 0600; otherwise Postgres ignores it and will ask for a password.
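A .pgpass entry has the form hostname:port:database:username:password. A sketch for the geonodegisdb93 dump (password and port are placeholders; on the real server the file is ~/.pgpass of the user running cron, not a file in /tmp):

```shell
# hypothetical .pgpass content; written to /tmp here only for illustration,
# on the server it belongs in the home folder of the user running cron
pgpassfile=/tmp/pgpass-example
echo "127.0.0.1:5432:geonodegisdb93:postgres:SECRET" > $pgpassfile
chmod 0600 $pgpassfile   # required, otherwise Postgres ignores the file
```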
- check if cron is running: "
sudo service cron status
", otherwise start it
- to see what the crontab contains, use "
crontab -l
"
- to check if a cron job was executed, check the log:
sudo tail -f /var/log/syslog
Dump example script geonodegisdb93backup.sh
#!/bin/bash
logfile="/home/ssteinig/geonode_db_backups/pgsql.log"
backup_dir="/home/ssteinig/geonode_db_backups"
touch $logfile
echo "Starting backup of databases " >> $logfile
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d-%H%M'`
/usr/bin/vacuumdb -z -h localhost -U postgres geonodegisdb93 >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b geonodegisdb93 -h 127.0.0.1 -f $backup_dir/geonodegisdb93-backup-$timeslot.backup
echo "Backup and Vacuum complete on $dateinfo for database: geonodegisdb93 " >> $logfile
echo "Done backup of databases " >> $logfile
# sstein: email notification not used at the moment
# tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx blabla@blub.cl
This example is based on the shell script posted here: http://stackoverflow.com/questions/854200/how-do-i-backup-my-postgresql-database-with-cron . For a better Postgres dump script it may be worth looking here: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux
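For completeness, restoring such a custom-format dump could look like the following sketch (the dump file name is a made-up example of the timestamp pattern the script above produces):

```shell
# hypothetical restore script for a dump created with pg_dump -F c;
# the dump file name is a placeholder
cat > /tmp/geonoderestore.sh <<'EOF'
#!/bin/bash
# --clean/--create drop and recreate the database before restoring
pg_restore -U postgres -h 127.0.0.1 --clean --create -d postgres \
  /home/ssteinig/geonode_db_backups/geonodegisdb93-backup-20141210-0100.backup
EOF
chmod 755 /tmp/geonoderestore.sh
```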
File transfer
To transfer files, I decided to create a new cedeus backup user on the receiving computer (20xxb...p).
A file transfer can be accomplished using scp or rsync e.g.:
- "
scp /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/
"
- However, an ssh key should be generated first so no password needs to be provided. A detailed description can be found at: http://troy.jdmz.net/rsync/index.html. However, later on I used this description: http://blogs.oracle.com/jkini/entry/how_to_scp_scp_and .
- in short, do "
ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key
". Do not provide a passphrase when generating it, otherwise it will always be asked for when establishing a connection.
- Then copy the key to the other server's user's .ssh/ folder (e.g. using scp), and add it to the authorized_keys file using "
cat blabla_key.pub >> authorized_keys
" (Note, authorized_keys should be chmod 600 and the .ssh folder chmod 700, and eventually restrict the incoming IP - see http://troy.jdmz.net/rsync/index.html).
- Then we would use "
scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/
" - note that it is probably necessary to initialize a server connection once (with whatever file), so the connection gets an ECDSA key fingerprint.
- "
- for the use of rsync see the section below on "sync with CedeusGIS1"
Performed CEDEUS Observatory Backups
A description of a test of how to back up and restore GeoNode data can be found under backup of geonode. That page was used as input for the backup details below.
Dump of the GeoNode DB - on CedeusDB
- server: CedeusDB
- cron job running nightly at 1:00am
- using the script geonodegisdb93backup.sh
- copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodedbbackups/
Dump of the GeoNode user db - on CedeusGeonode VM (13080)
- server: CedeusGeoNode on geonode1204 VM
- cron job running nightly at 1:10am
- using the script geonodeuserdbbackup.sh
- copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodeuserdbbackups/
Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode Vm (13080)
Data to backup
GeoNode settings and uploaded data change at different frequencies, or almost never. Hence it seems best to do a once-in-a-while backup of things that do not change much, and frequent backups of file uploads, styles, etc.
- We do once-in-a-while backup of stuff that does not seem to change that much, such as:
- GeoNode config: "
sudo tar -cvzf /home/ssteinig/geonodeConfigBackup.tgz /etc/geonode
" - Django language strings: "
sudo tar -cvzf /home/ssteinig/geonodei18nBackup.tgz /usr/local/lib/python2.7/dist-packages/geonode/locale/
" - GeoNode www folder (including static subfolder and data folder): "
sudo tar -cvzf /home/ssteinig/geonodeWWWBackup.tgz /var/www/geonode/
" (note, this also includes the GeoNode upload folders, that are to backup-ed daily, see below) - Eventually there are data in /var/lib/geoserver/geonode-data/, for instance the printing setup file config.yaml. So one should also do a once-in-a-while backup: "
sudo tar -cvzf /home/ssteinig/geonodeDataBackup.tgz /var/lib/geoserver/geonode-data/
"
- => These tar files need to be copied by hand to CedeusGeoNode's /home/cedeusdbbackupuser/geonode_one_time_backup/, e.g. with "
scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup
"
- GeoNode config: "
- We will back up a couple of folders that can change frequently:
- GeoServer (i.e. rasters, gwc layers, map styles, etc.): "
sudo tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/
"- ... copied to /home/cedeusdbbackupuser/geoserverbackup/.
- GeoNode www-data uploads (i.e. raster data, pdfs, etc): "
sudo tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/
"- ... copied to /home/cedeusdbbackupuser/geonodewwwuploadbackup/.
- => these two frequent backups are performed in the shell script geonodewwwdatabackup.sh (see below)
- => ToDo: it is not yet clear to me whether I need to run the frequent backups using sudo, i.e. sudo sh geonodewwwdatabackup.sh (or the sudo crontab), because when testing the tar file generation with and without sudo under my normal login (on 10 Dec. 2014), the resulting tar archives had the same size, indicating that the content was the same.
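The script geonodewwwdatabackup.sh itself is not reproduced on this page; a sketch of what it presumably combines, assembled from the tar and scp commands listed above (the timestamp suffix is an assumption):

```shell
# hypothetical version of geonodewwwdatabackup.sh, assembled from the
# commands listed above; the timestamp suffix is an assumption
cat > /tmp/geonodewwwdatabackup.sh <<'EOF'
#!/bin/bash
timeslot=$(date '+%Y%m%d-%H%M')
backup_dir=/home/ssteinig
# tar the two frequently changing folders
tar -czf $backup_dir/geoserverDataBackup-$timeslot.tgz /usr/share/geoserver/data/
tar -czf $backup_dir/geonodeWWWUploadBackup-$timeslot.tgz /var/www/geonode/uploaded/
# copy both to CedeusGeoNode with the existing ssh key
scp -i /home/ssteinig/.ssh/id_rsa $backup_dir/geoserverDataBackup-$timeslot.tgz \
  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup/
scp -i /home/ssteinig/.ssh/id_rsa $backup_dir/geonodeWWWUploadBackup-$timeslot.tgz \
  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geonodewwwuploadbackup/
EOF
chmod 755 /tmp/geonodewwwdatabackup.sh
```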
Running cron shell script
The shell script geonodewwwdatabackup.sh is used to create frequent copies of the GeoNode and GeoServer data files. The tar commands themselves are not run with sudo inside the script, as that would require typing the credentials. Instead, the script should be run with sudo to get access to all the data folders. ToDo: However, as noted above, in a test with my standard login there was no difference in tar file size between using sudo and not using it. Hence, I execute the script from my personal crontab instead of the admin/root crontab.
To copy the tar files to CedeusGeoNode server with scp we use the ssh login credentials that were already established for the GeoNode userdb backup.
Tar backup summary
- server: CedeusGeoNode on geonode1204 VM
- cron job running nightly at 1:20am
- using the script geonodewwwdatabackup.sh
- copies the geoserver-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geoserverbackup/
- copies the geonode-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodewwwuploadbackup/
- requires manual tar ball creation and copying to CedeusGeoNode of
- geonodeConfigBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
- geonodei18nBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
- geonodeWWWBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
- perhaps: geonodeDataBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
Backup of Elgg miCiudad - on CedeusGeonode VM (15080)
the official Elgg backup guide: http://learn.elgg.org/en/1.9/admin/backup-restore.html
Data to backup
- the elgg database as mysql dump
- the elgg web folder as tar
- the elgg data folder as tar => the folder's files (e.g. in /elggdata/1/39/file/) cannot be accessed by the backup user sst...; they are owned by the www-data user. This problem needs to be solved when creating the tar.
This does not work yet => To be able to back up the elgg data directory I needed to grant my backup user (sst...) access rights to this folder, or use sudo. The Elgg data directory is owned by www-data, so I added my user to this group, using sudo usermod -a -G www-data ssteinig
- see also http://www.cyberciti.biz/faq/ubuntu-add-user-to-group-www-data/ . However, I had no success.
=> Hence, I am running the script as root in the root crontab instead - with sudo crontab -e .
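The script createmiciudadbackup.sh itself is not shown here; a sketch of the three backup steps it presumably performs (database name, credentials and folder paths are assumptions):

```shell
# hypothetical version of createmiciudadbackup.sh; db name, password and
# folder paths are assumptions
cat > /tmp/createmiciudadbackup.sh <<'EOF'
#!/bin/bash
# run from the root crontab so tar can read the www-data-owned data folder
timeslot=$(date '+%Y%m%d-%H%M')
backup_dir=/home/ssteinig
# 1) the elgg database as mysql dump (db name and password are placeholders)
mysqldump -u root --password=SECRET elggdb | gzip > $backup_dir/elggdb-$timeslot.sql.gz
# 2) the elgg web folder as tar (path is an assumption)
tar -czf $backup_dir/elggwww-$timeslot.tgz /var/www/html/elgg/
# 3) the elgg data folder as tar
tar -czf $backup_dir/elggdata-$timeslot.tgz /elggdata/
EOF
chmod 755 /tmp/createmiciudadbackup.sh
```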
Elgg backup summary
- server: CedeusGeoNode on elgg VM (15080)
- cron job running nightly at 1:45am
- using the script createmiciudadbackup.sh
- copies the three files to CedeusGeoNode into folder /home/cedeusdbbackupuser/miciudadbackups/
MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080 vs. 21080)
the official Mediawiki backup guide: http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki
Before writing the backup scripts, I actually changed the root passwords for the MySQL DBs using UPDATE mysql.user SET Password=PASSWORD('foobar') WHERE User='tom' AND Host='localhost';
Note, when changing the root password one needs to restart the mysql service or apply FLUSH PRIVILEGES;
right after changing the password. However, it is probably even better to create a dedicated backup user for the mysql dumps (see also http://www.cyberciti.biz/faq/mysql-change-user-password/).
Data to backup
What we need to back up:
- database: via a mysql dump, e.g. gzipped for a smaller file:
mysqldump -h hostname -u userid --password dbname | gzip > backup.sql.gz
- uploaded data/images/extensions etc. in /var/www/html/wiki/: create a tar ball
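Put together, a wiki backup script along the lines of createcedeuswikibackup.sh might look like this sketch (db name and credentials are placeholders):

```shell
# hypothetical wiki backup script; db name and credentials are placeholders
cat > /tmp/createwikibackup.sh <<'EOF'
#!/bin/bash
timeslot=$(date '+%Y%m%d-%H%M')
backup_dir=/home/ssteinig
# 1) gzipped database dump (db name and user are placeholders)
mysqldump -h localhost -u backupuser --password=SECRET wikidb \
  | gzip > $backup_dir/wikidb-$timeslot.sql.gz
# 2) tar ball of the wiki folder (uploads, extensions, LocalSettings.php)
tar -czf $backup_dir/wikifiles-$timeslot.tgz /var/www/html/wiki/
EOF
chmod 755 /tmp/createwikibackup.sh
```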
Mediawiki backup summary
CEDEUS Wiki
- server: CedeusGeoNode on wikicedeus VM (22080)
- cron job running nightly at 1:15am
- using the script createcedeuswikibackup.sh
- copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/cedeuswikibackups/
Stefan's Wiki
- server: CedeusGeoNode on mediawiki VM (21080)
- cron job running nightly at 1:40am
- using the script createmywikibackup.sh
- copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/stefanwikibackups/
Synchronization of backup files between CedeusGeoNode and CedeusGIS1
This file sync serves to:
- have a second backup location
- make copies of the backup files to a portable drive (via USB) and/or to the Dell RD1000
To perform the folder synchronization we use the "rsync" tool. For an introduction to rsync see http://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps
Sync summary
- from server CedeusGeoNode to CedeusGIS1
- cron job running nightly at 2:00am
- using the script syncwithcedeusgis1.sh run by backup-user
- synchronizes backup files to CedeusGIS1 with folder /home/ssteinig/backups_cedeusservers/ => sync means: deleted files on the source are also deleted at the target (but not vice versa)
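The rsync call in syncwithcedeusgis1.sh presumably looks something like the following sketch (host name and key path are assumptions; --delete produces the mirroring behaviour described above):

```shell
# hypothetical sync script; host name and key path are assumptions
cat > /tmp/syncexample.sh <<'EOF'
#!/bin/bash
# mirror the backup folders to CedeusGIS1; --delete removes files at the
# target that were deleted at the source (but not vice versa)
rsync -avz --delete -e "ssh -i /home/cedeusdbbackupuser/.ssh/id_rsa" \
  /home/cedeusdbbackupuser/ \
  ssteinig@cedeusgis1:/home/ssteinig/backups_cedeusservers/
EOF
chmod 755 /tmp/syncexample.sh
```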
Deletion of old files
Examples
An example of finding files older than a specific number of days that follow a particular naming pattern is
find $BACKUP_DIR -maxdepth 1 -mtime +$DAYS_TO_KEEP -name "*-daily"
taken from http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux
A shorter version is:
find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5
This searches for all(!) files in that folder that are older than 5 days. The search does not include subfolders, as the -maxdepth parameter is set to "1".
To delete the found files, one appends -exec rm ... at the end, as in this example:
find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5 -exec rm -rf '{}' ';'
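A runnable mini-demo of this pattern, including the logging that the deletion scripts below use (folder and log names are made up for the demo; -type f is added so the folder itself is never matched):

```shell
# self-contained demo of find-based deletion with a log file;
# folder and file names are made up for the demo
backup_dir=/tmp/exampledeletetest
logfile=$backup_dir/delete.log
mkdir -p $backup_dir
# create a dummy backup file and back-date it so -mtime +5 matches it
touch -d "10 days ago" $backup_dir/old-backup.tgz
echo "deletion run on $(date):" >> $logfile
# log what will be deleted, then delete it
find $backup_dir -maxdepth 1 -mtime +5 -type f >> $logfile
find $backup_dir -maxdepth 1 -mtime +5 -type f -exec rm -rf '{}' ';'
```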
File deletion realized
- GeoNode Database on CedeusDB : script removeolddbbackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
- All backups on CedeusGeoNode (as backupuser): script removeoldbackups.sh deletes files older than 7 days - except for files in folder geonode_one_time_backup. Crontab running every day 0:30 am (before any backup). Writes to sync.log log file.
- GeoNode user db and tar files on GeoNode1204 VM: script removeoldgeonodedatabackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to two different log files.
- Mediawiki / Stefan's wiki on MediaWiki VM: script removeoldstefanwikibackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
- Cedeuswiki on WikiCedeus VM: script removeoldcedeuswikibackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
- Elgg on Elgg VM: script removeoldelggbackups.sh deletes files older than 7 days. Root Crontab running every Tuesday 3am. Writes to log file.
- deactivated (as I am using rsync with delete option): All backups on CedeusGIS1 : script removeoldserverbackups.sh deletes files older than 7 days. Crontab running every day 3am.
Installation of APC Smart UPS RT3000V
It would be good if the servers were shut down in case the UPS battery runs out of power. Therefore it is best to install control software that communicates with the APC SURTD3000. The software delivered with it is named PowerChute, but unfortunately it is only available for Suse, RedHat (rpm) or Windows systems, etc., and not for Ubuntu/Debian based systems (see here). So the options are:
- converting the *.rpm to a *.deb - but this was without much success.
- using apcupsd - but unfortunately the RT3000 model does not come with the newer open Modbus control protocol, only with a proprietary one. However, I could still try a firmware update to enable communication with apcupsd via Modbus.
- installing a VirtualMachine with OpenSUSE.
- buying the additional APC network card for a whopping US$300 - given that the RT3000VA already costs US$1700, this is kind of a scam!
Hence I tried option number 3 - communication via the original PowerChute software installed on an OpenSUSE Virtual Machine (as I was running VMs already).
For this variant it is necessary to do a serial-port routing between host server and VM. How I did this is described here in CEDEUS Server Setup.
I shall note that the UPS was actually connected to serial connector ttyS1 on the host machine (and the VM), so not to ttyS0.
To install PowerChute on the OpenSuse 13.2 VM, I did the following:
- copied the PowerChute rpm to the VM
- navigated to the folder with install_pbeagent_linux.sh
- ran the sh file and chose the following settings
- 2 : RJ45 connection
- 2 : NO (= no Share UPS, Interface Expander or Simple Signaling)
- chosen user and pw was the usual one
- selected /dev/ttyS1 as serial port, as this port was the only one I installed anyway for the VM
- opened a web browser in the OpenSUSE VM with http://localhost:3052
- => this actually forwarded me to the https address https://10.0.2.15:6547/
Notes:
- The PowerChute Agent server can be started using
/etc/init.d/PBEAgent start
, and stopped with
/etc/init.d/PBEAgent stop
.
- The PowerChute files are copied into /opt/APC/PowerChuteBusinessEdition/Agent/
- To uninstall use
rpm -e pbeagent
- To communicate with the Server or Console, unblock port 2161
Debugging Serial Port
http://www.tldp.org/HOWTO/Serial-HOWTO-16.html
When trying to connect with minicom, I got the message that no lockfile could be created for /dev/ttyS0 (permission denied). To check what is going on:
- inspect the current lock file for ttyS0:
vim /var/lock/LCK..ttyS0
- I found there a process number (2221), which I looked up with
ps 2221
. This returned
PID TTY      STAT   TIME COMMAND
2221 ?        Sl   23:19 /bin/java/jre/1.6.0_37/bin/java -Dpicard.main.thread=blocking -classpa...
- so the PowerChute agent was already using/blocking the port for communication. Hence I stopped the PowerChute agent server using
/etc/init.d/PBEAgent stop
- I also ran
sudo lsof /dev/ttyS*
to see which ports are open. The result was:
COMMAND     PID     USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
apcupsd    1197     root   7u  CHR   4,64      0t0 1114 /dev/ttyS1
VBoxHeadl 24174 ssteinig  19u  CHR   4,64      0t0 1114 /dev/ttyS1
- so I saw that apcupsd was actually using the port (as I had installed it earlier). Hence, I stopped the program with
sudo service apcupsd stop
and checked again with
sudo lsof /dev/ttyS*
, which now showed that only VBox used the port...
=> Hence, I rebooted the OpenSUSE VM, after which the PowerChute server ran again...
Script to run by PowerChute
PowerChute can run scripts to shut down certain programs if battery power is low.
To set this up, go in the web interface to Shutdown > Shutdown Settings > and look for the Operating System and Application Shutdown section. Placing a file into the folder /opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/ makes it available in the drop-down list.
However, the script needs to be executable by the application. That is, I did a chmod 755 script.sh
so it can be executed by PowerChute. Note, it seems the script is executed as root.
A test script may look like this:
#!/bin/sh
touch /home/ssteinig/ftw.txt
ping 127.0.0.1 -c 5 | cat > /home/ssteinig/pingtest
ToDo: write a script that connects to the other VMs and shuts them down, e.g.:
ssh user@remote_computer sudo poweroff
(from http://ubuntuforums.org/showthread.php?t=2093192)
I created a separate user with root privileges to run sudo scripts that perform server and VM shutdowns. To transfer the public key file I needed to specify the port for the VM access with uppercase "-P", e.g. scp -P 17022 /root/.ssh/id_rsa.pub ced-user@146.155.17.19:/home/ced-user/
Attention: create a separate user + use ssh keys + allow only ssh connections from this particular VM
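A sketch of how the planned shutdown script could start out (port, user and IP follow the scp example above; passwordless sudo for poweroff on the target would have to be configured separately):

```shell
# hypothetical shutdown script for one VM; passwordless "sudo poweroff"
# for ced-user on the target is an assumption
cat > /tmp/shutdownvms.sh <<'EOF'
#!/bin/sh
# shut down one VM via ssh with key-based login (no password prompt)
ssh -i /root/.ssh/id_rsa -p 17022 ced-user@146.155.17.19 'sudo poweroff'
EOF
chmod 755 /tmp/shutdownvms.sh
```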
ToDo List
- add cron tab entry that controls deletion of the log files
- install mail program to get notified about backups and syncs
- check how to use the RD1000