Configure the PostgreSQL server

Minimum version: 9.4

The database contains two schemas:

  • gacl: schema containing the tables needed for rights management, the creation of local users, and traces;
  • col: schema containing the application data.

The schema names can be changed if necessary, but you will then have to adapt the scripts accordingly (as described in the file headers).

First installation

Installation in manual mode

  • Create the database and a login account with the rights to modify the tables, and pre-set the default rights for this account (see the sketch after this list)
  • Successively run the two schema creation scripts:
    • install/gacl_create-1.1.sql
    • install/col_create-1.1.sql
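
The exact commands depend on your environment; as a minimal sketch, assuming the account and the database are both called collec (as in the semi-automatic mode below) and that you are logged in as the postgres system user at the root of the application:

# create the login account and the database (names are assumptions)
createuser --pwprompt collec
createdb --owner collec collec
# run the two schema creation scripts
psql -d collec -f install/gacl_create-1.1.sql
psql -d collec -f install/col_create-1.1.sql
# give the account rights on the schemas and tables, and pre-set the default rights
psql -d collec -c "GRANT USAGE ON SCHEMA gacl, col TO collec;"
psql -d collec -c "GRANT ALL ON ALL TABLES IN SCHEMA gacl, col TO collec;"
psql -d collec -c "ALTER DEFAULT PRIVILEGES IN SCHEMA gacl, col GRANT ALL ON TABLES TO collec;"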

Installation in semi-automatic mode

The database will be created, with a collec account that can only connect locally.

Edit the file /etc/postgresql/version/main/pg_hba.conf and insert the following lines:

host collec collec 127.0.0.1/32 md5
host all collec 0.0.0.0/0 reject
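
For these new rules to be taken into account, PostgreSQL has to reload its configuration; on a Debian-like installation, something like:

su postgres -c "psql -c 'SELECT pg_reload_conf();'"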

Connected as root, from the root of the application, run the following commands:

cd install
su postgres -c "psql -f init_by_psql.sql"

Update

Determine the current version of your database:

  • variable $APPLI_dbversion at the beginning of the file param/param.default.inc.php (versions 1.0.8 and later)
  • in the web application, menu ? > A propos (About)

In the install folder, check for scripts named as follows:

col_alter_1.0.8-1.1.sql

where 1.0.8 corresponds to the current version and 1.1 to the next one.

Run, in order, all the scripts needed to upgrade from your current version to the latest one.
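
For example, to go from 1.0.8 to 1.1 you would run the matching script (connection parameters and names are assumptions):

cd install
psql -U collec -d collec -h localhost -f col_alter_1.0.8-1.1.sql
# repeat with the next col_alter_*.sql script until the target version is reached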

Backing up the database

On the server, switch to the postgres user:

su - postgres

Create the file /var/lib/postgresql/backup/dumpall.sh:

#!/bin/bash
# This is local_pg_dumpall
# Sequential backup script, with support for the large blobs of the postgres databases
PSQL_VERSION=`psql --version | cut -f 3 -d ' ' | cut -d '.' -f 1,2`
DUMPPATH=/var/lib/postgresql/backup
# create the backup directory if it does not exist
[ -d $DUMPPATH ] || mkdir -p $DUMPPATH
LOG=$DUMPPATH/pgbackup.log
echo `date` > $LOG
echo "Database backup - dumpall" >> $LOG
/usr/bin/pg_dumpall | gzip -c > $DUMPPATH/pg.out.gz 2>>$LOG
# for each database
LIST=$(psql -tl | cut -d '|' -f1 | grep -v ' : ')
echo "Database list: $LIST"
for DBNAME in $LIST
do
    # skip the postgresql template databases
    [ "$DBNAME" = "template0" -o "$DBNAME" = "template1" ] && continue
    # dump the database:
    echo -n "Dumping $DBNAME: " >> $LOG
    if pg_dump --blobs ${DBNAME} | gzip -9 -c > ${DUMPPATH}/${DBNAME}.gz; then
        echo "ok" >> $LOG
    else
        rm -f ${DUMPPATH}/${DBNAME}.gz
        echo "FAILED!" >> $LOG
    fi
    # clean up the database
    vacuumdb -d "$DBNAME" --analyze
done
cp /etc/postgresql/${PSQL_VERSION}/main/pg_hba.conf $DUMPPATH/
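
The script must then be executable so that cron can launch it; assuming the path used above:

chmod +x /var/lib/postgresql/backup/dumpall.sh
# optional: run it once by hand and check pgbackup.log and the generated dumps
/var/lib/postgresql/backup/dumpall.sh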

Add an entry in the crontab:

crontab -e

and add the line :

0 20 * * * /var/lib/postgresql/backup/dumpall.sh

Then schedule a backup of these dumps in your backup system.

Set up real-time replication of the database

If you cannot afford to lose information from the database (for example, a labeled sample whose original information can no longer be retrieved), you will need to replicate the database to a second server in real time.

The procedure was described by Alexandra Darrieutort, trainee at Irstea in 2016, and completed by Jacques Foury, IT manager of the Irstea center in Cestas (33), drawing on various documents:

  • https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
  • http://www.rassoc.com/gregr/weblog/2013/02/16/zero-to-postgresql-streaming-replication-in-10-mins
  • http://connect.ed-diamond.com/GNU-Linux-Magazine/GLMF-184/Configurer-la-replication-d-un-serveur-PostgreSQL
  • https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial

Principle

The replication mode that matches this need is master/slave. The master can be read and written, while the slave is read-only if it is configured in hot standby. Here, the master server is citerne-8 and the slave server is chappie.

Data changes are saved in transaction logs called Write-Ahead Logs (WAL), or xlogs. These WALs are transferred to the slave, which replays them continuously so that it stays in the same state as the master. It is then ready to take over if the master becomes unavailable.

Thanks to the principle of streaming replication, there is no need to wait for a WAL file (16 MiB) to be filled: changes are transmitted without delay from the master to the slave.

Limitations and precautions

In this configuration, since we keep 256 xlogs via the wal_keep_segments parameter, enough free disk space must be provided (256 segments of 16 MiB each, i.e. about 4 GiB).

Replication between two servers running different versions of PostgreSQL is not possible.

Installation of PostgreSQL on chappie and setting up the SSH keys

root@chappie:~# apt-get install postgresql-9.4
root@chappie:~# su - postgres
postgres@chappie:~$ mkdir /var/lib/postgresql/.ssh/
postgres@chappie:~$ ssh-keygen

For the SSH connection between the two servers, the postgres user's public key from the id_rsa.pub file on chappie must be added to the authorized_keys file on citerne-8, and vice versa.
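
As a sketch, the exchange can be done with ssh-copy-id, or manually, still as the postgres user (host names as in this document):

postgres@chappie:~$ ssh-copy-id postgres@citerne-8
# or manually:
postgres@chappie:~$ cat ~/.ssh/id_rsa.pub | ssh postgres@citerne-8 "cat >> ~/.ssh/authorized_keys"
# then repeat in the other direction, from citerne-8 to chappie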

Replication set-up

Master

Creation of the PostgreSQL user responsible for replication:

root@citerne-8:~# su - postgres
postgres:~$ psql -c "CREATE USER rep REPLICATION LOGIN ENCRYPTED PASSWORD 'desperados';"

In the pg_hba.conf file (/etc/postgresql/9.4/main/), add:

host replication rep 10.33.192.31/32 md5

The wal_keep_segments parameter is given a value large enough that the slave can catch up with the master after a period of unavailability.

In the postgresql.conf file, add these lines:

listen_addresses = 'localhost,10.33.192.36'
wal_level = hot_standby 
max_wal_senders = 3 
max_wal_size = 436MB 
wal_keep_segments = 256

Then restart postgresql.
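
For instance, on a Debian-like installation (assuming the 9.4 cluster used in this document):

root@citerne-8:~# systemctl restart postgresql
# or: pg_ctlcluster 9.4 main restart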

Slave

Stop the PostgreSQL service, then add these lines to the postgresql.conf file:

wal_level = hot_standby
max_wal_senders = 3
max_wal_size = 384MB
wal_keep_segments = 256
hot_standby = on
max_locks_per_transaction = 128

Modify the pg_hba.conf file:

host replication rep 10.33.192.36/32 md5

Perform the full copy of the master server's databases (still from the slave) as the postgres user:

pg_dropcluster 9.5 main
pg_basebackup -h 10.33.192.36 -D /var/lib/postgresql/9.5/main -U rep -v -P --xlog

The --xlog option is added to keep the latest transaction logs.

Create the recovery.conf file in /var/lib/postgresql/9.5/main/ to configure continuous restore.

Continuous restore is enabled using the standby_mode parameter. To connect to the master and retrieve the WAL, we define the necessary information in the primary_conninfo parameter.

The trigger_file parameter specifies a file whose presence interrupts the continuous restore (when the specified file appears, recovery stops).

standby_mode = on 
primary_conninfo = 'host=10.33.192.36 port=5432 user=rep password=desperados' 
trigger_file = '/var/lib/postgresql/9.4/postgresql.trigger'

Finally, start the postgresql service.

Monitoring information

The postgresql-9.4-main.log log file is located in the /var/log/postgresql/ directory.

To check the replication status on the master side:

sudo -u postgres psql -x -c "select * from pg_stat_replication;"

To know, on the slave side, how long ago the last synchronization took place:

sudo -u postgres psql -x -c "SELECT now() - pg_last_xact_replay_timestamp() AS time_lag;"

To see the current snapshot number:

sudo -u postgres psql -x -c "SELECT txid_current_snapshot();"

To test failovers or manage an interruption

Scenario: the master server becomes unavailable.

You must stop the continuous restore on the slave so that it becomes the master, by creating the trigger file. The databases then switch to read/write mode and the recovery.conf file is renamed recovery.done.
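
With the trigger_file defined above, promoting the slave therefore comes down to creating that file as the postgres user:

postgres@chappie:~$ touch /var/lib/postgresql/9.4/postgresql.trigger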

When the master comes back, replication will no longer work. You will need to copy the data back from the slave server to the master server, then restart replication by recreating the recovery.conf file as described in the slave section.