Postgres Pro Enterprise in Microsoft Azure

Postgres Pro Enterprise (VM) in Microsoft Azure Quick Start Guide

Internet access and a valid Microsoft Azure account are required to use the Postgres Pro Enterprise (VM) database in the Microsoft Azure cloud.

Postgres Pro Enterprise 9.6/10/11/12/13/14 virtual machine images are available in the Microsoft Azure Marketplace.

Software required for installation:

  • Azure CLI 2.x for cloud management
  • 'psql' or 'pgAdmin' for database connections

Azure CLI 2.x is a cross-platform command-line utility.

Azure Portal https://portal.azure.com or Azure PowerShell can be used as well.

Azure CLI 2.x Installation guide: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Azure CLI 2.x Get started guide: https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli

Azure CLI 2.x Command reference guide: https://docs.microsoft.com/en-us/cli/azure

Azure Linux Virtual Machine Installation guide: https://docs.microsoft.com/en-us/azure/virtual-machines/linux

Azure Backup Documentation: https://docs.microsoft.com/en-us/azure/backup/


Connection to Azure and environment check

  • Connect to Microsoft Azure with:
az login

or

az login --username <myusername>
  • Set the table output format for Azure CLI 2.x commands:
az configure
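
Note: 'az configure' runs interactively. On recent Azure CLI releases the same setting can be applied non-interactively (this assumes the 'az config' command group, which may be marked experimental depending on your CLI version):

az config set core.output=table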
  • Verify Azure CLI 2.x version (should be the latest available):
az --version | head -1
  • Make sure the required resource providers Microsoft.Storage, Microsoft.Compute and Microsoft.Network are registered:
az provider show --namespace Microsoft.Storage
az provider show --namespace Microsoft.Compute
az provider show --namespace Microsoft.Network
  • If not, register them:
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Network
  • List the available locations for the VM:
az account list-locations

The ‘northeurope’ location will be used in the examples that follow.

  • List the available VM sizes in the ‘northeurope’ location:
az vm list-sizes --location northeurope

The ‘Standard_DS1_v2’ VM size will be used in the examples that follow (it is available for the ‘Free Trial’ subscription).
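
To check that a particular size is offered in a location, the list can be filtered with a JMESPath query, for example:

az vm list-sizes \
--location northeurope \
--query "[?name=='Standard_DS1_v2']"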

  • Obtain the publisher name of the Postgres Pro VM image in the Microsoft Azure Marketplace for the ‘northeurope’ location:
az vm image list-publishers \
--location northeurope \
--query "[?starts_with(name,'postgres')].{Name:name}"

The ‘postgres-pro’ publisher name will be used in the examples that follow.

  • Obtain the Postgres Pro VM image names available in the Microsoft Azure Marketplace for the ‘postgres-pro’ publisher in the ‘northeurope’ location:
az vm image list \
--publisher postgres-pro \
--location northeurope \
--all \
--query "[?contains(urn,'enterprise')].{Urn:urn}"
  • One of the following VM image URNs will be used (pick the required version):
urn_id='postgres-pro:postgres-pro-enterprise-96-vm:pgpro-ent-96-centos7-x64-hourly:latest'
or
urn_id='postgres-pro:postgres-pro-enterprise-10-vm:pgpro-ent-10-centos7-x64-hourly:latest'
or
urn_id='postgres-pro:postgres-pro-enterprise-11-vm:pgpro-ent-11-centos7-x64-hourly:latest'
or
urn_id='postgres-pro:postgres-pro-enterprise-12-vm:pgpro-ent-12-centos7-x64-hourly:latest'
or
urn_id='postgres-pro:postgres-pro-enterprise-13-vm:pgpro-ent-13-centos7-x64-hourly:latest'
or
urn_id='postgres-pro:postgres-pro-enterprise-14-vm:pgpro-ent-14-centos7-x64-hourly:latest'
  • Accept the Marketplace terms to enable programmatic deployment of the VM:
az vm image terms accept --urn $urn_id
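
The acceptance can be verified afterwards (the 'accepted' field in the output should read 'true'):

az vm image terms show --urn $urn_id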
  • Create a private/public SSH key pair in the ~/.ssh directory to connect to the VM:
ssh-keygen -t rsa -b 2048
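
For a non-interactive run, the key file and passphrase can be passed on the command line (an empty passphrase is convenient for a quick start, but less secure):

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ''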


VM creation

  • Create resource group:
az group create \
--name myresourcegroup \
--location northeurope
  • Create the VM from the VM image available in the Microsoft Azure Marketplace:
az vm create \
--name myvm-ent-xx \
--resource-group myresourcegroup \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--public-ip-address-dns-name myvm-ent-xx-dnsname \
--os-disk-name myvm-ent-xx-osdisk

Replace ‘xx’ by '01', '02', '03’ and so on.
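
The result can be verified with 'az vm show' (the '--show-details' flag adds the FQDN and power state to the output):

az vm show \
--name myvm-ent-xx \
--resource-group myresourcegroup \
--show-details \
--query "{Name:name, Fqdn:fqdns, State:powerState}"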


Connection to VM

As a result, a VM is created with the FQDN 'myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com' (the FQDN is a combination of the short DNS name specified during VM creation, the location name and ‘cloudapp.azure.com’) and the OS account ‘azureuser’ (with ‘sudo’ permissions by default).

  • Connect to VM: 
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

Replace ‘xx’ by '01', '02', '03’ and so on.


Postgres Pro database service status

  • Verify the Postgres Pro database service status (the examples below use version 14; adjust the service name to your installed version):
sudo systemctl -l status postgrespro-ent-14.service
  • To stop/start the Postgres Pro database service, use the following commands:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl start postgrespro-ent-14.service


Connection to Postgres Pro database

  • Switch to ‘postgres’ account:
sudo su - postgres
  • To connect to Postgres Pro database use the following command:
psql
  • To exit from ‘psql’ use the following command:
\q
  • To return to the Azure CLI 2.x interface, run the 'exit' command twice.
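
The database version can also be checked in a single command from the 'azureuser' shell, using the pgpro_version() function that appears later in this guide:

sudo su -l postgres -c "psql -c \"select pgpro_version()\""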


External connection to VM

  • TCP-port 5433 has to be opened for external connection to Postgres Pro database: 
az vm open-port \
--name myvm-ent-xx \
--port 5433 \
--resource-group myresourcegroup \
--priority 1001
  • TCP-ports 80 and 443 have to be opened for external connection to database monitoring server: 
az vm open-port \
--name myvm-ent-xx \
--port 80 \
--resource-group myresourcegroup \
--priority 1002

az vm open-port \
--name myvm-ent-xx \
--port 443 \
--resource-group myresourcegroup \
--priority 1003

Replace ‘xx’ by '01', '02', '03’ and so on.
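
The resulting rules can be reviewed through the VM's network security group ('az vm create' generates one automatically, typically named '<vm-name>NSG'; take the exact name from the first command below):

az network nsg list \
--resource-group myresourcegroup \
--query "[].name"

az network nsg rule list \
--nsg-name myvm-ent-xxNSG \
--resource-group myresourcegroup \
--output table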


External connection to Postgres Pro database

  • For external connections to the Postgres Pro database, set the 'postgres' user password:
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit

exit
  • For external connections to the Postgres Pro database using the ‘psql’ utility, use the following command:
psql --host=myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com --port=5433 --username=postgres --dbname=postgres
  • For external connections to the Postgres Pro database using the ‘pgAdmin’ utility, configure the following server settings in the ‘pgAdmin’ menu:
    • ‘mydb-xx’ for ‘Name’
    • ‘myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com’ for ‘Host’
    • ‘5433’ for ‘Port’
    • ‘postgres’ for ‘Maintenance DB’
    • ‘postgres’ for ‘Username’

Replace ‘xx’ by '01', '02', '03’ and so on.
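
For non-interactive use, the password can be supplied via the standard libpq 'PGPASSWORD' environment variable (shown for illustration only; passing passwords through the environment has security implications):

PGPASSWORD='YOUR_POSTGRES_USER_PASSWORD' psql --host=myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com --port=5433 --username=postgres --dbname=postgres -c "select 1"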


External connection to database monitoring server

  • For connections to the database monitoring server, set the 'Admin' user password:
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
source .pgsql_profile
psql --dbname=zabbix --username=zabbix -c "update users set passwd=md5('YOUR_ZABBIX_ADMIN_PASSWORD') where alias='Admin'"
exit

exit
  • External connections to the database monitoring server use the following link:

https://myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com/zabbix

(Replace ‘xx’ by '01', '02', '03’ and so on.)


VM configuration change

Let's look at some examples of VM configuration changes:

1) Change VM size from ‘Standard_DS1_v2’ to ‘Standard_DS2_v2’ to increase VM computing power

  • The $PGDATA/postgresql.tune.lock file can be deleted before increasing the VM size so that Postgres Pro database parameter values are automatically re-tuned for the new size:
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
cp $PGDATA/postgresql.auto.conf $PGDATA/postgresql.auto.conf.ORIG
rm $PGDATA/postgresql.tune.lock
exit

exit
  • List the VM sizes available in the ‘northeurope’ location (‘az vm deallocate’ is not required):
az vm list-vm-resize-options \
--name myvm-ent-xx \
--resource-group myresourcegroup
  • To change VM size use the following command: 
az vm resize \
--name myvm-ent-xx \
--resource-group myresourcegroup \
--size Standard_DS2_v2

Replace ‘xx’ by '01', '02', '03’ and so on. 
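
The new size can be confirmed afterwards:

az vm show \
--name myvm-ent-xx \
--resource-group myresourcegroup \
--query hardwareProfile.vmSize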

2) Increase the OS-disk size to 80 GB

  • Obtain OS-disk size details: 
az disk list \
--resource-group myresourcegroup \
--query "[?starts_with(name,'myvm-ent-xx-osdisk')].{Name:name,Gb:diskSizeGb}"
  • Deallocate VM temporarily: 
az vm deallocate \
--name myvm-ent-xx \
--resource-group myresourcegroup
  • Increase OS-disk size: 
az disk update \
--name myvm-ent-xx-osdisk \
--resource-group myresourcegroup \
--size-gb 80
  • Verify new OS-disk size: 
az disk list \
--resource-group myresourcegroup \
--query "[?starts_with(name,'myvm-ent-xx-osdisk')].{Name:name,Gb:diskSizeGb}"
  • Start VM: 
az vm start \
--name myvm-ent-xx \
--resource-group myresourcegroup
  • Connect to VM: 
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com
  • Increase ‘/’ filesystem partition size:
(echo d; echo 2; echo n; echo p; echo 2; echo ; echo ; echo w) | sudo fdisk /dev/sda
  • Restart VM:
sudo reboot
  • Connect to VM: 
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com
  • Increase ‘/’ filesystem size:
sudo xfs_growfs -d /dev/sda2
  • Restart VM:
sudo reboot

Replace ‘xx’ by '01', '02', '03’ and so on. 
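
After the final restart, the enlarged root filesystem can be verified from inside the VM:

ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

df -h /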

3) Use dedicated datadisk for Postgres Pro database files to improve database performance

  • Create a new 200 GB datadisk and attach it to the VM:
az vm disk attach \
--disk myvm-ent-xx-datadisk \
--resource-group myresourcegroup \
--vm-name myvm-ent-xx \
--caching ReadOnly \
--lun 1 \
--new \
--size-gb 200
  • Connect to VM:
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com
  • Stop Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Create new filesystem mountpoint:
sudo mkdir /PGDATA
  • Use 'lsscsi' utility to find out datadisk device name (in this case it is '/dev/sdc'):
lsscsi
  • Create a single datadisk partition of maximum size and create a new filesystem on top of it:
(echo n; echo p; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/sdc
sudo mkfs -t ext4 /dev/sdc1
  • Add the new filesystem to /etc/fstab for automount, then mount it:
sudo sh -c "echo '`sudo blkid -o export /dev/sdc1 | grep UUID` /PGDATA ext4 defaults,nofail,barrier=0 1 2' >> /etc/fstab"
sudo mount /PGDATA
  • Create 'data' directory on the new filesystem and set its permissions:
sudo mkdir /PGDATA/data
sudo chown postgres:postgres /PGDATA/data
sudo chmod 0700 /PGDATA/data
  • Switch to ‘postgres’ account and move Postgres Pro database files to the new filesystem:
sudo su - postgres
mv /var/lib/pgpro/ent-14/data/* /PGDATA/data; rmdir /var/lib/pgpro/ent-14/data; ln -s /PGDATA/data /var/lib/pgpro/ent-14/data
exit
  • Start Postgres Pro database service and verify its status:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Restart VM, check filesystem automount and verify Postgres Pro database service status:
sudo reboot

ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

sudo mount | grep /PGDATA
sudo df -h /PGDATA

sudo systemctl -l status postgrespro-ent-14.service

Replace ‘xx’ by '01', '02', '03’ and so on.

4) Database service auto restart in case of database failure

  • Edit database service systemd file and restart database service:
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-ent-14.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
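
The 'Restart=on-failure' setting can be tested by simulating a crash (a destructive test for non-production use only; it assumes the default data directory '/var/lib/pgpro/ent-14/data', whose postmaster.pid file holds the postmaster PID on its first line):

sudo kill -9 $(sudo head -1 /var/lib/pgpro/ent-14/data/postmaster.pid)
sleep 10
sudo systemctl -l status postgrespro-ent-14.service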


VM backup/restore

Let's consider how to do a VM backup/restore in Azure (you can find more information about it at https://docs.microsoft.com/en-us/azure/backup/quick-backup-vm-cli).

  • To take a file-consistent backup of the VM, first create a Recovery Services vault:
az backup vault create \
--name myvault-ent-xx \
--resource-group myresourcegroup \
--location northeurope

The 'myvault-ent-xx' Recovery Services vault name will be used in the examples that follow.

  • Change Recovery Services vault storage type to Locally-Redundant if you don't plan to use Geo-Redundant storage:
az backup vault backup-properties set \
--name myvault-ent-xx \
--resource-group myresourcegroup \
--backup-storage-redundancy LocallyRedundant
  • Enable VM backup protection with the default backup policy; it schedules a VM backup once a day:
az backup protection enable-for-vm \
--vm myvm-ent-xx \
--vault-name myvault-ent-xx \
--policy-name DefaultPolicy \
--resource-group myresourcegroup
  • Run the first backup manually, specifying the date until which the recovery point is retained in 'dd-mm-yyyy' format. Provide the VM name for both the '--item-name' and '--container-name' parameters:
az backup protection backup-now \
--item-name myvm-ent-xx \
--container-name myvm-ent-xx \
--vault-name myvault-ent-xx \
--resource-group myresourcegroup \
--retain-until 31-12-2022
  • Monitor backup job status:
az backup job list \
--vault-name myvault-ent-xx \
--resource-group myresourcegroup
  • After the backup job has finished successfully, you can use this backup as a recovery point to restore the VM disk. Create an Azure storage account if you don't have one:
az storage account create \
--name mystorageaccountent \
--resource-group myresourcegroup \
--location northeurope \
--sku Standard_LRS

The 'mystorageaccountent' Azure storage account name will be used in the examples that follow.

  • Obtain the latest recovery point and restore the disks from it:
rp_id=$(az backup recoverypoint list --item-name myvm-ent-xx --container-name myvm-ent-xx --vault-name myvault-ent-xx --resource-group myresourcegroup --query [0].name --output tsv)

az backup restore restore-disks \
--item-name myvm-ent-xx \
--container-name myvm-ent-xx \
--vault-name myvault-ent-xx \
--resource-group myresourcegroup \
--storage-account mystorageaccountent \
--rp-name $rp_id
  • Monitor restore job status:
az backup job list \
--vault-name myvault-ent-xx \
--resource-group myresourcegroup
  • Once the restored data is available in the storage account, collect the required information and create the VM disk:
container_id=$(az storage container list --account-name mystorageaccountent --query [0].name -o tsv)
blob_id=$(az storage blob list --container-name $container_id --account-name mystorageaccountent --query [0].name -o tsv)
uri_id=$(az storage blob url --name $blob_id --container-name $container_id --account-name mystorageaccountent -o tsv)

az disk create \
--name myrestoredvm-ent-xx-osdisk \
--resource-group myresourcegroup \
--source $uri_id
  • Create VM 'myrestoredvm-ent-xx' from restored disk:
az vm create \
--name myrestoredvm-ent-xx \
--resource-group myresourcegroup \
--attach-os-disk myrestoredvm-ent-xx-osdisk \
--location northeurope \
--size Standard_DS1_v2 \
--public-ip-address-dns-name myrestoredvm-ent-xx-dnsname \
--os-type linux
  • Now you can connect to the restored VM:
ssh azureuser@myrestoredvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

Replace ‘xx’ by '01', '02', '03’ and so on.


Postgres Pro database backup/restore

Let's have a look at Postgres Pro database backup/restore options (you can find more information about them at https://postgrespro.com/docs/enterprise/14/backup).

  • Connect to VM:
ssh azureuser@myvm-ent-xx-dnsname.northeurope.cloudapp.azure.com

Make sure Azure CLI 2.x is installed inside the VM (you can find more information about it at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).

  • Switch to ‘postgres’ account:
sudo su - postgres
  • Connect to Microsoft Azure, set the table output format for Azure CLI 2.x commands and prepare the environment:
az login

az configure

echo 'export backup_home=$HOME/backup' >> .pgpro_profile
echo 'export file_date=$(date +"%Y%m%d-%H%M%S")' >> .pgpro_profile
echo 'export db_name=testdb' >> .pgpro_profile
echo 'export instance_name=myinstancename' >> .pgpro_profile
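
The variables above take effect at the next login of the 'postgres' account; to use them in the current session, source the profile first:

source ~/.pgpro_profile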
  • Create storage container for database backups:
az storage account create \
--name mystorageaccountent \
--resource-group myresourcegroup \
--location northeurope \
--sku Standard_LRS

az storage container create \
--name mydbbackup-ent-xx \
--account-name mystorageaccountent

exit

The 'mydbbackup-ent-xx' storage container name will be used in the examples that follow.

1) Logical backup

1a) using 'pg_dump'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare the environment (a temporary database will be used below):
rm -rf $backup_home
mkdir $backup_home

psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done

db_owner=$(psql -c "\l $db_name" | grep $db_name | awk '{print $3}')
dump_backup_file=dump_$db_name-backup-$file_date.gz
  • Create temporary database dump:
pg_dump $db_name | gzip > $backup_home/$dump_backup_file

ls $backup_home/$dump_backup_file

gzip -ltv $backup_home/$dump_backup_file
  • Upload temporary database dump into storage container:
az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$dump_backup_file \
--name $dump_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx
  • Delete the temporary database and its dump, then restore the temporary database from the storage container:
psql -c "drop database $db_name"

rm $backup_home/$dump_backup_file
ls $backup_home/$dump_backup_file

psql -c "create database $db_name"
psql -c "alter database $db_name owner to $db_owner"

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$dump_backup_file \
--name $dump_backup_file

ls $backup_home/$dump_backup_file

gzip -cdv $backup_home/$dump_backup_file | psql $db_name
  • Run test SQL-query on temporary database:
for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit
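
Note: as an alternative to a plain-text dump piped through 'gzip', 'pg_dump' also supports a custom format, which is compressed by default and allows selective restore via 'pg_restore'. A minimal sketch, run from the 'postgres' account with the environment variables set above:

pg_dump --format=custom --file=$backup_home/$db_name.dmp $db_name
pg_restore --dbname=$db_name --clean --if-exists $backup_home/$db_name.dmp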

1b) using 'pg_dumpall'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare the environment (a temporary database will be used below):
rm -rf $backup_home
mkdir $backup_home

psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done

dumpall_backup_file=dumpall-backup-$file_date.gz
  • Create Postgres Pro database dump:
pg_dumpall | gzip > $backup_home/$dumpall_backup_file

ls $backup_home/$dumpall_backup_file

gzip -ltv $backup_home/$dumpall_backup_file
  • Upload Postgres Pro database dump into storage container:
az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$dumpall_backup_file \
--name $dumpall_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx
  • Delete the temporary database and the Postgres Pro database dump, then restore the Postgres Pro database from the storage container:
psql -c "drop database $db_name"

rm $backup_home/$dumpall_backup_file
ls $backup_home/$dumpall_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$dumpall_backup_file \
--name $dumpall_backup_file

ls $backup_home/$dumpall_backup_file

gzip -cdv $backup_home/$dumpall_backup_file | psql postgres
  • Run test SQL-query on temporary database:
for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit

2) File system level backup

2a) using 'tar'

  • Stop Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
  • Create Postgres Pro database backup:
cd $PGDATA
tar -zcvf $backup_home/$db_backup_file *

ls $backup_home/$db_backup_file

tar -ztvf $backup_home/$db_backup_file
  • Upload Postgres Pro database backup into storage container:
az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx
  • Delete the Postgres Pro database files and the database backup, then restore the Postgres Pro database from the storage container:
rm -rf $PGDATA/*
ls $PGDATA/

rm $backup_home/$db_backup_file
ls $backup_home/$db_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$db_backup_file \
--name $db_backup_file

ls $backup_home/$db_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file

ls $PGDATA

exit
  • Start Postgres Pro database service, verify its status and run test SQL-query on Postgres Pro database:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

2b) using 'pg_basebackup'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
wal_backup_file=wal-backup-$file_date.tgz
  • Create backup of Postgres Pro database and WAL files:
pg_basebackup \
--pgdata=$backup_home \
--format=tar \
--wal-method=stream \
--gzip \
--checkpoint=fast \
--label=$file_date \
--progress \
--verbose

ls $backup_home/base.tar.gz
ls $backup_home/pg_wal.tar.gz

tar -ztvf $backup_home/base.tar.gz
tar -ztvf $backup_home/pg_wal.tar.gz
  • Upload backup of Postgres Pro database and WAL files into storage container:
az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/base.tar.gz \
--name $db_backup_file

az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/pg_wal.tar.gz \
--name $wal_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files together with the database and WAL backups, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})
wal_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^wal-backup | tail -n 1 | awk {'print $1'})

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$wal_backup_file \
--name $wal_backup_file

ls $backup_home/$db_backup_file
ls $backup_home/$wal_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file
cd $PGDATA/pg_wal
tar -zxvf $backup_home/$wal_backup_file

exit
  • Start Postgres Pro database service, verify its status and run test SQL-query on Postgres Pro database:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3) Continuous archiving and point-in-time recovery

3a) full backup via 'pg_basebackup'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='az storage blob upload --container-name mydbbackup-ent-xx --account-name mystorageaccountent --file %p --name %f'"

exit
  • Restart the Postgres Pro database service, verify its status and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
wal_backup_file=wal-backup-$file_date.tgz
  • Create backup of Postgres Pro database and WAL files:
pg_basebackup \
--pgdata=$backup_home \
--format=tar \
--wal-method=stream \
--gzip \
--checkpoint=fast \
--label=$file_date \
--progress \
--verbose

ls $backup_home/base.tar.gz
ls $backup_home/pg_wal.tar.gz

tar -ztvf $backup_home/base.tar.gz
tar -ztvf $backup_home/pg_wal.tar.gz
  • Upload backup of Postgres Pro database and WAL files into storage container:
az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/base.tar.gz \
--name $db_backup_file

az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/pg_wal.tar.gz \
--name $wal_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files together with the database and WAL backups, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})
wal_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^wal-backup | tail -n 1 | awk {'print $1'})

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $backup_home/$wal_backup_file \
--name $wal_backup_file

ls $backup_home/$db_backup_file
ls $backup_home/$wal_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file
cd $PGDATA/pg_wal
tar -zxvf $backup_home/$wal_backup_file

exit
  • Start Postgres Pro database service, verify its status and run test SQL-query on Postgres Pro database:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3b) full backup via 'pg_probackup'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='/opt/pgpro/ent-14/bin/pg_probackup archive-push -B $backup_home --instance $instance_name --wal-file-path %p --wal-file-name %f'"

exit
  • Restart the Postgres Pro database service, verify its status and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz

pg_probackup init -B $backup_home
pg_probackup add-instance -B $backup_home -D $PGDATA --instance $instance_name
pg_probackup show-config -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create backup of Postgres Pro database and WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b FULL --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Upload backup of Postgres Pro database and WAL files into storage container:
cd $backup_home
tar -zcvf $HOME/$db_backup_file *

ls $HOME/$db_backup_file

tar -ztvf $HOME/$db_backup_file

az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $HOME/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files together with the database and WAL backups, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})

rm $HOME/$db_backup_file
ls $HOME/$db_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $HOME/$db_backup_file \
--name $db_backup_file

ls $HOME/$db_backup_file

cd $backup_home
tar -zxvf $HOME/$db_backup_file

backup_id=$(pg_probackup show -B $backup_home | grep $instance_name | grep FULL | awk '{print $3}')
pg_probackup restore -B $backup_home -D $PGDATA --instance $instance_name -i $backup_id --progress

exit
  • Start Postgres Pro database service, verify its status and run test SQL-query on Postgres Pro database:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3c) incremental backup via 'pg_probackup'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='/opt/pgpro/ent-14/bin/pg_probackup archive-push -B $backup_home --instance $instance_name --wal-file-path %p --wal-file-name %f'"

exit
  • Restart the Postgres Pro database service, verify its status and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare the environment (a temporary database will be used below):
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz

pg_probackup init -B $backup_home
pg_probackup add-instance -B $backup_home -D $PGDATA --instance $instance_name
pg_probackup show-config -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create full backup of Postgres Pro database and backup of WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b FULL --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create temporary database:
psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done
  • Create incremental backup of Postgres Pro database and backup of WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b PAGE --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Upload full and incremental backups of Postgres Pro database and backup of WAL files into storage container:
cd $backup_home
tar -zcvf $HOME/$db_backup_file *

ls $HOME/$db_backup_file

tar -ztvf $HOME/$db_backup_file

az storage blob upload \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $HOME/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountent \
--container-name mydbbackup-ent-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files together with the database and WAL backups, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountent --container-name mydbbackup-ent-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})

rm $HOME/$db_backup_file
ls $HOME/$db_backup_file

az storage blob download \
--container-name mydbbackup-ent-xx \
--account-name mystorageaccountent \
--file $HOME/$db_backup_file \
--name $db_backup_file

ls $HOME/$db_backup_file

cd $backup_home
tar -zxvf $HOME/$db_backup_file

backup_id=$(pg_probackup show -B $backup_home | grep $instance_name | grep PAGE | awk '{print $3}')
pg_probackup restore -B $backup_home -D $PGDATA --instance $instance_name -i $backup_id --progress

exit
  • Start Postgres Pro database service, verify its status and run test SQL-query on Postgres Pro database:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

sudo su - postgres

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit

Replace ‘xx’ by '01', '02', '03’ and so on.


Postgres Pro database high availability

Let's have a look at Postgres Pro database high availability options (you can find more information about it at https://postgrespro.com/docs/enterprise/14/high-availability) taking Patroni cluster as an example (you can find more information about it at https://github.com/zalando/patroni). Assume Patroni cluster node names are: 'myvm-ent-01', 'myvm-ent-02' and 'myvm-ent-03'.

  • Create resource group 'myresourcegroup' and availability set 'myavailabilityset':
az group create \
--name myresourcegroup \
--location northeurope
az vm availability-set create \
--name myavailabilityset \
--resource-group myresourcegroup \
--platform-fault-domain-count 3 \
--platform-update-domain-count 3
  • Create virtual local network 'myvnet' 10.0.0.0/8 and subnetwork 'myvnetsubnet' 10.0.0.0/24:
az network vnet create \
--name myvnet \
--location northeurope \
--resource-group myresourcegroup \
--subnet-name myvnetsubnet \
--subnet-prefix 10.0.0.0/24 \
--address-prefixes 10.0.0.0/8
  • Create network security group 'mynsg':
az network nsg create \
--name mynsg \
--location northeurope \
--resource-group myresourcegroup
  • Create network security group rules to allow incoming traffic to TCP-ports 22 (ssh), 5433 (Postgres) and 80/443 (http/https):
array=(AllowInboundSsh 22 1000 AllowInboundPostgresql 5433 1001 AllowInboundHttp 80 1002 AllowInboundHttps 443 1003); for i in `echo 0 3 6 9`; do
az network nsg rule create \
--name "${array[$i]}" \
--resource-group myresourcegroup \
--nsg-name mynsg \
--access Allow \
--direction Inbound \
--protocol Tcp \
--destination-port-range "${array[$i+1]}" \
--priority "${array[$i+2]}"
done
  • Create dynamic public IP-addresses for VMs:
for i in `seq 1 3`; do
az network public-ip create \
--name myvm-ent-0$i-public-ip \
--location northeurope \
--resource-group myresourcegroup \
--dns-name myvm-ent-0$i-dnsname \
--sku Basic \
--allocation-method Dynamic
done
  • Create network interfaces for VMs and assign dynamic public and static private IP-addresses to them:
for i in `seq 1 3`; do
az network nic create \
--name myvm-ent-0$i-nic \
--location northeurope \
--resource-group myresourcegroup \
--vnet-name myvnet \
--subnet myvnetsubnet \
--network-security-group mynsg \
--public-ip-address myvm-ent-0$i-public-ip \
--private-ip-address 10.0.0.10$i
done

Private IP-addresses assignment: 10.0.0.101 (myvm-ent-01), 10.0.0.102 (myvm-ent-02), 10.0.0.103 (myvm-ent-03).

  • Finally, create VMs:
for i in `seq 1 3`; do
az vm create \
--name myvm-ent-0$i \
--resource-group myresourcegroup \
--availability-set myavailabilityset \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--nics myvm-ent-0$i-nic \
--public-ip-sku Basic \
--os-disk-name myvm-ent-0$i-osdisk
done
  • Now connect to each of the three VMs and run the same set of commands on all of them:
ssh azureuser@myvm-ent-01-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-ent-02-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-ent-03-dnsname.northeurope.cloudapp.azure.com
  • Configure Postgres Pro database, Consul, haproxy and Patroni services:
sudo systemctl disable postgrespro-ent-14.service

sudo sh -c "echo '' >> /etc/hosts"
for i in `seq 1 3`; do
sudo sh -c "echo '10.0.0.10$i myvm-ent-0$i' >> /etc/hosts"
done

sudo sed -i "/retry_join/s|\[\]|\[\"myvm-ent-01\", \"myvm-ent-02\", \"myvm-ent-03\"\]|" /etc/consul.d/consul.hcl
sudo sed -i "s|# retry_join|retry_join|" /etc/consul.d/consul.hcl

sudo systemctl daemon-reload; sudo systemctl enable consul.service; sudo systemctl start consul.service; sudo systemctl -l status consul.service

for i in `seq 1 3`; do
sudo sh -c "echo ' server myvm-ent-0$i 10.0.0.10$i:5433 maxconn 100 check port 8008' >> /etc/haproxy/haproxy.cfg"
done

sudo systemctl daemon-reload; sudo systemctl enable haproxy; sudo systemctl start haproxy; sudo systemctl -l status haproxy

sudo sed -i "s|# name:|name: $HOSTNAME|" /etc/patroni/config.yml
sudo sed -i "/connect_address/s|127.0.0.1|`hostname -i`|" /etc/patroni/config.yml
  • Run the following commands on the first VM:
sudo su - postgres

psql -c "alter system set wal_level to 'replica'"
psql -c "alter system set hot_standby to 'on'"
psql -c "alter system set wal_keep_size to '128MB'"
psql -c "alter system set max_wal_senders to '5'"
psql -c "alter system set max_replication_slots to '5'"
psql -c "alter system set checkpoint_timeout to '30'"

psql -c "create user patroni_replicator with replication encrypted password 'replicatorpassword'"
psql -c "create user patroni_superuser with superuser encrypted password 'superuserpassword'"

for i in `seq 1 3`; do
sed -i "/^host all.*md5/i host replication patroni_replicator myvm-ent-0$i md5" $PGDATA/pg_hba.conf
done

for i in `seq 1 3`; do
echo "myvm-ent-0$i:5433:replication:patroni_replicator:replicatorpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass

exit

sudo systemctl restart postgrespro-ent-14.service
  • Run the following commands on the second and third VMs:
sudo systemctl stop postgrespro-ent-14.service

sudo su - postgres

rm -rf $PGDATA/*

for i in `seq 1 3`; do
echo "myvm-ent-0$i:5433:replication:patroni_replicator:replicatorpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass

exit
  • Start Patroni service on the first VM:
sudo systemctl daemon-reload; sudo systemctl enable patroni.service; sudo systemctl start patroni.service; sudo systemctl -l status patroni.service
sudo patronictl -c /etc/patroni/config.yml restart mypatroni myvm-ent-01
  • Start Patroni service on the second and third VMs:
sudo systemctl daemon-reload; sudo systemctl enable patroni.service; sudo systemctl start patroni.service; sudo systemctl -l status patroni.service
  • Use the following commands to monitor Patroni cluster and streaming replication status:
sudo patronictl -c /etc/patroni/config.yml list

psql 'postgres://patroni_superuser:superuserpassword@myvm-ent-01:5000/postgres' -x -c 'table pg_stat_replication'
psql 'postgres://patroni_superuser:superuserpassword@myvm-ent-02:5000/postgres' -x -c 'table pg_stat_replication'
psql 'postgres://patroni_superuser:superuserpassword@myvm-ent-03:5000/postgres' -x -c 'table pg_stat_replication'
  • To return to the Azure CLI 2.x interface, run the 'exit' command twice.


Postgres Pro Enterprise Multimaster

Multimaster is a Postgres Pro Enterprise extension with a set of patches that turns Postgres Pro Enterprise into a synchronous shared-nothing cluster to provide Online Transaction Processing (OLTP) scalability for read transactions and high availability with automatic disaster recovery.

You can find more information about Postgres Pro Enterprise Multimaster at:

https://postgrespro.com/docs/enterprise/14/multimaster

Let's proceed with 3-node Multimaster installation and configuration. Assume Multimaster node names are 'myvm-ent-01', 'myvm-ent-02' and 'myvm-ent-03'.

  • Create resource group 'myresourcegroup' and availability set 'myavailabilityset':
az group create \
--name myresourcegroup \
--location northeurope
az vm availability-set create \
--name myavailabilityset \
--resource-group myresourcegroup \
--platform-fault-domain-count 3 \
--platform-update-domain-count 3
  • Create virtual local network 'myvnet' 10.0.0.0/8 and subnetwork 'myvnetsubnet' 10.0.0.0/24:
az network vnet create \
--name myvnet \
--location northeurope \
--resource-group myresourcegroup \
--subnet-name myvnetsubnet \
--subnet-prefix 10.0.0.0/24 \
--address-prefixes 10.0.0.0/8
  • Create network security group 'mynsg':
az network nsg create \
--name mynsg \
--location northeurope \
--resource-group myresourcegroup
  • Create network security group rules to allow incoming traffic to TCP-ports 22 (ssh), 5433 (Postgres) and 80/443 (http/https):
array=(AllowInboundSsh 22 1000 AllowInboundPostgresql 5433 1001 AllowInboundHttp 80 1002 AllowInboundHttps 443 1003); for i in `echo 0 3 6 9`; do
az network nsg rule create \
--name "${array[$i]}" \
--resource-group myresourcegroup \
--nsg-name mynsg \
--access Allow \
--direction Inbound \
--protocol Tcp \
--destination-port-range "${array[$i+1]}" \
--priority "${array[$i+2]}"
done
  • Create dynamic public IP-addresses for VMs:
for i in `seq 1 3`; do
az network public-ip create \
--name myvm-ent-0$i-public-ip \
--location northeurope \
--resource-group myresourcegroup \
--dns-name myvm-ent-0$i-dnsname \
--sku Basic \
--allocation-method Dynamic
done
  • Create network interfaces for VMs and assign dynamic public and static private IP-addresses to them:
for i in `seq 1 3`; do
az network nic create \
--name myvm-ent-0$i-nic \
--location northeurope \
--resource-group myresourcegroup \
--vnet-name myvnet \
--subnet myvnetsubnet \
--network-security-group mynsg \
--public-ip-address myvm-ent-0$i-public-ip \
--private-ip-address 10.0.0.10$i
done

Private IP-addresses assignment: 10.0.0.101 (myvm-ent-01), 10.0.0.102 (myvm-ent-02), 10.0.0.103 (myvm-ent-03).

  • Finally, create VMs:
for i in `seq 1 3`; do
az vm create \
--name myvm-ent-0$i \
--resource-group myresourcegroup \
--availability-set myavailabilityset \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--nics myvm-ent-0$i-nic \
--public-ip-sku Basic \
--os-disk-name myvm-ent-0$i-osdisk
done
  • Now connect to each of the three VMs and run the same set of commands on all of them:
ssh azureuser@myvm-ent-01-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-ent-02-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-ent-03-dnsname.northeurope.cloudapp.azure.com
  • Configure replicated 'mydb' database:
sudo su - postgres
psql -c "create user myuser with superuser encrypted password 'myuserpassword'"
psql --username=myuser -c "create database mydb"
sed -i 's/PGDATABASE=postgres/PGDATABASE=mydb/' .pgpro_profile
sed -i 's/PGUSER=postgres/PGUSER=myuser/' .pgpro_profile
source .pgpro_profile

for i in `seq 1 3`; do
echo "hostssl replication myuser myvm-ent-0$i md5" >> $PGDATA/pg_hba.conf
echo "myvm-ent-0$i:5433:mydb:myuser:myuserpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass
pg_ctl reload

echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "# MULTIMASTER SETTINGS" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "multimaster.max_nodes = 3" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf

psql -c "alter system set default_transaction_isolation to 'read committed'"
psql -c "alter system set wal_level to logical"
psql -c "alter system set max_connections to 100"
psql -c "alter system set max_prepared_transactions to 300"
psql -c "alter system set max_wal_senders to 10"
psql -c "alter system set max_replication_slots to 10"
psql -c "alter system set max_worker_processes to 250"
psql -c "alter system set shared_preload_libraries to multimaster,pg_stat_statements,pg_buffercache,pg_wait_sampling"

psql -c "alter system set wal_sender_timeout to 0"

exit
  • Configure mamonsu agent for 'mydb' database and restart mamonsu service:
sudo sed -i 's|user = mamonsu|user = myuser|' /etc/mamonsu/agent.conf
sudo sed -i 's|database = mamonsu|database = mydb|' /etc/mamonsu/agent.conf
sudo systemctl restart mamonsu.service
  • Restart Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service

exit
  • Now connect to the first VM:
ssh azureuser@myvm-ent-01-dnsname.northeurope.cloudapp.azure.com
  • and create Multimaster extension:
sudo su - postgres
psql
create extension if not exists multimaster;
select mtm.init_cluster('dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require','{"dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require", "dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require"}');
\q
  • Create other extensions required for mamonsu service:
psql -c "create extension if not exists pg_buffercache"
psql -c "create extension if not exists pg_stat_statements"
psql -c "create extension if not exists pg_wait_sampling"
  • Verify the extensions have been successfully created:
psql --host=myvm-ent-01 -c "select * from pg_extension"
psql --host=myvm-ent-02 -c "select * from pg_extension"
psql --host=myvm-ent-03 -c "select * from pg_extension"
  • Configure mamonsu for Multimaster:
mamonsu bootstrap --dbname mydb --username postgres --host 127.0.0.1 --port 5433 --mamonsu-username=myuser
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_config')"
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_timestamp_master_2_7_1')"
  • Use the following commands to monitor Multimaster status:
psql --host=myvm-ent-01 -x -c "select mtm.status()"
psql --host=myvm-ent-02 -x -c "select mtm.status()"
psql --host=myvm-ent-03 -x -c "select mtm.status()"

psql --host=myvm-ent-01 -x -c "select mtm.nodes()"
psql --host=myvm-ent-02 -x -c "select mtm.nodes()"
psql --host=myvm-ent-03 -x -c "select mtm.nodes()"
  • To return to the Azure CLI 2.x interface, run the 'exit' command twice.


External connection to Postgres Pro Enterprise Multimaster database

Let's look at how to connect to Postgres Pro Enterprise Multimaster database via Azure Load Balancer.

You can find more information about it at https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview.

  • Create load balancer public IP-address 'myvm-ent-lb-public-ip' and DNS-name 'myvm-ent-lb-dnsname':
az network public-ip create \
--name myvm-ent-lb-public-ip \
--dns-name myvm-ent-lb-dnsname \
--resource-group myresourcegroup
  • Create load balancer 'myloadbalancer':
az network lb create \
--name myloadbalancer \
--resource-group myresourcegroup \
--frontend-ip-name myFrontEndPool \
--backend-pool-name myBackEndPool \
--public-ip-address myvm-ent-lb-public-ip
  • Create load balancer health probe 'PostgresqlHealthProbe' for Postgres Pro database service:
az network lb probe create \
--name PostgresqlHealthProbe \
--lb-name myloadbalancer \
--resource-group myresourcegroup \
--protocol tcp \
--port 5433
  • Create load balancer rule 'PostgresqlLoadBalancerRule' to distribute incoming requests to Postgres Pro database to Multimaster nodes:
az network lb rule create \
--name PostgresqlLoadBalancerRule \
--lb-name myloadbalancer \
--probe-name PostgresqlHealthProbe \
--resource-group myresourcegroup \
--protocol tcp \
--frontend-port 5433 \
--backend-port 5433 \
--frontend-ip-name myFrontEndPool \
--backend-pool-name myBackEndPool
  • Now add Multimaster nodes to load balancer configuration:
for i in `seq 1 3`; do
ipconfig_id=$(az network nic list --resource-group myresourcegroup --output tsv --query "[?starts_with(name,'myvm-ent-0$i')].[ipConfigurations[0].name]")
az network nic ip-config address-pool add \
--nic-name myvm-ent-0$i-nic \
--ip-config-name $ipconfig_id \
--lb-name myloadbalancer \
--resource-group myresourcegroup \
--address-pool myBackEndPool
done
  • Connect to Postgres Pro Enterprise Multimaster database via load balancer:
psql --host=myvm-ent-lb-dnsname.northeurope.cloudapp.azure.com --port=5433 --user=myuser --dbname=mydb
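
The backend pool membership can be verified as follows (the IP configurations of all three node NICs should be listed):

az network lb address-pool show \
--name myBackEndPool \
--lb-name myloadbalancer \
--resource-group myresourcegroup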


Postgres Pro Enterprise Multimaster (2-node + referee node configuration)

If a 3-node Multimaster configuration is overkill (it triples the database storage size), a 2-node Multimaster configuration is still possible, either by nominating one of the two nodes a 'major' node (multimaster.major_node=on) or by using a lightweight referee node instead of a third full node.

To migrate from a 3-node Multimaster to a 2-node + referee node configuration, proceed as follows.

  • Exclude the 'myvm-ent-03' node from the Multimaster configuration, remove Multimaster settings from the 'myvm-ent-03' referee node and change Multimaster settings on the 'myvm-ent-01' and 'myvm-ent-02' nodes:
sudo su - postgres
sed -i '/multimaster/d' $PGDATA/postgresql.conf
psql --host=myvm-ent-03 -c "alter system set shared_preload_libraries to pg_stat_statements,pg_buffercache,pg_wait_sampling"

psql --host=myvm-ent-01 -c "select mtm.drop_node(3)"
psql --host=myvm-ent-01 -x -c "select mtm.nodes()"
psql --host=myvm-ent-01 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"
psql --host=myvm-ent-02 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"

exit

sudo systemctl restart postgrespro-ent-14.service

sudo su - postgres
psql --host=myvm-ent-03 -c "drop extension multimaster"
psql --host=myvm-ent-03 -c "drop publication if exists multimaster"
psql --host=myvm-ent-03 -c "create extension referee"
exit

sudo systemctl restart postgrespro-ent-14.service
  • Apply the new Multimaster settings on the 'myvm-ent-01' and 'myvm-ent-02' nodes:
sudo systemctl restart postgrespro-ent-14.service
  • Finally, drop replication slots and finalize settings on 'myvm-ent-03' referee node:
sudo su - postgres
sed -i '/^#/!d' $PGDATA/postgresql.auto.conf
echo "shared_preload_libraries = 'pg_stat_statements, pg_buffercache, pg_wait_sampling'" >> $PGDATA/postgresql.auto.conf
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_1')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_filter_slot_1')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_2')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_filter_slot_2')"
psql --host=myvm-ent-03 -c "select pg_replication_origin_drop('mtm_slot_1')"
psql --host=myvm-ent-03 -c "select pg_replication_origin_drop('mtm_slot_2')"
exit

sudo systemctl restart postgrespro-ent-14.service
  • Check the status of Multimaster on each node:
sudo su - postgres
psql --dbname=mydb --username=myuser --host=myvm-ent-01 --port=5433 -x -c "select mtm.nodes()"
psql --dbname=mydb --username=myuser --host=myvm-ent-02 --port=5433 -x -c "select mtm.nodes()"
psql --dbname=mydb --username=myuser --host=myvm-ent-03 --port=5433 -c "select * from referee.decision"
exit


Postgres Pro Enterprise CFS (compressed file system)

To use the functionality of the Postgres Pro Enterprise CFS (compressed file system) extension, proceed with the following commands.

  • Create an OS filesystem directory for the 'cfs_ts' tablespace, then create the 'cfs_ts' tablespace in the database:
sudo su - postgres
mkdir $PGDATA/../cfs_ts
chmod 0700 $PGDATA/../cfs_ts
psql -c "create tablespace cfs_ts location '/var/lib/pgpro/ent-14/cfs_ts' with (compression=true)"
exit
  • Make sure the 'cfs_ts' tablespace has been created with 'compression=true' option: 
sudo su - postgres
psql -c "select * from pg_tablespace"
exit

Use one of the following ways to place new database objects in the 'cfs_ts' tablespace:

  • Upon database object creation: 
sudo su - postgres
psql -c "create table t1 (t int) tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
  • Setting default tablespace for current database connection: 
sudo su - postgres
psql
set default_tablespace=cfs_ts;
show default_tablespace;
create table t2 (t int);
select tablename, tablespace from pg_tables where schemaname = 'public';
\q
exit
  • Setting default tablespace for particular database: 
sudo su - postgres
psql --dbname=postgres -c "alter database mydb set tablespace cfs_ts"
psql -c "select datname, dattablespace from pg_database"
exit
  • Setting default tablespace for particular user/role: 
sudo su - postgres
psql --username=postgres -c "alter user myuser set default_tablespace to 'cfs_ts'"
psql -c "select usename, useconfig from pg_user"
psql -c "select rolname, rolconfig from pg_roles"
exit

Use one of the following ways to move existing database objects from one tablespace ('pg_default') to another ('cfs_ts'):

  • One by one:
sudo su - postgres
psql -c "create table t3 (t int)"
psql -c "alter table t3 set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
  • All together:
sudo su - postgres
psql -c "alter table all in tablespace pg_default set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit

Depending on the data stored in the 'cfs_ts' tablespace, the compression ratio may vary.
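
The achieved ratio can be inspected per relation with the cfs_compression_ratio() function provided by the CFS extension (see the Postgres Pro Enterprise CFS documentation; shown here for the 't1' table created above):

sudo su - postgres
psql -c "select cfs_compression_ratio('t1'::regclass)"
exit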


Postgres Pro Enterprise (VM) content

OS - Linux CentOS 7.x (64-bit)

  • OS-account - ‘postgres’
  • OS-account - ‘zabbix’
  • OS-account - ‘mamonsu’

OS-disk size - 30 GB

  • xfs filesystem ‘/boot’ (/dev/sda1) - 1 GB
  • xfs filesystem ‘/’ (/dev/sda2) - 29 GB

Main database - Postgres Pro Enterprise

  • DB version: 9.6/10/11/12/13/14
  • TCP-port: 5433 (opened in OS-firewall settings)
  • configuration file: /var/lib/pgsql/.pgpro_profile
  • database account: ’postgres’

Database monitoring (server)

  • zabbix-server version: 4.x
  • TCP-ports: 80/443 (opened in OS-firewall settings)
  • account: ‘Admin’

Database monitoring (agent)

  • zabbix-agent version: 4.x
  • mamonsu-agent version: 2.x
  • configuration file: /etc/mamonsu/agent.conf

Auxiliary database PostgreSQL (as a zabbix-server database)

  • DB version: 10/11/12/13/14
  • TCP-port: 5432
  • configuration file: /var/lib/pgsql/.pgsql_profile
  • database account: 'postgres'


Documentation links