Postgres Pro Standard on Microsoft Azure

Postgres Pro Standard (VM) in Microsoft Azure Quick Start Guide

To use the Postgres Pro Standard (VM) database in the Microsoft Azure cloud, you need Internet access and a valid Microsoft Azure account.

Postgres Pro Standard 9.5/9.6/10/11/12/13 virtual machine images are available in the Microsoft Azure Marketplace.

Software required for installation:

  • Azure CLI 2.x for cloud management
  • 'psql' or 'pgAdmin' for database connections

Azure CLI 2.x is a cross-platform command-line utility.

Azure Portal https://portal.azure.com or Azure PowerShell can be used as well.

Azure CLI 2.x Installation guide: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Azure CLI 2.x Get started guide: https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli

Azure CLI 2.x Command reference guide: https://docs.microsoft.com/en-us/cli/azure

Azure Linux Virtual Machine Installation guide: https://docs.microsoft.com/en-us/azure/virtual-machines/linux

Azure Backup Documentation: https://docs.microsoft.com/en-us/azure/backup/
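Before proceeding, it can be worth checking that the client tools are actually on PATH; a minimal sketch:

```shell
# Sanity check (sketch): verify the required client tools are installed
for tool in az psql; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT installed"
  fi
done
```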


Connection to Azure and environment check

  • Connect to Microsoft Azure with:
az login

or

az login --username <myusername>
  • Set the Azure CLI 2.x output format to table:
az configure
  • Verify Azure CLI 2.x version (should be the latest available):
az --version | head -1
  • Make sure that the required resource providers (Microsoft.Storage, Microsoft.Compute and Microsoft.Network) are registered:
az provider show --namespace Microsoft.Storage
az provider show --namespace Microsoft.Compute
az provider show --namespace Microsoft.Network
  • If they are not, register them:
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Network
  • List the locations available for VMs:
az account list-locations

‘northeurope’ location will be used further.

  • A list of available VM sizes in the ‘northeurope’ location:
az vm list-sizes --location northeurope

‘Standard_DS1_v2’ VM size will be used further (available for ‘Free Trial’ subscription).

  • Obtain the publisher name of the Postgres Pro VM image in the Microsoft Azure Marketplace for the ‘northeurope’ location:
az vm image list-publishers \
--location northeurope \
--query "[?starts_with(name,'postgres')].{Name:name}"

The ‘postgres-pro’ VM publisher name will be used further.

  • Obtain the Postgres Pro VM image names available in the Microsoft Azure Marketplace for the ‘postgres-pro’ publisher in the ‘northeurope’ location:
az vm image list \
--publisher postgres-pro \
--location northeurope \
--all \
--query "[?contains(urn,'standard')].{Urn:urn}"
  • The following VM image name will be used further:
urn_id='postgres-pro:postgres-pro-standard-95:pgpro-std-95-centos7-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard:pgpro-std-96-centos7-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-10:pgpro-std-10-centos7-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-11:pgpro-std-11-centos7-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-12-vm:pgpro-std-12-centos7-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-13-vm:pgpro-std-13-centos7-x64-byol:latest'
  • Accept the Marketplace terms to enable programmatic deployment of the VM:
az vm image terms accept --urn $urn_id
  • Create a private/public SSH key pair in the ~/.ssh directory to connect to the VM:
ssh-keygen -t rsa -b 2048
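The command above prompts for a file path and passphrase. A non-interactive variant, sketched here on a throwaway path (the real keys belong in ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), plus a fingerprint check:

```shell
# Sketch: generate a throwaway 2048-bit RSA key pair non-interactively,
# then print its fingerprint; real keys should go to ~/.ssh/id_rsa[.pub]
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_id_rsa -q
ssh-keygen -lf /tmp/demo_id_rsa.pub
```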


VM creation

  • Create a resource group:
az group create \
--name myresourcegroup \
--location northeurope
  • Create a VM from the VM image available in the Microsoft Azure Marketplace:
az vm create \
--name myvm-std-xx \
--resource-group myresourcegroup \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--public-ip-address-dns-name myvm-std-xx-dnsname \
--os-disk-name myvm-std-xx-osdisk

Replace ‘xx’ with '01', '02', '03’ and so on.
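The ‘xx’ suffix convention used throughout this guide can also be scripted; a hypothetical sketch generating the zero-padded names:

```shell
# Sketch: generate the zero-padded 'xx' VM names used in this guide
for i in 1 2 3; do
  printf 'myvm-std-%02d\n' "$i"
done
# prints myvm-std-01, myvm-std-02, myvm-std-03
```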


Connection to VM

As a result, the VM is created with the FQDN ‘myvm-std-xx-dnsname.northeurope.cloudapp.azure.com’ (the FQDN combines the short DNS name specified during VM creation, the location name, and ‘cloudapp.azure.com’) and the OS account ‘azureuser’, which has ‘sudo’ permissions by default.
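The FQDN pattern can be assembled from its parts; a sketch:

```shell
# Sketch: the FQDN is <dns-name>.<location>.cloudapp.azure.com
dns_name=myvm-std-01-dnsname
location=northeurope
fqdn="${dns_name}.${location}.cloudapp.azure.com"
echo "$fqdn"
```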

  • Connect to VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

Replace ‘xx’ with '01', '02', '03’ and so on.


Postgres Pro database service status

  • Verify Postgres Pro database service status:
sudo systemctl -l status postgrespro-9.5.service
or
sudo systemctl -l status postgrespro-9.6.service
or
sudo systemctl -l status postgrespro-std-10.service
or
sudo systemctl -l status postgrespro-std-11.service
or
sudo systemctl -l status postgrespro-std-12.service
or
sudo systemctl -l status postgrespro-std-13.service
  • To stop/start Postgres Pro database service, use the following commands:
sudo systemctl stop postgrespro-9.5.service
sudo systemctl start postgrespro-9.5.service
or
sudo systemctl stop postgrespro-9.6.service
sudo systemctl start postgrespro-9.6.service
or
sudo systemctl stop postgrespro-std-10.service
sudo systemctl start postgrespro-std-10.service
or
sudo systemctl stop postgrespro-std-11.service
sudo systemctl start postgrespro-std-11.service
or
sudo systemctl stop postgrespro-std-12.service
sudo systemctl start postgrespro-std-12.service
or
sudo systemctl stop postgrespro-std-13.service
sudo systemctl start postgrespro-std-13.service


Connection to Postgres Pro database

  • Switch to ‘postgres’ account:
sudo su - postgres
  • To connect to Postgres Pro database, use the following command:
psql
  • To exit from ‘psql’, use the following command:
\q
  • To return to Azure CLI 2.x interface, run the 'exit' command twice.


External connection to VM

  • TCP port 5433 has to be opened for external connections to the Postgres Pro database:
az vm open-port \
--name myvm-std-xx \
--port 5433 \
--resource-group myresourcegroup \
--priority 1001
  • TCP ports 80 and 443 have to be opened for external connections to the database monitoring server:
az vm open-port \
--name myvm-std-xx \
--port 80 \
--resource-group myresourcegroup \
--priority 1002

az vm open-port \
--name myvm-std-xx \
--port 443 \
--resource-group myresourcegroup \
--priority 1003

Replace ‘xx’ with '01', '02', '03’ and so on.


External connection to Postgres Pro database

  • For external connections to the Postgres Pro database, set a password for the 'postgres' user:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit

exit
  • For an external connection to the Postgres Pro database using the ‘psql’ utility, use the following command:
psql --host=myvm-std-xx-dnsname.northeurope.cloudapp.azure.com --port=5433 --username=postgres --dbname=postgres
  • For an external connection to the Postgres Pro database using the ‘pgAdmin’ utility, configure the following server settings in the ‘pgAdmin’ menu:
    • ‘mydb-xx’ for ‘Name’
    • ‘myvm-std-xx-dnsname.northeurope.cloudapp.azure.com’ for ‘Host’
    • ‘5433’ for ‘Port’
    • ‘postgres’ for ‘Maintenance DB’
    • ‘postgres’ for ‘Username’

Replace ‘xx’ with '01', '02', '03’ and so on.
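To avoid typing the password on every external ‘psql’ connection, it can be stored in a password file (format: host:port:database:user:password). The sketch below uses a demo path and the placeholder password from this guide; the real file must be ~/.pgpass with mode 0600, or psql silently ignores it:

```shell
# Sketch: a password-file entry so psql picks up the password automatically
# (demo path used here; the real file is ~/.pgpass, mode 0600)
pgpass_file=/tmp/demo_pgpass
cat > "$pgpass_file" <<'EOF'
myvm-std-01-dnsname.northeurope.cloudapp.azure.com:5433:postgres:postgres:YOUR_POSTGRES_USER_PASSWORD
EOF
chmod 0600 "$pgpass_file"
ls -l "$pgpass_file"
```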


External connection to the database monitoring server

  • For connection to the database monitoring server, set up the 'Admin' user password:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
source .pgsql_profile
psql --dbname=zabbix --username=zabbix -c "update users set passwd=md5('YOUR_ZABBIX_ADMIN_PASSWORD') where alias='Admin'"
exit

exit
  • The link for external connection to database monitoring server:

https://myvm-std-xx-dnsname.northeurope.cloudapp.azure.com/zabbix

(Replace ‘xx’ with '01', '02', '03’ and so on.)


VM configuration change

Let's consider some examples of VM configuration changes:

1) Change VM size from ‘Standard_DS1_v2’ to ‘Standard_DS2_v2’ to increase VM computing power.

  • The $PGDATA/postgresql.tune.lock file can be deleted before increasing the VM size so that the Postgres Pro database parameter values are automatically re-tuned for the new configuration:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

sudo su - postgres
cp $PGDATA/postgresql.auto.conf $PGDATA/postgresql.auto.conf.ORIG
rm $PGDATA/postgresql.tune.lock
exit

exit
  • A list of available VM sizes in ‘northeurope’ location (‘az vm deallocate’ is not required):
az vm list-vm-resize-options \
--name myvm-std-xx \
--resource-group myresourcegroup
  • To change the VM size, use the following command:
az vm resize \
--name myvm-std-xx \
--resource-group myresourcegroup \
--size Standard_DS2_v2

Replace ‘xx’ with '01', '02', '03’ and so on.

2) Increase OS-disk size up to 80 GB.

  • Obtain OS-disk size details:
az disk list \
--resource-group myresourcegroup \
--query "[?starts_with(name,'myvm-std-xx-osdisk')].{Name:name,Gb:diskSizeGb}"
  • Deallocate VM temporarily:
az vm deallocate \
--name myvm-std-xx \
--resource-group myresourcegroup
  • Increase OS-disk size:
az disk update \
--name myvm-std-xx-osdisk \
--resource-group myresourcegroup \
--size-gb 80
  • Verify new OS-disk size:
az disk list \
--resource-group myresourcegroup \
--query "[?starts_with(name,'myvm-std-xx-osdisk')].{Name:name,Gb:diskSizeGb}"
  • Start a VM:
az vm start \
--name myvm-std-xx \
--resource-group myresourcegroup
  • Connect to the VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com
  • Increase the ‘/’ filesystem partition size (the piped answers make ‘fdisk’ delete partition 2 and recreate it over the whole enlarged disk; the data survives because the partition start sector stays the same):
(echo d; echo 2; echo n; echo p; echo 2; echo ; echo ; echo w) | sudo fdisk /dev/sda
  • Restart the VM:
sudo reboot
  • Connect to the VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com
  • Increase ‘/’ filesystem size:
sudo xfs_growfs -d /dev/sda2
  • Restart the VM:
sudo reboot

Replace ‘xx’ with '01', '02', '03’ and so on.

3) Use a dedicated datadisk for Postgres Pro database files to improve database performance.

  • Let's create a new 200 GB datadisk and attach it to the VM:
az vm disk attach \
--disk myvm-std-xx-datadisk \
--resource-group myresourcegroup \
--vm-name myvm-std-xx \
--caching ReadOnly \
--lun 1 \
--new \
--size-gb 200
  • Connect to the VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com
  • Stop the Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-9.5.service
sudo systemctl -l status postgrespro-9.5.service
or
sudo systemctl stop postgrespro-9.6.service
sudo systemctl -l status postgrespro-9.6.service
or
sudo systemctl stop postgrespro-std-10.service
sudo systemctl -l status postgrespro-std-10.service
or
sudo systemctl stop postgrespro-std-11.service
sudo systemctl -l status postgrespro-std-11.service
or
sudo systemctl stop postgrespro-std-12.service
sudo systemctl -l status postgrespro-std-12.service
or
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Create a new filesystem mountpoint:
sudo mkdir /PGDATA
  • Use 'lsscsi' utility to find out the datadisk device name (in this case it is '/dev/sdc'):
lsscsi
  • Create a single datadisk partition of the maximum size and create a new filesystem on top of it:
(echo n; echo p; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/sdc
sudo mkfs -t ext4 /dev/sdc1
  • Amend the /etc/fstab file so that the new filesystem is mounted automatically, then mount it:
sudo sh -c "echo '`sudo blkid -o export /dev/sdc1 | grep UUID` /PGDATA ext4 defaults,nofail,barrier=0 1 2' >> /etc/fstab"
sudo mount /PGDATA
  • Create 'data' directory on the new filesystem and set its permissions:
sudo mkdir /PGDATA/data
sudo chown postgres:postgres /PGDATA/data
sudo chmod 0700 /PGDATA/data
  • Switch to ‘postgres’ account and move Postgres Pro database files to the new filesystem:
sudo su - postgres
mv /var/lib/pgpro/9.5/data/* /PGDATA/data; rmdir /var/lib/pgpro/9.5/data; ln -s /PGDATA/data /var/lib/pgpro/9.5/data
or
mv /var/lib/pgpro/9.6/data/* /PGDATA/data; rmdir /var/lib/pgpro/9.6/data; ln -s /PGDATA/data /var/lib/pgpro/9.6/data
or
mv /var/lib/pgpro/std-10/data/* /PGDATA/data; rmdir /var/lib/pgpro/std-10/data; ln -s /PGDATA/data /var/lib/pgpro/std-10/data
or
mv /var/lib/pgpro/std-11/data/* /PGDATA/data; rmdir /var/lib/pgpro/std-11/data; ln -s /PGDATA/data /var/lib/pgpro/std-11/data
or
mv /var/lib/pgpro/std-12/data/* /PGDATA/data; rmdir /var/lib/pgpro/std-12/data; ln -s /PGDATA/data /var/lib/pgpro/std-12/data
or
mv /var/lib/pgpro/std-13/data/* /PGDATA/data; rmdir /var/lib/pgpro/std-13/data; ln -s /PGDATA/data /var/lib/pgpro/std-13/data
exit
  • Start Postgres Pro database service and verify its status:
sudo systemctl start postgrespro-9.5.service
sudo systemctl -l status postgrespro-9.5.service
or
sudo systemctl start postgrespro-9.6.service
sudo systemctl -l status postgrespro-9.6.service
or
sudo systemctl start postgrespro-std-10.service
sudo systemctl -l status postgrespro-std-10.service
or
sudo systemctl start postgrespro-std-11.service
sudo systemctl -l status postgrespro-std-11.service
or
sudo systemctl start postgrespro-std-12.service
sudo systemctl -l status postgrespro-std-12.service
or
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Restart VM, check filesystem automount and verify Postgres Pro database service status:
sudo reboot

ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

sudo mount | grep /PGDATA
sudo df -h /PGDATA

sudo systemctl -l status postgrespro-9.5.service
or
sudo systemctl -l status postgrespro-9.6.service
or
sudo systemctl -l status postgrespro-std-10.service
or
sudo systemctl -l status postgrespro-std-11.service
or
sudo systemctl -l status postgrespro-std-12.service
or
sudo systemctl -l status postgrespro-std-13.service

Replace ‘xx’ with '01', '02', '03’ and so on.

4) Database service auto restart in case of database failure

  • Edit database service systemd file and restart database service:
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-9.5.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-9.5.service
sudo systemctl -l status postgrespro-9.5.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-9.6.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-9.6.service
sudo systemctl -l status postgrespro-9.6.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-std-10.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-std-10.service
sudo systemctl -l status postgrespro-std-10.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-std-11.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-std-11.service
sudo systemctl -l status postgrespro-std-11.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-std-12.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-std-12.service
sudo systemctl -l status postgrespro-std-12.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-std-13.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
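The ‘sed’ command above appends a ‘Restart=on-failure’ line right after the ‘KillSignal=’ line of the systemd unit file; its effect can be previewed safely on a throwaway copy:

```shell
# Sketch: demonstrate the sed edit on a scratch copy of a unit file
cat > /tmp/demo.service <<'EOF'
[Service]
KillSignal=SIGINT
EOF
sed -i '/KillSignal=/a Restart=on-failure' /tmp/demo.service
cat /tmp/demo.service
# the Restart=on-failure line now follows KillSignal=SIGINT
```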


VM backup/restore

Let's consider how to back up and restore a VM in Azure (you can find more information at https://docs.microsoft.com/en-us/azure/backup/quick-backup-vm-cli).

  • To take a file-consistent backup of the VM, first create a Recovery Services vault:
az backup vault create \
--name myvault-std-xx \
--resource-group myresourcegroup \
--location northeurope

'myvault-std-xx' Recovery Services vault name will be used further.

  • Change Recovery Services vault storage type to Locally-Redundant if you don't plan to use Geo-Redundant storage:
az backup vault backup-properties set \
--name myvault-std-xx \
--resource-group myresourcegroup \
--backup-storage-redundancy LocallyRedundant
  • Enable VM backup protection with the default backup policy; it schedules a VM backup once a day:
az backup protection enable-for-vm \
--vm myvm-std-xx \
--vault-name myvault-std-xx \
--policy-name DefaultPolicy \
--resource-group myresourcegroup
  • Run the first backup manually, specifying the recovery point retention date in 'dd-mm-yyyy' format. Provide the VM name for both the '--item-name' and '--container-name' parameters:
az backup protection backup-now \
--item-name myvm-std-xx \
--container-name myvm-std-xx \
--vault-name myvault-std-xx \
--resource-group myresourcegroup \
--retain-until 31-12-2021
  • Monitor backup job status:
az backup job list \
--vault-name myvault-std-xx \
--resource-group myresourcegroup
  • After the backup job finishes successfully, you can use this backup as a recovery point to restore the VM disk. Create an Azure storage account if you don't have one:
az storage account create \
--name mystorageaccountstd \
--resource-group myresourcegroup \
--location northeurope \
--sku Standard_LRS

'mystorageaccountstd' Azure storage account name will be used further.

  • Supply the latest recovery point for restore:
rp_id=$(az backup recoverypoint list --item-name myvm-std-xx --container-name myvm-std-xx --vault-name myvault-std-xx --resource-group myresourcegroup --query [0].name --output tsv)

az backup restore restore-disks \
--item-name myvm-std-xx \
--container-name myvm-std-xx \
--vault-name myvault-std-xx \
--resource-group myresourcegroup \
--storage-account mystorageaccountstd \
--rp-name $rp_id
  • Monitor restore job status:
az backup job list \
--vault-name myvault-std-xx \
--resource-group myresourcegroup
  • Once the restored data is secured in the storage account, supply the information required to create a VM disk:
container_id=$(az storage container list --account-name mystorageaccountstd --query [0].name -o tsv)
blob_id=$(az storage blob list --container-name $container_id --account-name mystorageaccountstd --query [0].name -o tsv)
uri_id=$(az storage blob url --name $blob_id --container-name $container_id --account-name mystorageaccountstd -o tsv)

az disk create \
--name myrestoredvm-std-xx-osdisk \
--resource-group myresourcegroup \
--source $uri_id
  • Create VM 'myrestoredvm-std-xx' from restored disk:
az vm create \
--name myrestoredvm-std-xx \
--resource-group myresourcegroup \
--attach-os-disk myrestoredvm-std-xx-osdisk \
--location northeurope \
--size Standard_DS1_v2 \
--public-ip-address-dns-name myrestoredvm-std-xx-dnsname \
--os-type linux
  • Now you can connect to restored VM:
ssh azureuser@myrestoredvm-std-xx-dnsname.northeurope.cloudapp.azure.com

Replace ‘xx’ with '01', '02', '03’ and so on.
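The '--retain-until' date used earlier can be computed instead of hard-coded; a sketch using GNU 'date' with an assumed 30-day retention period:

```shell
# Sketch: compute a retention date 30 days ahead in the dd-mm-yyyy
# format expected by 'az backup protection backup-now --retain-until'
retain_date=$(date -d "+30 days" +%d-%m-%Y)
echo "$retain_date"
```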


Postgres Pro database backup/restore

Let's have a look at Postgres Pro database backup/restore options (you can find more information about it at https://postgrespro.com/docs/postgrespro/13/backup).

  • Connect to VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

Make sure Azure CLI 2.x is installed inside VM (you can find more information about it at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).

  • Switch to ‘postgres’ account:
sudo su - postgres
  • Connect to Microsoft Azure, set Azure CLI 2.x commands table output and prepare environment:
az login

az configure

echo 'export backup_home=$HOME/backup' >> .pgpro_profile
echo 'export file_date=$(date +"%Y%m%d-%H%M%S")' >> .pgpro_profile
echo 'export db_name=testdb' >> .pgpro_profile
echo 'export instance_name=myinstancename' >> .pgpro_profile
  • Create storage container for database backups:
az storage account create \
--name mystorageaccountstd \
--resource-group myresourcegroup \
--location northeurope \
--sku Standard_LRS

az storage container create \
--name mydbbackup-std-xx \
--account-name mystorageaccountstd

exit

'mydbbackup-std-xx' storage container name will be used further.
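The variables written to .pgpro_profile above produce timestamped backup file names; a sketch of the naming pattern used in the following sections:

```shell
# Sketch: reproduce the timestamped dump file name built from the
# .pgpro_profile variables (db_name and file_date)
db_name=testdb
file_date=$(date +"%Y%m%d-%H%M%S")
dump_backup_file=dump_$db_name-backup-$file_date.gz
echo "$dump_backup_file"
```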

1) Logical backup

1a) using 'pg_dump'

  • Start Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare the environment (a temporary database will be used further):
rm -rf $backup_home
mkdir $backup_home

psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done

db_owner=$(psql -c "\l $db_name" | grep $db_name | awk '{print $3}')
dump_backup_file=dump_$db_name-backup-$file_date.gz
  • Create temporary database dump:
pg_dump $db_name | gzip > $backup_home/$dump_backup_file

ls $backup_home/$dump_backup_file

gzip -ltv $backup_home/$dump_backup_file
  • Upload temporary database dump into storage container:
az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$dump_backup_file \
--name $dump_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx
  • Delete the temporary database and its dump, then restore the temporary database from the storage container:
psql -c "drop database $db_name"

rm $backup_home/$dump_backup_file
ls $backup_home/$dump_backup_file

psql -c "create database $db_name"
psql -c "alter database $db_name owner to $db_owner"

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$dump_backup_file \
--name $dump_backup_file

ls $backup_home/$dump_backup_file

gzip -cdv $backup_home/$dump_backup_file | psql $db_name
  • Run a test SQL query on the temporary database:
for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit
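The dump/restore pipeline above relies on a gzip round trip ('pg_dump ... | gzip' to back up, 'gzip -cdv ... | psql' to restore). The same pattern on plain text, as a self-contained sketch:

```shell
# Sketch: gzip round trip, mirroring the pg_dump/psql pipeline above
echo 'select 1;' | gzip > /tmp/demo.sql.gz
gzip -cd /tmp/demo.sql.gz
# prints: select 1;
```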

1b) using 'pg_dumpall'

  • Start Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare the environment (a temporary database will be used further):
rm -rf $backup_home
mkdir $backup_home

psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done

dumpall_backup_file=dumpall-backup-$file_date.gz
  • Create Postgres Pro database dump:
pg_dumpall | gzip > $backup_home/$dumpall_backup_file

ls $backup_home/$dumpall_backup_file

gzip -ltv $backup_home/$dumpall_backup_file
  • Upload Postgres Pro database dump into storage container:
az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$dumpall_backup_file \
--name $dumpall_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx
  • Delete the temporary database and the Postgres Pro database dump, then restore the Postgres Pro database from the storage container:
psql -c "drop database $db_name"

rm $backup_home/$dumpall_backup_file
ls $backup_home/$dumpall_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$dumpall_backup_file \
--name $dumpall_backup_file

ls $backup_home/$dumpall_backup_file

gzip -cdv $backup_home/$dumpall_backup_file | psql postgres
  • Run a test SQL query on the temporary database:
for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit

2) File system level backup

2a) using 'tar'

  • Stop Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
  • Create Postgres Pro database backup:
cd $PGDATA
tar -zcvf $backup_home/$db_backup_file *

ls $backup_home/$db_backup_file

tar -ztvf $backup_home/$db_backup_file
  • Upload Postgres Pro database backup into storage container:
az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx
  • Delete the Postgres Pro database files and the database backup, then restore the Postgres Pro database from the storage container:
rm -rf $PGDATA/*
ls $PGDATA/

rm $backup_home/$db_backup_file
ls $backup_home/$db_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$db_backup_file \
--name $db_backup_file

ls $backup_home/$db_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file

ls $PGDATA

exit
  • Start the Postgres Pro database service, verify its status, and run a test SQL query on the Postgres Pro database:
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""
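The file system level backup above is a plain tar round trip of $PGDATA. The same pattern on a scratch directory, as a self-contained sketch:

```shell
# Sketch: tar round trip, mirroring the $PGDATA backup/restore above
mkdir -p /tmp/demo_pgdata /tmp/demo_restore
echo 'data' > /tmp/demo_pgdata/file1
tar -C /tmp/demo_pgdata -zcf /tmp/demo-backup.tgz .
tar -C /tmp/demo_restore -zxf /tmp/demo-backup.tgz
cat /tmp/demo_restore/file1
# prints: data
```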

2b) using 'pg_basebackup'

  • Start Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
wal_backup_file=wal-backup-$file_date.tgz
  • Create backup of Postgres Pro database and WAL files:
pg_basebackup \
--pgdata=$backup_home \
--format=tar \
--wal-method=stream \
--gzip \
--checkpoint=fast \
--label=$file_date \
--progress \
--verbose

ls $backup_home/base.tar.gz
ls $backup_home/pg_wal.tar.gz

tar -ztvf $backup_home/base.tar.gz
tar -ztvf $backup_home/pg_wal.tar.gz
  • Upload backup of Postgres Pro database and WAL files into storage container:
az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/base.tar.gz \
--name $db_backup_file

az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/pg_wal.tar.gz \
--name $wal_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files and the backup of the database and WAL files, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})
wal_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^wal-backup | tail -n 1 | awk {'print $1'})

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$wal_backup_file \
--name $wal_backup_file

ls $backup_home/$db_backup_file
ls $backup_home/$wal_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file
cd $PGDATA/pg_wal
tar -zxvf $backup_home/$wal_backup_file

exit
  • Start the Postgres Pro database service, verify its status, and run a test SQL query on the Postgres Pro database:
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3) Continuous archiving and point-in-time recovery

3a) full backup via 'pg_basebackup'

  • Start Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='az storage blob upload --container-name mydbbackup-std-xx --account-name mystorageaccountstd --file %p --name %f'"

exit
  • Restart the Postgres Pro database service, verify its status, and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz
wal_backup_file=wal-backup-$file_date.tgz
  • Create backup of Postgres Pro database and WAL files:
pg_basebackup \
--pgdata=$backup_home \
--format=tar \
--wal-method=stream \
--gzip \
--checkpoint=fast \
--label=$file_date \
--progress \
--verbose

ls $backup_home/base.tar.gz
ls $backup_home/pg_wal.tar.gz

tar -ztvf $backup_home/base.tar.gz
tar -ztvf $backup_home/pg_wal.tar.gz
  • Upload backup of Postgres Pro database and WAL files into storage container:
az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/base.tar.gz \
--name $db_backup_file

az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/pg_wal.tar.gz \
--name $wal_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files and the backup of the database and WAL files, then restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^db-backup | tail -n 1 | awk {'print $1'})
wal_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^wal-backup | tail -n 1 | awk {'print $1'})

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$db_backup_file \
--name $db_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $backup_home/$wal_backup_file \
--name $wal_backup_file

ls $backup_home/$db_backup_file
ls $backup_home/$wal_backup_file

cd $PGDATA
tar -zxvf $backup_home/$db_backup_file
cd $PGDATA/pg_wal
tar -zxvf $backup_home/$wal_backup_file

exit
  • Start the Postgres Pro database service, verify its status, and run a test SQL query on the Postgres Pro database:
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3b) full backup via 'pg_probackup'

  • Start Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='/opt/pgpro/std-13/bin/pg_probackup archive-push -B $backup_home --instance $instance_name --wal-file-path %p --wal-file-name %f'"

exit
  • Restart the Postgres Pro database service, verify its status and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare environment:
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz

pg_probackup init -B $backup_home
pg_probackup add-instance -B $backup_home -D $PGDATA --instance $instance_name
pg_probackup show-config -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create backup of Postgres Pro database and WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b FULL --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Upload backup of Postgres Pro database and WAL files into storage container:
cd $backup_home
tar -zcvf $HOME/$db_backup_file *

ls $HOME/$db_backup_file

tar -ztvf $HOME/$db_backup_file

az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $HOME/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files along with the local backup of the database and WAL files, and restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^db-backup | tail -n 1 | awk '{print $1}')

rm $HOME/$db_backup_file
ls $HOME/$db_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $HOME/$db_backup_file \
--name $db_backup_file

ls $HOME/$db_backup_file

cd $backup_home
tar -zxvf $HOME/$db_backup_file

backup_id=$(pg_probackup show -B $backup_home | grep $instance_name | grep FULL | awk '{print $3}')
pg_probackup restore -B $backup_home -D $PGDATA --instance $instance_name -i $backup_id --progress

exit
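The backup-id extraction above can be sanity-checked on a sample 'pg_probackup show' row; the instance name, backup id and other values below are invented, and the column layout (instance, version, id, ...) is assumed from the real command's table output:

```shell
instance_name=mydb   # hypothetical instance name
# Hypothetical 'pg_probackup show' table row (values invented):
sample_show=" mydb  13  QZXCVB  2021-01-02 10:00:00+00  FULL  ARCHIVE  1/0  10s  30MB  16MB  1.00  0/2000028  0/2000BB8  OK"

# Keep rows for this instance with a FULL backup; column 3 is the backup id:
backup_id=$(printf '%s\n' "$sample_show" | grep "$instance_name" | grep FULL | awk '{print $3}')
echo "$backup_id"
```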
  • Start the Postgres Pro database service, verify its status and run a test SQL query against the Postgres Pro database:
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su -l postgres -c "psql -c \"select pgpro_version(), pgpro_edition(), pgpro_build()\""

3c) incremental backup via 'pg_probackup'

  • Restart the Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service
  • Switch to ‘postgres’ account:
sudo su - postgres
  • Turn on archiving mode for Postgres Pro database:
psql -c "show archive_mode"
psql -c "show archive_command"

psql -c "alter system set archive_mode=on"
psql -c "alter system set archive_command='/opt/pgpro/std-13/bin/pg_probackup archive-push -B $backup_home --instance $instance_name --wal-file-path %p --wal-file-name %f'"

exit
  • Restart the Postgres Pro database service, verify its status and check the values of the 'archive_mode' and 'archive_command' parameters:
sudo systemctl restart postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

psql -c "show archive_mode"
psql -c "show archive_command"
  • Prepare the environment (a temporary database will be used further on):
rm -rf $backup_home
mkdir $backup_home

db_backup_file=db-backup-$file_date.tgz

pg_probackup init -B $backup_home
pg_probackup add-instance -B $backup_home -D $PGDATA --instance $instance_name
pg_probackup show-config -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create full backup of Postgres Pro database and backup of WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b FULL --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Create temporary database:
psql -c "create database $db_name"

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "create table test_table_0$i(id numeric)"
psql --dbname $db_name -c "insert into test_table_0$i select * from generate_series(1, 5)"
psql --dbname $db_name -c "select * from test_table_0$i"
done
  • Create incremental backup of Postgres Pro database and backup of WAL files:
pg_probackup backup -B $backup_home --instance $instance_name -b PAGE --progress
pg_probackup validate -B $backup_home --instance $instance_name
pg_probackup show -B $backup_home
  • Upload full and incremental backups of Postgres Pro database and backup of WAL files into storage container:
cd $backup_home
tar -zcvf $HOME/$db_backup_file *

ls $HOME/$db_backup_file

tar -ztvf $HOME/$db_backup_file

az storage blob upload \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $HOME/$db_backup_file \
--name $db_backup_file

az storage blob list \
--account-name mystorageaccountstd \
--container-name mydbbackup-std-xx

exit
  • Stop the Postgres Pro database service, delete the Postgres Pro database files along with the local backups of the database and WAL files, and restore the Postgres Pro database from the storage container:
sudo systemctl stop postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

rm -rf $PGDATA/*
ls $PGDATA/

rm -rf $backup_home
mkdir $backup_home

db_backup_file=$(az storage blob list --account-name mystorageaccountstd --container-name mydbbackup-std-xx | grep ^db-backup | tail -n 1 | awk '{print $1}')

rm $HOME/$db_backup_file
ls $HOME/$db_backup_file

az storage blob download \
--container-name mydbbackup-std-xx \
--account-name mystorageaccountstd \
--file $HOME/$db_backup_file \
--name $db_backup_file

ls $HOME/$db_backup_file

cd $backup_home
tar -zxvf $HOME/$db_backup_file

backup_id=$(pg_probackup show -B $backup_home | grep $instance_name | grep PAGE | awk '{print $3}')
pg_probackup restore -B $backup_home -D $PGDATA --instance $instance_name -i $backup_id --progress

exit
  • Start the Postgres Pro database service, verify its status and run a test SQL query against the Postgres Pro database:
sudo systemctl start postgrespro-std-13.service
sudo systemctl -l status postgrespro-std-13.service

sudo su - postgres

for ((i=1;i<=3;i++)); do
psql --dbname $db_name -c "select * from test_table_0$i"
done

exit

Replace ‘xx’ by '01', '02', '03’ and so on.


Postgres Pro database high availability

Let's have a look at Postgres Pro database high-availability options (see https://postgrespro.com/docs/postgrespro/13/high-availability for details) using a Patroni cluster as an example (see https://github.com/zalando/patroni). Assume the Patroni cluster node names are 'myvm-std-01', 'myvm-std-02' and 'myvm-std-03'.

  • Create resource group 'myresourcegroup' and availability set 'myavailabilityset':
az group create \
--name myresourcegroup \
--location northeurope
az vm availability-set create \
--name myavailabilityset \
--resource-group myresourcegroup \
--platform-fault-domain-count 3 \
--platform-update-domain-count 3
  • Create virtual network 'myvnet' (10.0.0.0/8) with subnet 'myvnetsubnet' (10.0.0.0/24):
az network vnet create \
--name myvnet \
--location northeurope \
--resource-group myresourcegroup \
--subnet-name myvnetsubnet \
--subnet-prefix 10.0.0.0/24 \
--address-prefixes 10.0.0.0/8
  • Create network security group 'mynsg':
az network nsg create \
--name mynsg \
--location northeurope \
--resource-group myresourcegroup
  • Create network security group rules to allow incoming traffic to TCP-ports 22 (ssh), 5433 (Postgres) and 80/443 (http/https):
array=(AllowInboundSsh 22 1000 AllowInboundPostgresql 5433 1001 AllowInboundHttp 80 1002 AllowInboundHttps 443 1003); for i in 0 3 6 9; do
az network nsg rule create \
--name "${array[$i]}" \
--resource-group myresourcegroup \
--nsg-name mynsg \
--access Allow \
--direction Inbound \
--protocol Tcp \
--destination-port-range "${array[$i+1]}" \
--priority "${array[$i+2]}"
done
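The indexed-array loop above can also be driven by self-describing 'name:port:priority' triples; here is a dry-run sketch that prints each 'az network nsg rule create' invocation instead of executing it (no Azure resources are touched):

```shell
# Iterate over rule definitions packed as name:port:priority triples:
for rule in AllowInboundSsh:22:1000 AllowInboundPostgresql:5433:1001 \
            AllowInboundHttp:80:1002 AllowInboundHttps:443:1003; do
  # Split the triple with POSIX parameter expansion:
  name=${rule%%:*}; rest=${rule#*:}
  port=${rest%%:*}; priority=${rest#*:}
  # Dry run: print the command that would be executed.
  echo "az network nsg rule create --name $name --destination-port-range $port --priority $priority"
done
```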
  • Create dynamic public IP-addresses for VMs:
for i in `seq 1 3`; do
az network public-ip create \
--name myvm-std-0$i-public-ip \
--location northeurope \
--resource-group myresourcegroup \
--dns-name myvm-std-0$i-dnsname \
--allocation-method Dynamic
done
  • Create network interfaces for VMs and assign dynamic public and static private IP-addresses to them:
for i in `seq 1 3`; do
az network nic create \
--name myvm-std-0$i-nic \
--location northeurope \
--resource-group myresourcegroup \
--vnet-name myvnet \
--subnet myvnetsubnet \
--network-security-group mynsg \
--public-ip-address myvm-std-0$i-public-ip \
--private-ip-address 10.0.0.10$i
done

Private IP address assignment: 10.0.0.101 (myvm-std-01), 10.0.0.102 (myvm-std-02), 10.0.0.103 (myvm-std-03).

  • Finally, create VMs:
for i in `seq 1 3`; do
az vm create \
--name myvm-std-0$i \
--resource-group myresourcegroup \
--availability-set myavailabilityset \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--nics myvm-std-0$i-nic \
--os-disk-name myvm-std-0$i-osdisk
done
  • Now connect to each of the three VMs and run the same set of commands on all of them:
ssh azureuser@myvm-std-01-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-std-02-dnsname.northeurope.cloudapp.azure.com
ssh azureuser@myvm-std-03-dnsname.northeurope.cloudapp.azure.com
  • Configure Postgres Pro database, Consul, haproxy and Patroni services:
sudo systemctl disable postgrespro-9.5.service
or
sudo systemctl disable postgrespro-9.6.service
or
sudo systemctl disable postgrespro-std-10.service
or
sudo systemctl disable postgrespro-std-11.service
or
sudo systemctl disable postgrespro-std-12.service
or
sudo systemctl disable postgrespro-std-13.service

sudo sh -c "echo '' >> /etc/hosts"
for i in `seq 1 3`; do
sudo sh -c "echo '10.0.0.10$i myvm-std-0$i' >> /etc/hosts"
done

sudo sed -i "/retry_join/s|\[\]|\[\"myvm-std-01\", \"myvm-std-02\", \"myvm-std-03\"\]|" /etc/consul.d/consul.hcl
sudo sed -i "s|# retry_join|retry_join|" /etc/consul.d/consul.hcl

sudo systemctl daemon-reload; sudo systemctl enable consul.service; sudo systemctl start consul.service; sudo systemctl -l status consul.service

for i in `seq 1 3`; do
sudo sh -c "echo ' server myvm-std-0$i 10.0.0.10$i:5433 maxconn 100 check port 8008' >> /etc/haproxy/haproxy.cfg"
done

sudo systemctl daemon-reload; sudo systemctl enable haproxy; sudo systemctl start haproxy; sudo systemctl -l status haproxy

sudo sed -i "s|# name:|name: $HOSTNAME|" /etc/patroni.yml
sudo sed -i "/connect_address/s|127.0.0.1|`hostname -i`|" /etc/patroni.yml
  • Run the following commands on the first VM:
sudo su - postgres

psql -c "alter system set wal_level to 'replica'"
psql -c "alter system set hot_standby to 'on'"
psql -c "alter system set wal_keep_segments to '8'"
psql -c "alter system set max_wal_senders to '5'"
psql -c "alter system set max_replication_slots to '5'"
psql -c "alter system set checkpoint_timeout to '30'"

psql -c "create user patroni_replicator with replication encrypted password 'replicatorpassword'"
psql -c "create user patroni_superuser with superuser encrypted password 'superuserpassword'"

for i in `seq 1 3`; do
sed -i "/^host all.*md5/i host replication patroni_replicator myvm-std-0$i md5" $PGDATA/pg_hba.conf
done

for i in `seq 1 3`; do
echo "myvm-std-0$i:5433:replication:patroni_replicator:replicatorpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass

exit

sudo systemctl restart postgrespro-9.5.service
or
sudo systemctl restart postgrespro-9.6.service
or
sudo systemctl restart postgrespro-std-10.service
or
sudo systemctl restart postgrespro-std-11.service
or
sudo systemctl restart postgrespro-std-12.service
or
sudo systemctl restart postgrespro-std-13.service
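The six version-specific alternatives above differ only in the service unit name; a small helper can derive the unit from the major version (the naming is taken from the commands themselves: 9.5 and 9.6 lack the 'std-' infix):

```shell
# Map a Postgres Pro Standard major version to its systemd unit name
# (pattern assumed from the service names used in this guide):
pgpro_unit() {
  case $1 in
    9.5|9.6) echo "postgrespro-$1.service" ;;
    *)       echo "postgrespro-std-$1.service" ;;
  esac
}

# Dry run: print the restart command for version 13.
echo "sudo systemctl restart $(pgpro_unit 13)"
```

With this helper, each "or" chain collapses to a single line, e.g. `sudo systemctl restart $(pgpro_unit 13)`.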
  • Run the following commands on the second and third VMs:
sudo systemctl stop postgrespro-9.5.service
or
sudo systemctl stop postgrespro-9.6.service
or
sudo systemctl stop postgrespro-std-10.service
or
sudo systemctl stop postgrespro-std-11.service
or
sudo systemctl stop postgrespro-std-12.service
or
sudo systemctl stop postgrespro-std-13.service

sudo su - postgres

rm -rf $PGDATA/*

for i in `seq 1 3`; do
echo "myvm-std-0$i:5433:replication:patroni_replicator:replicatorpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass

exit
  • Start Patroni service on the first VM:
sudo systemctl daemon-reload; sudo systemctl enable patroni.service; sudo systemctl start patroni.service; sudo systemctl -l status patroni.service
sudo patronictl -c /etc/patroni.yml restart mypatroni myvm-std-01
  • Start Patroni service on the second and third VMs:
sudo systemctl daemon-reload; sudo systemctl enable patroni.service; sudo systemctl start patroni.service; sudo systemctl -l status patroni.service
  • Use the following commands to monitor Patroni cluster and streaming replication status:
sudo patronictl -c /etc/patroni.yml list

psql 'postgres://patroni_superuser:superuserpassword@myvm-std-01:5000/postgres' -x -c 'table pg_stat_replication'
psql 'postgres://patroni_superuser:superuserpassword@myvm-std-02:5000/postgres' -x -c 'table pg_stat_replication'
psql 'postgres://patroni_superuser:superuserpassword@myvm-std-03:5000/postgres' -x -c 'table pg_stat_replication'
  • To return to the Azure CLI 2.x interface, run the 'exit' command twice


Postgres Pro Standard (VM) content:

OS - Linux CentOS 7.x (64-bit)

  • OS-account - ‘postgres’
  • OS-account - ‘zabbix’
  • OS-account - ‘mamonsu’

OS-disk size - 30 GB

  • xfs filesystem ‘/boot’ (/dev/sda1) - 1 GB
  • xfs filesystem ‘/’ (/dev/sda2) - 29 GB

Main database - Postgres Pro Standard

  • DB version: 9.5/9.6/10/11/12/13
  • TCP-port: 5433 (opened in OS-firewall settings)
  • configuration file: /var/lib/pgsql/.pgpro_profile
  • database account: ’postgres’

Database monitoring (server)

  • zabbix-server version: 4.x
  • TCP-ports: 80/443 (opened in OS-firewall settings)
  • account: ‘Admin’

Database monitoring (agent)

  • zabbix-agent version: 4.x
  • mamonsu-agent version: 2.x
  • configuration file: /etc/mamonsu/agent.conf

Auxiliary database PostgreSQL (as a zabbix-server database)

  • DB version: 9.5/9.6/10/11/12/13
  • TCP-port: 5432
  • configuration file: /var/lib/pgsql/.pgsql_profile
  • database account: 'postgres'


Documentation links


Postgres Pro Standard (VM+docker) in Microsoft Azure Quick Start Guide

Docker image with Postgres Pro Standard 9.5/9.6/10/11/12/13 is available inside Microsoft Azure VM.

Please refer to the Postgres Pro Standard (VM) in Microsoft Azure Quick Start Guide to set up your access to the Microsoft Azure cloud.


Connection to Azure and environment check

  • A list of locations for VM:
az account list-locations

‘northeurope’ location will be used further.

  • A list of available VM sizes in the ‘northeurope’ location:
az vm list-sizes --location northeurope

‘Standard_DS1_v2’ VM size will be used further (available for ‘Free Trial’ subscription).

  • Obtain the publisher name of the Postgres Pro VM image in the Microsoft Azure Marketplace for the ‘northeurope’ location:
az vm image list-publishers \
--location northeurope \
--query "[?starts_with(name,'postgres')].{Name:name}"

‘postgres-pro’ VM publisher name will be used further.

  • Obtain the Postgres Pro VM image names available in the Microsoft Azure Marketplace for the ‘postgres-pro’ publisher in the ‘northeurope’ location:
az vm image list \
--publisher postgres-pro \
--location northeurope \
--all \
--query "[?contains(urn,'vm-docker')].{Urn:urn}"
  • The following VM image name will be used further:
urn_id='postgres-pro:postgres-pro-standard-95-vm-docker:pgpro-std-95-vm-docker-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-96-vm-docker:pgpro-std-96-vm-docker-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-10-vm-docker:pgpro-std-10-vm-docker-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-11-vm-docker:pgpro-std-11-vm-docker-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-12-vm-docker:pgpro-std-12-vm-docker-x64-byol:latest'
or
urn_id='postgres-pro:postgres-pro-standard-13-vm-docker:pgpro-std-13-vm-docker-x64-byol:latest'
  • Configure VM programmatic deployment:
az vm image terms accept --urn $urn_id
  • Create a private/public ssh key pair in the ~/.ssh directory to connect to the VM:
ssh-keygen -t rsa -b 2048


VM creation

  • Create resource group:
az group create \
--name myresourcegroup \
--location northeurope
  • Create VM from VM image available in Microsoft Azure Marketplace:
az vm create \
--name myvm-std-xx \
--resource-group myresourcegroup \
--image $urn_id \
--location northeurope \
--size Standard_DS1_v2 \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username azureuser \
--authentication-type ssh \
--public-ip-address-dns-name myvm-std-xx-dnsname \
--os-disk-name myvm-std-xx-osdisk

Replace ‘xx’ by '01', '02', '03’ and so on.


Connection to VM

As a result, a VM is created with the FQDN ‘myvm-std-xx-dnsname.northeurope.cloudapp.azure.com’ (the FQDN combines the short DNS name used during VM creation, the location name and ‘cloudapp.azure.com’) and the OS account ‘azureuser’ (with ‘sudo’ permissions by default).
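The FQDN pattern can be assembled from its three parts; a minimal sketch using the short DNS name from the VM creation step:

```shell
dns_name=myvm-std-xx-dnsname   # short DNS name given at VM creation
location=northeurope           # location used in this guide
fqdn="${dns_name}.${location}.cloudapp.azure.com"
echo "$fqdn"
```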

  • Connect to VM:
ssh azureuser@myvm-std-xx-dnsname.northeurope.cloudapp.azure.com

Replace ‘xx’ by '01', '02', '03’ and so on.


Docker: check service status, list images and run a container

  • Check the Docker service status and list images:
sudo systemctl -l status docker.service

sudo docker images
  • To stop/start the Docker service, use the following commands:
sudo systemctl stop docker.service
sudo systemctl start docker.service
  • To run a container, use the following commands:
image_id=$(sudo docker images --format "{{.Repository}}" --filter "reference=pgpro*")
tag_id=$(sudo docker images --format "{{.Tag}}" --filter "reference=pgpro*")

sudo docker run --rm --detach --publish 5433:5433 --name $image_id $image_id:$tag_id

sudo docker ps -a
  • To stop running containers, use the following commands:
sudo docker ps -qa
sudo docker stop $(sudo docker ps -qa)
sudo docker ps -qa


Connection to Postgres Pro database

  • Use the following procedure to set the password for the Postgres Pro database user 'postgres':
sudo docker exec --interactive --tty $image_id bash

psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit
  • To connect to the Postgres Pro database, use the following command:
psql --dbname=postgres --username=postgres --host=localhost --port=5433
  • To exit ‘psql’, use the following command:
\q
  • To return to the Azure CLI 2.x interface, run the 'exit' command


External connection to VM

  • TCP-port 5433 must be opened for external connections to the Postgres Pro database:
az vm open-port \
--name myvm-std-xx \
--port 5433 \
--resource-group myresourcegroup \
--priority 1001

Replace ‘xx’ by '01', '02', '03’ and so on.


External connection to Postgres Pro database

  • For external connection to the Postgres Pro database using the ‘psql’ utility, use the following command:
psql --host=myvm-std-xx-dnsname.northeurope.cloudapp.azure.com --port=5433 --username=postgres --dbname=postgres
  • For external connection to the Postgres Pro database using the ‘Pgadmin’ utility, configure the following server settings in the ‘Pgadmin’ menu:
    • ‘mydb-xx’ for ‘Name’
    • ‘myvm-std-xx-dnsname.northeurope.cloudapp.azure.com’ for ‘Host’
    • ‘5433’ for ‘Port’
    • ‘postgres’ for ‘Maintenance DB’
    • ‘postgres’ for ‘Username’

Replace ‘xx’ by '01', '02', '03’ and so on.


VM configuration change

Please refer to VM configuration change to change your VM configuration.


VM backup/restore

Please refer to VM backup/restore to back up or restore your VM.


Postgres Pro database backup/restore

Please refer to Postgres Pro database backup/restore to back up or restore your Postgres Pro database.


Postgres Pro Standard (VM+docker) content:

OS - Linux Debian 10.x (64-bit)

  • Docker image - pgpro-std95/96/10/11/12/13-debian-x64:95.x.x/96.x.x/10.x.x/11.x.x/12.x.x/13.x.x

OS-disk size - 30 GB

  • xfs filesystem ‘/boot’ (/dev/sda1) - 1 GB
  • xfs filesystem ‘/’ (/dev/sda2) - 29 GB

Database - Postgres Pro Standard

  • DB version: 9.5/9.6/10/11/12/13
  • TCP-port: 5433 (opened in OS-firewall settings)
  • configuration file: /var/lib/pgsql/.pgpro_profile
  • database account: ’postgres’


Documentation links


Postgres Pro Standard (docker) in Microsoft Azure Quick Start Guide

Docker image with Postgres Pro Standard 9.5/9.6/10/11/12/13 is available in Azure Container Registry (ACR).

Please refer to the Postgres Pro Standard (VM) in Microsoft Azure Quick Start Guide to set up your access to the Microsoft Azure cloud.


Connection to Azure and environment check

After subscribing to this offer in the Azure Marketplace via a browser, use the Azure CLI to access Postgres Pro Standard 9.5/9.6/10/11/12/13 in a Docker container or in an Azure Kubernetes Service (AKS) cluster container:

  • Prepare the environment; the following placeholders are used below:
RG - the name of your resource group
ACR - the name of your ACR
TAG - the tag of your container image
CG - the name of your container group
PASS - the 'postgres' user password
CLUSTER - the name of your AKS cluster
DEPLOYMENT - the name of your deployment in the AKS cluster
  • Connect to ACR:
az acr list --resource-group RG

az acr login --name ACR

loginServer_id=$(az acr show --name ACR --query loginServer -o tsv)

password_id=$(az acr credential show --name ACR --query "passwords[0].value" -o tsv)

image_id=pgpro-std95-debian-x64
or
image_id=pgpro-std96-debian-x64
or
image_id=pgpro-std10-debian-x64
or
image_id=pgpro-std11-debian-x64
or
image_id=pgpro-std12-debian-x64
or
image_id=pgpro-std13-debian-x64
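The six image_id alternatives above follow a single naming pattern, so the name can be derived from a chosen major version (per the names above, the dots are dropped for 9.5 and 9.6, giving '95' and '96'):

```shell
ver=13                               # one of: 95 96 10 11 12 13
image_id="pgpro-std${ver}-debian-x64"
echo "$image_id"
```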


Container creation

  • Create Docker container and check its status:
az container create \
--resource-group RG \
--name CG \
--environment-variables PGPRO_PASSWORD=PASS \
--image $loginServer_id/postgres-pro/$image_id:TAG \
--ports 5433 \
--registry-login-server $loginServer_id \
--registry-username ACR \
--registry-password $password_id \
--dns-name-label $image_id \
--no-wait

az container list --resource-group RG
  • or create an AKS cluster container and check its status:
az aks create --name CLUSTER --resource-group RG --attach-acr ACR

az aks get-credentials --name CLUSTER --resource-group RG --overwrite-existing

kubectl create deployment DEPLOYMENT --image=$loginServer_id/postgres-pro/$image_id:TAG

kubectl expose deployment DEPLOYMENT --type=LoadBalancer --port 5433 --target-port 5433

kubectl get all


Connection to Postgres Pro database

  • Use the following procedure to set the password for the Postgres Pro database user 'postgres' in the Docker container:
az container exec \
--resource-group RG \
--name CG \
--exec-command "/bin/bash"

psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit
  • or in the AKS cluster container:
kubectl exec --stdin --tty $(kubectl get pods --no-headers | awk '{print $1}') -- /bin/bash

psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit


External connection to Postgres Pro database

  • Find out the external IP address for connecting to either the Docker container or the AKS cluster container:
external_ip=$(az container show --resource-group RG --name CG --query "{IP:ipAddress.ip}" -o tsv)
or
external_ip=$(kubectl get services DEPLOYMENT --no-headers | awk '{print $4}')
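The kubectl pipeline above relies on column 4 being EXTERNAL-IP in the default service listing; it can be sanity-checked on a sample '--no-headers' row (all values below are invented):

```shell
# Hypothetical 'kubectl get services DEPLOYMENT --no-headers' row (values invented):
sample_row="mydeployment  LoadBalancer  10.0.153.20  20.54.1.23  5433:31044/TCP  5m"

# Column 4 of the default service listing is EXTERNAL-IP:
external_ip=$(printf '%s\n' "$sample_row" | awk '{print $4}')
echo "$external_ip"
```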
  • For external connection to the Postgres Pro database using the ‘psql’ utility, use the following command:
psql --dbname=postgres --username=postgres --host=$external_ip --port=5433
  • For external connection to the Postgres Pro database using the ‘Pgadmin’ utility, configure the following server settings in the ‘Pgadmin’ menu:
    • ‘mydb-xx’ for ‘Name’
    • ‘$external_ip’ for ‘Host’
    • ‘5433’ for ‘Port’
    • ‘postgres’ for ‘Maintenance DB’
    • ‘postgres’ for ‘Username’

Replace ‘xx’ by '01', '02', '03’ and so on.


Postgres Pro Standard (docker) content:

OS - Linux Debian 10.x (64-bit)

  • Docker image - pgpro-std95/96/10/11/12/13-debian-x64:95.x.x/96.x.x/10.x.x/11.x.x/12.x.x/13.x.x

Database - Postgres Pro Standard

  • DB version: 9.5/9.6/10/11/12/13
  • TCP-port: 5433 (opened in OS-firewall settings)
  • configuration file: /var/lib/pgsql/.pgpro_profile
  • database account: ’postgres’


Documentation links