Postgres Pro Enterprise in Google Cloud

Postgres Pro Enterprise in Google Cloud Quick Start Guide

Internet access and a valid Google Cloud Platform (GCP) account are required to use a Postgres Pro 9.6/10/11 database in Google Cloud.

A Postgres Pro 9.6/10/11 virtual machine (VM) image is available in Google Cloud Marketplace.

Software required for installation:

  • Google Cloud SDK for cloud management
  • 'psql' or 'pgAdmin' for database connection

'gcloud', part of the Google Cloud SDK, is a cross-platform command-line utility.

Google Cloud Console https://console.cloud.google.com can be used as well.

Google Cloud SDK installation guide: https://cloud.google.com/sdk/install

Google Cloud SDK reference guide: https://cloud.google.com/sdk/

 

Connection to Google Cloud Platform and environment check

  • Connect to Google Cloud Platform and configure the environment:
gcloud init
  • Verify 'gcloud' version (should be the latest available):
gcloud --version
  • Obtain a list of regions and zones for the VM:
gcloud compute regions list
gcloud compute zones list

gcloud config set compute/region europe-north1
gcloud config set compute/zone europe-north1-a

gcloud compute project-info add-metadata \
--metadata google-compute-default-region=europe-north1,google-compute-default-zone=europe-north1-a

REGION=europe-north1
echo $REGION
ZONE=europe-north1-a
echo $ZONE

The 'europe-north1' region and 'europe-north1-a' zone will be used further.
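You can double-check which defaults are now active for the current 'gcloud' configuration (a quick sanity check using standard 'gcloud config' commands):

gcloud config get-value compute/region
gcloud config get-value compute/zone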

  • Obtain a list of available VM types and disk types:
gcloud compute machine-types list \
--filter="zone:(europe-north1-a)"

gcloud compute disk-types list \
--filter="zone:(europe-north1-a)"

The 'n1-standard-1' VM type and 'pd-standard' disk type will be used further.

  • Obtain the names of Postgres Pro VM images available in Google Cloud Marketplace:
gcloud compute images list | grep -E "(NAME|postgres-pro)"

The following VM image identifiers (PROJECT and FAMILY) will be used further:

PROJECT=postgres-pro
echo $PROJECT

--- Postgres Pro Enterprise Database 9.6
FAMILY=ppro-ent96-centos7
echo $FAMILY
or
--- Postgres Pro Enterprise Database 10
FAMILY=ppro-ent10-centos7
echo $FAMILY
or
--- Postgres Pro Enterprise Database 11
FAMILY=ppro-ent11-centos7
echo $FAMILY
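To see which concrete image the selected family currently resolves to (image names change as new builds are published, so this is only a sanity check), you can ask 'gcloud' to describe the latest image in the family:

gcloud compute images describe-from-family $FAMILY --project=$PROJECT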

 

VM creation

  • The auto-mode 'default' network will be used further:
gcloud compute networks list
gcloud compute routes list
gcloud compute firewall-rules list
  • Create a private/public SSH key pair in the ~/.ssh directory for VM connection:
ssh-keygen -t rsa -b 2048

ln -s ~/.ssh/id_rsa ~/.ssh/google_compute_engine
ln -s ~/.ssh/id_rsa.pub ~/.ssh/google_compute_engine.pub
  • Create VM from VM image available in Google Cloud Marketplace:
INSTANCE=myvm-ent-xx
echo $INSTANCE

gcloud compute instances create $INSTANCE \
--async \
--boot-disk-type=pd-standard \
--image-family=$FAMILY \
--image-project=$PROJECT \
--machine-type=n1-standard-1

Replace 'xx' with '01', '02', '03' and so on.
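Because the instance is created with '--async', the command returns before the VM is ready. A simple way to check that it has reached the RUNNING state before connecting:

gcloud compute instances describe $INSTANCE --format="value(status)"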

 

Connection to VM

  • Connect to the VM with the private SSH key, using the VM's public IP address ($IpAddress):
gcloud compute instances list

gcloud compute ssh SYSTEM@$INSTANCE
or
IpAddress=$(gcloud compute instances list | grep $INSTANCE | awk '{print $5}')
echo $IpAddress
ssh SYSTEM@$IpAddress
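To run a single command over SSH without opening an interactive session (useful as a quick reachability check), 'gcloud compute ssh' accepts a '--command' flag:

gcloud compute ssh SYSTEM@$INSTANCE --command="hostname && uptime"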

 

Postgres Pro database service status

  • Verify Postgres Pro database service status: 
sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo systemctl -l status postgrespro-ent-10.service
or
sudo systemctl -l status postgrespro-ent-11.service
  • To stop/start Postgres Pro database service use the following commands: 
sudo systemctl stop postgrespro-enterprise-9.6.service
sudo systemctl start postgrespro-enterprise-9.6.service
or
sudo systemctl stop postgrespro-ent-10.service
sudo systemctl start postgrespro-ent-10.service
or
sudo systemctl stop postgrespro-ent-11.service
sudo systemctl start postgrespro-ent-11.service

 

Connection to Postgres Pro database

  • Switch to ‘postgres’ account:
sudo su - postgres
  • To connect to Postgres Pro database use the following command:
psql
  • To exit from ‘psql’ use the following command:
\q
  • To return to the 'gcloud' interface, run the 'exit' command twice
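As an optional sanity check while still on the VM, one-off queries can also be run from the 'postgres' account without entering the interactive shell:

psql -c "select version()"
psql -c "show data_directory"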

 

External connection to VM

  • TCP port 5433 has to be opened for external connections to the Postgres Pro database: 
gcloud compute firewall-rules create ppro-allow-postgres \
--allow=tcp:5433 \
--description="Allow Postgres from anywhere" \
--direction=INGRESS \
--network=default \
--priority=1000
  • TCP ports 80 and 443 have to be opened for external connections to the database monitoring server: 
gcloud compute firewall-rules create ppro-allow-http \
--allow=tcp:80 \
--description="Allow HTTP from anywhere" \
--direction=INGRESS \
--network=default \
--priority=1000

gcloud compute firewall-rules create ppro-allow-https \
--allow=tcp:443 \
--description="Allow HTTPS from anywhere" \
--direction=INGRESS \
--network=default \
--priority=1000
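To verify that the firewall rules were created as expected (the '~' operator in the filter matches by regular expression):

gcloud compute firewall-rules list --filter="name~ppro-allow"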

 

External connection to Postgres Pro database

  • For external connections to the Postgres Pro database, set the 'postgres' user password:
gcloud compute ssh SYSTEM@$INSTANCE

sudo su - postgres
psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit

exit
  • For an external connection to the Postgres Pro database using the 'psql' utility, run the following command:
psql --host=$IpAddress --port=5433 --username=postgres --dbname=postgres
  • For an external connection to the Postgres Pro database using 'pgAdmin', configure the following server settings in the 'pgAdmin' menu:
    • ‘mydb-xx’ for ‘Name’
    • ‘$IpAddress’ for ‘Host’
    • ‘5433’ for ‘Port’
    • ‘postgres’ for ‘Maintenance DB’
    • ‘postgres’ for ‘Username’

Replace 'xx' with '01', '02', '03' and so on.

Replace 'IpAddress' with the VM's external IP address.
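If the external connection fails, a useful first check is whether the port is reachable at all before debugging authentication ('nc' is assumed to be available on the client machine):

nc -vz $IpAddress 5433
psql --host=$IpAddress --port=5433 --username=postgres --dbname=postgres -c "select now()"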

 

External connection to database monitoring server

  • For connection to the database monitoring server, set the 'Admin' user password:
gcloud compute ssh SYSTEM@$INSTANCE

sudo su - postgres
source .pgsql_profile
psql --dbname=zabbix --username=zabbix -c "update users set passwd=md5('YOUR_ZABBIX_ADMIN_PASSWORD') where alias='Admin'"
exit

exit
  • External connection to the database monitoring server is available via the following link:

https://$IpAddress/zabbix

Replace '$IpAddress' with the VM's external IP address.
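A quick reachability check from the client machine; '-k' skips certificate verification in case the monitoring server uses a self-signed certificate (an assumption about the image's TLS setup):

curl -kI https://$IpAddress/zabbix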

 

VM configuration change

Here are some examples of VM configuration changes:

1) Change VM type from ‘n1-standard-1’ to ‘n1-standard-2’ to increase VM computing power

  • Before increasing the VM size, the $PGDATA/postgresql.tune.lock file can be deleted so that Postgres Pro database parameter values are adjusted automatically for the new size:
gcloud compute ssh SYSTEM@$INSTANCE

sudo su - postgres
cp $PGDATA/postgresql.auto.conf $PGDATA/postgresql.auto.conf.ORIG
rm $PGDATA/postgresql.tune.lock
exit

exit
  • Obtain a list of available VM types:
gcloud compute machine-types list \
--filter="zone:(europe-north1-a)"
  • To change VM type use the following commands:
gcloud compute instances stop $INSTANCE

InstanceType=n1-standard-2
echo $InstanceType

gcloud compute instances set-machine-type $INSTANCE \
--machine-type $InstanceType

gcloud compute instances start $INSTANCE
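To confirm the new machine type took effect (the value is reported as a full resource URI; only the trailing component matters):

gcloud compute instances describe $INSTANCE --format="value(machineType)"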

2) Increase OS-disk size up to 80 GB

  • Obtain OS-disk size details: 
gcloud compute disks list | grep -E "(NAME|$INSTANCE)"
  • Stop VM temporarily: 
gcloud compute instances stop $INSTANCE
  • Increase OS-disk size: 
echo "y" | gcloud compute disks resize $INSTANCE \
--size=80GB \
--zone=$ZONE
  • Verify new OS-disk size: 
gcloud compute disks list | grep -E "(NAME|$INSTANCE)"
  • Start VM: 
gcloud compute instances start $INSTANCE
  • Connect to VM: 
gcloud compute ssh SYSTEM@$INSTANCE
  • Increase ‘/’ filesystem partition size:
(echo d; echo n; echo p; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/sda
  • Restart VM:
sudo reboot
  • Connect to VM: 
gcloud compute ssh SYSTEM@$INSTANCE
  • Increase ‘/’ filesystem size:
sudo xfs_growfs -d /dev/sda1
  • Restart VM:
sudo reboot
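After the VM comes back up, you can confirm that the '/' filesystem now spans the resized disk, for example:

gcloud compute ssh SYSTEM@$INSTANCE --command="df -h / && lsblk /dev/sda"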

3) Use dedicated datadisk for Postgres Pro database files to improve database performance

  • Let's create a new 200 GB datadisk and attach it to the VM:
gcloud compute disks create $INSTANCE-datadisk \
--size=200GB \
--type=pd-standard \
--zone=$ZONE

gcloud compute instances attach-disk $INSTANCE \
--disk $INSTANCE-datadisk
  • Connect to VM:
gcloud compute ssh SYSTEM@$INSTANCE
  • Stop Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-enterprise-9.6.service
sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo systemctl stop postgrespro-ent-10.service
sudo systemctl -l status postgrespro-ent-10.service
or
sudo systemctl stop postgrespro-ent-11.service
sudo systemctl -l status postgrespro-ent-11.service
  • Create new filesystem mountpoint:
sudo mkdir /PGDATA
  • Use 'fdisk' utility to find out datadisk device name (in this case it is '/dev/sdb'):
sudo fdisk -l
  • Create a single datadisk partition of maximum size and create a new filesystem on top of it:
(echo n; echo p; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/sdb
sudo mkfs -t ext4 /dev/sdb1
  • Amend the /etc/fstab file so the new filesystem is mounted automatically, and mount it:
sudo sh -c "echo '`sudo blkid -o export /dev/sdb1 | grep UUID` /PGDATA ext4 defaults,nofail,barrier=0 1 2' >> /etc/fstab"
sudo mount /PGDATA
  • Create 'data' directory on the new filesystem and set its permissions:
sudo mkdir /PGDATA/data
sudo chown postgres:postgres /PGDATA/data
sudo chmod 0700 /PGDATA/data
  • Switch to ‘postgres’ account and move Postgres Pro database files to the new filesystem:
sudo su - postgres
mv /var/lib/pgproee/9.6/data/* /PGDATA/data; rmdir /var/lib/pgproee/9.6/data; ln -s /PGDATA/data /var/lib/pgproee/9.6/data
or
mv /var/lib/pgpro/ent-10/data/* /PGDATA/data; rmdir /var/lib/pgpro/ent-10/data; ln -s /PGDATA/data /var/lib/pgpro/ent-10/data
or
mv /var/lib/pgpro/ent-11/data/* /PGDATA/data; rmdir /var/lib/pgpro/ent-11/data; ln -s /PGDATA/data /var/lib/pgpro/ent-11/data
exit
  • Start Postgres Pro database service and verify its status:
sudo systemctl start postgrespro-enterprise-9.6.service
sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo systemctl start postgrespro-ent-10.service
sudo systemctl -l status postgrespro-ent-10.service
or
sudo systemctl start postgrespro-ent-11.service
sudo systemctl -l status postgrespro-ent-11.service
  • Restart VM, check filesystem automount and verify Postgres Pro database service status:
sudo reboot

gcloud compute ssh SYSTEM@$INSTANCE

sudo mount | grep /PGDATA
sudo df -h /PGDATA

sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo systemctl -l status postgrespro-ent-10.service
or
sudo systemctl -l status postgrespro-ent-11.service

Replace 'xx' with '01', '02', '03' and so on.
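As an optional extra check that the database really serves files from the new location (because of the symlink created above, 'data_directory' still shows the original path, while 'df' on it should now report the /PGDATA filesystem):

sudo su - postgres
psql -c "show data_directory"
df -h $PGDATA
exit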

4) Database service auto restart in case of database failure

  • Edit the database service systemd unit file and restart the database service:
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-enterprise-9.6.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-enterprise-9.6.service
sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-ent-10.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-ent-10.service
sudo systemctl -l status postgrespro-ent-10.service
or
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-ent-11.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-ent-11.service
sudo systemctl -l status postgrespro-ent-11.service
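You can confirm that the unit now carries the restart policy (substitute the service name matching your version):

sudo systemctl show postgrespro-ent-11.service -p Restart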

 

VM backup/restore

 

Postgres Pro Enterprise Multimaster

Multimaster is a Postgres Pro Enterprise extension with a set of patches that turns Postgres Pro Enterprise into a synchronous shared-nothing cluster to provide Online Transaction Processing (OLTP) scalability for read transactions and high availability with automatic disaster recovery.

You can find more information about Postgres Pro Enterprise Multimaster at:

https://postgrespro.com/docs/enterprise/9.6/multimaster.html

or

https://postgrespro.com/docs/enterprise/10/multimaster

or

https://postgrespro.com/docs/enterprise/11/multimaster

Let's proceed with 3-node Multimaster installation and configuration. Assume Multimaster node names are 'myvm-ent-01', 'myvm-ent-02' and 'myvm-ent-03'.

  • Create three VMs, each in a different availability zone ('europe-north1-a', 'europe-north1-b' and 'europe-north1-c'):
array=(a b c); for i in `seq 1 3`; do
gcloud compute instances create myvm-ent-0$i \
--async \
--boot-disk-type=pd-standard \
--image-family=$FAMILY \
--image-project=$PROJECT \
--machine-type=n1-standard-1 \
--zone=europe-north1-"${array[$i-1]}" \
--private-network-ip=10.166.0.10$i
done

Private IP-addresses assignment: 10.166.0.101 (myvm-ent-01), 10.166.0.102 (myvm-ent-02), 10.166.0.103 (myvm-ent-03).
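To confirm that all three instances are up with the intended private addresses (output columns may vary slightly between 'gcloud' versions):

gcloud compute instances list --filter="name~myvm-ent-0"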

  • Now connect to each of three VMs:
gcloud compute ssh SYSTEM@myvm-ent-01 --zone=europe-north1-a
gcloud compute ssh SYSTEM@myvm-ent-02 --zone=europe-north1-b
gcloud compute ssh SYSTEM@myvm-ent-03 --zone=europe-north1-c

or

for i in `seq 1 3`; do
export IpAddress_0$i=$(gcloud compute instances list | grep myvm-ent-0$i | awk '{print $5}')
done

ssh SYSTEM@$IpAddress_01
ssh SYSTEM@$IpAddress_02
ssh SYSTEM@$IpAddress_03
  • and run the same set of commands on all of them:
--- Postgres Pro Enterprise 9.6/10/11
for i in `seq 1 3`; do
sudo sh -c "echo '10.166.0.10$i myvm-ent-0$i' >> /etc/hosts"
done
  • Configure firewall for Multimaster arbiter traffic via TCP-port '5555':
--- Postgres Pro Enterprise 9.6/10
sudo firewall-cmd --zone=public --add-port=5555/tcp
sudo firewall-cmd --zone=public --permanent --add-port=5555/tcp
sudo systemctl restart firewalld
  • Configure replicated 'mydb' database:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres
psql -c "create user myuser with superuser encrypted password 'myuserpassword'"
psql --username=myuser -c "create database mydb"
sed -i 's/PGDATABASE=postgres/PGDATABASE=mydb/' .pgpro_profile
sed -i 's/PGUSER=postgres/PGUSER=myuser/' .pgpro_profile
source .pgpro_profile

for i in `seq 1 3`; do
echo "hostssl replication myuser myvm-ent-0$i md5" >> $PGDATA/pg_hba.conf
echo "myvm-ent-0$i:5433:mydb:myuser:myuserpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass
pg_ctl reload

--- Postgres Pro Enterprise 9.6/10
node_id=`hostname | awk -F "-" '{ print substr($3,2,1) }'`
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "# MULTIMASTER SETTINGS" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "multimaster.cluster_name = mymmts" >> $PGDATA/postgresql.conf
echo "multimaster.max_nodes = 3" >> $PGDATA/postgresql.conf
echo "multimaster.node_id = $node_id" >> $PGDATA/postgresql.conf
echo "multimaster.arbiter_port = 5555" >> $PGDATA/postgresql.conf
echo "multimaster.conn_strings = 'dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require arbiter_port=5555,dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require arbiter_port=5555,dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require arbiter_port=5555'" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf

--- Postgres Pro Enterprise 11
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "# MULTIMASTER SETTINGS" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "multimaster.max_nodes = 3" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf

--- Postgres Pro Enterprise 9.6/10/11
psql -c "alter system set default_transaction_isolation to 'read committed'"
psql -c "alter system set wal_level to logical"
psql -c "alter system set max_connections to 100"
psql -c "alter system set max_prepared_transactions to 300"
psql -c "alter system set max_wal_senders to 10"
psql -c "alter system set max_replication_slots to 10"
psql -c "alter system set max_worker_processes to 250"
psql -c "alter system set shared_preload_libraries to multimaster,pg_stat_statements,pg_buffercache,pg_wait_sampling"

--- Postgres Pro Enterprise 11
psql -c "alter system set wal_sender_timeout to 0"

--- Postgres Pro Enterprise 9.6/10/11
exit
  • Configure mamonsu agent for 'mydb' database and restart mamonsu service:
--- Postgres Pro Enterprise 9.6/10/11
sudo sed -i 's|user = mamonsu|user = myuser|' /etc/mamonsu/agent.conf
sudo sed -i 's|database = mamonsu|database = mydb|' /etc/mamonsu/agent.conf
sudo systemctl restart mamonsu.service
  • Restart Postgres Pro database service and verify its status:
--- Postgres Pro Enterprise 9.6/10/11
sudo systemctl restart postgrespro-enterprise-9.6.service
sudo systemctl -l status postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
sudo systemctl -l status postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service
sudo systemctl -l status postgrespro-ent-11.service

exit
  • Now connect to the first VM:
--- Postgres Pro Enterprise 9.6/10/11
gcloud compute ssh SYSTEM@myvm-ent-01 --zone=europe-north1-a
  • and create Multimaster extension:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres
psql
create extension if not exists multimaster;

--- Postgres Pro Enterprise 11
select mtm.init_cluster('dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require','{"dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require", "dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require"}');

--- Postgres Pro Enterprise 9.6/10/11
\q
  • Create other extensions required for mamonsu service:
--- Postgres Pro Enterprise 9.6/10/11
psql -c "create extension if not exists pg_buffercache"
psql -c "create extension if not exists pg_stat_statements"
psql -c "create extension if not exists pg_wait_sampling"
  • Verify the extensions have been successfully created:
--- Postgres Pro Enterprise 9.6/10/11
psql --host=myvm-ent-01 -c "select * from pg_extension"
psql --host=myvm-ent-02 -c "select * from pg_extension"
psql --host=myvm-ent-03 -c "select * from pg_extension"
  • Configure mamonsu for Multimaster:
--- Postgres Pro Enterprise 9.6/10/11
mamonsu bootstrap --dbname mydb --username postgres --host 127.0.0.1 --port 5433 --mamonsu-username=myuser
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_config')"
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_timestamp_master_2_4_1')"
  • Use the following commands to monitor Multimaster status:
--- Postgres Pro Enterprise 9.6/10
psql --host=myvm-ent-01 -x -c "select mtm.collect_cluster_info()"
psql --host=myvm-ent-01 -x -c "select mtm.get_nodes_state()"

psql --host=myvm-ent-01 -x -c "select mtm.get_cluster_state()"
psql --host=myvm-ent-02 -x -c "select mtm.get_cluster_state()"
psql --host=myvm-ent-03 -x -c "select mtm.get_cluster_state()"

--- Postgres Pro Enterprise 11
psql --host=myvm-ent-01 -x -c "select mtm.status()"
psql --host=myvm-ent-02 -x -c "select mtm.status()"
psql --host=myvm-ent-03 -x -c "select mtm.status()"

psql --host=myvm-ent-01 -x -c "select mtm.nodes()"
psql --host=myvm-ent-02 -x -c "select mtm.nodes()"
psql --host=myvm-ent-03 -x -c "select mtm.nodes()"
  • To return to the 'gcloud' interface, run the 'exit' command twice
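As an optional, hypothetical smoke test of replication (the 'mm_test' table name is just an example), a row written through one node should become visible on the others. Run it from the 'postgres' account on any node:

sudo su - postgres
psql --host=myvm-ent-01 -c "create table mm_test (id int primary key, note text)"
psql --host=myvm-ent-01 -c "insert into mm_test values (1, 'written via node 1')"
psql --host=myvm-ent-02 -c "select * from mm_test"
psql --host=myvm-ent-03 -c "select * from mm_test"
psql --host=myvm-ent-01 -c "drop table mm_test"
exit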

 

External connection to Postgres Pro Enterprise Multimaster database

 

 

Postgres Pro Enterprise Multimaster (2-node + referee node configuration)

If a 3-node Multimaster is overkill (it triples the storage size for the database), a 2-node Multimaster configuration is still possible, either by nominating one of the two nodes as the 'major' node (multimaster.major_node=on) or by using a lightweight referee node instead of a third full node.

The migration from a 3-node Multimaster to a 2-node + referee configuration is as follows.

  • Exclude the 'myvm-ent-03' node from the Multimaster configuration and remove Multimaster settings from the 'myvm-ent-03' referee node:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres
sed -i '/multimaster/d' $PGDATA/postgresql.conf
psql --host=myvm-ent-03 -c "alter system set shared_preload_libraries to pg_stat_statements,pg_buffercache,pg_wait_sampling"

--- Postgres Pro Enterprise 9.6/10
psql --host=myvm-ent-01 -c "select mtm.stop_node(3, true)"
psql --host=myvm-ent-01 -x -c "select mtm.collect_cluster_info()"

--- Postgres Pro Enterprise 11
psql --host=myvm-ent-01 -c "select mtm.drop_node(3)"
psql --host=myvm-ent-01 -x -c "select mtm.nodes()"

--- Postgres Pro Enterprise 9.6/10/11
exit

sudo systemctl restart postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service

sudo su - postgres
psql --host=myvm-ent-03 -c "drop extension multimaster"
psql --host=myvm-ent-03 -c "create extension referee"
exit

sudo systemctl restart postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service
  • Change Multimaster settings on the 'myvm-ent-01' node:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres
psql --host=myvm-ent-01 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"

--- Postgres Pro Enterprise 9.6/10
psql --host=myvm-ent-01 -c "alter system set multimaster.conn_strings = 'dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require arbiter_port=5555,dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require arbiter_port=5555'"

--- Postgres Pro Enterprise 9.6/10/11
exit

sudo systemctl restart postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service
  • and on 'myvm-ent-02' node:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres
psql --host=myvm-ent-02 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"

--- Postgres Pro Enterprise 9.6/10
psql --host=myvm-ent-02 -c "alter system set multimaster.conn_strings = 'dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require arbiter_port=5555,dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require arbiter_port=5555'"

--- Postgres Pro Enterprise 9.6/10/11
exit

sudo systemctl restart postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service
  • Finally, drop replication slots and finalize settings on 'myvm-ent-03' referee node:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres

--- Postgres Pro Enterprise 9.6/10
psql --host=myvm-ent-03 -c "select * from pg_replication_slots"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_1')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_2')"
psql --host=myvm-ent-03 -c "select * from pg_replication_slots"

--- Postgres Pro Enterprise 9.6/10/11
sed -i '/^#/!d' $PGDATA/postgresql.auto.conf
echo "shared_preload_libraries = 'pg_stat_statements, pg_buffercache, pg_wait_sampling'" >> $PGDATA/postgresql.auto.conf
exit

sudo systemctl restart postgrespro-enterprise-9.6.service
or
sudo systemctl restart postgrespro-ent-10.service
or
sudo systemctl restart postgrespro-ent-11.service
  • Check the status of Multimaster on each node:
--- Postgres Pro Enterprise 9.6/10/11
sudo su - postgres

--- Postgres Pro Enterprise 9.6/10
psql --dbname=mydb --username=myuser --host=myvm-ent-01 --port=5433 -c "select mtm.collect_cluster_info()"
psql --dbname=mydb --username=myuser --host=myvm-ent-02 --port=5433 -c "select mtm.collect_cluster_info()"

--- Postgres Pro Enterprise 11
psql --dbname=mydb --username=myuser --host=myvm-ent-01 --port=5433 -x -c "select mtm.nodes()"
psql --dbname=mydb --username=myuser --host=myvm-ent-02 --port=5433 -x -c "select mtm.nodes()"

--- Postgres Pro Enterprise 9.6/10/11
psql --dbname=mydb --username=myuser --host=myvm-ent-03 --port=5433 -c "select * from referee.decision"

exit

 

Postgres Pro Enterprise CFS (compressed file system):

To use the Postgres Pro Enterprise CFS (compressed file system) extension, proceed with the following commands.

  • Create an OS filesystem directory for the 'cfs_ts' tablespace and create the 'cfs_ts' tablespace in the database: 
sudo su - postgres
mkdir $PGDATA/../cfs_ts
chmod 0700 $PGDATA/../cfs_ts

psql -c "create tablespace cfs_ts location '/var/lib/pgproee/9.6/cfs_ts' with (compression=true)"
or
psql -c "create tablespace cfs_ts location '/var/lib/pgpro/ent-10/cfs_ts' with (compression=true)"
or
psql -c "create tablespace cfs_ts location '/var/lib/pgpro/ent-11/cfs_ts' with (compression=true)"

exit
  • Make sure the 'cfs_ts' tablespace has been created with 'compression=true' option: 
sudo su - postgres
psql -c "select * from pg_tablespace"
exit

Use one of the following ways to utilise 'cfs_ts' tablespace for new database objects:

  • Upon database object creation: 
sudo su - postgres
psql -c "create table t1 (t int) tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
  • Setting default tablespace for current database connection: 
sudo su - postgres
psql
set default_tablespace=cfs_ts;
show default_tablespace;
create table t2 (t int);
select tablename, tablespace from pg_tables where schemaname = 'public';
\q
exit
  • Setting default tablespace for particular database: 
sudo su - postgres
psql --dbname=postgres -c "alter database mydb set tablespace cfs_ts"
psql -c "select datname, dattablespace from pg_database"
exit
  • Setting default tablespace for particular user/role: 
sudo su - postgres
psql --username=postgres -c "alter user myuser set default_tablespace to 'cfs_ts'"
psql -c "select usename, useconfig from pg_user"
psql -c "select rolname, rolconfig from pg_roles"
exit

Use one of the following ways to move existing database objects from one tablespace ('pg_default') to another ('cfs_ts'):

  • One by one:
sudo su - postgres
psql -c "create table t3 (t int)"
psql -c "alter table t3 set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
  • All together:
sudo su - postgres
psql -c "alter table all in tablespace pg_default set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit

Depending on the data stored in the 'cfs_ts' tablespace, the compression ratio may vary.
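A rough, illustrative way to gauge the effect on your own data is to load the same rows into a regular table and into a table placed in 'cfs_ts' and compare the reported sizes (table names 't_plain' and 't_cfs' are examples; whether 'pg_table_size' reflects the compressed on-disk size is an assumption here, so treat the numbers as indicative only):

sudo su - postgres
psql -c "create table t_plain as select repeat('x', 1000) as t from generate_series(1, 100000)"
psql -c "create table t_cfs tablespace cfs_ts as select repeat('x', 1000) as t from generate_series(1, 100000)"
psql -c "select pg_size_pretty(pg_table_size('t_plain')) as plain, pg_size_pretty(pg_table_size('t_cfs')) as compressed"
psql -c "drop table t_plain, t_cfs"
exit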

 

Postgres Pro Enterprise 9.6/10/11 VM content:

Linux CentOS 7.x (64-bit)

  • OS-account - ‘postgres’
  • OS-account - ‘zabbix’
  • OS-account - ‘mamonsu’

OS-disk size - 10 GB

  • xfs filesystem ‘/’ (/dev/sda1) - 10 GB

Main database - Postgres Pro Enterprise

  • DB version: 9.6/10/11
  • TCP-port: 5433 (opened in OS-firewall settings)
  • configuration file: /var/lib/pgsql/.pgpro_profile
  • database account: ‘postgres’

Database monitoring (server)

  • zabbix-server version: 3.4
  • TCP-ports: 80/443 (opened in OS-firewall settings)
  • account: ‘Admin’

Database monitoring (agent)

  • zabbix-agent version: 3.4
  • mamonsu-agent version: 2.4.1
  • configuration file: /etc/mamonsu/agent.conf

Auxiliary database PostgreSQL (as a zabbix-server database)

  • DB version: 9.6/10/11
  • TCP-port: 5432
  • configuration file: /var/lib/pgsql/.pgsql_profile
  • database account: 'postgres'

 

Documentation links