Postgres Pro Enterprise in Yandex.Cloud
Postgres Pro Enterprise in Yandex.Cloud Quick Start Guide
Internet access and a valid Yandex.Cloud account are required to use the Postgres Pro Enterprise 10/11/12/13/14 database in Yandex.Cloud.
The Postgres Pro Enterprise 10/11/12/13/14 virtual machine (VM) image is available in the Yandex.Cloud Marketplace.
Software required for installation:
- 'yc', part of the Yandex.Cloud CLI, for cloud management
- 'psql' or 'pgAdmin' for database connections
The Yandex.Cloud Console (https://console.cloud.yandex.ru) can be used as well.
Yandex.Cloud CLI installation guide: https://cloud.yandex.ru/docs/cli/quickstart
Yandex.Cloud CLI reference guide: https://cloud.yandex.ru/docs/cli/
Connection to Yandex.Cloud and environment check
- Connect to Yandex.Cloud and configure environment with:
yc init
- Verify 'yc' version (should be the latest available):
yc version
- List available regions and zones for the VM:
yc compute zone list
yc config set compute-default-zone ru-central1-a
yc config get compute-default-zone
The 'ru-central1' region and 'ru-central1-a' zone will be used below.
- List available disk types:
yc compute disk-type list
The 'network-hdd' disk type will be used below.
VM creation
- Create network infrastructure for VM:
NET=my-vpc
yc vpc network create \
--name $NET \
--description "$NET" \
--async
yc vpc network list
yc vpc subnet create \
--name $NET-subnet \
--range 192.168.0.0/24 \
--network-name $NET \
--description "$NET-subnet" \
--async
yc vpc subnet list
- Create a private/public SSH key pair in the ~/.ssh directory for connecting to the VM:
ssh-keygen -t rsa -b 2048
- Use the following commands to create a VM from the latest Postgres Pro Enterprise 14 image version available in the Yandex.Cloud Marketplace:
IMAGE_FAMILY=postgreprof-postgres-pro-enterprise-database-14
VM_NAME=myvm-ent-xx
yc compute instance create \
--name $VM_NAME \
--hostname $VM_NAME \
--cores 2 \
--memory 2 \
--create-boot-disk name=$VM_NAME-osdisk,image-folder-id=standard-images,image-family=$IMAGE_FAMILY \
--network-interface subnet-name=$NET-subnet,nat-ip-version=ipv4 \
--ssh-key ~/.ssh/id_rsa.pub \
--async
yc compute instance get $VM_NAME
Replace 'xx' with '01', '02', '03', and so on.
Connection to VM
- Connect to the VM with the private SSH key, using the VM's public IP address ($VM_IP_ADDRESS):
yc compute instance list
VM_IP_ADDRESS=$(yc compute instance get $VM_NAME | grep -E ' +address' | tail -n 1 | awk '{print $2}')
echo $VM_IP_ADDRESS
ssh yc-user@$VM_IP_ADDRESS
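The grep/awk pipeline above scrapes human-readable output and may break if the output format changes. A less fragile sketch, assuming the 'jq' utility is installed and that the JSON field path below matches the current yc output schema:

```shell
# Read the VM's external (one-to-one NAT) IPv4 address from yc's JSON output.
# The field path is an assumption about the current yc JSON schema.
VM_IP_ADDRESS=$(yc compute instance get $VM_NAME --format json \
  | jq -r '.network_interfaces[0].primary_v4_address.one_to_one_nat.address')
echo "$VM_IP_ADDRESS"
```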
Postgres Pro database service status
- Verify Postgres Pro database service status:
sudo systemctl -l status postgrespro-ent-14.service
- To stop/start Postgres Pro database service use the following commands:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl start postgrespro-ent-14.service
Connection to Postgres Pro database
- Switch to ‘postgres’ account:
sudo su - postgres
- To connect to Postgres Pro database use the following command:
psql
- To exit from ‘psql’ use the following command:
\q
- To return to the 'yc' interface, run the 'exit' command twice
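Once connected, a few quick sanity checks can confirm the database matches this guide's defaults (a sketch; run from the 'postgres' account):

```shell
# Each command opens a short psql session and prints one result.
psql -c "select version();"                        # Postgres Pro Enterprise version string
psql -c "show port;"                               # 5433 per this guide's VM content
psql -c "select current_database(), current_user;"
```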
External connection to VM
- Open TCP port 5433 to allow external connections to the Postgres Pro database:
ssh yc-user@$VM_IP_ADDRESS
sudo firewall-cmd --zone=public --add-port=5433/tcp
sudo firewall-cmd --zone=public --permanent --add-port=5433/tcp
sudo firewall-cmd --zone=public --list-ports
sudo systemctl restart firewalld.service
sudo systemctl -l status firewalld.service
exit
- Open TCP ports 80 and 443 to allow external connections to the database monitoring server:
ssh yc-user@$VM_IP_ADDRESS
sudo firewall-cmd --zone=public --add-port=80/tcp
sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
sudo firewall-cmd --zone=public --add-port=443/tcp
sudo firewall-cmd --zone=public --permanent --add-port=443/tcp
sudo firewall-cmd --zone=public --list-ports
sudo systemctl restart firewalld.service
sudo systemctl -l status firewalld.service
exit
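As an alternative sketch to restarting firewalld, a reload re-applies the permanent rules without interrupting established connections (standard firewalld behavior; this assumes the port commands above have already been run):

```shell
# Re-apply permanent firewall rules in place of a full service restart,
# then confirm all ports are open.
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports   # expect 5433/tcp 80/tcp 443/tcp
```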
External connection to Postgres Pro database
- To allow external connections to the Postgres Pro database, set a password for the 'postgres' user:
ssh yc-user@$VM_IP_ADDRESS
sudo su - postgres
psql -c "alter user postgres with encrypted password 'YOUR_POSTGRES_USER_PASSWORD'"
exit
exit
- To connect to the Postgres Pro database externally using the 'psql' utility, run:
psql --host=$VM_IP_ADDRESS --port=5433 --username=postgres --dbname=postgres
- To connect to the Postgres Pro database externally using the 'pgAdmin' utility, configure the following server settings in the pgAdmin menu:
- ‘mydb-xx’ for ‘Name’
- ‘$VM_IP_ADDRESS’ for ‘Host’
- ‘5433’ for ‘Port’
- ‘postgres’ for ‘Maintenance DB’
- ‘postgres’ for ‘Username’
Replace 'xx' with '01', '02', '03', and so on.
Replace '$VM_IP_ADDRESS' with the VM's external IP address.
External connection to database monitoring server
- To connect to the database monitoring server, set a password for the 'Admin' user:
ssh yc-user@$VM_IP_ADDRESS
sudo su - postgres
source .pgsql_profile
psql --dbname=zabbix --username=zabbix -c "update users set passwd=md5('YOUR_ZABBIX_ADMIN_PASSWORD') where alias='Admin'"
exit
exit
- Connect to the database monitoring server from a web browser, using the VM's external IP address (TCP ports 80/443).
VM configuration change
Here are some examples of changing the VM configuration:
1) Change VM type to increase VM computing power
- Delete the $PGDATA/postgresql.tune.lock file before increasing the VM size so that the Postgres Pro database parameter values are automatically re-tuned for the new configuration:
ssh yc-user@$VM_IP_ADDRESS
sudo su - postgres
cp $PGDATA/postgresql.auto.conf $PGDATA/postgresql.auto.conf.ORIG
rm $PGDATA/postgresql.tune.lock
exit
exit
- To change the VM type, use the following commands:
yc compute instance get $VM_NAME
yc compute instance stop $VM_NAME
yc compute instance update $VM_NAME \
--cores 4 \
--memory 4
yc compute instance start $VM_NAME
yc compute instance list
VM_IP_ADDRESS=$(yc compute instance get $VM_NAME | grep -E ' +address' | tail -n 1 | awk '{print $2}')
echo $VM_IP_ADDRESS
ssh yc-user@$VM_IP_ADDRESS
cat /proc/cpuinfo
cat /proc/meminfo
exit
2) Increase the OS-disk size to 80 GB
- Obtain OS-disk size details:
yc compute disk get $VM_NAME-osdisk
- Stop VM temporarily:
yc compute instance stop $VM_NAME
- Increase OS-disk size:
yc compute disk update $VM_NAME-osdisk \
--size 80
- Verify new OS-disk size:
yc compute disk get $VM_NAME-osdisk
- Start VM:
yc compute instance start $VM_NAME
- Connect to VM:
yc compute instance list
VM_IP_ADDRESS=$(yc compute instance get $VM_NAME | grep -E ' +address' | tail -n 1 | awk '{print $2}')
echo $VM_IP_ADDRESS
ssh yc-user@$VM_IP_ADDRESS
df -h /
exit
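If 'df -h /' still reports the old size, the partition and filesystem may need to be grown by hand. A sketch, assuming the device names used in this guide and that 'growpart' (from the cloud-guest-utils package) is available on the image:

```shell
# Extend partition 1 of the OS disk to fill the enlarged disk...
sudo growpart /dev/vda 1
# ...then grow the root filesystem (xfs, per the VM content section;
# for an ext4 filesystem use 'sudo resize2fs /dev/vda1' instead).
sudo xfs_growfs /
df -h /
```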
3) Use a dedicated datadisk for Postgres Pro database files to improve database performance
- Create a new 200 GB datadisk and attach it to the VM:
yc compute disk create \
--name $VM_NAME-datadisk \
--size 200 \
--type network-hdd \
--async
yc compute disk list
yc compute instance attach-disk \
--name $VM_NAME \
--disk-name $VM_NAME-datadisk \
--async
yc compute disk list
- Connect to VM:
ssh yc-user@$VM_IP_ADDRESS
- Stop Postgres Pro database service and verify its status:
sudo systemctl stop postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
- Create new filesystem mountpoint:
sudo mkdir /PGDATA
- Use the 'fdisk' utility to find the datadisk device name (in this case it is '/dev/vdb'):
sudo fdisk -l
- Create a single datadisk partition of maximum size and create a new filesystem on top of it:
(echo n; echo p; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/vdb
sudo mkfs -t ext4 /dev/vdb1
- Add the new filesystem to /etc/fstab for automatic mounting, then mount it:
sudo sh -c "echo '`sudo blkid -o export /dev/vdb1 | grep UUID` /PGDATA ext4 defaults,nofail,barrier=0 1 2' >> /etc/fstab"
sudo mount /PGDATA
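The amended /etc/fstab can be validated before the next reboot; this sketch assumes 'findmnt --verify' is available on the image (util-linux 2.30+):

```shell
# Sanity-check all /etc/fstab entries, including the new /PGDATA line.
sudo findmnt --verify --fstab
```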
- Create 'data' directory on the new filesystem and set its permissions:
sudo mkdir /PGDATA/data
sudo chown postgres:postgres /PGDATA/data
sudo chmod 0700 /PGDATA/data
- Switch to ‘postgres’ account and move Postgres Pro database files to the new filesystem:
sudo su - postgres
mv /var/lib/pgpro/ent-14/data/* /PGDATA/data; rmdir /var/lib/pgpro/ent-14/data; ln -s /PGDATA/data /var/lib/pgpro/ent-14/data
exit
- Start Postgres Pro database service and verify its status:
sudo systemctl start postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
- Restart VM, check filesystem automount and verify Postgres Pro database service status:
sudo reboot
ssh yc-user@$VM_IP_ADDRESS
sudo mount | grep /PGDATA
sudo df -h /PGDATA
sudo systemctl -l status postgrespro-ent-14.service
4) Automatic database service restart in case of database failure
- Edit the database service systemd unit file and restart the database service:
sudo sed -i '/KillSignal=/a Restart=on-failure' /usr/lib/systemd/system/postgrespro-ent-14.service
sudo systemctl daemon-reload
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
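To confirm that the Restart=on-failure directive works, the postmaster process can be killed deliberately on a test VM (a sketch; the service name matches this guide):

```shell
# Find the service's main PID, kill it hard, and confirm systemd restarts it.
MAINPID=$(systemctl show -p MainPID --value postgrespro-ent-14.service)
sudo kill -9 "$MAINPID"
sleep 10
sudo systemctl -l status postgrespro-ent-14.service   # should be active (running) again
```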
VM backup/restore
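A minimal backup sketch based on Yandex.Cloud disk snapshots (disk and VM names follow this guide's conventions; stopping the VM first keeps the snapshot consistent):

```shell
# Stop the VM, snapshot its OS disk, then start it again.
yc compute instance stop $VM_NAME
yc compute snapshot create \
  --name $VM_NAME-osdisk-snap \
  --disk-name $VM_NAME-osdisk
yc compute snapshot list
yc compute instance start $VM_NAME
```

To restore, a new disk can be created from the snapshot (see 'yc compute disk create --help' for the snapshot source flag) and attached to a new VM.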
Postgres Pro Enterprise Multimaster
Multimaster is a Postgres Pro Enterprise extension (with a set of core patches) that turns Postgres Pro Enterprise into a synchronous shared-nothing cluster. It provides Online Transaction Processing (OLTP) scalability for read transactions, along with high availability and automatic disaster recovery.
You can find more information about Postgres Pro Enterprise Multimaster at:
https://postgrespro.com/docs/enterprise/14/multimaster
Let's proceed with the 3-node Multimaster installation and configuration, assuming the Multimaster node names are 'myvm-ent-01', 'myvm-ent-02' and 'myvm-ent-03'.
- Create three VMs:
for i in `seq 1 3`; do
yc compute instance create \
--name myvm-ent-0$i \
--hostname myvm-ent-0$i \
--cores 2 \
--memory 4 \
--create-boot-disk name=myvm-ent-0$i-osdisk,image-folder-id=standard-images,image-family=$IMAGE_FAMILY \
--network-interface subnet-name=$NET-subnet,nat-ip-version=ipv4,ipv4-address=192.168.0.10$i \
--ssh-key ~/.ssh/id_rsa.pub \
--async
done
Private IP-addresses assignment: 192.168.0.101 (myvm-ent-01), 192.168.0.102 (myvm-ent-02), 192.168.0.103 (myvm-ent-03).
- Now connect to each of three VMs:
for i in `seq 1 3`; do
export VM_IP_ADDRESS_0$i=$(yc compute instance get myvm-ent-0$i | grep -E ' +address' | tail -n 1 | awk '{print $2}')
done
ssh yc-user@$VM_IP_ADDRESS_01
ssh yc-user@$VM_IP_ADDRESS_02
ssh yc-user@$VM_IP_ADDRESS_03
- and run the same set of commands on all of them:
for i in `seq 1 3`; do
sudo sh -c "echo '192.168.0.10$i myvm-ent-0$i' >> /etc/hosts"
done
- Configure replicated 'mydb' database:
sudo su - postgres
psql -c "create user myuser with superuser encrypted password 'myuserpassword'"
psql --username=myuser -c "create database mydb"
sed -i 's/PGDATABASE=postgres/PGDATABASE=mydb/' .pgpro_profile
sed -i 's/PGUSER=postgres/PGUSER=myuser/' .pgpro_profile
source .pgpro_profile
for i in `seq 1 3`; do
echo "hostssl replication myuser myvm-ent-0$i md5" >> $PGDATA/pg_hba.conf
echo "myvm-ent-0$i:5433:mydb:myuser:myuserpassword" >> ~/.pgpass
done
chmod 0600 ~/.pgpass
pg_ctl reload
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "# MULTIMASTER SETTINGS" >> $PGDATA/postgresql.conf
echo "#------------------------------------------------------------------------------" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "multimaster.max_nodes = 3" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
echo "" >> $PGDATA/postgresql.conf
psql -c "alter system set default_transaction_isolation to 'read committed'"
psql -c "alter system set wal_level to logical"
psql -c "alter system set max_connections to 100"
psql -c "alter system set max_prepared_transactions to 300"
psql -c "alter system set max_wal_senders to 10"
psql -c "alter system set max_replication_slots to 10"
psql -c "alter system set max_worker_processes to 250"
psql -c "alter system set shared_preload_libraries to multimaster,pg_stat_statements,pg_buffercache,pg_wait_sampling"
psql -c "alter system set wal_sender_timeout to 0"
exit
- Configure mamonsu agent for 'mydb' database and restart mamonsu service:
sudo sed -i 's|user = mamonsu|user = myuser|' /etc/mamonsu/agent.conf
sudo sed -i 's|database = mamonsu|database = mydb|' /etc/mamonsu/agent.conf
sudo systemctl restart mamonsu.service
- Restart Postgres Pro database service and verify its status:
sudo systemctl restart postgrespro-ent-14.service
sudo systemctl -l status postgrespro-ent-14.service
exit
- Now connect to the first VM:
ssh yc-user@$VM_IP_ADDRESS_01
- and create Multimaster extension:
sudo su - postgres
psql
create extension if not exists multimaster;
select mtm.init_cluster('dbname=mydb user=myuser host=myvm-ent-01 port=5433 sslmode=require','{"dbname=mydb user=myuser host=myvm-ent-02 port=5433 sslmode=require", "dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require"}');
\q
- Create other extensions required for mamonsu service:
psql -c "create extension if not exists pg_buffercache"
psql -c "create extension if not exists pg_stat_statements"
psql -c "create extension if not exists pg_wait_sampling"
- Verify the extensions have been successfully created:
psql --host=myvm-ent-01 -c "select * from pg_extension"
psql --host=myvm-ent-02 -c "select * from pg_extension"
psql --host=myvm-ent-03 -c "select * from pg_extension"
- Configure mamonsu for Multimaster:
mamonsu bootstrap --dbname mydb --username postgres --host 127.0.0.1 --port 5433 --mamonsu-username=myuser
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_config')"
psql --host=myvm-ent-01 -c "select mtm.make_table_local('mamonsu_timestamp_master_2_7_1')"
- Use the following commands to monitor Multimaster status:
psql --host=myvm-ent-01 -x -c "select mtm.status()"
psql --host=myvm-ent-02 -x -c "select mtm.status()"
psql --host=myvm-ent-03 -x -c "select mtm.status()"
psql --host=myvm-ent-01 -x -c "select mtm.nodes()"
psql --host=myvm-ent-02 -x -c "select mtm.nodes()"
psql --host=myvm-ent-03 -x -c "select mtm.nodes()"
- To return to the 'yc' interface, run the 'exit' command twice
External connection to Postgres Pro Enterprise Multimaster database
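The psql and pgAdmin settings shown earlier apply to each node individually (port 5433, database 'mydb', user 'myuser'). In addition, libpq (PostgreSQL 10+) accepts several hosts in one connection URI and fails over between them; a sketch using the external IP variables defined earlier:

```shell
# Try each node in order until a writable server accepts the connection.
psql "postgresql://myuser@$VM_IP_ADDRESS_01:5433,$VM_IP_ADDRESS_02:5433,$VM_IP_ADDRESS_03:5433/mydb?target_session_attrs=read-write"
```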
Postgres Pro Enterprise Multimaster (2-node + referee node configuration)
If a 3-node Multimaster configuration is overkill (it triples the storage required for the database), a 2-node Multimaster configuration is still possible: either nominate one of the two nodes as a 'major' node (multimaster.major_node=on), or use a lightweight referee node instead of a third full node.
To migrate from a 3-node Multimaster to a 2-node + referee node Multimaster, do the following.
- Exclude the 'myvm-ent-03' node from the Multimaster configuration, remove the Multimaster settings from the 'myvm-ent-03' referee node, and change the Multimaster settings on the 'myvm-ent-01' and 'myvm-ent-02' nodes:
sudo su - postgres
sed -i '/multimaster/d' $PGDATA/postgresql.conf
psql --host=myvm-ent-03 -c "alter system set shared_preload_libraries to pg_stat_statements,pg_buffercache,pg_wait_sampling"
psql --host=myvm-ent-01 -c "select mtm.drop_node(3)"
psql --host=myvm-ent-01 -x -c "select mtm.nodes()"
psql --host=myvm-ent-01 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"
psql --host=myvm-ent-02 -c "alter system set multimaster.referee_connstring = 'dbname=mydb user=myuser host=myvm-ent-03 port=5433 sslmode=require'"
exit
sudo systemctl restart postgrespro-ent-14.service
sudo su - postgres
psql --host=myvm-ent-03 -c "drop extension multimaster"
psql --host=myvm-ent-03 -c "drop publication if exists multimaster"
psql --host=myvm-ent-03 -c "create extension referee"
exit
sudo systemctl restart postgrespro-ent-14.service
- Apply the new Multimaster settings on the 'myvm-ent-01' and 'myvm-ent-02' nodes:
sudo systemctl restart postgrespro-ent-14.service
- Finally, drop replication slots and finalize settings on 'myvm-ent-03' referee node:
sudo su - postgres
sed -i '/^#/!d' $PGDATA/postgresql.auto.conf
echo "shared_preload_libraries = 'pg_stat_statements, pg_buffercache, pg_wait_sampling'" >> $PGDATA/postgresql.auto.conf
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_1')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_filter_slot_1')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_slot_2')"
psql --host=myvm-ent-03 -c "select pg_drop_replication_slot('mtm_filter_slot_2')"
psql --host=myvm-ent-03 -c "select pg_replication_origin_drop('mtm_slot_1')"
psql --host=myvm-ent-03 -c "select pg_replication_origin_drop('mtm_slot_2')"
exit
sudo systemctl restart postgrespro-ent-14.service
- Check the status of Multimaster on each node:
sudo su - postgres
psql --dbname=mydb --username=myuser --host=myvm-ent-01 --port=5433 -x -c "select mtm.nodes()"
psql --dbname=mydb --username=myuser --host=myvm-ent-02 --port=5433 -x -c "select mtm.nodes()"
psql --dbname=mydb --username=myuser --host=myvm-ent-03 --port=5433 -c "select * from referee.decision"
exit
Postgres Pro Enterprise CFS (compressed file system)
To use the functionality of the Postgres Pro Enterprise CFS (compressed file system) extension, proceed with the following commands.
- Create an OS filesystem directory for the 'cfs_ts' tablespace and create the 'cfs_ts' tablespace in the database:
sudo su - postgres
mkdir $PGDATA/../cfs_ts
chmod 0700 $PGDATA/../cfs_ts
psql -c "create tablespace cfs_ts location '/var/lib/pgpro/ent-14/cfs_ts' with (compression=true)"
exit
- Make sure the 'cfs_ts' tablespace has been created with 'compression=true' option:
sudo su - postgres
psql -c "select * from pg_tablespace"
exit
Use one of the following ways to place new database objects in the 'cfs_ts' tablespace:
- Upon database object creation:
sudo su - postgres
psql -c "create table t1 (t int) tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
- Setting default tablespace for current database connection:
sudo su - postgres
psql
set default_tablespace=cfs_ts;
show default_tablespace;
create table t2 (t int);
select tablename, tablespace from pg_tables where schemaname = 'public';
\q
exit
- Setting default tablespace for particular database:
sudo su - postgres
psql --dbname=postgres -c "alter database mydb set tablespace cfs_ts"
psql -c "select datname, dattablespace from pg_database"
exit
- Setting default tablespace for particular user/role:
sudo su - postgres
psql --username=postgres -c "alter user myuser set default_tablespace to 'cfs_ts'"
psql -c "select usename, useconfig from pg_user"
psql -c "select rolname, rolconfig from pg_roles"
exit
Use one of the following ways to move existing database objects from one tablespace ('pg_default') to another ('cfs_ts'):
- One by one:
sudo su - postgres
psql -c "create table t3 (t int)"
psql -c "alter table t3 set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
- All together:
sudo su - postgres
psql -c "alter table all in tablespace pg_default set tablespace cfs_ts"
psql -c "select tablename, tablespace from pg_tables where schemaname = 'public'"
exit
Depending on the data stored in the 'cfs_ts' tablespace, the compression ratio may vary.
Postgres Pro Enterprise 10/11/12/13/14 VM content:
Linux Debian 10 (64-bit)
- OS-account - ‘postgres’
- OS-account - ‘zabbix’
- OS-account - ‘mamonsu’
OS-disk size - 10 GB
- xfs filesystem ‘/’ (/dev/vda1) - 10 GB
Main database - Postgres Pro Enterprise
- DB version: 10/11/12/13/14
- TCP-port: 5433 (opened in OS-firewall settings)
- configuration file: /var/lib/pgsql/.pgpro_profile
- database account: ’postgres’
Database monitoring (server)
- zabbix-server version: 4.x
- TCP-ports: 80/443 (opened in OS-firewall settings)
- account: ‘Admin’
Database monitoring (agent)
- zabbix-agent version: 4.x
- mamonsu-agent version: 2.x
- configuration file: /etc/mamonsu/agent.conf
Auxiliary PostgreSQL database (used as the zabbix-server database)
- DB version: 10/11/12/13/14
- TCP-port: 5432
- configuration file: /var/lib/pgsql/.pgsql_profile
- database account: 'postgres'
Documentation links
- Postgres Pro Enterprise: https://postgrespro.com/docs/enterprise
- Zabbix: https://www.zabbix.com/documentation
- Mamonsu: https://github.com/postgrespro/mamonsu/#mamonsu-monitoring-agent-for-postgresql
- pgAdmin: https://www.pgadmin.org/docs