1.3. Quickstart Guide #
- 1.3.1. Cluster Configuration
- 1.3.2. Preparation
- 1.3.3. Deploy an etcd One-Node Cluster
- 1.3.4. Deploy Shardman Nodes
- 1.3.5. Initialize the Shardman Cluster
- 1.3.6. Add Nodes to the Shardman Cluster
- 1.3.7. Check the Shardman Cluster Status
- 1.3.8. Connect to the Shardman Cluster
- 1.3.9. Create Sharded Tables
- 1.3.10. Example: Deploy a Multi-Node etcd Cluster
Shardman is composed of several software components:
- PostgreSQL 14 DBMS with a set of patches.
- Shardman extension.
- Management tools and services, including the built-in stolon manager that provides high availability.
Postgres Pro Shardman and stolon store their configuration in an etcd cluster, so we can either use an existing etcd cluster or deploy a simple one-node one.
The shardmand daemon monitors the cluster configuration and manages stolon clusters, which are used to guarantee high availability of all shards. The common Shardman configuration (shardmand, stolon) is stored in an etcd cluster.
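Once the cluster is initialized (see the steps below), you can inspect what Shardman and stolon keep in the store. A minimal sketch, assuming etcdctl is on your PATH and etcd is reachable at the address used later in this guide; the exact key layout depends on the Shardman version:

# List all keys currently stored in etcd
etcdctl --endpoints=http://etcd:2379 get "" --prefix --keys-only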
Currently, Shardman packages are available for:
- Ubuntu 20.04/22.04
- Debian 10/11/12
- Red Hat Enterprise Linux 7/8/9
- Red OS 7.3/7.3.1/7.3.2
- Alt 9/10
- AstraLinux 1.7 (Smolensk)
1.3.1. Cluster Configuration #
Assume that we have three nodes for deploying Postgres Pro Shardman. Let's use the first one for a one-node etcd cluster and the other two for a two-node Postgres Pro Shardman cluster.
Let’s suppose that we have the following node names and IP addresses:
192.0.1.1  etcd  - etcd one-node cluster
192.0.1.20 sdm01 - Shardman node1
192.0.1.21 sdm02 - Shardman node2
Each node has 4 GB of RAM, a 20 GB HDD, 2 CPUs, and Ubuntu 22.04 installed.
1.3.2. Preparation #
1.3.2.1. Add host names to /etc/hosts #
This step must be performed on all nodes.
sudo /bin/sh -c 'cat << EOF >> /etc/hosts
192.0.1.1 etcd
192.0.1.20 sdm01
192.0.1.21 sdm02
EOF'
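To make sure the names resolve to the expected addresses, you can query each of them, for example:

# Each command should print the address added above
getent hosts etcd
getent hosts sdm01
getent hosts sdm02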
1.3.2.2. Time Synchronization #
This step must be performed on all nodes.
Deploy and start the chrony daemon on all hosts.
sudo apt install -y chrony
By default, chrony gets the time from available servers on the Internet or from a local time server. You can check the available time servers as follows:
chronyc sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                   1   6     7     1    -98us[  -98us] +/-   11ms
^* time.cloudflare.com           3   6     7     1   +139us[ +163us] +/-   11ms
^+ tms04.deltatelesystems.ru     1   6     7     1   -381us[ -357us] +/-   17ms
It is desirable to synchronize the cluster time with your own or a local time server. To do this, make changes similar to the following to the chrony configuration:
cat /etc/chrony/chrony.conf
server 192.0.1.100 iburst
keyfile /etc/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony

sudo systemctl restart chrony
Check that chrony is connected to the appropriate server.
chronyc sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                   8   6    17    37    +14us[  +70us] +/-  161us

chronyc tracking

Reference ID    : 0A80000C (ntp.local)
Stratum         : 9
Ref time (UTC)  : Wed Nov 15 11:58:52 2023
System time     : 0.000000004 seconds slow of NTP time
Last offset     : -0.000056968 seconds
RMS offset      : 0.000056968 seconds
Frequency       : 10.252 ppm fast
Residual freq   : -2.401 ppm
Skew            : 364.419 ppm
Root delay      : 0.000455358 seconds
Root dispersion : 0.010503666 seconds
Update interval : 2.1 seconds
Leap status     : Normal
1.3.3. Deploy an etcd One-Node Cluster #
See also Section 1.3.10, Example: Deploy a Multi-Node etcd Cluster.
Install the following packages:
sudo apt install -y vim curl
To connect a Postgres Pro Shardman repository, run:

curl -fsSL -u "<user>:<password>" https://repo.postgrespro.ru/sdm/sdm-14/keys/pgpro-repo-add.sh > pgpro-repo-add.sh
chmod +x pgpro-repo-add.sh

Then open the pgpro-repo-add.sh file, specify the repository password in the PASSWORD variable, and run:

sudo pgpro-repo-add.sh
Install etcd-sdm packages:
sudo apt install -y etcd-sdm
In the file that lists environment variables, set values specific to your setup:
sudo vim /etc/default/etcd-sdm

ETCD_NAME=etcd
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://192.0.1.1:2379
ETCD_MAX_SNAPSHOTS=5
ETCD_MAX_WALS=5
ETCD_AUTO_COMPACTION_MODE=periodic
ETCD_AUTO_COMPACTION_RETENTION=5m
ETCD_QUOTA_BACKEND_BYTES=6442450944
ETCD_DATA_DIR=/var/lib/etcd-sdm/sdm-14
This file will be loaded at etcd start.
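If you want to verify that the service actually reads this file, you can inspect the packaged unit. A quick check, assuming the unit references /etc/default/etcd-sdm through an EnvironmentFile directive:

# Show the unit definition and the environment file it loads
systemctl cat etcd-sdm | grep -i EnvironmentFile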
Clear the etcd data directory:
sudo rm -rf /var/lib/etcd-sdm/sdm-14/*
Restart the etcd-sdm service:
sudo systemctl restart etcd-sdm
For your user, add /opt/pgpro/sdm-14/bin to the PATH environment variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-14/bin" >> .bashrc
source .bashrc
Check that etcd is properly configured:
etcdctl --endpoints=http://192.0.1.1:2379 endpoint status -w table

+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX |             ERRORS             |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
| http://192.0.1.1:2379  | 9324a99282752a09 | 3.5.9   | 2.1 GB  | true      | false      |        14 |   91459207 |           91459207 |  memberID:10602785869456026121 |
|                        |                  |         |         |           |            |           |            |                    |  alarm:NOSPACE                 |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
The etcd one-node cluster is properly configured and ready to serve requests.
To prevent bloat when etcd is intensively used, add a defragmentation command to cron:
sudo sh -c ' { crontab -l; echo "@hourly /opt/pgpro/sdm-14/bin/etcdctl defrag"; } | crontab'
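If the backend quota has already been exceeded and etcd reports a NOSPACE alarm (as in the sample status output above), defragmentation alone is not enough: the alarm must also be cleared. A possible sequence, assuming the endpoint used above:

# Reclaim space, then clear the NOSPACE alarm
etcdctl --endpoints=http://192.0.1.1:2379 defrag
etcdctl --endpoints=http://192.0.1.1:2379 alarm disarm
# Prints nothing when no alarms remain
etcdctl --endpoints=http://192.0.1.1:2379 alarm list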
1.3.4. Deploy Shardman Nodes #
Let’s add a Postgres Pro Shardman repository on each node:
Use the same repository user and password as in Section 1.3.3. Run:

curl -fsSL -u "<user>:<password>" https://repo.postgrespro.ru/sdm/sdm-14/keys/pgpro-repo-add.sh > pgpro-repo-add.sh
chmod +x pgpro-repo-add.sh

Then open the pgpro-repo-add.sh file, specify the repository password in the PASSWORD variable, and run:

pgpro-repo-add.sh
The next step is to install the packages (on each node):

sudo apt update
sudo apt install -y postgrespro-sdm-14-server postgrespro-sdm-14-client postgrespro-sdm-14-contrib postgrespro-sdm-14-libs pg-probackup-sdm-14 shardman-services shardman-tools
Suppose we have chosen the default cluster name cluster0. The next step is to put the Shardman environment variables into the /etc/shardman directory (on each node):

sudo sh -c 'cat << EOF > /etc/shardman/shardmand-cluster0.env
SDM_CLUSTER_NAME=cluster0
SDM_LOG_LEVEL=info
SDM_STORE_ENDPOINTS=http://etcd:2379
EOF'
The file and directory are created with sudo, but shardmanctl is later run without sudo and therefore cannot access the file with the environment variables. To let it access the file, either export the variables in your environment or grant your user access rights to the file and the directory.
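For example, you can either export the variables for your shell session or make the configuration readable by your user; a sketch, to be adjusted to your security policy:

# Option 1: export the variables used by shardmanctl in the current shell
export SDM_CLUSTER_NAME=cluster0
export SDM_STORE_ENDPOINTS=http://etcd:2379

# Option 2: allow non-root users to read the configuration directory and file
sudo chmod -R a+rX /etc/shardman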
For your user, add /opt/pgpro/sdm-14/bin to the PATH environment variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-14/bin" >> .bashrc
source .bashrc
Let’s generate a sample configuration with the Shardman utilities (only on one node).
shardmanctl config generate > spec.json
In this step, you can make changes to the cluster specification (configuration), e.g., change the password or the PostgreSQL shared_buffers parameter.
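For example, you can inspect the generated file before initialization. A sketch using jq, where jq itself and the exact key path (such as a pgParameters map inside the specification) are assumptions that may differ between Shardman versions, so check your own spec.json first:

# Pretty-print the whole specification
jq . spec.json
# Hypothetical path; adjust to the actual layout of your spec.json
jq '.ShardSpec.pgParameters.shared_buffers' spec.json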
1.3.5. Initialize the Shardman Cluster #
Now we have some final steps. First, let's initialize the cluster configuration in etcd (on any one node only):
shardmanctl init -f spec.json
The expected output is:
2023-04-18T12:30:03.043Z  DEBUG  cmd/common.go:100       Waiting for metadata lock...
2023-04-18T12:30:03.048Z  DEBUG  cluster/cluster.go:365  DataDir is not specified, setting to default /var/lib/pgpro/sdm-14/data
Enable and start the shardmand service (on each node):
sudo systemctl enable --now shardmand@cluster0
sudo systemctl status shardmand@cluster0

● shardmand@cluster0.service - deployment daemon for shardman
     Loaded: loaded (/lib/systemd/system/shardmand@.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-04-18 12:28:18 UTC; 2min 13s ago
       Docs: https://github.com/postgrespro/shardman
   Main PID: 618 (shardmand)
      Tasks: 10 (limit: 4571)
     Memory: 32.0M
        CPU: 422ms
     CGroup: /system.slice/system-shardmand.slice/shardmand@cluster0.service
             └─618 /opt/pgpro/sdm-14/bin/shardmand --cluster-name cluster0 --system-bus --user postgres
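If the service does not reach the active (running) state, or you want to watch what shardmand is doing, its logs are available in the journal, for example:

# Follow the shardmand logs for cluster0
journalctl -u shardmand@cluster0 -f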
1.3.6. Add Nodes to the Shardman Cluster #
In this step we assume that all previous steps were executed successfully: the etcd cluster is working properly, the time on all hosts is synchronized, and the shardmand daemon is running on sdm01 and sdm02. The final step should be executed with the shardmanctl command as follows:
shardmanctl nodes add -n sdm01,sdm02 \
        --cluster-name cluster0 \
        --log-level debug \
        --store-endpoints=http://etcd:2379
The expected output should be:
2023-04-18T12:43:11.300Z  DEBUG  cmd/common.go:100      Waiting for metadata lock...
2023-04-18T12:43:11.306Z  INFO   cluster/store.go:277   Checking if shardmand on all nodes have applied current cluster configuration
✓ Waiting for shardmand on node sdm01 to apply current configuration: success 0.000s
✓ Waiting for shardmand on node sdm02 to apply current configuration: success 0.000s
2023-04-18T12:43:11.307Z  INFO   add/case.go:112        Initting Stolon instances...
2023-04-18T12:43:11.312Z  INFO   add/case.go:170        Waiting for Stolon daemons to start... make sure shardmand daemons are running on the nodes
✓ Waiting for Stolon daemons of rg clover-1-sdm01: success 31.012s
✓ Waiting for Stolon daemons of rg clover-1-sdm02: success 0.012s
2023-04-18T12:43:42.336Z  INFO   add/case.go:187        Adding repgroups...
✓ waiting rg 1 config apply: done 7.014s
2023-04-18T12:43:49.444Z  DEBUG  broadcaster/worker.go:33   start broadcaster worker for repgroup id=1
2023-04-18T12:43:49.453Z  DEBUG  broadcaster/worker.go:51   repgroup 1 connect established
2023-04-18T12:43:49.453Z  DEBUG  commands/addrepgroup.go:575  waiting for extension lock...
2023-04-18T12:43:49.453Z  DEBUG  commands/addrepgroup.go:137  Loading schema into replication group rg 1 ...
2023-04-18T12:44:25.665Z  DEBUG  rebalance/service.go:528   wait all tasks finish
2023-04-18T12:44:25.666Z  DEBUG  broadcaster/worker.go:75   finish broadcaster worker for repgroup id=1
2023-04-18T12:44:25.666Z  DEBUG  broadcaster/worker.go:75   finish broadcaster worker for repgroup id=2
2023-04-18T12:44:25.666Z  INFO   add/case.go:221        Successfully added nodes sdm01, sdm02 to the cluster
The “Successfully added nodes sdm01, sdm02 to the cluster” message means that everything is fine and nodes sdm01 and sdm02 are working properly.
1.3.7. Check the Shardman Cluster Status #
Let's check the status of the cluster nodes:
shardmanctl status

┌──────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                        == STORE STATUS ==                                         │
├───────────┬──────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                  │ REPLICATION GROUP   │ NODE                │
├───────────┼──────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ Warning   │ Store has only one member, consider      │                     │                     │
│           │ deploying store cluster                  │                     │                     │
└───────────┴──────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                      == TOPOLOGY STATUS ==                                       │
├───────────┬─────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                 │ REPLICATION GROUP   │ NODE                │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ CROSS     │ Topology placement policy is CROSS      │                     │                     │
└───────────┴─────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                      == METADATA STATUS ==                                       │
├───────────┬─────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                 │ REPLICATION GROUP   │ NODE                │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Metadata is OK                          │                     │                     │
└───────────┴─────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                     == SHARDMAND STATUS ==                                       │
├───────────┬─────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                 │ REPLICATION GROUP   │ NODE                │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ shardmand on node sdm01 is OK           │                     │ sdm01               │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ shardmand on node sdm02 is OK           │                     │ sdm02               │
└───────────┴─────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                 == REPLICATION GROUP STATUS ==                                   │
├───────────┬─────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                 │ REPLICATION GROUP   │ NODE                │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm01 is OK  │ clover-1-sdm01      │                     │
├───────────┼─────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm02 is OK  │ clover-1-sdm02      │                     │
└───────────┴─────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                        == MASTER STATUS ==                                        │
├───────────┬──────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                  │ REPLICATION GROUP   │ NODE                │
├───────────┼──────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm01 master  │ clover-1-sdm01      │ sdm01:5432          │
│           │ is running on sdm01:5432                 │                     │                     │
├───────────┼──────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm02 master  │ clover-1-sdm02      │ sdm02:5432          │
│           │ is running on sdm02:5432                 │                     │                     │
└───────────┴──────────────────────────────────────────┴─────────────────────┴─────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                      == DICTIONARY STATUS ==                                      │
├───────────┬──────────────────────────────────────────┬─────────────────────┬─────────────────────┤
│ STATUS    │ MESSAGE                                  │ REPLICATION GROUP   │ NODE                │
├───────────┼──────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm01         │ clover-1-sdm01      │                     │
│           │ dictionary is OK                         │                     │                     │
├───────────┼──────────────────────────────────────────┼─────────────────────┼─────────────────────┤
│ OK        │ Replication group clover-1-sdm02         │ clover-1-sdm02      │                     │
│           │ dictionary is OK                         │                     │                     │
└───────────┴──────────────────────────────────────────┴─────────────────────┴─────────────────────┘
1.3.8. Connect to the Shardman Cluster #
To connect to the cluster, we should get the cluster connection string on any cluster node (sdm01 or sdm02):
shardmanctl getconnstr

dbname=postgres host=sdm01,sdm02 password=!!!CHANGE_ME!!! port=5432,5432 user=postgres
And then let’s try to connect:
psql -d 'dbname=postgres host=sdm01,sdm02 password=!!!CHANGE_ME!!! port=5432,5432 user=postgres'

psql (14.7)
Type "help" for help.

postgres=#
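Instead of copying the string by hand, you can pass the output of shardmanctl getconnstr directly to psql; a small convenience, assuming getconnstr prints only the connection string, as shown above:

psql -d "$(shardmanctl getconnstr)"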
1.3.9. Create Sharded Tables #
Let's try to create a sharded table and check if everything is working properly.
postgres=# create table x(id int primary key, t text) with (distributed_by='id',num_parts=2);
CREATE TABLE
postgres=# \d
              List of relations
 Schema |  Name   |       Type        |  Owner
--------+---------+-------------------+----------
 public | x       | partitioned table | postgres
 public | x_0     | table             | postgres
 public | x_1_fdw | foreign table     | postgres
(3 rows)

postgres=# \d x_0
                 Table "public.x_0"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           | not null |
 t      | text    |           |          |
Partition of: x FOR VALUES WITH (modulus 2, remainder 0)
Indexes:
    "x_0_pkey" PRIMARY KEY, btree (id)

postgres=# \d x_1_fdw
                    Foreign table "public.x_1_fdw"
 Column |  Type   | Collation | Nullable | Default | FDW options
--------+---------+-----------+----------+---------+-------------
 id     | integer |           | not null |         |
 t      | text    |           |          |         |
Partition of: x FOR VALUES WITH (modulus 2, remainder 1)
Server: shardman_rg_2
FDW options: (table_name 'x_1')

postgres=# insert into x values (1,'t'),(2,'t'),(3,'t');
INSERT 0 3
postgres=# select * from x_0;
 id | t
----+---
  1 | t
  2 | t
(2 rows)

postgres=# select * from x_1_fdw;
 id | t
----+---
  3 | t
(1 row)
Everything works as expected.
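Queries against the parent table are routed to both the local and the foreign partition, so the whole data set is visible from any node. For example, the following should return 3, counting the rows inserted above:

# Run a distributed query through the parent table
psql -d "$(shardmanctl getconnstr)" -c "select count(*) from x;"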
1.3.10. Example: Deploy a Multi-Node etcd Cluster #
The process is described for the following servers:
192.0.1.1 etcd1
192.0.1.2 etcd2
192.0.1.3 etcd3
Install the needed packages on each server:
sudo apt install -y vim curl
To connect the repository, on each server, run:
sudo curl -fsSL https://repo.postgrespro.ru/sdm/sdm-14/keys/pgpro-repo-add.sh | bash
Install etcd-sdm packages on each server:
sudo apt install -y etcd-sdm
For each server, edit the file that lists environment variables, replacing placeholders in angle brackets with specific values:
sudo vim /etc/default/etcd-sdm

ETCD_NAME=<hostname>
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://<host ip address>:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://<host ip address>:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_MAX_SNAPSHOTS=5
ETCD_MAX_WALS=5
ETCD_AUTO_COMPACTION_MODE=periodic
ETCD_AUTO_COMPACTION_RETENTION=5m
ETCD_QUOTA_BACKEND_BYTES=6442450944
ETCD_DATA_DIR=/var/lib/etcd-sdm/sdm-14
ETCD_INITIAL_CLUSTER=etcd1=http://<ip etcd1>:2380,etcd2=http://<ip etcd2>:2380,etcd3=http://<ip etcd3>:2380
This file, with settings specific to each server, will be loaded when etcd starts on that server.
Clear the etcd data directory:
sudo rm -rf /var/lib/etcd-sdm/sdm-14/*
Restart the etcd-sdm service on each server:
sudo systemctl restart etcd-sdm
For your user, add /opt/pgpro/sdm-14/bin to the PATH environment variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-14/bin" >> .bashrc
source .bashrc
Check that etcd is properly configured:
etcdctl member list -w table

+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
|  318be6342e6d9ac | started | etcd1 | http://192.0.1.1:2380      | http://192.0.1.1:2379      |      false |
| 9e49480544aedb89 | started | etcd2 | http://192.0.1.2:2380      | http://192.0.1.2:2379      |      false |
| bb3772bfa22482d7 | started | etcd3 | http://192.0.1.3:2380      | http://192.0.1.3:2379      |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+

etcdctl --endpoints=http://192.0.1.1:2380,http://192.0.1.2:2380,http://192.0.1.3:2380 endpoint status -w table

+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| http://192.0.1.1:2380      |  318be6342e6d9ac | 3.5.9   | 5.7 MB  | true      | false      |        13 |     425686 |             425686 |        |
| http://192.0.1.2:2380      | 9e49480544aedb89 | 3.5.9   | 5.7 MB  | false     | false      |        13 |     425686 |             425686 |        |
| http://192.0.1.3:2380      | bb3772bfa22482d7 | 3.5.9   | 5.7 MB  | false     | false      |        13 |     425686 |             425686 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
The etcd cluster is properly configured and ready to serve requests.
To prevent bloat when etcd is intensively used, add a defragmentation command to cron:
sudo sh -c '{ crontab -l; echo "@hourly /opt/pgpro/sdm-14/bin/etcdctl defrag"; } | crontab'
The resulting client endpoints string of the etcd cluster is:

http://<ip etcd1>:2379,http://<ip etcd2>:2379,http://<ip etcd3>:2379

Specify it in the SDM_STORE_ENDPOINTS variable of the /etc/shardman configuration file and in the --store-endpoints parameter of shardmanctl.
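For example, with the default cluster name cluster0, the store settings on the Shardman nodes could look like this; a sketch with placeholder host names, so substitute your actual etcd host names or IP addresses:

# /etc/shardman/shardmand-cluster0.env
SDM_CLUSTER_NAME=cluster0
SDM_LOG_LEVEL=info
SDM_STORE_ENDPOINTS=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379

# The same endpoints passed explicitly on the command line, for example:
shardmanctl status --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379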