33.2. Scaling the Cluster #

The Postgres Pro Shardman architecture allows you to scale out your cluster without any downtime. This section describes how to add nodes to a Postgres Pro Shardman cluster to improve query performance and scalability. If a cluster does not meet your performance or storage capacity requirements, you can add new nodes to it.

33.2.1. Adding and Removing a Node #

In the manual-topology mode, to add primary nodes to a cluster, use the shardmanctl nodes add command. It adds the listed nodes to the cluster as primaries, creating a separate replication group for each of them. For example, create a cluster with three primary nodes and manual topology (PlacementPolicy set to manual in sdmspec.json):

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 init -f sdmspec.json
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 nodes add -n n1,n2,n3
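For reference, the relevant fragment of sdmspec.json might look as follows. This is only a minimal sketch: all other cluster settings are omitted, and the surrounding structure of your spec file may differ.

                        {
                            "PlacementPolicy": "manual",
                            ...
                        }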

To view the topology of a cluster, use the shardmanctl cluster topology command:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 cluster topology

The command output is as follows:

                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-1-n1, RGID - 1 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n1            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘
                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-2-n2, RGID - 2 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n2            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘
                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-3-n3, RGID - 3 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n3            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘

Add the n4, n5, and n6 nodes as replicas using the shardmanctl shard add command:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 add -n n4
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-2-n2 add -n n5
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-3-n3 add -n n6

In manual-topology mode, one node can be added to more than one replication group.
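For example, the following commands would add the same node as a replica to two different replication groups (the node name n7 here is purely illustrative and is not part of the cluster configured above):

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 add -n n7
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-2-n2 add -n n7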

As a result, we get the following cluster configuration:

                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n1, RGID - 1 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n4           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-2-n2, RGID - 2 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n5           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-3-n3, RGID - 3 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n3           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n6           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                    

To remove a replica, run the shardmanctl shard rm command. For example:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 rm -n n4

To remove a primary node, first run the shardmanctl shard switch command to switch the primary role to one of its replicas, and then remove the old primary. For example:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 switch --new-primary n4
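After the switchover completes, n4 serves as the new primary of clover-1-n1. Assuming the old primary n1 now remains in the replication group as a replica, it can be removed with the same shard rm command shown above (a sketch based on this assumption):

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 rm -n n1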