2.2. Scaling the Cluster

The Shardman architecture allows you to scale out your cluster without downtime. If a Shardman cluster no longer meets your performance or storage capacity requirements, you can add new nodes to it. This section describes how to add nodes to a Shardman cluster to improve query performance and scalability.

2.2.1. Adding and Removing a Node

How nodes are added to a cluster and where their replicas are located depends on the type of high-availability configuration. Shardman supports two types of configurations: cross-replication mode and manual-topology mode. The PlacementPolicy parameter in sdmspec.json selects the cluster behavior and accepts two values: cross and manual. The default is cross. For example:

                    {
                        "PlacementPolicy": "cross",
                        "Repfactor": 1,
                        ...
                    }

2.2.1.1. Cross Replication

The shardmanctl nodes add command adds new nodes to a Shardman cluster. With the cross placement policy, nodes are added to a cluster in clovers. Each node in a clover runs a primary DBMS instance and replicas of the other nodes in the clover. The number of replicas is determined by the Repfactor configuration parameter, so each clover consists of Repfactor + 1 nodes and can withstand the loss of Repfactor nodes. For example, with Repfactor=1, each clover consists of two nodes, so a four-node cluster forms two clovers. The following commands create a cluster of four nodes with Repfactor=1 and cross replication:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 init -f sdmspec.json
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 nodes add -n n1,n2,n3,n4

To view the cluster topology, use the shardmanctl cluster topology command:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 cluster topology

The command output is as follows:

                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n1, RGID - 1 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5433      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n2, RGID - 2 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5433      │         STANDBY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5432      │         PRIMARY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-2-n3, RGID - 3 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n3           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n4           │       5433      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-2-n4, RGID - 4 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n3           │       5433      │         STANDBY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n4           │       5432      │         PRIMARY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘

The shardmanctl nodes rm command removes nodes from a Shardman cluster. This command removes the clovers containing the specified nodes from the cluster. The last clover in the cluster cannot be removed. Any data (such as partitions of sharded relations) on the removed replication groups is migrated to the remaining replication groups using logical replication, and all references to the removed replication groups (including definitions of foreign servers) are removed from the metadata of the remaining replication groups. Finally, the metadata in etcd is updated.

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 nodes rm -n n3

View the cluster topology again to verify that the clover containing n3 (nodes n3 and n4) has been removed:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 cluster topology

The command output is as follows:

                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n1, RGID - 1 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5433      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n2, RGID - 2 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5433      │         STANDBY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5432      │         PRIMARY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘

2.2.1.2. Manual Topology

In manual-topology mode, to add primaries to a cluster, use the shardmanctl nodes add command, which adds the listed nodes to the cluster as primaries, with a separate replication group for each primary. To create a cluster with three primary nodes and manual topology, set PlacementPolicy to manual in sdmspec.json.
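
A minimal sketch of the relevant sdmspec.json fragment, mirroring the cross-replication example above; only the placement policy differs, and all other settings are elided:

                        {
                            "PlacementPolicy": "manual",
                            ...
                        }

Then initialize the cluster and add the nodes: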

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 init -f sdmspec.json
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 nodes add -n n1,n2,n3

To view the topology of a cluster, use the shardmanctl cluster topology command:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 cluster topology

The command output is as follows:

                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-1-n1, RGID - 1 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n1            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘
                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-2-n2, RGID - 2 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n2            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘
                        ┌────────────────────────────────────────────────────────────────────────┐
                        │              == REPLICATION GROUP clover-3-n3, RGID - 3 ==             │
                        ├──────────────────────────┬──────────────────┬──────────────────────────┤
                        │           HOST           │       PORT       │          STATUS          │
                        ├──────────────────────────┼──────────────────┼──────────────────────────┤
                        │            n3            │       5432       │          PRIMARY         │
                        └──────────────────────────┴──────────────────┴──────────────────────────┘

Add nodes n4, n5, and n6 as replicas using the shardmanctl shard add command:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 add -n n4
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-2-n2 add -n n5
                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-3-n3 add -n n6

In manual-topology mode, one node can be added to more than one replication group.
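
For example, node n4, which is already a replica in clover-1-n1, could also be added as a replica to clover-2-n2 with the same command syntax. This is a hypothetical step for illustration only and is not part of the running example below:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-2-n2 add -n n4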

After the three shard add commands above, the cluster has the following configuration:

                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-1-n1, RGID - 1 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n1           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n4           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-2-n2, RGID - 2 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n2           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n5           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘
                        ┌─────────────────────────────────────────────────────────────────────┐
                        │             == REPLICATION GROUP clover-3-n3, RGID - 3 ==           │
                        ├─────────────────────────┬─────────────────┬─────────────────────────┤
                        │           HOST          │       PORT      │          STATUS         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n3           │       5432      │         PRIMARY         │
                        ├─────────────────────────┼─────────────────┼─────────────────────────┤
                        │            n6           │       5432      │         STANDBY         │
                        └─────────────────────────┴─────────────────┴─────────────────────────┘

To remove a replica, run the shardmanctl shard rm command. For example:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 rm -n n4

To remove a primary, first run the shardmanctl shard switch command to switch the primary role to a replica, and then remove the old primary:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 switch --new-primary n4
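
After the switchover, n4 serves as the primary of clover-1-n1 and n1 becomes a replica, so n1 can then be removed with the shard rm command shown above. A sketch, assuming the switchover has completed:

                        $ shardmanctl --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 shard --shard clover-1-n1 rm -n n1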
