27.2. Setting Up a BiHA Cluster #

The BiHA cluster is set up by means of the bihactl utility. The sections below describe several scenarios of using this utility.

Before you start setting up the BiHA cluster, carefully read Section 27.2.1.

27.2.1. Prerequisites and Considerations #

Before you begin to set up the BiHA cluster, read the following information and perform the required actions if needed:

  • Ensure network connectivity between all nodes of your future BiHA cluster.

    If network isolation is required, that is, the control channel and WAL transmission operate in one network while client sessions with the database use another, configure the BiHA cluster as follows:

    • Use host names that resolve to IP addresses in the network used for the control channel and WAL.

    • Add the IP address for client connections to the listen_addresses configuration parameter.

  • BiHA creates a number of auxiliary files and configures some Postgres Pro configuration parameters to ensure proper operation. For more information, see Postgres Pro Configuration.

  • To avoid any biha-background-worker issues related to system time settings on cluster nodes, configure time synchronization on all nodes.

  • It is not recommended to execute the bihactl commands in the PGDATA directory. The bihactl utility may create the biha_init.log and biha_add.log files in the directory where it is executed. However, the target PGDATA directory must be empty for proper execution of the bihactl commands.
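The precaution above can be sketched as a small pre-flight check; WORKDIR and PGDATA_TARGET below are hypothetical placeholders, not bihactl names:

```shell
# Run bihactl from a scratch directory so biha_init.log / biha_add.log
# stay out of PGDATA, and verify the target PGDATA directory is empty.
WORKDIR="$(mktemp -d)"
PGDATA_TARGET="${PGDATA_TARGET:-$(mktemp -d)}"   # placeholder target directory
cd "$WORKDIR"
if [ -n "$(ls -A "$PGDATA_TARGET" 2>/dev/null)" ]; then
    echo "refusing to initialize: $PGDATA_TARGET is not empty" >&2
fi
```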

  • The password for the biha_replication_user role in the password file must be the same on all nodes of the BiHA cluster. It is required for connection between the leader node and follower nodes. You can specify the password using one of the following approaches:

    • The secure and recommended way is to add separate lines for each node:

      echo 'hostname:port:biha_db:biha_replication_user:password' >> ~/.pgpass
      
      echo 'hostname:port:replication:biha_replication_user:password' >> ~/.pgpass
      
    • The simple way is to add a single line for all nodes:

      echo '*:*:*:biha_replication_user:password' >> ~/.pgpass
      
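As a concrete sketch of the per-node approach, the entries can be scripted as follows; node_1, 5432, and the password are placeholder values, and the final chmod matters because libpq ignores a password file that is readable by other users:

```shell
# Append per-node entries for biha_db and replication connections.
# node_1, 5432, and "password" are placeholders for your own values.
PGPASSFILE="${PGPASSFILE:-$HOME/.pgpass}"
echo 'node_1:5432:biha_db:biha_replication_user:password' >> "$PGPASSFILE"
echo 'node_1:5432:replication:biha_replication_user:password' >> "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"   # libpq rejects a password file with wider access
```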
  • During operation, BiHA uses its own mechanism to modify the Postgres Pro configuration dynamically. Some Postgres Pro parameters are managed by biha and cannot be modified using ALTER SYSTEM, as they are essential for biha operation. For the list of these parameters, see Postgres Pro Configuration.

  • In some operating systems, user session management may be handled by systemd. In this case, if your server is started using pg_ctl and managed remotely, be aware that all background processes initiated within an SSH session will be terminated by the systemd daemon when the session ends. To avoid such behavior, you can do one of the following:

    • Use the postgrespro-ent-17 systemd unit file to start the DBMS server on the cluster node.

    • Modify the configuration of the user session management service called systemd-logind in the /etc/systemd/logind.conf file, specifically, set the KillUserProcesses parameter to no.
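For the second option, the relevant fragment of /etc/systemd/logind.conf looks like this (a sketch; back up the file before editing):

```ini
[Login]
KillUserProcesses=no
```

After editing, apply the change with systemctl restart systemd-logind.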

27.2.2. Setting Up a BiHA Cluster from Scratch #

To set up a BiHA cluster from scratch, perform the following procedures.

Prerequisites

  1. On all nodes of your future cluster, install the postgrespro-ent-17-contrib package. Do not create a database instance.

  2. Ensure that you execute the bihactl command as the same user that will start the Postgres Pro Enterprise server.

    For example, if you start the server as user postgres, the bihactl command must also be run by user postgres.

  3. If you plan to use pg_probackup with biha, install the pg-probackup-ent-17 package.

Initializing the cluster

Use the bihactl init command to initialize the cluster and create the leader node.

  1. Execute the bihactl init command with the necessary options:

    bihactl init \
        --biha-node-id=1 \
        --host=node_1 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --nquorum=number_of_nodes \
        --pgdata=leader_PGDATA_directory
    

    At this stage, you can also enable SSL for cluster service connections. For more information, see SSL Configuration.

  2. Specify a password for the biha_replication_user role.

    The initdb utility is run, and the postgresql.conf and pg_hba.conf files are modified.

    When initializing the BiHA cluster, the magic string is generated. For more information on how to use the magic string, see Section 27.2.8.

  3. Start the DBMS using pg_ctl:

    pg_ctl start -D leader_PGDATA_directory -l leader_log_file
    
  4. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

Adding a follower node

  1. Ensure that the leader node is in the LEADER_RO or LEADER_RW state.

  2. Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.

  3. Execute the bihactl add command with the necessary options:

    bihactl add \
        --biha-node-id=2 \
        --host=node_2 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
        --pgdata=follower_PGDATA_directory
    

    A backup of the leader node is created by means of pg_basebackup or pg_probackup, depending on the value of the --backup-method option. In addition, the postgresql.conf and pg_hba.conf files are modified.

    Note

    During this process, all files are copied from the leader to the new node. The larger the database size, the longer it takes to add the follower.

    You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.8.

  4. Start the DBMS using pg_ctl:

    pg_ctl start -D follower_PGDATA_directory -l follower_log_file
    
  5. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

27.2.3. Setting Up a BiHA Cluster from the Existing Cluster with Streaming Replication #

You can convert your existing Postgres Pro Enterprise 17 cluster with streaming replication and a configured database instance into a BiHA cluster. After the conversion, the primary node of the existing cluster becomes the leader node, and standby nodes become follower nodes.

Converting the existing primary node into the leader node

  1. Stop the existing primary node using pg_ctl:

    pg_ctl stop -D primary_PGDATA_directory
    
  2. Execute the bihactl init command with the --convert option:

    bihactl init --convert \
        --biha-node-id=1 \
        --host=node_1 \
        --port=PostgresPro_port \
        --biha-port=biha_port_number \
        --nquorum=number_of_nodes \
        --pgdata=leader_PGDATA_directory
    

    At this stage, you can also enable SSL for cluster service connections. For more information, see SSL Configuration.

    When converting the cluster, the magic string is generated. For more information on how to use the magic string, see Section 27.2.8.

  3. Specify a password for the biha_replication_user role.

  4. Start the DBMS using pg_ctl:

    pg_ctl start -D leader_PGDATA_directory -l leader_log_file
    
  5. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

Converting the existing standby node into the follower node

  1. Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.

  2. Stop the existing standby node using pg_ctl:

    pg_ctl stop -D standby_PGDATA_directory
    
  3. Execute the bihactl add command with the --convert-standby option:

    bihactl add --convert-standby \
      --biha-node-id=2 \
      --host=node_2 \
      --port=PostgresPro_port \
      --biha-port=5435 \
      --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
      --pgdata=follower_PGDATA_directory
    

    When converting an existing standby node into the follower node, biha creates the follower_PGDATA_directory/pg_biha/biha.conf and follower_PGDATA_directory/pg_biha/biha.state files required for the node to be connected to the cluster and modifies postgresql.conf and pg_hba.conf.

    You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.8.

  4. Start the DBMS using pg_ctl:

    pg_ctl start -D follower_PGDATA_directory -l follower_log_file
    
  5. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

27.2.4. Setting Up a BiHA Cluster from the Existing Database Server #

If your existing Postgres Pro Enterprise 17 server with a configured database has only one node, you can convert it into the leader and then add more nodes to your BiHA cluster using the bihactl add command.

Converting the existing node into the leader node

  1. Stop the existing node using pg_ctl:

    pg_ctl stop -D server_PGDATA_directory
    
  2. Execute the bihactl init command with the --convert option:

    bihactl init --convert \
        --biha-node-id=1 \
        --host=node_1 \
        --port=PostgresPro_port \
        --biha-port=biha_port_number \
        --nquorum=number_of_nodes \
        --pgdata=leader_PGDATA_directory
    

    The postgresql.conf and pg_hba.conf files are modified.

    When converting the node, the magic string is generated. For more information on how to use the magic string, see Section 27.2.8.

  3. Start the DBMS using pg_ctl:

    pg_ctl start -D leader_PGDATA_directory -l leader_log_file
    
  4. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

Adding the follower node

  1. Ensure that the leader node is in the LEADER_RO or LEADER_RW state.

  2. Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.

  3. Execute the bihactl add command with the necessary options:

     bihactl add \
         --biha-node-id=2 \
         --host=node_2 \
         --port=node_port_number \
         --biha-port=biha_port_number \
         --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
         --pgdata=follower_PGDATA_directory
     

    A backup of the leader node is created by means of pg_basebackup or pg_probackup, depending on the value of the --backup-method option. In addition, the postgresql.conf and pg_hba.conf files are modified.

    You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.8.

  4. Start the DBMS using pg_ctl:

    pg_ctl start -D follower_PGDATA_directory -l follower_log_file
    
  5. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

27.2.5. Setting Up the Referee Node in the BiHA Cluster #

The referee node participates in leader elections and helps prevent split-brain issues.

Note

  • You can use only pg_basebackup when adding the referee node to your cluster.

  • Only the biha_db database and system tables are copied to the referee node. The postgres database and user data are not copied.

To set up a referee node:

  1. Execute the bihactl add command with the relevant value of the --mode option:

    bihactl add \
        --biha-node-id=3 \
        --host=node_3 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
        --pgdata=referee_PGDATA_directory \
        --mode=referee
    

    or

    bihactl add \
        --biha-node-id=3 \
        --host=node_3 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
        --pgdata=referee_PGDATA_directory \
        --mode=referee_with_wal
    

  2. Using pg_ctl, start the Postgres Pro instance where you have set up the referee:

    pg_ctl start -D referee_PGDATA_directory
    
  3. Check the node status in the biha.status_v view:

    SELECT * FROM biha.status_v;
    

27.2.6. Setting Up a BiHA Cluster with proxima #

To extend high availability capabilities with proxy server and connection pooling functionality, you can enable the proxima extension when setting up your BiHA cluster.

  1. When initializing the BiHA cluster from scratch, specify the bihactl init command with the --enable-proxima option. For example:

    bihactl init \
        --biha-node-id=1 \
        --host=node_1 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --nquorum=number_of_nodes \
        --pgdata=leader_PGDATA_directory \
        --enable-proxima
    

    The bihactl utility adds proxima to the shared_preload_libraries list in the postgresql.conf file, creates the postgresql.proxima.conf file and includes it into the postgresql.biha.conf file.

  2. Add followers.

    As followers are added by taking and restoring a backup of the leader, the proxima configuration is copied to the followers, just like other BiHA configuration parameters.

    If you add a referee node, the proxima configuration is not copied to the referee.

When your BiHA cluster is ready, you can modify the proxima configuration parameters.

27.2.7. SSL Configuration (Optional) #

When initializing your BiHA cluster, you can enable SSL for the cluster service connections by means of the --use-ssl option. To enable or disable SSL in the initialized BiHA cluster, use the procedure described in Managing SSL.

Preparing a Certificate and Key Pair

  • Using the OpenSSL utility, generate a certificate and a private key and save them in the /PGDATA/pg_biha directory on each cluster node:

    openssl req -x509 -newkey rsa:4096 -keyout path_to_key -out path_to_certificate -sha256 -days period_of_validity -nodes -subj "/CN=certificate_domain"
    

    For example:

    openssl req -x509 -newkey rsa:4096 -keyout /PGDATA/pg_biha/biha_priv_key.pem -out /PGDATA/pg_biha/biha_pub_cert.pem -sha256 -days 365 -nodes -subj "/CN=localhost"
    

    The following files are generated:

    • biha_priv_key.pem is a private key with owner-only read/write permissions (0600)

    • biha_pub_cert.pem is a self-signed certificate issued for the specified time period and domain

    Important

    Ensure that you use the above-mentioned names for your certificate and private key files, as BiHA searches for the files by these exact names.
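Before starting the cluster, you can sanity-check the generated pair with openssl; a minimal sketch, where BIHA_DIR stands in for /PGDATA/pg_biha and rsa:2048 is used only to keep the example fast:

```shell
# Generate and inspect a self-signed certificate/key pair in a scratch
# directory; on a real node, use /PGDATA/pg_biha and the names BiHA expects.
BIHA_DIR="${BIHA_DIR:-$(mktemp -d)}"
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout "$BIHA_DIR/biha_priv_key.pem" \
    -out "$BIHA_DIR/biha_pub_cert.pem" \
    -subj "/CN=localhost" 2>/dev/null
chmod 0600 "$BIHA_DIR/biha_priv_key.pem"   # owner-only access for the key
# Confirm the subject and validity window:
openssl x509 -in "$BIHA_DIR/biha_pub_cert.pem" -noout -subject -dates
```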

Enabling SSL

  1. When you initialize a BiHA cluster with bihactl init, specify the --use-ssl option:

    bihactl init \
        --biha-node-id=1 \
        --host=node_1 \
        --port=node_port_number \
        --biha-port=biha_port_number \
        --nquorum=number_of_nodes \
        --pgdata=leader_PGDATA_directory \
        --use-ssl
    
  2. When your BiHA cluster is set up, ensure that SSL is enabled by checking that the biha.use_ssl parameter is set to true using the SHOW command:

    SHOW biha.use_ssl;
    

27.2.8. Using the Magic String (Optional) #

The magic string is a special string generated automatically when you initialize a BiHA cluster. It is used in BiHA cluster setup scripts and contains the data needed to connect follower nodes to the leader node.

You can use the magic string to avoid entering the leader node connection data manually when adding follower nodes.

Here is an example of how to use the magic string:

  1. When initializing the cluster, redirect the bihactl output to a file:

    bihactl init \
       --biha-node-id=1 \
       --host=node_1 \
       --port=node_port_number \
       --biha-port=biha_port_number \
       --nquorum=number_of_nodes \
       --pgdata=leader_PGDATA_directory > /tmp/magic-file
    
  2. When adding a follower node, do the following:

    1. Set up an environment variable:

      export MAGIC_STRING="$(cat /tmp/magic-file)"
      
    2. Add --magic-string as a bihactl add option:

      bihactl add \
       --biha-node-id=2 \
       --host=node_2 \
       --port=node_port_number \
       --biha-port=biha_port_number \
       --magic-string="$MAGIC_STRING" \
       --pgdata=follower_PGDATA_directory
      

    The follower node will now use the encoded data from the magic string to connect to the leader node.