27.2. Setting Up a BiHA Cluster #
- 27.2.1. Prerequisites and Considerations
- 27.2.2. Setting Up a BiHA Cluster from Scratch
- 27.2.3. Setting Up a BiHA Cluster from the Existing Cluster with Streaming Replication
- 27.2.4. Setting Up a BiHA Cluster from the Existing Database Server
- 27.2.5. Setting Up the Referee Node in the BiHA Cluster
- 27.2.6. Using the Magic String (Optional)
The BiHA cluster is set up by means of the bihactl utility. There are several scenarios of using the bihactl utility: setting up a cluster from scratch, converting an existing cluster with streaming replication, or converting an existing standalone database server.
Before you start setting up the BiHA cluster, carefully read Section 27.2.1.
27.2.1. Prerequisites and Considerations #
Before you begin to set up the BiHA cluster, read the following information and perform the required actions if needed:
Ensure network connectivity between all nodes of your future BiHA cluster.
If network isolation is required, that is, the control channel and WAL transmission operate in one network while client sessions with the database operate in another, configure the BiHA cluster as follows:
Use host names resolving to IP addresses of the network for the control channel and WAL.
Add the IP address for client connections to the listen_addresses configuration parameter.
To avoid any biha-background-worker issues related to system time settings on cluster nodes, configure time synchronization on all nodes.
It is not recommended to execute the bihactl commands in the PGDATA directory. The bihactl utility may create the biha_init.log and biha_add.log files in the directory where it is executed. However, the target PGDATA directory must be empty for proper execution of the bihactl commands.
The password for the biha_replication_user role in the password file must be the same on all nodes of the BiHA cluster. It is required for connection between the leader node and follower nodes. You can specify the password using one of the following approaches:
The secure and recommended way is adding a separate line for each node:
echo 'hostname:port:biha_db:biha_replication_user:password' >> ~/.pgpass
echo 'hostname:port:replication:biha_replication_user:password' >> ~/.pgpass
The simple way is adding a single line for all nodes:
echo '*:*:*:biha_replication_user:password' >> ~/.pgpass
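The per-node approach above can be sketched as a small script. Everything below is illustrative: node_1, node_2, port 5432, and the password secret are placeholders, and the file is written to /tmp rather than ~/.pgpass so the sketch is safe to run as-is.

```shell
# Sketch: populate a password file with per-node entries for
# biha_replication_user. Host names, port, and password are placeholders;
# on a real node, point PGPASSFILE at ~/.pgpass instead.
PGPASSFILE=/tmp/pgpass.demo
: > "$PGPASSFILE"
for node in node_1 node_2; do
    echo "${node}:5432:biha_db:biha_replication_user:secret"     >> "$PGPASSFILE"
    echo "${node}:5432:replication:biha_replication_user:secret" >> "$PGPASSFILE"
done
chmod 0600 "$PGPASSFILE"   # libpq ignores a password file readable by others
```

Note the chmod: libpq refuses to use a password file with group or world access, so the 0600 mode is required, not optional.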
During operation, biha creates the following service files in the database directory:
standby.signal — a file used to start nodes in standby mode. It is required to make biha read-only at the start of Postgres Pro. This file is deleted from the leader node when its state changes to LEADER_RW.
biha.state and biha.conf — files in the pg_biha directory required to save the internal state and configuration of biha.
During operation, biha uses its own mechanism to modify the Postgres Pro configuration dynamically. Some Postgres Pro parameters are managed by biha and cannot be modified using ALTER SYSTEM, as they are essential for biha operation.
These parameters are stored in the pg_biha/biha.conf file, as well as in the shared memory of the biha process. When these parameters are modified, biha sends the SIGHUP signal for other processes to be informed about the changes. If you modify any other parameters during this change and do not send a signal to reread the configuration, the parameters that you have changed may be unexpectedly reread.
Postgres Pro behaves as described above only when biha is loaded and configured, i.e., when the extension is present in the shared_preload_libraries variable and the required biha.* parameters are configured. Otherwise, Postgres Pro operates normally.
When the BiHA cluster is initialized, biha modifies the Postgres Pro configuration in postgresql.conf and pg_hba.conf. The changes are first included in the biha service files postgresql.biha.conf and pg_hba.biha.conf and then processed by the server after the following include directives are specified in postgresql.conf and pg_hba.conf, respectively:
include 'postgresql.biha.conf'
include "pg_hba.biha.conf"
In some operating systems, user session management may be handled by systemd. In this case, if your server is started using pg_ctl and managed remotely, be aware that all background processes initiated within an SSH session will be terminated by the systemd daemon when the session ends. To avoid such behavior, you can do one of the following:
Use the postgrespro-ent-17 systemd unit file to start the DBMS server on the cluster node.
Modify the configuration of the user session management service called systemd-logind in the /etc/systemd/logind.conf file, specifically, set the KillUserProcesses parameter to no.
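The logind.conf change might be scripted as below. The sketch edits a throwaway copy; on a real host you would apply the same edit to /etc/systemd/logind.conf as root and then restart the systemd-logind service:

```shell
# Sketch: switch KillUserProcesses to no. This works on a throwaway copy;
# the stock file contents are simulated for the demo.
CONF=/tmp/logind.conf.demo
printf '[Login]\n#KillUserProcesses=yes\n' > "$CONF"   # simulated stock file
# Uncomment the setting if needed and force the value to no
sed -E 's/^#?KillUserProcesses=.*/KillUserProcesses=no/' "$CONF" > "$CONF.new" &&
    mv "$CONF.new" "$CONF"
```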
27.2.2. Setting Up a BiHA Cluster from Scratch #
To set up a BiHA cluster from scratch, perform the following procedures.
Prerequisites
On all nodes of your future cluster, install the postgrespro-ent-17-contrib package. Do not create a database instance.
Ensure that you execute the bihactl command as the same user that will start the Postgres Pro Enterprise server. For example, if you start the server as user postgres, the bihactl command must also be run by user postgres.
If you plan to use pg_probackup with biha, install the pg-probackup-ent-17 package.
Initializing the cluster
Use the bihactl init command to initialize the cluster and create the leader node.
Execute the bihactl init command with the necessary options:
bihactl init \
    --biha-node-id=1 \
    --host=node_1 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --nquorum=number_of_nodes \
    --pgdata=leader_PGDATA_directory
The initdb utility is run, and the postgresql.conf and pg_hba.conf files are modified.
When initializing the BiHA cluster, the magic string is generated. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D leader_PGDATA_directory -l leader_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
Adding the follower node
Ensure that the leader node is in the LEADER_RO or LEADER_RW state.
Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.
Execute the bihactl add command with the necessary options:
bihactl add \
    --biha-node-id=2 \
    --host=node_2 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
    --pgdata=follower_PGDATA_directory
A backup of the leader node is created by means of pg_basebackup or pg_probackup, depending on the value set in the --backup-method option. In addition, the postgresql.conf and pg_hba.conf files are modified.
Note
During this process, all files are copied from the leader to the new node. The larger the database size, the longer it takes to add the follower.
You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D follower_PGDATA_directory -l follower_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
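When several followers are added, the bihactl add invocation can be composed from variables instead of being typed inline each time. A hypothetical helper that only prints the command, with every host name, port, and path a placeholder:

```shell
# Sketch: build the bihactl add command line from variables. The command
# is printed, not executed, and all values below are placeholders.
build_add_cmd() {
    node_id=$1; host=$2; port=$3; biha_port=$4; leader=$5; pgdata=$6
    printf 'bihactl add --biha-node-id=%s --host=%s --port=%s --biha-port=%s --use-leader "%s" --pgdata=%s\n' \
        "$node_id" "$host" "$port" "$biha_port" "$leader" "$pgdata"
}
CMD=$(build_add_cmd 2 node_2 5432 5435 'host=node_1 port=5432 biha-port=5435' /var/lib/pgpro/follower)
echo "$CMD"
```

On a real node you would run the printed command (or replace echo with eval) as the same user that starts the server.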
27.2.3. Setting Up a BiHA Cluster from the Existing Cluster with Streaming Replication #
Convert your existing Postgres Pro Enterprise 17 cluster with streaming replication and a configured database instance into a BiHA cluster. After conversion, the primary node of the existing cluster becomes the leader node, and standby nodes become follower nodes.
Prerequisites
If your existing cluster is synchronous, i.e., its synchronous_standby_names parameter is not empty (for example, synchronous_standby_names = 'walreceiver'), do the following before conversion into the BiHA cluster:
Reset the synchronous_standby_names parameter:
ALTER SYSTEM RESET synchronous_standby_names;
From the postgresql.conf file and all the include directives, manually remove the synchronous_standby_names values.
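The manual removal can be scripted. This sketch comments the setting out in a throwaway copy rather than deleting the line, so a trace remains; the file name and its contents are placeholders, and on the real primary you would run the same edit over postgresql.conf and every file it includes:

```shell
# Sketch: comment out synchronous_standby_names in a configuration file.
# A demo file stands in for postgresql.conf here.
DEMO=/tmp/ssn-demo.conf
printf "synchronous_standby_names = 'walreceiver'\nmax_connections = 100\n" > "$DEMO"
# Prefix the matched setting with '#' (& is the matched text in sed)
sed -E 's/^[[:space:]]*synchronous_standby_names/#&/' "$DEMO" > "$DEMO.new" &&
    mv "$DEMO.new" "$DEMO"
```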
Converting the existing primary node into the leader node
Stop the existing primary node:
pg_ctl stop -D primary_PGDATA_directory
Execute the bihactl init command with the --convert option:
bihactl init --convert \
    --biha-node-id=1 \
    --host=node_1 \
    --port=PostgresPro_port \
    --biha-port=biha_port_number \
    --nquorum=number_of_nodes \
    --pgdata=leader_PGDATA_directory
When converting the cluster, the magic string is generated. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D leader_PGDATA_directory -l leader_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
Converting the existing standby node into the follower node
Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.
Stop the existing standby node:
pg_ctl stop -D standby_PGDATA_directory
Execute the bihactl add command with the --convert-standby option:
bihactl add --convert-standby \
    --biha-node-id=2 \
    --host=node_2 \
    --port=PostgresPro_port \
    --biha-port=5435 \
    --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
    --pgdata=follower_PGDATA_directory
When converting an existing standby node into the follower node, biha creates the follower_PGDATA_directory/pg_biha/biha.state and follower_PGDATA_directory/pg_biha/biha.conf files required for the node to be connected to the cluster and modifies postgresql.conf and pg_hba.conf.
You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D follower_PGDATA_directory -l follower_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
27.2.4. Setting Up a BiHA Cluster from the Existing Database Server #
If your existing Postgres Pro Enterprise 17 server with a configured database has only one node, you can convert it into the leader and then add more nodes to your BiHA cluster using the bihactl add command.
Converting the existing node into the leader node
Stop the existing node:
pg_ctl stop -D server_PGDATA_directory
Execute the bihactl init command with the --convert option:
bihactl init --convert \
    --biha-node-id=1 \
    --host=node_1 \
    --port=PostgresPro_port \
    --biha-port=biha_port_number \
    --nquorum=number_of_nodes \
    --pgdata=leader_PGDATA_directory
The postgresql.conf and pg_hba.conf files are modified.
When converting the node, the magic string is generated. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D leader_PGDATA_directory -l leader_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
Adding the follower node
Ensure that the leader node is in the LEADER_RO or LEADER_RW state.
Ensure that the password for the biha_replication_user role in the password file matches the password for the same role on the leader node.
Execute the bihactl add command with the necessary options:
bihactl add \
    --biha-node-id=2 \
    --host=node_2 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
    --pgdata=follower_PGDATA_directory
A backup of the leader node is created by means of pg_basebackup or pg_probackup, depending on the value set in the --backup-method option. In addition, the postgresql.conf and pg_hba.conf files are modified.
You can also add the leader node connection data using the magic string. For more information on how to use the magic string, see Section 27.2.6.
Start the DBMS:
pg_ctl start -D follower_PGDATA_directory -l follower_log_file
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
27.2.5. Setting Up the Referee Node in the BiHA Cluster #
The referee node participates in the elections and helps to manage split-brain issues.
Note
You can use only pg_basebackup when adding the referee node to your cluster.
Only the biha_db database and system tables are copied to the referee node. The postgres database and user data are not copied.
To set up a referee node:
Execute the bihactl add command with the relevant value of the --mode option:
bihactl add \
    --biha-node-id=3 \
    --host=node_3 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
    --pgdata=referee_PGDATA_directory \
    --mode=referee
or
bihactl add \
    --biha-node-id=3 \
    --host=node_3 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --use-leader "host=leader_host port=leader_port biha-port=leader_biha_port" \
    --pgdata=referee_PGDATA_directory \
    --mode=referee_with_wal
Start the Postgres Pro instance where you have set up the referee:
pg_ctl start -D referee_PGDATA_directory
Check the node status in the biha.status_v view:
SELECT * FROM biha.status_v;
27.2.6. Using the Magic String (Optional) #
The magic string is a special string generated automatically when you initiate a BiHA cluster. The magic string is used in the BiHA cluster setup scripts. It contains the data needed to connect follower nodes to the leader node.
You can use the magic string to avoid entering the leader node connection data manually when adding follower nodes.
Here is an example of how to use the magic string:
When initializing the cluster, redirect the bihactl output to a file:
bihactl init \
    --biha-node-id=1 \
    --host=node_1 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --nquorum=number_of_nodes \
    --pgdata=leader_PGDATA_directory \
    > /tmp/magic-file
When adding a follower node, do the following:
Set up an environment variable:
export MAGIC_STRING="$(cat /tmp/magic-file)"
Add --magic-string as a bihactl add option:
bihactl add \
    --biha-node-id=2 \
    --host=node_2 \
    --port=node_port_number \
    --biha-port=biha_port_number \
    --magic-string=$MAGIC_STRING \
    --pgdata=follower_PGDATA_directory
The follower node will now use the encoded data from the magic string to connect to the leader node.
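The whole round trip can be sketched as a script. bihactl is not invoked here; a placeholder string stands in for the output of bihactl init:

```shell
# Sketch: capture the magic string at init time and reuse it when adding
# a follower. The file contents are a demo stand-in, not real bihactl output.
MAGIC_FILE=/tmp/magic-file.demo
echo 'demo-magic-string-aG9zdD1ub2RlXzE' > "$MAGIC_FILE"  # stand-in for: bihactl init ... > /tmp/magic-file
export MAGIC_STRING="$(cat "$MAGIC_FILE")"
# A follower would then be added with:
#   bihactl add --biha-node-id=2 ... --magic-string=$MAGIC_STRING ...
echo "$MAGIC_STRING"
```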