27.3. Administration #

27.3.1. Changing Cluster Composition #

You can change the cluster composition as follows:

  • To add a node, use the bihactl add command with the relevant options.

  • To remove a node, use the biha.remove_node function (see the sketch after this list).

  • To change the leader manually, use the biha.set_leader function. For more information, see Switchover.
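
For example, a minimal sketch of removing a node (the node ID 3 is a placeholder; you can look up node IDs in the biha.status_v view):

    SELECT biha.remove_node(3);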

27.3.2. Changing Configuration Parameters #

You can change cluster configuration parameters as follows:

  • Some BiHA configuration parameters are common to all cluster nodes. They must have the same value on all cluster nodes and can only be set on the leader node by dedicated functions. For example, use the biha.set_heartbeat_max_lost function to set the biha.heartbeat_max_lost parameter value:

    SELECT biha.set_heartbeat_max_lost(7);
    

    All available functions for setting up common cluster configuration parameters are listed in Common Cluster Configuration.

  • The BiHA parameters that can vary on different cluster nodes can be set with the ALTER SYSTEM command. For example, you can change the biha.can_vote parameter with ALTER SYSTEM:

    ALTER SYSTEM SET biha.can_vote = true;
    SELECT pg_reload_conf();
    

For more information about available BiHA configuration parameters and the ways they can be set up, see Section 27.4.1.

For more information about decreasing Postgres Pro configuration parameter values, see Postgres Pro Configuration.

27.3.3. Switchover #

In addition to the built-in failover capabilities, the high-availability cluster in Postgres Pro supports switchover. The difference between failover and switchover is that the former is performed automatically when the leader node fails, while the latter is done manually by the system administrator. To switch over the leader node, use the biha.set_leader function (see the sketch after the list below). When you set the new leader, the following happens:

  • All attempts to perform elections are blocked and a timeout is set.

  • The current leader node becomes the follower node.

  • The newly selected node becomes the new leader.

  • If the switchover process does not end within the established timeout, the selected node becomes the follower and new elections are performed to choose the new cluster leader.
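
A minimal switchover sketch, assuming biha.set_leader takes the ID of the intended new leader (the node ID 2 is a placeholder):

    SELECT * FROM biha.status_v;  -- check the cluster state first
    SELECT biha.set_leader(2);    -- promote node 2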

27.3.4. Managing SSL for Service Connections #

You can enable or disable SSL to secure service connections in an initialized BiHA cluster by means of the biha.use_ssl configuration parameter. To do so, take the following steps:

  1. Prepare a certificate and key pair.

  2. To enable SSL, on all cluster nodes, set biha.use_ssl to true:

    ALTER SYSTEM SET biha.use_ssl = true;
    

    To disable SSL, set the value to false.

  3. Stop and start the nodes using pg_ctl.

    Important

    The leader node must be the last to stop and the first to start. Since enabling SSL in a BiHA cluster involves the standard TLS handshake process, it is recommended to minimize the delay between stopping and starting the nodes.
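
    For example (the paths are placeholders):

    pg_ctl stop -D /path/to/PGDATA
    pg_ctl start -D /path/to/PGDATA -l logfile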

27.3.5. Roles #

When you initialize the high-availability cluster, the biha_db database is created, and the biha extension is created in the biha schema of the biha_db database. In addition, the following roles are created and used:

  • BIHA_CLUSTER_MANAGEMENT_ROLE allows execution of all biha extension functions.

  • BIHA_REPLICATION_ROLE is used when running pg_rewind and pg_probackup.

  • biha_replication_user is a member of the BIHA_REPLICATION_ROLE and BIHA_CLUSTER_MANAGEMENT_ROLE roles and can request both client and replication connections. It is used by the bihactl utility, as well as when the follower node connects to the leader node. This role owns the biha_db database. The password for the biha_replication_user role in the password file must be the same on all nodes of the BiHA cluster. You are prompted for this password upon BiHA cluster creation.

  • biha_callbacks_user is the default user for callback execution. This user can connect to a database but has no privileges.

  • The predefined pg_monitor role is used to monitor the state of the BiHA cluster.
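
To check that these roles exist on a node, you can query the standard pg_roles catalog view:

    SELECT rolname FROM pg_roles WHERE rolname ILIKE '%biha%';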

27.3.6. Automatic Cluster Synchronization after Failover #

BiHA provides capabilities for automatic cluster synchronization after node timelines have diverged, for example, to bring the old leader back to the cluster after a failover.

You can manage automatic synchronization in your BiHA cluster as follows:

  • Enable automatic rewind by setting the biha.autorewind configuration parameter to true. The automatic rewind is performed only if it can complete successfully, meaning that a preliminary launch of the pg_rewind tool with the --dry-run option succeeded.

    If the automatic rewind fails, the node state changes to NODE_ERROR. In this case, you can find the actual rewind state of the node in the biha.state file as described in Section 27.3.6.1.

    Important

    The rewind may cause the loss of some node WAL records.

  • To avoid running pg_rewind when WAL divergence is caused only by redundant heartbeat records, you can use the WAL trimming functionality managed by the biha.autowaltrim configuration parameter. When enabled, the WAL trimming algorithm finds the last common point in the WALs of the diverged node and the new leader, verifies that all subsequent records are only heartbeats, and automatically deletes the redundant heartbeat records from the diverged node so that it can resume replication. This is useful after a split-brain, when the former leader reconnects to the cluster as a follower but cannot start replication because its WAL contains heartbeat records generated while it was the leader. A sketch of enabling both automatic rewind and WAL trimming follows this list.

    If the WAL trimming procedure fails, the node state changes to NODE_ERROR.

    You can view the WAL trimming parameters in the biha.state file. For more information, see Section 27.3.6.2.

    Important

    If the cluster fails during WAL trimming and WAL is changed incorrectly, it may lead to the loss of WAL records.
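
A minimal sketch of enabling both automatic rewind and WAL trimming, assuming these parameters can be set per node with ALTER SYSTEM as described in Section 27.3.2 (if your version exposes them as common cluster parameters, use the corresponding biha.set_* functions on the leader instead):

    ALTER SYSTEM SET biha.autorewind = true;
    ALTER SYSTEM SET biha.autowaltrim = true;
    SELECT pg_reload_conf();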

27.3.6.1. Monitoring the Rewind Results #

You can check the results of the pg_rewind operation, i.e. the rewind state of cluster nodes, in the rewind_state field of the biha.state file. The field contains an enum value, interpreted as follows:

Table 27.1. The Rewind State

Value   Interpretation
0       The rewind is not required.
1       The server is stopped, the biha.autorewind configuration parameter was enabled, and the rewind will be performed after the server restart.
2       The rewind failed.
3       The rewind was performed successfully.
4       The biha.autorewind configuration parameter was not enabled, and the rewind must be performed manually as described in Restoring the Node from the NODE_ERROR State.

27.3.6.2. Monitoring WAL Trimming Parameters #

You can check the WAL trimming parameters of cluster nodes in the following fields of the biha.state file:

  • waltrim_state: current state of WAL trimming execution

  • waltrim_common_tli: timeline of the last common point

  • waltrim_start_lsn: start of the WAL desynchronization range caused by redundant heartbeats

  • waltrim_end_lsn: end of the WAL desynchronization range caused by redundant heartbeats

27.3.7. Restoring the Node from the NODE_ERROR State #

Errors in the biha or server instance processes listed below may cause node failure, i.e. the node will not be able to restart and its WAL will be damaged:

  • A rewind automatically performed by biha using pg_rewind if node timelines diverge.

  • The walreceiver process in case of timeline divergence. The follower node WAL may be partially rewritten by the WAL received from the leader node.

Note

  • If the NODE_ERROR state is caused by other issues, for example, replication slot overflow, you can fix the error root cause and call the biha.reset_node_error function to reset the NODE_ERROR state (a sketch follows this note).

  • It is not recommended to modify any cluster configuration parameters while one of the nodes is in the NODE_ERROR state, as the changes may fail to be applied on that node.
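
For example, after fixing the root cause, a minimal sketch of resetting the error state on the affected node (assuming the function takes no arguments):

    SELECT biha.reset_node_error();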

When the node goes into the NODE_ERROR state, WAL recovery is paused and the walreceiver process is stopped. Nodes in the NODE_ERROR state cannot vote in elections, and reading from them is prohibited. In addition, the error details are saved to the biha.state file and checked upon node restart, so the node returns to the same state when the biha-background-worker process is launched.

To restore the node from the NODE_ERROR state, take the following steps:

  1. Save the most recent files from the pg_wal directory, since some of the files unique to this node will be rewritten by pg_rewind.

  2. To save biha configuration files, run pg_rewind with the --biha option, for example:

    pg_rewind --biha --target-pgdata=path_to_PGDATA_of_the_NODE_ERROR_node --source-server='user=biha_replication_user host=leader_host port=leader_port dbname=postgres'
    

    Important

    The --biha option is essential for saving biha configuration files. Using pg_rewind in a BiHA cluster without the --biha option may cause cluster configuration inconsistency.

    If the rewind has been successful, information about the NODE_ERROR state is deleted from the biha.state file. In addition, when you specify the connection string in the --source-server option of pg_rewind, it is automatically saved as the primary_conninfo configuration parameter in the postgresql.auto.conf file. This is important for the node to continue recovery after the restart and reach the consistency point, which is the position of the last record in the source server WAL at the time of the rewind.

  3. (Optional) If the node was offline for a long time, to prevent the risk of data corruption and obsolete data reads, set the biha.flw_ro parameter of the restored node to off.
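
    A minimal sketch, assuming biha.flw_ro can be applied with ALTER SYSTEM and a configuration reload:

    ALTER SYSTEM SET biha.flw_ro = off;
    SELECT pg_reload_conf();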

27.3.8. Replication Configuration #

By default, BiHA clusters are asynchronous. However, BiHA allows you to create a cluster with quorum-based synchronous replication.

Quorum-based synchronous replication is enabled by setting the synchronous_standby_names parameter, which specifies the list of synchronous standby names and the number of synchronous standbys (the quorum) from which transactions must wait for replies, using the ANY method. For more information, see Multiple Synchronous Standbys. The synchronous_commit parameter is used with its default value on.
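
For example, to inspect the current setting on the leader:

    SHOW synchronous_standby_names;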

For more information, see the following sections:

27.3.8.1. Synchronous Replication and the Referee #

A few points need to be taken into account with regard to synchronous replication and the referee node. If the --mode option is set to referee, the referee does not participate in synchronous replication. When it is set to referee_with_wal, the node can synchronously receive data. This mode allows the cluster to remain available in the 2+1 configuration with --sync-standbys=1 if the follower node is down and the referee node starts confirming transactions for the leader node. The referee behavior depends on the synchronous_commit parameter value. Note that with this parameter set to remote_apply, the referee does not confirm transactions.

27.3.8.2. Enabling Quorum-Based Synchronous Replication #

You can enable quorum-based synchronous replication either when initializing a BiHA cluster or later in the existing BiHA cluster.

Important

Enabling quorum-based synchronous replication in your BiHA cluster is irreversible. Once you enable quorum-based synchronous replication, you cannot make your BiHA cluster asynchronous again. However, you can exclude specific nodes from the list of synchronous standbys using the biha.remove_from_ssn function. The excluded node will run asynchronously.

You can enable quorum-based synchronous replication as follows:

  • When you create a BiHA cluster from scratch or convert the existing cluster, enable quorum-based synchronous replication using the --sync-standbys option of bihactl init.

    For example, when setting the --sync-standbys value to 2 for a three-node cluster, synchronous_standby_names looks as follows:

    ANY 2 (biha_node_1,biha_node_2,biha_node_3)
    
  • To enable quorum-based synchronous replication in your existing asynchronous BiHA cluster, use the biha.set_sync_standbys function.
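
    A minimal sketch, assuming the function takes the new quorum value:

    SELECT biha.set_sync_standbys(2);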

27.3.8.3. Managing Replication #

You can manage replication in a BiHA cluster as follows:

  • To view the current value of the synchronous_standby_names parameter, use the biha.get_ssn function.

  • To modify the quorum of synchronous standbys, i.e. the value set for the ANY method in the synchronous_standby_names parameter, use the biha.set_sync_standbys function. This is usually required after you change BiHA cluster composition by adding nodes with the bihactl add command or removing nodes with the biha.remove_node function.

  • To add a specific node to the list of synchronous standbys, use the biha.add_to_ssn function.

  • To add multiple nodes as the list of synchronous standbys, use the biha.set_ssn function.

  • To exclude a specific node from the list of synchronous standbys, use the biha.remove_from_ssn function. The excluded node will run asynchronously.
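
A minimal sketch of these calls, assuming biha.add_to_ssn and biha.remove_from_ssn take the node ID (the ID 4 is a placeholder):

    SELECT biha.get_ssn();           -- view the current synchronous_standby_names value
    SELECT biha.add_to_ssn(4);       -- add node 4 to the list of synchronous standbys
    SELECT biha.remove_from_ssn(4);  -- make node 4 asynchronous again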

27.3.8.4. Relaxing Restrictions of Quorum-Based Synchronous Replication #

You can relax the restrictions of quorum-based synchronous replication to allow the leader node to continue operation while some of the synchronous standbys are temporarily unavailable. Enabling relaxed quorum-based synchronous replication means specifying the MIN field value of the synchronous_standby_names parameter, which is the minimum number of synchronous standbys that must be available for the leader node to continue operation. The synchronous_standby_gap parameter remains unchanged and keeps its default value of 0.

You can manage relaxed quorum-based synchronous replication in a BiHA cluster with quorum-based synchronous replication as follows:

  • To enable relaxed quorum-based synchronous replication when initializing a cluster with the bihactl init command, specify the --sync-standbys-min option.

    For example, when setting the --sync-standbys-min value to 0 for a three-node cluster, synchronous_standby_names looks as follows:

      ANY 2 MIN 0 (biha_node_1,biha_node_2,biha_node_3)
      
  • To enable relaxed quorum-based synchronous replication in the existing cluster or modify the minimum number of quorum-based synchronous standbys, use the biha.set_sync_standbys_min function.

  • To disable relaxed quorum-based synchronous replication, set the minimum number of synchronous standbys to -1 using the biha.set_sync_standbys_min function.
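
A minimal sketch, assuming the function takes the new MIN value:

    SELECT biha.set_sync_standbys_min(0);   -- the leader continues even with no synchronous standbys available
    SELECT biha.set_sync_standbys_min(-1);  -- disable relaxed quorum-based synchronous replication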

27.3.9. Logging #

biha logs messages sent by its components, i.e. the control channel and the node controller. The control channel is used to exchange service information between the nodes and is marked as BCP in the log. The node controller is the biha core component, which is responsible for the node operation logic and is marked as NC in the log. You can determine the types of messages to be logged by setting the appropriate logging level. biha supports both standard Postgres Pro message severity levels and the extension logging levels.

Postgres Pro message severity levels are used by biha in the following cases:

  • A biha process ends with an error (ERROR and FATAL levels).

  • A biha component is not covered by any logging level of the extension.

  • A message should be logged even when component logging is disabled, for example, LOG-level messages sent by the control channel, which are displayed only when the component is successfully initialized.

biha logging levels are mapped to the Postgres Pro message severity levels. If you want messages of a given level to appear in the log, the value set for this level should correspond to the value of log_min_messages. You can configure biha logging levels by specifying the corresponding configuration parameters in the postgresql.biha.conf file.

The recommended configuration of logging levels looks as follows:

biha.BcpTransportWarn_log_level = WARNING
biha.BcpTransportLog_log_level = LOG
biha.BcpTransportDetails_log_level = DEBUG1
biha.BcpTransportDebug_log_level = DEBUG1
biha.BcpTransportSSLDebug_log_level = DEBUG2
biha.NodeControllerLog_log_level = LOG
biha.NodeControllerWarn_log_level = WARNING
biha.NodeControllerDetails_log_level = DEBUG1
biha.NodeControllerDebug_log_level = DEBUG1
log_min_messages = DEBUG1

27.3.10. Callbacks #

A callback is an SQL function that notifies users or external services about events in the BiHA cluster, for example, about election of a new leader or change of cluster configuration. As a user, you create an SQL function and register it as a callback. Under certain conditions, the biha extension calls this function.

Timely alerting helps external services react properly to events in the BiHA cluster. For example, after receiving information about the leader change, a proxy server can redirect traffic to the new leader.

The following callback types are available:

Table 27.2. Callback Types

CANDIDATE_TO_LEADER

Called on the node elected as the new leader.

Signature:

my_callback() RETURNS void

LEADER_CHANGE_STARTED

Called on all nodes when the old leader is not available, but the new leader is not yet elected. You can use this callback to fence the old leader.

The callback is activated when the number of nodes reaches the biha.nquorum value, and it becomes possible to hold elections.

This callback is also called on all cluster nodes when the new leader is set manually using the biha.set_leader function.

Important

When you set the new leader by biha.set_leader, the old leader immediately restarts and the LEADER_CHANGE_STARTED callback may fail on it. In this case, the callback is delayed and then called on the old leader right after its restart. If the restart takes longer than the specified heartbeat timeout, the callback execution is canceled. The heartbeat timeout value is calculated as biha.heartbeat_max_lost * biha.heartbeat_send_period; for example, 7 lost heartbeats at a one-second send period give a 7-second timeout.

Signature:

my_callback(id integer, host text, port integer) RETURNS void

In this signature, biha passes the lost leader details: ID, host name, and node port number.

LEADER_CHANGED

Called on every node when the BiHA cluster leader changes.

Signature:

my_callback(id integer, host text, port integer) RETURNS void

In this signature, biha passes the new leader details: ID, host name, and node port number.

LEADER_STATE_IS_RW

Called on the leader and other nodes when the leader changes its state to LEADER_RW.

Signature:

my_callback(id integer, host text, port integer) RETURNS void

In this signature, biha passes to the callback the leader details: ID, host name, and node port number.

LEADER_TO_FOLLOWER

Called on the old leader returned to the cluster after demotion.

Signature:

my_callback() RETURNS void

NODE_ADDED

Called on every node when a new node is added to the BiHA cluster.

Signature:

my_callback(id integer, host text, port integer, mode integer) RETURNS void

In this signature, biha passes to the callback the new node details: its ID, host name, port number, and operation mode (regular, referee, or referee_with_wal). For more information, see The Referee Node in the BiHA Cluster.

NODE_REMOVED

Called on every node when a node is removed from the BiHA cluster.

Signature:

my_callback(id integer) RETURNS void

In this signature, biha passes to the callback the ID of the removed node.

OFFERED_TO_LEADER

Called on the node manually set to leader when it is about to become the leader.

Signature:

my_callback() RETURNS void

STATUS_CHANGED

Called on every node whenever values in the fields of the biha.status_v view change (except since_last_hb).

Note

It is not recommended to execute other callbacks when using STATUS_CHANGED, because overlapping events may cause one of the callbacks to fail.

Signature:

my_callback() RETURNS void

TERM_CHANGED

Called on the node when its term value increases, for example, when failover or switchover happens, when nodes are added or removed, or when parameters are changed by the biha.set_* functions.

Signature:

my_callback(old_term integer, new_term integer) RETURNS void

In this signature, biha passes the old and the new term values.


27.3.10.1. Considerations and Limitations #

  • When an event happens, callbacks are executed in sequence. While they are being executed, biha cannot change its state, for example, initiate elections.

  • If callback execution takes longer than the biha.callbacks_timeout value, biha stops callback execution and continues normal operation.

  • In clusters with asynchronous replication, the biha.register_callback function does not wait for all nodes to receive callbacks. This may lead to a situation where callbacks are present on the leader but missing on a follower that lags behind.

  • Normally, callbacks are not executed on the referee in the referee mode. However, if you have registered callbacks on the leader before adding the referee, callbacks may be executed on the referee and cannot be removed from it.

  • Do not call biha functions during callback execution, as this can cause unexpected behavior and make other callbacks in the queue fail.

27.3.10.2. Managing Callbacks #

To manage callbacks, you can perform the following actions:

  • register one or several callbacks for a single event

  • view the list of registered callbacks

  • unregister callbacks

Registering Callbacks

Write an SQL function and then register it as a callback using biha.register_callback.

In this example, you create several SQL functions in PL/pgSQL and register them as different callback types.

  1. Ensure that the leader node of your BiHA cluster is in the LEADER_RW state.

  2. On the leader node, use psql and connect to the biha_db database:

    postgres=# \c biha_db
    
  3. Create the following callback functions:

    -- log the node term change
    CREATE FUNCTION log_term_changed(old_term integer, new_term integer)
    RETURNS void AS $$
    BEGIN
        RAISE LOG 'Callback: Term changed from % to %', old_term, new_term;
    END;
    $$ LANGUAGE plpgsql;
    
    -- log the election of the new leader
    CREATE FUNCTION log_leader_changed(id integer, host text, port integer)
    RETURNS void AS $$
    BEGIN
        RAISE LOG 'Callback: New leader is % %:%', id, host, port;
    END;
    $$ LANGUAGE plpgsql;
    
    -- log that the leader was demoted
    CREATE FUNCTION log_leader_to_follower()
    RETURNS void AS $$
    BEGIN
        RAISE LOG 'Callback: demote';
    END;
    $$ LANGUAGE plpgsql;
    
  4. Register the created functions:

    SELECT biha.register_callback('TERM_CHANGED', 'log_term_changed', 'biha_db');
    
    SELECT biha.register_callback('LEADER_CHANGED', 'log_leader_changed', 'biha_db');
    
    SELECT biha.register_callback('LEADER_TO_FOLLOWER', 'log_leader_to_follower', 'biha_db');
    

    You can also specify the user on whose behalf the callbacks are executed, or determine the order of callback execution. For more information, see the description of the biha.register_callback function.

Viewing Callbacks

Registered callbacks are added to the biha.callbacks table located in the biha_db database.

To view all registered callbacks:

  1. On the leader node, use psql and connect to the biha_db database:

    postgres=# \c biha_db
    
  2. Display the content of the biha.callbacks table:

    SELECT * FROM biha.callbacks;
    
    1 | log_term_changed
    2 | log_leader_changed
    3 | log_leader_to_follower
    (3 rows)
    

Unregistering Callbacks

Unregistering a callback deletes it from the biha.callbacks table.

  1. Ensure that the leader node of your BiHA cluster is in the LEADER_RW state.

  2. On the leader node, use psql and connect to the biha_db database:

    postgres=# \c biha_db
    
  3. Get the ID of the callback that you want to unregister, for example, log_leader_changed:

    SELECT id FROM biha.callbacks WHERE func = 'log_leader_changed';
    

    The callback ID is returned, for example, 2.

  4. Unregister the callback:

    SELECT biha.unregister_callback(2);
    

    The callback is now unregistered.

27.3.11. Recovering from a Backup #

If your database instance was restored from one of the nodes of the BiHA cluster to a separate node and/or using point-in-time recovery (PITR), there must be no connection between the restored node and the operating BiHA cluster nodes. To prevent such a connection, take the following steps on the restored node before starting it:

  1. Remove the include 'postgresql.biha.conf' directive from the postgresql.conf configuration file.

  2. Ensure that biha is not present in the shared_preload_libraries of the postgresql.conf file and, if applicable, of the postgresql.auto.conf file.
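
A sketch of the resulting postgresql.conf fragment on the restored node:

    # The biha include directive is removed or commented out:
    # include 'postgresql.biha.conf'

    # shared_preload_libraries must not list biha
    # (keep any other libraries you use; the empty value is a placeholder):
    shared_preload_libraries = ''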

If you want to add the restored node to the cluster, take the following steps:

  1. On the restored node, manually configure streaming replication from the leader (see the sketch after this list).

  2. Synchronize the restored node with the leader.

  3. Stop the restored node using pg_ctl:

    pg_ctl stop -D restored_node_PGDATA_directory
    
  4. Add the restored node to the cluster using the bihactl add command with the --convert-standby option.

  5. Start the restored node using pg_ctl:

    pg_ctl start -D restored_node_PGDATA_directory
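
For step 1, a minimal streaming replication sketch on the restored node (the host, port, and paths are placeholders):

    # in postgresql.conf or postgresql.auto.conf:
    primary_conninfo = 'host=leader_host port=leader_port user=biha_replication_user'

    # then mark the node as a standby before starting it:
    touch restored_node_PGDATA_directory/standby.signal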
    

27.3.12. Migration #

Depending on the current and target Postgres Pro Enterprise versions, you can upgrade your BiHA cluster in either the LEADER_RO or LEADER_RW state.

For more details, see the sections below.

27.3.12.1. Migrating the BiHA Cluster in the LEADER_RW State #

Use this procedure for the following migrations:

  • from version 16.4, 16.6, 16.8, or 16.9 to version 16.10

  • from version 16.4, 16.6, or 16.8 to version 16.9

  • from version 16.4 or 16.6 to version 16.8

  • from version 16.4 to version 16.6

  • from version 16.1 or 16.2 to version 16.3

Take the following steps:

  1. Stop one of the followers using the pg_ctl command.

  2. Upgrade the follower.

  3. Start the follower using the pg_ctl command.

  4. Promote the upgraded follower using the biha.set_leader function.

  5. Stop, upgrade, and start the remaining followers and the old leader.

  6. Promote the old leader node using the biha.set_leader function.

Important

Note that if a node running Postgres Pro Enterprise 16.1 goes into the NODE_ERROR state, nodes with newer versions may determine its state incorrectly, for example, as REFEREE. In this case, it is recommended to stop the node, upgrade its version, synchronize it using pg_rewind, and start it once again.

27.3.12.2. Migrating the BiHA Cluster in the LEADER_RO State #

Use this procedure to migrate from versions 16.1, 16.2, or 16.3 to versions 16.4, 16.6, 16.8, 16.9, or 16.10.

  1. Use the biha.set_nquorum_and_minnodes function to set the nquorum and minnodes parameters to a value greater than the number of nodes in the cluster.

    For example, if your cluster has 3 nodes, set these parameters to 4. This is required to avoid unexpected leader elections and to change the leader state from LEADER_RW to LEADER_RO. A sketch of this call is shown after this list.

  2. Wait for the followers to catch up with the leader and ensure that the replay_lag column in the pg_stat_replication view is NULL.

  3. Stop one of the followers using the pg_ctl command.

  4. Upgrade the follower.

  5. Start the follower using the pg_ctl command.

  6. Promote the upgraded follower using the biha.set_leader function.

  7. Stop, upgrade, and start the remaining followers and the old leader.

  8. Promote the old leader using the biha.set_leader function.

  9. Use the biha.set_nquorum_and_minnodes function to set nquorum and minnodes to the values, which were used before starting the Postgres Pro Enterprise upgrade.
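
For step 1, a minimal sketch, assuming the function takes the nquorum and minnodes values in that order:

    SELECT biha.set_nquorum_and_minnodes(4, 4);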

Important

Note that if a node running Postgres Pro Enterprise 16.1 goes into the NODE_ERROR state, nodes with newer versions may determine its state incorrectly, for example, as REFEREE. In this case, it is recommended to stop the node, upgrade its version, synchronize it using pg_rewind, and start it once again.

27.3.13. Disabling biha #

You can temporarily disable the biha extension in your cluster. The disabling procedure differs depending on the node role.

Disabling biha on the Follower

Important

When you disable biha on the follower, physical replication stops.

  1. On the leader in the LEADER_RW state, call the biha.remove_node function to exclude from the cluster the node on which you want to disable biha:

    SELECT biha.remove_node(node_id);
    
  2. On the follower node, do the following:

    1. Remove the include 'postgresql.biha.conf' directive from the postgresql.conf configuration file.

    2. Remove biha from the shared_preload_libraries of the postgresql.conf file and, if applicable, of the postgresql.auto.conf file.

    3. Stop and start the follower using pg_ctl.

Disabling biha on the Leader

  1. Remove the include 'postgresql.biha.conf' directive from the postgresql.conf configuration file.

  2. Remove biha from the shared_preload_libraries of the postgresql.conf file and, if applicable, of the postgresql.auto.conf file.

  3. Stop and start the leader using pg_ctl.

27.3.14. Removing biha #

You can completely remove the biha extension and permanently disable BiHA functionality in your cluster.

  1. Depending on the role of the node, disable biha as described in Disabling biha.

  2. Execute the DROP EXTENSION command. It must be executed on the leader in the LEADER_RW state and from the biha_db database:

    biha_db=# DROP EXTENSION biha;
    

  3. Remove all files from the pg_biha directory of all nodes.
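
    For example, assuming pg_biha resides in the data directory (the path is a placeholder):

    rm -rf /path/to/PGDATA/pg_biha/*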