Chapter 26. Built-in High Availability (BiHA)
Table of Contents
- 26.1. Architecture
- 26.2. Setting Up a BiHA Cluster
- 26.2.1. Prerequisites and Considerations
- 26.2.2. User Authentication Configuration
- 26.2.3. Setting Up a BiHA Cluster from Scratch
- 26.2.4. Setting Up a BiHA Cluster from the Existing Cluster with Streaming Replication
- 26.2.5. Setting Up a BiHA Cluster from the Existing Database Server
- 26.2.6. Setting Up the Referee Node in the BiHA Cluster
- 26.2.7. Setting Up a Multi-Level Geo-Distributed and Disaster-Resilient BiHA Cluster
- 26.2.8. Configuring SSL for Service Connections (Optional)
- 26.2.9. Using the Magic String (Optional)
- 26.3. Administration
- 26.3.1. Changing Cluster Composition
- 26.3.2. Changing Configuration Parameters
- 26.3.3. Manual Switchover
- 26.3.4. Managing SSL for Service Connections
- 26.3.5. Roles
- 26.3.6. Using the Service Mode
- 26.3.7. Automatic Cluster Synchronization after Failover
- 26.3.8. Restoring the Node from the NODE_ERROR State
- 26.3.9. Hanging Prevention Mechanism
- 26.3.10. Replication Configuration
- 26.3.11. Logging
- 26.3.12. Callbacks
- 26.3.13. Recovering from a Backup
- 26.3.14. Disabling biha
- 26.3.15. Removing biha
- 26.4. Reference for the biha Extension
- 26.5. Reference for the bihactl Utility
Built-in High Availability (BiHA) is a comprehensive Postgres Pro Standard solution managed by the biha extension and the bihactl utility. Together with a set of core patches, an SQL interface, and the biha-background-worker process that coordinates the cluster nodes, BiHA turns a Postgres Pro cluster into a BiHA cluster — a cluster with physical replication, built-in failover, high availability, and automatic node failure recovery.
Compared to existing cluster solutions, such as a standard PostgreSQL primary-standby cluster or a cluster configured with multimaster, the BiHA cluster offers the following benefits:
- Physical replication.
- A dedicated leader node available for read and write transactions, and read-only follower nodes.
- Built-in failover, including automatic node failure detection, response, and subsequent cluster reconfiguration by means of elections.
- A referee node to avoid split-brain issues.
- Manual switchover.
- Autorewind capabilities.
- Synchronous and asynchronous node replication.
- Cascading replication.
- Multi-level geographical distribution and disaster resilience (experimental functionality).
- Hanging prevention mechanism.
- No additional external cluster software required.