Re: Pgpool with high availability - Mailing list pgsql-general
From: vijay patil
Subject: Re: Pgpool with high availability
Msg-id: CAD5k+7wqiYEVORNanBM5nmxxKv2Zb3Mu5cSOAntxeHVCLvYsBg@mail.gmail.com
In response to: Re: Pgpool with high availability (Adrian Klaver <adrian.klaver@aklaver.com>)
List: pgsql-general
I have modified pgpool.conf to correct a subnet mistake: the netmask was previously specified as /26, and it has now been corrected to /24. The configuration changes were as follows:
Previous Configuration:
delegate_ip = '10.127.1.18'
if_up_cmd = '/sbin/ip addr add $_IP_$/26 dev eth0 label eth0:1'
if_down_cmd = '/sbin/ip addr del $_IP_$/26 dev eth0'
arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
Updated Configuration:
delegate_ip = '10.127.1.18'
if_up_cmd = '/sbin/ip addr add $_IP_$/24 dev eth0 label eth0:1'
if_down_cmd = '/sbin/ip addr del $_IP_$/24 dev eth0'
arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
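For context on what the prefix change does: the prefix length passed to `ip addr add` determines the on-link range and broadcast address the kernel derives for the VIP. A minimal POSIX-sh sketch (illustrative only, not part of the pgpool setup) that computes the broadcast address for an IP/prefix pair:

```shell
#!/bin/sh
# Sketch: compute the broadcast address implied by an IP/prefix pair,
# to show what changing /26 to /24 means for the VIP's on-link range.

broadcast() {
    ip=$1 prefix=$2
    # Convert the dotted quad to a 32-bit integer
    set -- $(echo "$ip" | tr '.' ' ')
    n=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    # Setting all host bits to one gives the broadcast address
    mask=$(( 0xffffffff >> prefix ))
    b=$(( n | mask ))
    echo "$(( (b >> 24) & 255 )).$(( (b >> 16) & 255 )).$(( (b >> 8) & 255 )).$(( b & 255 ))"
}

broadcast 10.127.1.18 26   # prints 10.127.1.63
broadcast 10.127.1.18 24   # prints 10.127.1.255
```

With /26 the VIP's on-link range ends at 10.127.1.63, while with /24 it extends to 10.127.1.255, matching the rest of a /24 subnet.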
Current Issue:
Following the subnet correction, the Virtual IP (VIP) 10.127.1.18 is now reachable only from the leader node (ha0002), while it remains unreachable from the standby nodes (ha0001 and ha0003). Below are the details of the connectivity status and the commands executed:
Leader Node (ha0002):
[root@staging-ha0002 PG_LOGS]# ping 10.127.1.18
PING 10.127.1.18 (10.127.1.18) 56(84) bytes of data.
64 bytes from 10.127.1.18: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 10.127.1.18: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 10.127.1.18: icmp_seq=3 ttl=64 time=0.060 ms
--- 10.127.1.18 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2080ms
rtt min/avg/max/mdev = 0.041/0.053/0.060/0.008 ms
[pgbigboss@staging-ha0002 ~]$ pcp_watchdog_info -h 10.127.1.18 -p 9898 -U pgbigboss -W
Password:
3 3 YES ha0002:9999 Linux staging-ha0002 ha0002
ha0002:9999 Linux staging-ha0002 ha0002 9999 9000 4 LEADER 0 MEMBER
ha0001:9999 Linux staging-ha0001 ha0001 9999 9000 7 STANDBY 0 MEMBER
ha0003:9999 Linux staging-ha0003 ha0003 9999 9000 7 STANDBY 0 MEMBER
Standby Node (ha0001):
[root@staging-ha0001 ~]# ping 10.127.1.18
PING 10.127.1.18 (10.127.1.18) 56(84) bytes of data.
From 10.127.1.10 icmp_seq=1 Destination Host Unreachable
From 10.127.1.10 icmp_seq=2 Destination Host Unreachable
From 10.127.1.10 icmp_seq=3 Destination Host Unreachable
--- 10.127.1.18 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4126ms
pipe 3
[pgbigboss@staging-ha0001 ~]$ pcp_watchdog_info -h 10.127.1.18 -p 9898 -U pgbigboss -W
Password:
ERROR: connection to host "10.127.1.18" failed with error "No route to host"
The VIP 10.127.1.18 is accessible from the leader node (ha0002) but not from the standby nodes (ha0001 and ha0003).
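When a watchdog VIP answers on the leader but not from the standbys, the usual first checks are whether the VIP is actually plumbed on the expected interface and what the standbys' neighbour caches hold for it. A diagnostic fragment (hostnames and eth0 are taken from this thread; run the commands as root on the indicated hosts):

```shell
# On the leader (ha0002): confirm the VIP really sits on eth0 with /24
ip -o addr show dev eth0 | grep 10.127.1.18

# On a standby (ha0001): inspect the neighbour (ARP) cache entry for the VIP
ip neigh show 10.127.1.18

# On the leader: re-announce the VIP with a gratuitous ARP
# (same tool and options style as the arping_cmd in pgpool.conf)
/usr/sbin/arping -U -c 3 -I eth0 10.127.1.18
```

If the neighbour entry on a standby is stale or FAILED after failover, the gratuitous ARP from the new VIP holder is what should refresh it.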
Thanks
Vijay
On 5/28/24 1:31 AM, vijay patil wrote:
>
> HI Team,
>
> "I'm encountering challenges while configuring Pgpool with high
> availability. The initial setup is completed, and Pgpool is operational
> on a single node, functioning without issues. However, upon attempting
> to start Pgpool on any additional nodes, particularly node 2, it becomes
> immediately unreachable.
And how are we supposed to arrive at an answer with essentially no
information provided?
Need:
1) Configuration for initial setup.
2) A more detailed explanation of what "... upon attempting
to start Pgpool on any additional nodes" means? Include configuration
changes.
3) The error messages.
4) Where the nodes are located?
>
> I'm seeking assistance to address this issue. My setup consists of three
> nodes, each hosting both PostgreSQL and Pgpool services."
>
>
> Thanks
>
> Vijay
>
--
Adrian Klaver
adrian.klaver@aklaver.com