Thread: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Wednesday, September 12, 2012 10:15 PM Fujii Masao > On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>> The following bug has been logged on the website: >>> >>> Bug reference: 7534 >>> Logged by: Amit Kapila >>> Email address: amit.kapila@huawei.com >>> PostgreSQL version: 9.2.0 >>> Operating system: Suse 10 >>> Description: >> >>> 1. Both master and standby machine are connected normally, >>> 2. then you use the command: ifconfig ip down; make the network card of >>> master and standby down, >> >>> Observation >>> master can detect connect abnormal, but the standby can't detect connect >>> abnormal and show a connected channel long time. > >> What about setting keepalives_xxx libpq parameters? >> > http://www.postgresql.org/docs/devel/static/libpq-connect.html#LIBPQ-PARAMKE > YWORDS > >> Keepalives are not a perfect solution for the termination of connection, > but >> it would help to a certain extent. > > We have tried by enabling keepalive, but it didn't worked maybe because > walreceiver is trying to send reveiver status. > It fails in sending that after many attempts of same. > >> If you need something like walreceiver-version of replication_timeout, > such feature has not been implemented yet. >> Please feel free to implement that! > > I would like to implement such feature for walreceiver, but there is one > confusion that whether to use > same configuration parameter(replication_timeout) for walrecevier as for > master or introduce a new > configuration parameter (receiver_replication_timeout). I like the latter. I believe some users want to set the different timeout values, for example, in the case where the master and standby servers are placed in the same room, but cascaded standby is placed in other continent. Regards, -- Fujii Masao
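For illustration, the keepalives_xxx libpq parameters mentioned above would be supplied through the standby's primary_conninfo (recovery.conf in 9.2); a sketch with arbitrary host/user values:

    primary_conninfo = 'host=master.example.com port=5432 user=repluser keepalives=1 keepalives_idle=60 keepalives_interval=5 keepalives_count=3'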
On Thursday, September 13, 2012 10:57 PM Fujii Masao On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Wednesday, September 12, 2012 10:15 PM Fujii Masao > On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>> The following bug has been logged on the website: >>> >>>> Bug reference: 7534 >>>> Logged by: Amit Kapila >>>> Email address: amit.kapila@huawei.com >>>> PostgreSQL version: 9.2.0 >>>> Operating system: Suse 10 >>>> Description: >> >>>> 1. Both master and standby machine are connected normally, >>>> 2. then you use the command: ifconfig ip down; make the network card of >>>> master and standby down, >> >>>> Observation >>>> master can detect connect abnormal, but the standby can't detect connect >>>> abnormal and show a connected channel long time. > > >> I would like to implement such feature for walreceiver, but there is one >> confusion that whether to use >> same configuration parameter(replication_timeout) for walrecevier as for >> master or introduce a new >> configuration parameter (receiver_replication_timeout). >I like the latter. I believe some users want to set the different >timeout values, >for example, in the case where the master and standby servers are placed in >the same room, but cascaded standby is placed in other continent. Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. The main changes are: 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit the walreceiver. This is same as walsender functionality. As this is a feature, So I am uploading the attached patch in coming CommitFest. Suggestions/Comments? With Regards, Amit Kapila.
Attachment
On Fri, Sep 14, 2012 at 10:01 PM, Amit kapila <amit.kapila@huawei.com> wrote: > > On Thursday, September 13, 2012 10:57 PM Fujii Masao > On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> On Wednesday, September 12, 2012 10:15 PM Fujii Masao >> On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>>> The following bug has been logged on the website: >>>> >>>>> Bug reference: 7534 >>>>> Logged by: Amit Kapila >>>>> Email address: amit.kapila@huawei.com >>>>> PostgreSQL version: 9.2.0 >>>>> Operating system: Suse 10 >>>>> Description: >>> >>>>> 1. Both master and standby machine are connected normally, >>>>> 2. then you use the command: ifconfig ip down; make the network card of >>>>> master and standby down, >>> >>>>> Observation >>>>> master can detect connect abnormal, but the standby can't detect connect >>>>> abnormal and show a connected channel long time. >> >> >>> I would like to implement such feature for walreceiver, but there is one >>> confusion that whether to use >>> same configuration parameter(replication_timeout) for walrecevier as for >>> master or introduce a new >>> configuration parameter (receiver_replication_timeout). > >>I like the latter. I believe some users want to set the different >>timeout values, >>for example, in the case where the master and standby servers are placed in >>the same room, but cascaded standby is placed in other continent. > > Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. > The main changes are: > 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. > 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit the walreceiver. > This is same as walsender functionality. > > As this is a feature, So I am uploading the attached patch in coming CommitFest. > > Suggestions/Comments? You also need to change walsender so that it periodically sends the heartbeat message, like walreceiver does each wal_receiver_status_interval. Otherwise, walreceiver will detect the timeout wrongly whenever there is no traffic in the master. Regards, -- Fujii Masao
On Saturday, September 15, 2012 11:27 AM Fujii Masao wrote: On Fri, Sep 14, 2012 at 10:01 PM, Amit kapila <amit.kapila@huawei.com> wrote: > > On Thursday, September 13, 2012 10:57 PM Fujii Masao > On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> On Wednesday, September 12, 2012 10:15 PM Fujii Masao >> On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>>> The following bug has been logged on the website: >>>> I would like to implement such feature for walreceiver, but there is one >>>> confusion that whether to use >>>> same configuration parameter(replication_timeout) for walrecevier as for >>>> master or introduce a new >>>> configuration parameter (receiver_replication_timeout). > >>>I like the latter. I believe some users want to set the different >>>timeout values, >>>for example, in the case where the master and standby servers are placed in >>>the same room, but cascaded standby is placed in other continent. > >> Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. >> The main changes are: >> 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. >> 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit the walreceiver. >> This is same as walsender functionality. > >> As this is a feature, So I am uploading the attached patch in coming CommitFest. > >> Suggestions/Comments? > You also need to change walsender so that it periodically sends the heartbeat > message, like walreceiver does each wal_receiver_status_interval. Otherwise, > walreceiver will detect the timeout wrongly whenever there is no traffic in the > master. Doesn't current keepalive message from walsender will suffice that need? With Regards, Amit Kapila.
On Sat, Sep 15, 2012 at 4:26 PM, Amit kapila <amit.kapila@huawei.com> wrote: > On Saturday, September 15, 2012 11:27 AM Fujii Masao wrote: > On Fri, Sep 14, 2012 at 10:01 PM, Amit kapila <amit.kapila@huawei.com> wrote: >> >> On Thursday, September 13, 2012 10:57 PM Fujii Masao >> On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >>> On Wednesday, September 12, 2012 10:15 PM Fujii Masao >>> On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>>>> The following bug has been logged on the website: > >>>>> I would like to implement such feature for walreceiver, but there is one >>>>> confusion that whether to use >>>>> same configuration parameter(replication_timeout) for walrecevier as for >>>>> master or introduce a new >>>>> configuration parameter (receiver_replication_timeout). >> >>>>I like the latter. I believe some users want to set the different >>>>timeout values, >>>>for example, in the case where the master and standby servers are placed in >>>>the same room, but cascaded standby is placed in other continent. >> >>> Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. >>> The main changes are: >>> 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. >>> 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit thewalreceiver. >>> This is same as walsender functionality. >> >>> As this is a feature, So I am uploading the attached patch in coming CommitFest. >> >>> Suggestions/Comments? > >> You also need to change walsender so that it periodically sends the heartbeat >> message, like walreceiver does each wal_receiver_status_interval. Otherwise, >> walreceiver will detect the timeout wrongly whenever there is no traffic in the >> master. > > Doesn't current keepalive message from walsender will suffice that need? No. Though the keepalive interval should be smaller than the timeout, IIRC there is no way to specify the keepalive interval now. Regards, -- Fujii Masao
On Sunday, September 16, 2012 12:14 AM Fujii Masao wrote: On Sat, Sep 15, 2012 at 4:26 PM, Amit kapila <amit.kapila@huawei.com> wrote: > On Saturday, September 15, 2012 11:27 AM Fujii Masao wrote: > On Fri, Sep 14, 2012 at 10:01 PM, Amit kapila <amit.kapila@huawei.com> wrote: >> >> On Thursday, September 13, 2012 10:57 PM Fujii Masao >> On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >>> On Wednesday, September 12, 2012 10:15 PM Fujii Masao >>> On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>>>>> The following bug has been logged on the website: > >>>>>> I would like to implement such feature for walreceiver, but there is one >>>>>> confusion that whether to use >>>>>> same configuration parameter(replication_timeout) for walrecevier as for >>>>>> master or introduce a new >>>>>> configuration parameter (receiver_replication_timeout). >> >>>>>I like the latter. I believe some users want to set the different >>>>>timeout values, >>>>>for example, in the case where the master and standby servers are placed in >>>>>the same room, but cascaded standby is placed in other continent. >> >>>> Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. >>>> The main changes are: >>>> 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. >>>> 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit thewalreceiver. >>> > This is same as walsender functionality. >> >>>> As this is a feature, So I am uploading the attached patch in coming CommitFest. >> >>>> Suggestions/Comments? > >>> You also need to change walsender so that it periodically sends the heartbeat >>> message, like walreceiver does each wal_receiver_status_interval. Otherwise, >>> walreceiver will detect the timeout wrongly whenever there is no traffic in the >>> master. > >> Doesn't current keepalive message from walsender will suffice that need? >No. Though the keepalive interval should be smaller than the timeout, >IIRC there is >no way to specify the keepalive interval now. Currently AFAICS in the code on idle system, it should send keepalive after 10s which is hardcoded value as sleeptime. You are right that if its not configurable, and somebody configures replication_timeout as value lower than 10s then thelogic will fail. So is it okay if a new config parameter similar to wal_receiver_status_interval be added and map it directly to sleeptimein the current code. There will be no need for any new heartbeat message, existing keepalive will sufice that purpose. With Regards, Amit Kapila.
On Sunday, September 16, 2012 12:14 AM Fujii Masao wrote: On Sat, Sep 15, 2012 at 4:26 PM, Amit kapila <amit.kapila@huawei.com> wrote: > On Saturday, September 15, 2012 11:27 AM Fujii Masao wrote: > On Fri, Sep 14, 2012 at 10:01 PM, Amit kapila <amit.kapila@huawei.com> wrote: >> >> On Thursday, September 13, 2012 10:57 PM Fujii Masao >> On Thu, Sep 13, 2012 at 1:22 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >>> On Wednesday, September 12, 2012 10:15 PM Fujii Masao >>> On Wed, Sep 12, 2012 at 8:54 PM, <amit.kapila@huawei.com> wrote: >>>>>>> The following bug has been logged on the website: > >>>>>> I would like to implement such feature for walreceiver, but there is one >>>>>> confusion that whether to use >>>>>> same configuration parameter(replication_timeout) for walrecevier as for >>>>>> master or introduce a new >>>>>> configuration parameter (receiver_replication_timeout). >> >>>>>I like the latter. I believe some users want to set the different >>>>>timeout values, >>>>>for example, in the case where the master and standby servers are placed in >>>>>the same room, but cascaded standby is placed in other continent. >> >>>> Thank you for your suggestion. I have implemented as per your suggestion to have separate timeout parameter for walreceiver. >>>> The main changes are: >>>> 1. Introduce a new configuration parameter wal_receiver_replication_timeout for walreceiver. >>>> 2. In function WalReceiverMain(), check if there is no communication till wal_receiver_replication_timeout, exit the walreceiver. >>>> This is same as walsender functionality. >> >>>> As this is a feature, So I am uploading the attached patch in coming CommitFest. >> >>>> Suggestions/Comments? > >>> You also need to change walsender so that it periodically sends the heartbeat >>> message, like walreceiver does each wal_receiver_status_interval. Otherwise, >>> walreceiver will detect the timeout wrongly whenever there is no traffic in the >>> master. > >> Doesn't current keepalive message from walsender will suffice that need? > No. Though the keepalive interval should be smaller than the timeout, > IIRC there is > no way to specify the keepalive interval now. To define the behavior correctly, according to me there are 2 options now: Approach-1 : Document that both(sender and receiver) the timeout parameters should be greater than wal_receiver_status_interval. If both are greater, then I think it might never timeout due to Idle. Approach-2 : Provide a variable wal_send_status_interval, such that if this is 0, then the current behavior would prevail and if its non-zero then KeepAlive message would be send maximum after that time. 
The modified code of WALSendLoop will be as follows:

    TimestampTz timeout = 0;
    long        sleeptime = 10000;  /* 10 s */
    int         wakeEvents;

    /*
     * sleeptime should be equal to the WAL send interval if it is not zero,
     * otherwise it defaults to 10 seconds.
     */
    if (wal_send_status_interval > 0)
    {
        sleeptime = wal_send_status_interval;
    }

    wakeEvents = WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_SOCKET_READABLE | WL_TIMEOUT;

    if (pq_is_send_pending())
        wakeEvents |= WL_SOCKET_WRITEABLE;
    else if (wal_send_status_interval > 0)
    {
        WalSndKeepalive(output_message);

        /* Try to flush pending output to the client */
        if (pq_flush_if_writable() != 0)
            break;
    }

    /* Determine time until replication timeout */
    if (replication_timeout > 0)
    {
        timeout = TimestampTzPlusMilliseconds(last_reply_timestamp,
                                              replication_timeout);
        if (wal_send_status_interval <= 0)
        {
            sleeptime = 1 + (replication_timeout / 10);
        }
    }

    /* Sleep until something happens or replication timeout */
    WaitLatchOrSocket(&MyWalSnd->latch, wakeEvents,
                      MyProcPort->sock, sleeptime);

    /*
     * Check for replication timeout.  Note we ignore the corner case
     * possibility that the client replied just as we reached the
     * timeout ... he's supposed to reply *before* that.
     */
    if (replication_timeout > 0 && GetCurrentTimestamp() >= timeout)
    {
        /*
         * Since typically expiration of replication timeout means
         * communication problem, we don't send the error message to
         * the standby.
         */
        ereport(COMMERROR,
                (errmsg("terminating walsender process due to replication timeout")));
        break;
    }
}

Which way do you think is better, or do you have any other idea to handle this?

With Regards,
Amit Kapila.
On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > To define the behavior correctly, according to me there are 2 options now: > > Approach-1 : > Document that both(sender and receiver) the timeout parameters should be > greater than wal_receiver_status_interval. > If both are greater, then I think it might never timeout due to Idle. In this approach, keepalive messages are sent each wal_receiver_status_interval? > Approach-2 : > Provide a variable wal_send_status_interval, such that if this is 0, then > the current behavior would prevail and if its non-zero then KeepAlive > message would be send maximum after that time. > The modified code of WALSendLoop will be as follows: <snip> > Which way you think is better or you have any other idea to handle. I think #2 is better because it's more intuitive to a user. Regards, -- Fujii Masao
On Tuesday, September 18, 2012 6:03 PM Fujii Masao wrote: On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> To define the behavior correctly, according to me there are 2 options now: > >> Approach-1 : >> Document that both(sender and receiver) the timeout parameters should be >> greater than wal_receiver_status_interval. >> If both are greater, then I think it might never timeout due to Idle. > In this approach, keepalive messages are sent each wal_receiver_status_interval? wal_receiver_status_interval or sleeptime whichever is smaller. >> Approach-2 : >> Provide a variable wal_send_status_interval, such that if this is 0, then >> the current behavior would prevail and if its non-zero then KeepAlive >> message would be send maximum after that time. >> The modified code of WALSendLoop will be as follows: <snip> >> Which way you think is better or you have any other idea to handle. > I think #2 is better because it's more intuitive to a user. I shall update the Patch as per Approach-2 and upload the same. With Regards, Amit Kapila.
On Tuesday, September 18, 2012 6:02 PM Fujii Masao wrote: On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> Approach-2 : >> Provide a variable wal_send_status_interval, such that if this is 0, then >> the current behavior would prevail and if its non-zero then KeepAlive >> message would be send maximum after that time. >> The modified code of WALSendLoop will be as follows: <snip> >> Which way you think is better or you have any other idea to handle. >I think #2 is better because it's more intuitive to a user. Please find a patch attached for implementation of Approach-2. With Regards, Amit Kapila.
Attachment
On 21.09.2012 14:18, Amit kapila wrote: > On Tuesday, September 18, 2012 6:02 PM Fujii Masao wrote: > On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila<amit.kapila@huawei.com> wrote: > >>> Approach-2 : >>> Provide a variable wal_send_status_interval, such that if this is 0, then >>> the current behavior would prevail and if its non-zero then KeepAlive >>> message would be send maximum after that time. >>> The modified code of WALSendLoop will be as follows: > > <snip> >>> Which way you think is better or you have any other idea to handle. > >> I think #2 is better because it's more intuitive to a user. > > Please find a patch attached for implementation of Approach-2. Hmm, I think we need to step back a bit. I've never liked the way replication_timeout works, where it's the user's responsibility to set wal_receiver_status_interval < replication_timeout. It's not very user-friendly. I'd rather not copy that same design to this walreceiver timeout. If there's two different timeouts like that, it's even worse, because it's easy to confuse the two. So let's think how this should ideally work from a user's point of view. I think there should be just two settings: walsender_timeout and walreceiver_timeout. walsender_timeout specifies how long a walsender will keep a connection open if it doesn't hear from the walreceiver, and walreceiver_timeout is the same for walreceiver. The system should figure out itself how often to send keepalive messages so that those timeouts are not reached. In walsender, after half of walsender_timeout has elapsed and we haven't received anything from the client, the walsender process should send a "ping" message to the client. Whenever the client receives a Ping, it replies. The walreceiver does the same; when half of walreceiver_timeout has elapsed, send a Ping message to the server. Each Ping-Pong roundtrip resets the timer in both ends, regardless of which side initiated it, so if e.g walsender_timeout < walreceiver_timeout, the client will never have to initiate a Ping message, because walsender will always reach the walsender_timeout/2 point first and initiate the heartbeat message. The Ping/Pong messages don't necessarily need to be new message types, we can use the message types we currently have, perhaps with an additional flag attached to them, to request the other side to reply immediately. - Heikki
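A minimal sketch of the scheme described above, on the walsender side (a fragment only; variable names such as ping_sent and the WalSndKeepalive flag argument are illustrative, not the actual patch):

    /*
     * After half of the timeout with no reply, ask the standby to respond
     * once; after the full timeout, give up on the connection.
     */
    if (wal_sender_timeout > 0)
    {
        TimestampTz now = GetCurrentTimestamp();

        if (TimestampDifferenceExceeds(last_reply_timestamp, now,
                                       wal_sender_timeout))
        {
            /* nothing heard for the whole timeout: treat as a broken connection */
            ereport(COMMERROR,
                    (errmsg("terminating walsender process due to replication timeout")));
            break;
        }
        else if (!ping_sent &&
                 TimestampDifferenceExceeds(last_reply_timestamp, now,
                                            wal_sender_timeout / 2))
        {
            /* half the timeout has passed: request an immediate reply, once */
            WalSndKeepalive(true);      /* assumed flag: "reply requested" */
            ping_sent = true;
        }
    }

The walreceiver side would mirror this with wal_receiver_timeout, so whichever end has the smaller timeout ends up initiating the heartbeat.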
On Mon, Oct 1, 2012 at 6:38 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > Hmm, I think we need to step back a bit. I've never liked the way > replication_timeout works, where it's the user's responsibility to set > wal_receiver_status_interval < replication_timeout. It's not very > user-friendly. I'd rather not copy that same design to this walreceiver > timeout. If there's two different timeouts like that, it's even worse, > because it's easy to confuse the two. I agree, but also note that wal_receiver_status_interval serves another user-visible purpose as well. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Mon, Oct 1, 2012 at 7:38 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > Hmm, I think we need to step back a bit. I've never liked the way > replication_timeout works, where it's the user's responsibility to set > wal_receiver_status_interval < replication_timeout. It's not very > user-friendly. I'd rather not copy that same design to this walreceiver > timeout. If there's two different timeouts like that, it's even worse, > because it's easy to confuse the two. Agreed. I'd like to specify the replication timeout like we do TCP keepalives, i.e., what about introducing something like following parameters? walsender_keepalives_idle walsender_keepalives_interval walsender_keeaplives_count walreceiver_keepalives_idle walreceiver_keepalives_interval walreceiver_keepalives_count I believe many users are basically familiar with TCP keepalives and how to specify it. So I think that this approach would be intuitive to users. Also this approach includes your proposal. If you specify walsender_keepalives_idle = walsender_timeout / 2 walsender_keepalives_interval = -1 (disable; Ping is never sent again if there is no reply after first Ping is sent) walsender_keepalives_count = 1 the replication timeout works as you proposed. But of course the downside of this approach is that the number of parameter for replication timeout is increased from two (replication_timeout and wal_receiver_status_interval) to six, and those parameters are confusingly similar to existing tcp_keepalives parameters, which might cause another confusion to users. One idea to solve this problem is to use existing tcp_keepalives paramters values for the replication timeout. Regards, -- Fujii Masao
On Mon, Oct 1, 2012 at 12:57 PM, Fujii Masao <masao.fujii@gmail.com> wrote: > I believe many users are basically familiar with TCP keepalives and how to > specify it. So I think that this approach would be intuitive to users. My experience is that many users are unfamiliar with TCP keepalives and that when given the options they tend to do it wrong. I think a simpler system would be better. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Excerpts from Robert Haas's message of lun oct 01 21:02:54 -0300 2012: > On Mon, Oct 1, 2012 at 12:57 PM, Fujii Masao <masao.fujii@gmail.com> wrote: > > I believe many users are basically familiar with TCP keepalives and how to > > specify it. So I think that this approach would be intuitive to users. > > My experience is that many users are unfamiliar with TCP keepalives > and that when given the options they tend to do it wrong. I think a > simpler system would be better. +1 -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Monday, October 01, 2012 4:08 PM Heikki Linnakangas wrote: On 21.09.2012 14:18, Amit kapila wrote: > On Tuesday, September 18, 2012 6:02 PM Fujii Masao wrote: > On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila<amit.kapila@huawei.com> wrote: > >>>> Approach-2 : >>>> Provide a variable wal_send_status_interval, such that if this is 0, then >>>> the current behavior would prevail and if its non-zero then KeepAlive >>>> message would be send maximum after that time. >>>> The modified code of WALSendLoop will be as follows: > > <snip> >>>> Which way you think is better or you have any other idea to handle. > >>> I think #2 is better because it's more intuitive to a user. > >> Please find a patch attached for implementation of Approach-2. >So let's think how this should ideally work from a user's point of view. >I think there should be just two settings: walsender_timeout and >walreceiver_timeout. walsender_timeout specifies how long a walsender >will keep a connection open if it doesn't hear from the walreceiver, and >walreceiver_timeout is the same for walreceiver. The system should >figure out itself how often to send keepalive messages so that those >timeouts are not reached. By this it implies that we should remove wal_receiver_status_interval. Currently it is also used incase of reply message of data sent by sender which contains till what point receiver has flushed. So if we remove thisvariable receiver might start sending that message sonner than required. Is that okay behavior? >In walsender, after half of walsender_timeout has elapsed and we haven't >received anything from the client, the walsender process should send a >"ping" message to the client. Whenever the client receives a Ping, it >replies. The walreceiver does the same; when half of walreceiver_timeout >has elapsed, send a Ping message to the server. Each Ping-Pong roundtrip >resets the timer in both ends, regardless of which side initiated it, so >if e.g walsender_timeout < walreceiver_timeout, the client will never >have to initiate a Ping message, because walsender will always reach the >walsender_timeout/2 point first and initiate the heartbeat message. Just to clarify, walsender should reset timer after it gets reply from receiver of the message it sent. walreceiver should reset timer after sending reply for heartbeat message. Similar to above timers will be reset when receiver sent the heartbeat message. >The Ping/Pong messages don't necessarily need to be new message types, >we can use the message types we currently have, perhaps with an >additional flag attached to them, to request the other side to reply >immediately. Can't we make the decision to send reply immediately based on message type, because these message types will be unique. To clarify my understanding, 1. the heartbeat message from walsender side will be keepalive message ('k') and from walreceiver side it will be Hot Standbyfeedback message ('h'). 2. the reply message from walreceiver side will be current reply message ('r'). 3. currently there is no reply kind of message from walsender, so do we need to introduce one new message for it or can usesome existing message only? if new, do we need to send any additional information along with it, for existing messagescan we use keepalive message it self as reply message but with an additional byte to indicate it is reply? With Regards, Amit Kapila.
On Monday, October 01, 2012 8:36 PM Robert Haas wrote: On Mon, Oct 1, 2012 at 6:38 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: >> Hmm, I think we need to step back a bit. I've never liked the way >> replication_timeout works, where it's the user's responsibility to set >> wal_receiver_status_interval < replication_timeout. It's not very >> user-friendly. I'd rather not copy that same design to this walreceiver >> timeout. If there's two different timeouts like that, it's even worse, >> because it's easy to confuse the two. > I agree, but also note that wal_receiver_status_interval serves > another user-visible purpose as well. By above do you mean to say that wal_receiver_status_interval is used for reply of data sent by server to indicate till whatpoint receiver has flushed data or something else? With Regards, Amit Kapila.
On 02.10.2012 10:36, Amit kapila wrote: > On Monday, October 01, 2012 4:08 PM Heikki Linnakangas wrote: >> So let's think how this should ideally work from a user's point of view. >> I think there should be just two settings: walsender_timeout and >> walreceiver_timeout. walsender_timeout specifies how long a walsender >> will keep a connection open if it doesn't hear from the walreceiver, and >> walreceiver_timeout is the same for walreceiver. The system should >> figure out itself how often to send keepalive messages so that those >> timeouts are not reached. > > By this it implies that we should remove wal_receiver_status_interval. Currently it is also used > incase of reply message of data sent by sender which contains till what point receiver has flushed. So if we remove thisvariable > receiver might start sending that message sonner than required. > Is that okay behavior? I guess we should keep that setting, then, so that you can get status updates more often than would be required for heartbeat purposes. >> In walsender, after half of walsender_timeout has elapsed and we haven't >> received anything from the client, the walsender process should send a >> "ping" message to the client. Whenever the client receives a Ping, it >> replies. The walreceiver does the same; when half of walreceiver_timeout >> has elapsed, send a Ping message to the server. Each Ping-Pong roundtrip >> resets the timer in both ends, regardless of which side initiated it, so >> if e.g walsender_timeout< walreceiver_timeout, the client will never >> have to initiate a Ping message, because walsender will always reach the >> walsender_timeout/2 point first and initiate the heartbeat message. > > Just to clarify, walsender should reset timer after it gets reply from receiver of the message it sent. Right. > walreceiver should reset timer after sending reply for heartbeat message.> Similar to above timers will be reset when receiversent the heartbeat message. walreceiver should reset the timer when it *receives* any message from walsender. If it sends the reply right away, I guess that's the same thing, but I'd phrase it so that it's the reception of a message from the other end that resets the timer. >> The Ping/Pong messages don't necessarily need to be new message types, >> we can use the message types we currently have, perhaps with an >> additional flag attached to them, to request the other side to reply >> immediately. > > Can't we make the decision to send reply immediately based on message type, because these message types will be unique. > > To clarify my understanding, > 1. the heartbeat message from walsender side will be keepalive message ('k') and from walreceiver side it will be Hot Standbyfeedback message ('h'). > 2. the reply message from walreceiver side will be current reply message ('r'). Yep. I wonder why need separate message types for Hot Standby Feedback 'h' and Reply 'r', though. Seems it would be simpler to have just one messasge type that includes all the fields from both messages. > 3. currently there is no reply kind of message from walsender, so do we need to introduce one new message for it or canuse some existing message only? > if new, do we need to send any additional information along with it, for existing messages can we use keepalive messageit self as reply message but with an additional byte > to indicate it is reply? Hmm, I think I'd prefer to use the existing Keepalive message 'k', with an additional flag. - Heikki
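To make the "additional flag" idea concrete, the keepalive message could carry something like the following (field names are illustrative, not the committed layout):

    typedef struct
    {
        XLogRecPtr  walEnd;          /* sender's current end of WAL */
        TimestampTz sendTime;        /* when the message was sent */
        bool        replyRequested;  /* ask the other end to reply immediately */
    } PrimaryKeepaliveMessage;

A receiver that sees replyRequested set would send its usual reply ('r') message right away instead of waiting for wal_receiver_status_interval.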
On Tuesday, October 02, 2012 1:56 PM Heikki Linnakangas wrote: On 02.10.2012 10:36, Amit kapila wrote: > On Monday, October 01, 2012 4:08 PM Heikki Linnakangas wrote: >>> So let's think how this should ideally work from a user's point of view. >>> I think there should be just two settings: walsender_timeout and >>> walreceiver_timeout. walsender_timeout specifies how long a walsender >>> will keep a connection open if it doesn't hear from the walreceiver, and >>> walreceiver_timeout is the same for walreceiver. The system should >>> The Ping/Pong messages don't necessarily need to be new message types, >>> we can use the message types we currently have, perhaps with an >>> additional flag attached to them, to request the other side to reply >>> immediately. > >> Can't we make the decision to send reply immediately based on message type, because these message types will be unique. > >> To clarify my understanding, >> 1. the heartbeat message from walsender side will be keepalive message ('k') and from walreceiver side it will be HotStandby feedback message ('h'). >> 2. the reply message from walreceiver side will be current reply message ('r'). > Yep. I wonder why need separate message types for Hot Standby Feedback > 'h' and Reply 'r', though. Seems it would be simpler to have just one > messasge type that includes all the fields from both messages. moved the contents for Hot Standby Feedback 'h' to Reply 'r' and use 'h' for heart-beat purpose. >> 3. currently there is no reply kind of message from walsender, so do we need to introduce one new message for it or canuse some existing message only? >> if new, do we need to send any additional information along with it, for existing messages can we use keepalive messageit self as reply message but with an additional byte >> to indicate it is reply? > Hmm, I think I'd prefer to use the existing Keepalive message 'k', with an additional flag. Okay. I have done it in Patch. Thank you for suggestions. I have addressed your suggestions in patch attached with this mail. Following changes are done to support replication timeout in sender as well as receiver: 1. One new configuration parameter wal_receiver_timeout is added to detect timeout at receiver task. 2. Existing parameter replication_timeout is renamed to wal_sender_timeout. 3. Now PrimaryKeepaliveMessage structure is modified to add one more field to indicate whether keep-alive is of type 'r'(i.e. reply) or 'h' (i.e. heart-beat). 4. Now the keep-alive message from sender will be sent to standby if it was idle for more than or equal to half of wal_sender_timeout. In this case it will send keep-alive of type 'h'. 5. Once the standby receiver a keep-alive, it needs to send an immediate reply to primary to indicate connection is alive. 6. Now Reply message to send wal offset and Feedback message to send oldest transaction are merged into single Reply message. So now the structure StandbyReplyMessage is changed to add two more fields as xmin and epoch. Also StandbyHSFeedbackMessage structure is changed to remove xmin and epoch fields (as these are moved to StandbyReplyMessage). 7. Because of changes as in step-6, once receiver task receives some data from primary then it will only send Reply Message. 8. Same Reply message is sent in step-5 and step-7 but incase of step-5, then reply is sent immediately but incase of step-7,reply is sent if wal_receiver_status_interval has lapsed (this part is same as earlier). 9. 
Similar to the sender, if the receiver finds itself idle for more than or equal to half of the configured wal_receiver_timeout, then it will send the hot-standby heartbeat. This heartbeat has been modified to send only sendTime.
10. Once the sender task receives a heartbeat message from the standby, it sends back the reply immediately. In this case the keep-alive message sent is of type 'r'.
11. If even after wal_sender_timeout no message is received from the standby, it will be considered a network break at the sender task.
12. If even after wal_receiver_timeout no message is received from the primary, it will be considered a network break at the receiver task.

With Regards,
Amit Kapila.
Attachment
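With the patch as described, the relevant settings in postgresql.conf would look roughly like this (illustrative values; the sender setting replaces the old replication_timeout):

    wal_sender_timeout = 60s      # formerly replication_timeout
    wal_receiver_timeout = 60s    # new: standby-side detection of a dead connection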
> -----Original Message-----
> From: pgsql-bugs-owner@postgresql.org [mailto:pgsql-bugs-owner@postgresql.org] On Behalf Of Amit kapila
> Sent: Thursday, October 04, 2012 3:43 PM
> To: Heikki Linnakangas
> Cc: Fujii Masao; pgsql-bugs@postgresql.org; pgsql-hackers@postgresql.org
> Subject: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
>
> On Tuesday, October 02, 2012 1:56 PM Heikki Linnakangas wrote:
> On 02.10.2012 10:36, Amit kapila wrote:
> > On Monday, October 01, 2012 4:08 PM Heikki Linnakangas wrote:
> >>> So let's think how this should ideally work from a user's point of view.
> >>> I think there should be just two settings: walsender_timeout and
> >>> walreceiver_timeout. walsender_timeout specifies how long a
> >>> walsender will keep a connection open if it doesn't hear from the
>
> Thank you for suggestions.
> I have addressed your suggestions in patch attached with this mail.
>
> Following changes are done to support replication timeout in sender as well as receiver:

Testing Done for the Patch
--------------------------------
1. Verified the value of the new configuration parameter and changed the configuration parameter using the show command (using show of the specific parameter as well as show all).
2. Verified the new configuration parameter in --describe-config.
3. Verified the existing parameter replication_timeout's new name in --describe-config.
4. Start primary and standby node with default timeouts, leave them idle for some time. They should not error out with a network break error.
5. a. Start primary and standby node with default timeouts, bring down the network.
   b. Both sender and receiver should be able to detect the network break-down at almost the same time.
   c. Once the network is up again, the connection should get re-established successfully.
5. a. Start primary and standby node with wal_sender_timeout less than wal_receiver_timeout, bring down the network.
   b. The sender should be able to detect the network break-down before the receiver task.
   c. Once the network is up again, the connection should get re-established successfully.
6. a. Start primary and standby node with wal_receiver_timeout less than wal_sender_timeout, bring down the network.
   b. The receiver should be able to detect the network break-down before the sender task.
   c. Once the network is up again, the connection should get re-established successfully.
7. a. In the 5th test case, change the value of wal_receiver_status_interval to more than wal_receiver_timeout and hence more than wal_sender_timeout.
   b. Then bring the network down.
   c. The sender task should be able to detect the network break-down once wal_sender_timeout has lapsed.
   d. Once the network is up again, the connection should get re-established successfully.
   Intent of this test is to check that there is no dependency of wal_sender_timeout on wal_receiver_status_interval for detection of a network break.

All the above tests passed.

With Regards,
Amit Kapila.
On Thu, Oct 4, 2012 at 6:12 AM, Amit kapila <amit.kapila@huawei.com> wrote: > 1. One new configuration parameter wal_receiver_timeout is added to detect timeout at receiver task. > 2. Existing parameter replication_timeout is renamed to wal_sender_timeout. -1 from me on a backward compatibility break here. I don't know what else to call the new GUC (replication_server_timeout?) but I'm not excited about breaking existing conf files, nor do I particularly like the proposed new names. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
> On Monday, October 08, 2012 7:38 PM Robert Haas wrote: > On Thu, Oct 4, 2012 at 6:12 AM, Amit kapila <amit.kapila@huawei.com> > wrote: > > 1. One new configuration parameter wal_receiver_timeout is added to > detect timeout at receiver task. > > 2. Existing parameter replication_timeout is renamed to > wal_sender_timeout. > > -1 from me on a backward compatibility break here. I don't know what > else to call the new GUC (replication_server_timeout?) but I'm not > excited about breaking existing conf files, nor do I particularly like > the proposed new names. How about following: 1. replication_client_timeout -- shouldn't it be client as new configuration is for wal receiver 2. replication_standby_timeout If we introduce a new parameter for wal receiver, wouldn't replication_timeout be confusing for user? With Regards, Amit Kapila.
On Mon, Oct 8, 2012 at 10:42 AM, Amit Kapila <amit.kapila@huawei.com> wrote: > How about following: > 1. replication_client_timeout -- shouldn't it be client as new configuration > is for wal receiver > 2. replication_standby_timeout ISTM that the client and the standby are the same thing. > If we introduce a new parameter for wal receiver, wouldn't > replication_timeout be confusing for user? Maybe. I actually don't think that I understand what problem we're trying to solve here. If the connection between the master and the standby is lost, shouldn't the standby realize that it's no longer receiving keepalives from the master and terminate the connection? I thought I had tested this at some point and it was working, so either it's subsequently gotten broken again or the scenario you're talking about is different in some way that I don't currently understand. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Tuesday, October 09, 2012 6:00 PM Robert Haas wrote:
> On Mon, Oct 8, 2012 at 10:42 AM, Amit Kapila <amit.kapila@huawei.com> wrote:
> > How about following:
> > 1. replication_client_timeout -- shouldn't it be client as new configuration is for wal receiver
> > 2. replication_standby_timeout
>
> ISTM that the client and the standby are the same thing.

Yes, they are the same, but maybe one of them (replication_standby_timeout) would be more easily understandable to the user.

> > If we introduce a new parameter for wal receiver, wouldn't
> > replication_timeout be confusing for user?
>
> Maybe.
> I actually don't think that I understand what problem we're
> trying to solve here. If the connection between the master and the
> standby is lost, shouldn't the standby realize that it's no longer
> receiving keepalives from the master and terminate the connection?

To the walreceiver, a keepalive is just one kind of message. When it notices that it hasn't received any message, it tries to send a reply/feedback message to the master after an interval of wal_receiver_status_interval. So after every wal_receiver_status_interval the walreceiver sends a reply, but the socket send still doesn't fail. It fails only after many send calls, because internally send() keeps accumulating data until the socket's send buffer is full, even if the other side's recv has not consumed anything. That's the reason we decided to introduce a timeout parameter in the walreceiver similar to what we currently have in the walsender.

> I thought I had tested this at some point and it was working, so either
> it's subsequently gotten broken again or the scenario you're talking
> about is different in some way that I don't currently understand.

The standby takes quite a bit longer, around 15 minutes, to detect the break, whereas the master is able to detect it much sooner, in 2-3 minutes, and the master detects it mainly due to the timeout functionality in the walsender.

With Regards,
Amit Kapila.
On 04.10.2012 13:12, Amit kapila wrote:
> Following changes are done to support replication timeout in sender as well as receiver:
>
> 1. One new configuration parameter wal_receiver_timeout is added to detect timeout at receiver task.
> 2. Existing parameter replication_timeout is renamed to wal_sender_timeout.

Ok. The other option would be to have just one GUC, I'm open to bikeshedding on this one. On one hand, there's no reason the timeouts have to be the same, so it would be nice to have separate settings, but on the other hand, I can't imagine a case where a single setting wouldn't work just as well.

> 3. Now PrimaryKeepaliveMessage structure is modified to add one more field to indicate whether keep-alive is of type 'r' (i.e. reply) or 'h' (i.e. heart-beat).
> 4. Now the keep-alive message from sender will be sent to standby if it was idle for more than or equal to half of wal_sender_timeout. In this case it will send keep-alive of type 'h'.
> 5. Once the standby receives a keep-alive, it needs to send an immediate reply to primary to indicate the connection is alive.
> 6. Now the Reply message to send wal offset and the Feedback message to send oldest transaction are merged into a single Reply message. So now the structure StandbyReplyMessage is changed to add two more fields, xmin and epoch. Also the StandbyHSFeedbackMessage structure is changed to remove the xmin and epoch fields (as these are moved to StandbyReplyMessage).
> 7. Because of changes as in step-6, once receiver task receives some data from primary then it will only send Reply Message.

Oh I see. That's not what I meant by combining the keep-alive and hs feedback messages, I imagined that the heartbeats would *also* use the same message type. I.e. there would be only a single message type from standby to primary, used for:
1. updating the receive/apply pointer
2. HS feedback
3. pinging the server when wal_receiver_timeout is approaching
4. replying to pings from the server.

Since we didn't quite achieve that, it seems best to leave out this merging of reply and HS feedback message types, to keep the patch small. We might still want to do that, but better to do it as a separate patch.

> 8. The same Reply message is sent in step-5 and step-7, but in case of step-5 the reply is sent immediately, whereas in case of step-7 the reply is sent if wal_receiver_status_interval has lapsed (this part is same as earlier).
> 9. Similar to sender, if receiver finds itself idle for more than or equal to half of configured wal_receiver_timeout, then it will send the hot-standby heartbeat. This heart-beat has been modified to send only sendTime.
> 10. Once sender task receives a heart-beat message from standby then it sends back the reply immediately. In this case the keep-alive message is sent of type 'r'.
> 11. If even after wal_sender_timeout no message is received from standby then it will be considered as network break at sender task.
> 12. If even after wal_receiver_timeout no message is received from primary then it will be considered as network break at receiver task.

Attached is an updated patch. I reverted the merging of message types and fixed a bunch of cosmetic issues. There was one bug: in the main loop of walreceiver, you send the "ping" message on every wakeup after enough time has passed since the last reception. That means that if the server doesn't reply promptly, you send a new ping message every 100 ms (NAPTIME_PER_CYCLE), until it gets a reply. Walsender had the same issue, but it was not quite as severe there because the naptime was longer. Fixed that.
How does this look now? - Heikki
Attachment
On Wednesday, October 10, 2012 9:15 PM Heikki Linnakangas wrote: > On 04.10.2012 13:12, Amit kapila wrote: > > Following changes are done to support replication timeout in sender as > well as receiver: > > > > 1. One new configuration parameter wal_receiver_timeout is added to > detect timeout at receiver task. > > 2. Existing parameter replication_timeout is renamed to > wal_sender_timeout. > > Ok. The other option would be to have just one GUC, I'm open to > bikeshedding on this one. On one hand, there's no reason the timeouts > have to the same, so it would be nice to have separate settings, but on > the other hand, I can't imagine a case where a single setting wouldn't > work just as well. I think for below case, they are required to be separate: 1. M1 (Master), S1 (Standby 1), S2 (Standby 2) 2. S1 is standby for M1, and S2 is standby for S1. Basically a simple case of cascaded replication 3. M1 and S1 are on local network but S2 is placed at geographically different location. (what I want to say is n/w between M1-S1 is of good speed and S1-S2 is very slow) 4. In above case, user might want to configure different timeouts for sender and receiver on S1. > Attached is an updated patch. I reverted the merging of message types > and fixed a bunch of cosmetic issues. There was one bug: in the main > loop of walreceiver, you send the "ping" message on every wakeup after > enough time has passed since last reception. That means that if the > server doesn't reply promptly, you send a new ping message every 100 ms > (NAPTIME_PER_CYCLE), until it gets a reply. Walsender had the same > issue, but it was not quite as sever there because the naptime was > longer. Fixed that. Thanks. > > How does this look now? The Patch is fine and test results are also fine. With Regards, Amit Kapila.
On 11.10.2012 13:17, Amit Kapila wrote: >> How does this look now? > > The Patch is fine and test results are also fine. Ok, thanks. Committed. - Heikki
On Thursday, October 11, 2012 8:22 PM Heikki Linnakangas wrote: On 11.10.2012 13:17, Amit Kapila wrote: >>> How does this look now? > >> The Patch is fine and test results are also fine. >Ok, thanks. Committed. Thank you very much. With Regards, Amit Kapila.
On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> On 11.10.2012 13:17, Amit Kapila wrote:
>>> How does this look now?
>>
>> The Patch is fine and test results are also fine.
>
> Ok, thanks. Committed.

I found one typo. The attached patch fixes that typo.

ISTM you need to update the protocol.sgml because you added the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage.

Is it worth adding the same mechanism (send back the reply immediately if walsender requests a reply) into pg_basebackup and pg_receivexlog?

Regards,

-- Fujii Masao
Attachment
On 13.10.2012 19:35, Fujii Masao wrote:
> On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
>> Ok, thanks. Committed.
>
> I found one typo. The attached patch fixes that typo.

Thanks, fixed.

> ISTM you need to update the protocol.sgml because you added
> the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage.

Oh, I didn't remember that we've documented the specific structs that we pass around. It's quite bogus anyway to explain the messages the way we do currently, as they are actually dependent on the underlying architecture's endianness and padding. I think we should refactor the protocol to not transmit raw structs, but use pq_sendint and friends to construct the messages. This was discussed earlier (see http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com); I think there's consensus that 9.3 would be a good time to do that as we changed the XLogRecPtr format anyway. I'll look into doing that.

> Is it worth adding the same mechanism (send back the reply immediately
> if walsender requests a reply) into pg_basebackup and pg_receivexlog?

Good catch. Yes, they should be taught about this too. I'll look into doing that too.

- Heikki
On 15.10.2012 13:13, Heikki Linnakangas wrote:
> On 13.10.2012 19:35, Fujii Masao wrote:
>> ISTM you need to update the protocol.sgml because you added
>> the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage.
>
> Oh, I didn't remember that we've documented the specific structs that we
> pass around. It's quite bogus anyway to explain the messages the way we
> do currently, as they are actually dependent on the underlying
> architecture's endianness and padding. I think we should refactor the
> protocol to not transmit raw structs, but use pq_sendint and friends to
> construct the messages. This was discussed earlier (see
> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com);
> I think there's consensus that 9.3 would be a good time to do that as we
> changed the XLogRecPtr format anyway.

This is what I came up with. The replication protocol is now architecture-independent. The WAL format itself is still architecture-dependent, of course, but this is useful if you want to e.g. use pg_receivexlog to back up a server that runs on a different platform.

I chose the int64 format to transmit timestamps, even when compiled with --disable-integer-datetimes.

Please review if you have the time.

- Heikki
Attachment
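For reference, constructing such a message field by field with pq_sendint and friends might look like the sketch below (walsender keepalive; it assumes a 64-bit send helper pq_sendint64 and an accessor that returns the current timestamp as an int64, and the variable names are illustrative). Built this way, the wire format no longer depends on the sender's endianness or struct padding:

    resetStringInfo(&output_message);
    pq_sendbyte(&output_message, 'k');                  /* keepalive message type */
    pq_sendint64(&output_message, sentPtr);             /* current end of WAL */
    pq_sendint64(&output_message, GetCurrentIntegerTimestamp()); /* send time */
    pq_sendbyte(&output_message, requestReply ? 1 : 0); /* reply requested flag */
    pq_putmessage_noblock('d', output_message.data, output_message.len);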
On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > On 15.10.2012 13:13, Heikki Linnakangas wrote: >> >> On 13.10.2012 19:35, Fujii Masao wrote: >>> >>> ISTM you need to update the protocol.sgml because you added >>> the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage. >> >> >> Oh, I didn't remember that we've documented the specific structs that we >> pass around. It's quite bogus anyway to explain the messages the way we >> do currently, as they are actually dependent on the underlying >> architecture's endianess and padding. I think we should refactor the >> protocol to not transmit raw structs, but use pq_sentint and friends to >> construct the messages. This was discussed earlier (see >> >> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >> I think there's consensus that 9.3 would be a good time to do that as we >> changed the XLogRecPtr format anyway. > > > This is what I came up with. The replication protocol is now > architecture-independent. The WAL format itself is still > architecture-independent, of course, but this is useful if you want to e.g > use pg_receivexlog to back up a server that runs on a different platform. > > I chose the int64 format to transmit timestamps, even when compiled with > --disable-integer-datetimes. > > Please review if you have the time.. Thanks for the patch! When I ran pg_receivexlog, I encountered the following error. $ pg_receivexlog -D hoge pg_receivexlog: unexpected termination of replication stream: ERROR: no data left in message pg_basebackup -X stream caused the same error. $ pg_basebackup -D hoge -X stream -c fast pg_basebackup: could not send feedback packet: no COPY in progress pg_basebackup: child process exited with error 1 In walreceiver.c, tmpbuf is allocated for every XLogWalRcvProcessMsg() call. It should be allocated just once and continue to be used till end, to reduce palloc overhead? + hdrlen = sizeof(int64) + sizeof(int64) + sizeof(int64); + hdrlen = sizeof(int64) + sizeof(int64) + sizeof(char); These should be macro, to avoid calculation overhead? + /* Construct the the message and send it. */ + resetStringInfo(&reply_message); + pq_sendbyte(&reply_message, 'h'); + pq_sendint(&reply_message, xmin, 4); + pq_sendint(&reply_message, nextEpoch, 4); + walrcv_send(reply_message.data, reply_message.len); You seem to have forgotten to send the sendTime. Regards, -- Fujii Masao
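For the record, the feedback construction with the send time included might look like this (a sketch against the patch above, not committed code; the placement of sendTime right after the message type is an assumption, and pq_sendint64 is assumed as in the patch):

    resetStringInfo(&reply_message);
    pq_sendbyte(&reply_message, 'h');
    pq_sendint64(&reply_message, GetCurrentIntegerTimestamp()); /* sendTime, previously missing */
    pq_sendint(&reply_message, xmin, 4);
    pq_sendint(&reply_message, nextEpoch, 4);
    walrcv_send(reply_message.data, reply_message.len);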
On 15.10.2012 19:31, Fujii Masao wrote: > On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas > <hlinnakangas@vmware.com> wrote: >> On 15.10.2012 13:13, Heikki Linnakangas wrote: >>> >>> Oh, I didn't remember that we've documented the specific structs that we >>> pass around. It's quite bogus anyway to explain the messages the way we >>> do currently, as they are actually dependent on the underlying >>> architecture's endianess and padding. I think we should refactor the >>> protocol to not transmit raw structs, but use pq_sentint and friends to >>> construct the messages. This was discussed earlier (see >>> >>> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >>> I think there's consensus that 9.3 would be a good time to do that as we >>> changed the XLogRecPtr format anyway. >> >> >> This is what I came up with. The replication protocol is now >> architecture-independent. The WAL format itself is still >> architecture-independent, of course, but this is useful if you want to e.g >> use pg_receivexlog to back up a server that runs on a different platform. >> >> I chose the int64 format to transmit timestamps, even when compiled with >> --disable-integer-datetimes. >> >> Please review if you have the time.. > > Thanks for the patch! > > When I ran pg_receivexlog, I encountered the following error. Yeah, clearly I didn't test this near enough... I fixed the bugs you bumped into, new version attached. > + hdrlen = sizeof(int64) + sizeof(int64) + sizeof(int64); > + hdrlen = sizeof(int64) + sizeof(int64) + sizeof(char); > > These should be macro, to avoid calculation overhead? The compiler will calculate this at compilation time, it's going to be a constant at runtime. - Heikki
Attachment
> On Monday, October 15, 2012 3:43 PM Heikki Linnakangas wrote: > On 13.10.2012 19:35, Fujii Masao wrote: > > On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas > > <hlinnakangas@vmware.com> wrote: > >> Ok, thanks. Committed. > > > > I found one typo. The attached patch fixes that typo. > > Thanks, fixed. > > > ISTM you need to update the protocol.sgml because you added > > the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage. > > > Is it worth adding the same mechanism (send back the reply immediately > > if walsender request a reply) into pg_basebackup and pg_receivexlog? > > Good catch. Yes, they should be taught about this too. I'll look into > doing that too. If you have not started and you don't have objection, I can pickup this to complete it. For both (pg_basebackup and pg_receivexlog), we need to get a timeout parameter from user in command line, as there is no conf file here. New Option can be -t (parameter name can be recvtimeout). The main changes will be in function ReceiveXlogStream(), it is a common function for both Pg_basebackup and pg_receivexlog. Handling will be done in same way as we have done in walreceiver. Suggestions/Comments? With Regards, Amit Kapila.
On Wednesday, October 17, 2012 5:16 PM Amit Kapila wrote: > > On Monday, October 15, 2012 3:43 PM Heikki Linnakangas wrote: > > On 13.10.2012 19:35, Fujii Masao wrote: > > > On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas > > > <hlinnakangas@vmware.com> wrote: > > >> Ok, thanks. Committed. > > > > > > I found one typo. The attached patch fixes that typo. > > > > Thanks, fixed. > > > > > ISTM you need to update the protocol.sgml because you added > > > the field 'replyRequested' to WalSndrMessage and > StandbyReplyMessage. > > > > > > > Is it worth adding the same mechanism (send back the reply > immediately > > > if walsender request a reply) into pg_basebackup and pg_receivexlog? > > > > Good catch. Yes, they should be taught about this too. I'll look into > > doing that too. > > If you have not started and you don't have objection, I can pickup this > to > complete it. > > For both (pg_basebackup and pg_receivexlog), we need to get a timeout > parameter from user in command line, as > there is no conf file here. New Option can be -t (parameter name can be > recvtimeout). > > The main changes will be in function ReceiveXlogStream(), it is a common > function for both > Pg_basebackup and pg_receivexlog. Handling will be done in same way as > we > have done in walreceiver. Some more functions where it receives the data files also need similar handling in pg_basebackup. With Regards, Amit Kapila.
On Wed, Oct 17, 2012 at 8:46 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> On Monday, October 15, 2012 3:43 PM Heikki Linnakangas wrote: >> On 13.10.2012 19:35, Fujii Masao wrote: >> > On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas >> > <hlinnakangas@vmware.com> wrote: >> >> Ok, thanks. Committed. >> > >> > I found one typo. The attached patch fixes that typo. >> >> Thanks, fixed. >> >> > ISTM you need to update the protocol.sgml because you added >> > the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage. > > >> >> > Is it worth adding the same mechanism (send back the reply immediately >> > if walsender request a reply) into pg_basebackup and pg_receivexlog? >> >> Good catch. Yes, they should be taught about this too. I'll look into >> doing that too. > > If you have not started and you don't have objection, I can pickup this to > complete it. > > For both (pg_basebackup and pg_receivexlog), we need to get a timeout > parameter from user in command line, as > there is no conf file here. New Option can be -t (parameter name can be > recvtimeout). > > The main changes will be in function ReceiveXlogStream(), it is a common > function for both > Pg_basebackup and pg_receivexlog. Handling will be done in same way as we > have done in walreceiver. > > Suggestions/Comments? Before implementing the timeout parameter, I think that it's better to change both pg_basebackup background process and pg_receivexlog so that they send back the reply message immediately when they receive the keepalive message requesting the reply. Currently, they always ignore such keepalive message, so status interval parameter (-s) in them always must be set to the value less than replication timeout. We can avoid this troublesome parameter setting by introducing the same logic of walreceiver into both pg_basebackup background process and pg_receivexlog. Regards, -- Fujii Masao
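[As an illustration of the behaviour being suggested, a hedged sketch of how ReceiveXlogStream() could answer a keepalive that requests a reply. The message layout (walEnd and sendTime as int64 followed by a one-byte replyRequested flag) follows the new protocol discussed above; r and blockpos come from the surrounding receive loop, and sendFeedback() and localGetCurrentTimestamp() are assumed helper names rather than quotes from the actual patch.]

    if (copybuf[0] == 'k')
    {
        int    pos = 1;             /* skip the message type byte */
        bool   replyRequested;

        pos += 8;                   /* walEnd */
        pos += 8;                   /* sendTime */
        if (r < pos + 1)
        {
            fprintf(stderr, _("%s: streaming header too small: %d\n"),
                    progname, r);
            goto error;
        }
        replyRequested = copybuf[pos];

        /* Reply immediately when the server asks for it. */
        if (replyRequested)
        {
            if (!sendFeedback(conn, blockpos, localGetCurrentTimestamp(), false))
                goto error;
        }
    }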
On Tue, Oct 16, 2012 at 9:31 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > On 15.10.2012 19:31, Fujii Masao wrote: >> >> On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas >> <hlinnakangas@vmware.com> wrote: >>> >>> On 15.10.2012 13:13, Heikki Linnakangas wrote: >>>> >>>> >>>> Oh, I didn't remember that we've documented the specific structs that we >>>> pass around. It's quite bogus anyway to explain the messages the way we >>>> do currently, as they are actually dependent on the underlying >>>> architecture's endianess and padding. I think we should refactor the >>>> protocol to not transmit raw structs, but use pq_sentint and friends to >>>> construct the messages. This was discussed earlier (see >>>> >>>> >>>> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >>>> I think there's consensus that 9.3 would be a good time to do that as we >>>> changed the XLogRecPtr format anyway. >>> >>> >>> >>> This is what I came up with. The replication protocol is now >>> architecture-independent. The WAL format itself is still >>> architecture-independent, of course, but this is useful if you want to >>> e.g >>> use pg_receivexlog to back up a server that runs on a different platform. >>> >>> I chose the int64 format to transmit timestamps, even when compiled with >>> --disable-integer-datetimes. >>> >>> Please review if you have the time.. >> >> >> Thanks for the patch! >> >> When I ran pg_receivexlog, I encountered the following error. > > > Yeah, clearly I didn't test this near enough... > > I fixed the bugs you bumped into, new version attached. Thanks for updating the patch! We should remove the check of integer_datetime by pg_basebackup background process and pg_receivexlog? Currently, they always check it, and then if its setting value is not the same between a client and server, they fail. Thanks to the patch, ISTM this check is no longer required. + pq_sendint64(&reply_message, GetCurrentIntegerTimestamp()); In XLogWalRcvSendReply() and XLogWalRcvSendHSFeedback(), GetCurrentTimestamp() is called twice. I think that we can skip the latter call if integer-datetime is enabled because the return value of GetCurrentTimestamp() and GetCurrentIntegerTimestamp() is in the same format. It's worth reducing the number of GetCurrentTimestamp() calls, I think. elog(DEBUG2, "sending write %X/%X flush %X/%X apply %X/%X", - (uint32) (reply_message.write >> 32), (uint32) reply_message.write, - (uint32) (reply_message.flush >> 32), (uint32) reply_message.flush, - (uint32) (reply_message.apply >> 32), (uint32) reply_message.apply); + (uint32) (writePtr >> 32), (uint32) writePtr, + (uint32) (flushPtr >> 32), (uint32) flushPtr, + (uint32) (applyPtr >> 32), (uint32) applyPtr); elog(DEBUG2, "write %X/%X flush %X/%X apply %X/%X", - (uint32) (reply.write >> 32), (uint32) reply.write, - (uint32) (reply.flush >> 32), (uint32) reply.flush, - (uint32) (reply.apply >> 32), (uint32) reply.apply); + (uint32) (writePtr >> 32), (uint32) writePtr, + (uint32) (flushPtr >> 32), (uint32) flushPtr, + (uint32) (applyPtr >> 32), (uint32) applyPtr); Isn't it worth logging not only WAL location but also the replyRequested flag in these debug message? The remaining of the patch looks good to me. >> + hdrlen = sizeof(int64) + sizeof(int64) + >> sizeof(int64); >> + hdrlen = sizeof(int64) + sizeof(int64) + >> sizeof(char); >> >> These should be macro, to avoid calculation overhead? > > > The compiler will calculate this at compilation time, it's going to be a > constant at runtime. 
Yes, you're right. Regards, -- Fujii Masao
On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: On Wed, Oct 17, 2012 at 8:46 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> On Monday, October 15, 2012 3:43 PM Heikki Linnakangas wrote: >> On 13.10.2012 19:35, Fujii Masao wrote: >> > On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas >> > <hlinnakangas@vmware.com> wrote: >> >> Ok, thanks. Committed. >> > >> > I found one typo. The attached patch fixes that typo. >> >> Thanks, fixed. >> >> > ISTM you need to update the protocol.sgml because you added >> > the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage. > > >> >> > Is it worth adding the same mechanism (send back the reply immediately >> > if walsender request a reply) into pg_basebackup and pg_receivexlog? >> >> Good catch. Yes, they should be taught about this too. I'll look into >> doing that too. > > If you have not started and you don't have objection, I can pickup this to > complete it. > > For both (pg_basebackup and pg_receivexlog), we need to get a timeout > parameter from user in command line, as > there is no conf file here. New Option can be -t (parameter name can be > recvtimeout). > > The main changes will be in function ReceiveXlogStream(), it is a common > function for both > Pg_basebackup and pg_receivexlog. Handling will be done in same way as we > have done in walreceiver. > > Suggestions/Comments? >Before implementing the timeout parameter, I think that it's better to change >both pg_basebackup background process and pg_receivexlog so that they >send back the reply message immediately when they receive the keepalive >message requesting the reply. Currently, they always ignore such keepalive >message, so status interval parameter (-s) in them always must be set to >the value less than replication timeout. We can avoid this troublesome >parameter setting by introducing the same logic of walreceiver into both >pg_basebackup background process and pg_receivexlog. Please find the patch attached to address the modification mentioned by you (send immediate reply for keepalive). Both basebackup and pg_receivexlog uses the same function ReceiveXLogStream, so single change for both will address the issue. Now further to this for introducing timeout in pg_basebackup and pg_receivexlog: We can have mechanism similar to wal receiver timeout while streaming the data from server, but same logic can not be usedincase network goes down during getting other database file from server. The reason for the same is to receive the data files PQgetCopyData() is called in synchronous mode, so it keeps waiting forinfinite time till it gets some data. In order to solve this issue, I can think of following options: 1. Making this call also asynchronous (but now sure about impact of this). 2. In function pqWait, instead of passing hard-code value -1 (i.e. infinite wait), we can send some finite time. This timecan be received as command line argument from respective utility and set the same in PGconn structure. In order to have timeout value in PGconn, we can have: a. Add new parameter in PGconn to indicate the receive timeout. b. Use the existing parameter connect_timeout for receive timeout also but this may lead to confusion. 3. Any other better option? Apart from above issue, there is possibility that if during connect time network goes down, then it might hang, becauseconnect_timeout by default will be NULL and connectDBComplete will start waiting inifinitely for connection to becomesuccessful. 
So shall we have a separate command line argument for this also, or is there any other way you would suggest? Suggestions/Comments? With Regards, Amit Kapila.
Attachment
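[To show the shape of option 1 above, here is a hedged sketch of an asynchronous receive loop in the style of walreceiver: PQgetCopyData() is called with async = 1, and select() supplies the finite wait that pqWait currently lacks. The function name and the -3 timeout convention are illustrative assumptions, not part of the attached patch.]

    #include <sys/select.h>
    #include <errno.h>
    #include "libpq-fe.h"

    /* Returns >0 bytes read, -1 end of COPY, -2 error, -3 receive timeout. */
    static int
    CopyDataWithTimeout(PGconn *conn, char **buffer, int timeout_secs)
    {
        for (;;)
        {
            int     rc = PQgetCopyData(conn, buffer, 1);    /* async: never blocks */

            if (rc != 0)
                return rc;                                  /* data, end, or error */

            /* Nothing buffered yet: wait for the socket, but only so long. */
            {
                fd_set          input_mask;
                struct timeval  timeout;
                int             ret;

                FD_ZERO(&input_mask);
                FD_SET(PQsocket(conn), &input_mask);
                timeout.tv_sec = timeout_secs;
                timeout.tv_usec = 0;

                ret = select(PQsocket(conn) + 1, &input_mask, NULL, NULL, &timeout);
                if (ret == 0)
                    return -3;                              /* nothing arrived in time */
                if (ret < 0 && errno != EINTR)
                    return -2;
            }

            if (PQconsumeInput(conn) == 0)
                return -2;                                  /* connection broken */
        }
    }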
Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From: Heikki Linnakangas
On 16.10.2012 15:31, Heikki Linnakangas wrote: > On 15.10.2012 19:31, Fujii Masao wrote: >> On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas >> <hlinnakangas@vmware.com> wrote: >>> On 15.10.2012 13:13, Heikki Linnakangas wrote: >>>> >>>> Oh, I didn't remember that we've documented the specific structs >>>> that we >>>> pass around. It's quite bogus anyway to explain the messages the way we >>>> do currently, as they are actually dependent on the underlying >>>> architecture's endianess and padding. I think we should refactor the >>>> protocol to not transmit raw structs, but use pq_sentint and friends to >>>> construct the messages. This was discussed earlier (see >>>> >>>> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >>>> >>>> I think there's consensus that 9.3 would be a good time to do that >>>> as we changed the XLogRecPtr format anyway. >>> >>> This is what I came up with. The replication protocol is now >>> architecture-independent. The WAL format itself is still >>> architecture-independent, of course, but this is useful if you want >>> to e.g >>> use pg_receivexlog to back up a server that runs on a different >>> platform. >>> >>> I chose the int64 format to transmit timestamps, even when compiled with >>> --disable-integer-datetimes. >>> >>> Please review if you have the time.. >> >> Thanks for the patch! >> >> When I ran pg_receivexlog, I encountered the following error. > > Yeah, clearly I didn't test this near enough... > > I fixed the bugs you bumped into, new version attached. Committed this now, after fixing a few more bugs that came up during testing. Next, I'll take a look at the patch you sent for adding timeouts to pg_basebackup and pg_receivexlog (http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382853BBED@szxeml509-mbs) - Heikki
Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From: Heikki Linnakangas
On 19.10.2012 14:42, Amit kapila wrote: > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >> Before implementing the timeout parameter, I think that it's better to change >> both pg_basebackup background process and pg_receivexlog so that they >> send back the reply message immediately when they receive the keepalive >> message requesting the reply. Currently, they always ignore such keepalive >> message, so status interval parameter (-s) in them always must be set to >> the value less than replication timeout. We can avoid this troublesome >> parameter setting by introducing the same logic of walreceiver into both >> pg_basebackup background process and pg_receivexlog. > > Please find the patch attached to address the modification mentioned by you (send immediate reply for keepalive). > Both basebackup and pg_receivexlog uses the same function ReceiveXLogStream, so single change for both will address theissue. Thanks, committed this one after shuffling it around the changes I committed yesterday. I also updated the docs to not claim that -s option is required to avoid timeout disconnects anymore. - Heikki
On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: > On 19.10.2012 14:42, Amit kapila wrote: > > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: > >> Before implementing the timeout parameter, I think that it's better > to change > >> both pg_basebackup background process and pg_receivexlog so that they > >> send back the reply message immediately when they receive the > keepalive > >> message requesting the reply. Currently, they always ignore such > keepalive > >> message, so status interval parameter (-s) in them always must be set > to > >> the value less than replication timeout. We can avoid this > troublesome > >> parameter setting by introducing the same logic of walreceiver into > both > >> pg_basebackup background process and pg_receivexlog. > > > > Please find the patch attached to address the modification mentioned > by you (send immediate reply for keepalive). > > Both basebackup and pg_receivexlog uses the same function > ReceiveXLogStream, so single change for both will address the issue. > > Thanks, committed this one after shuffling it around the changes I > committed yesterday. I also updated the docs to not claim that -s option > is required to avoid timeout disconnects anymore. Thank you. However I think still the issue will not be completely solved. pg_basebackup/pg_receivexlog can still take long time to detect network break as they don't have timeout concept. To do that I have sent one proposal which is mentioned at end of mail chain: http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C3828 53BBED@szxeml509-mbs Do you think there is any need to introduce such mechanism in pg_basebackup/pg_receivexlog? With Regards, Amit Kapila.
On Thu, Nov 8, 2012 at 2:22 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > On 16.10.2012 15:31, Heikki Linnakangas wrote: >> >> On 15.10.2012 19:31, Fujii Masao wrote: >>> >>> On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas >>> <hlinnakangas@vmware.com> wrote: >>>> >>>> On 15.10.2012 13:13, Heikki Linnakangas wrote: >>>>> >>>>> >>>>> Oh, I didn't remember that we've documented the specific structs >>>>> that we >>>>> pass around. It's quite bogus anyway to explain the messages the way we >>>>> do currently, as they are actually dependent on the underlying >>>>> architecture's endianess and padding. I think we should refactor the >>>>> protocol to not transmit raw structs, but use pq_sentint and friends to >>>>> construct the messages. This was discussed earlier (see >>>>> >>>>> >>>>> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >>>>> >>>>> I think there's consensus that 9.3 would be a good time to do that >>>>> as we changed the XLogRecPtr format anyway. >>>> >>>> >>>> This is what I came up with. The replication protocol is now >>>> architecture-independent. The WAL format itself is still >>>> architecture-independent, of course, but this is useful if you want >>>> to e.g >>>> use pg_receivexlog to back up a server that runs on a different >>>> platform. >>>> >>>> I chose the int64 format to transmit timestamps, even when compiled with >>>> --disable-integer-datetimes. >>>> >>>> Please review if you have the time.. >>> >>> >>> Thanks for the patch! >>> >>> When I ran pg_receivexlog, I encountered the following error. >> >> >> Yeah, clearly I didn't test this near enough... >> >> I fixed the bugs you bumped into, new version attached. > > > Committed this now, after fixing a few more bugs that came up during > testing. As I suggested upthread, pg_basebackup and pg_receivexlog no longer need to check integer_datetimes before establishing the connection, thanks to this commit. If this is right, the attached patch should be applied. The patch just removes the check of integer_datetimes by pg_basebackup and pg_receivexlog. Regards, -- Fujii Masao
Attachment
On Fri, Nov 9, 2012 at 1:40 AM, Fujii Masao <masao.fujii@gmail.com> wrote: > On Thu, Nov 8, 2012 at 2:22 AM, Heikki Linnakangas > <hlinnakangas@vmware.com> wrote: >> On 16.10.2012 15:31, Heikki Linnakangas wrote: >>> >>> On 15.10.2012 19:31, Fujii Masao wrote: >>>> >>>> On Mon, Oct 15, 2012 at 11:27 PM, Heikki Linnakangas >>>> <hlinnakangas@vmware.com> wrote: >>>>> >>>>> On 15.10.2012 13:13, Heikki Linnakangas wrote: >>>>>> >>>>>> >>>>>> Oh, I didn't remember that we've documented the specific structs >>>>>> that we >>>>>> pass around. It's quite bogus anyway to explain the messages the way we >>>>>> do currently, as they are actually dependent on the underlying >>>>>> architecture's endianess and padding. I think we should refactor the >>>>>> protocol to not transmit raw structs, but use pq_sentint and friends to >>>>>> construct the messages. This was discussed earlier (see >>>>>> >>>>>> >>>>>> http://archives.postgresql.org/message-id/4FE2279C.2070506@enterprisedb.com), >>>>>> >>>>>> I think there's consensus that 9.3 would be a good time to do that >>>>>> as we changed the XLogRecPtr format anyway. >>>>> >>>>> >>>>> This is what I came up with. The replication protocol is now >>>>> architecture-independent. The WAL format itself is still >>>>> architecture-independent, of course, but this is useful if you want >>>>> to e.g >>>>> use pg_receivexlog to back up a server that runs on a different >>>>> platform. >>>>> >>>>> I chose the int64 format to transmit timestamps, even when compiled with >>>>> --disable-integer-datetimes. >>>>> >>>>> Please review if you have the time.. >>>> >>>> >>>> Thanks for the patch! >>>> >>>> When I ran pg_receivexlog, I encountered the following error. >>> >>> >>> Yeah, clearly I didn't test this near enough... >>> >>> I fixed the bugs you bumped into, new version attached. >> >> >> Committed this now, after fixing a few more bugs that came up during >> testing. > > As I suggested upthread, pg_basebackup and pg_receivexlog no longer > need to check integer_datetimes before establishing the connection, > thanks to this commit. If this is right, the attached patch should be applied. > The patch just removes the check of integer_datetimes by pg_basebackup > and pg_receivexlog. Another comment that I made upthread is: -------- In XLogWalRcvSendReply() and XLogWalRcvSendHSFeedback(), GetCurrentTimestamp() is called twice. I think that we can skip the latter call if integer-datetime is enabled because the return value of GetCurrentTimestamp() and GetCurrentIntegerTimestamp() is in the same format. It's worth reducing the number of GetCurrentTimestamp() calls, I think. -------- Attached patch removes redundant GetCurrentTimestamp() call from XLogWalRcvSendReply() and XLogWalRcvSendHSFeedback(), if --enable-integer-datetimes. Regards, -- Fujii Masao
Attachment
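[The redundancy being removed can be illustrated with a hedged fragment: with integer datetimes, TimestampTz is already an int64 in the same format as GetCurrentIntegerTimestamp() returns, so the value put on the wire can be reused for the local bookkeeping. The HAVE_INT64_TIMESTAMP guard and the variable names are assumptions for illustration, not quotes from the attached patch.]

    int64   now = GetCurrentIntegerTimestamp();

    pq_sendint64(&reply_message, now);      /* timestamp sent on the wire */

    #ifdef HAVE_INT64_TIMESTAMP
        sendTime = (TimestampTz) now;       /* same representation: no second call needed */
    #else
        sendTime = GetCurrentTimestamp();   /* float timestamps still need their own call */
    #endif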
On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: >> On 19.10.2012 14:42, Amit kapila wrote: >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >> >> Before implementing the timeout parameter, I think that it's better >> to change >> >> both pg_basebackup background process and pg_receivexlog so that they >> >> send back the reply message immediately when they receive the >> keepalive >> >> message requesting the reply. Currently, they always ignore such >> keepalive >> >> message, so status interval parameter (-s) in them always must be set >> to >> >> the value less than replication timeout. We can avoid this >> troublesome >> >> parameter setting by introducing the same logic of walreceiver into >> both >> >> pg_basebackup background process and pg_receivexlog. >> > >> > Please find the patch attached to address the modification mentioned >> by you (send immediate reply for keepalive). >> > Both basebackup and pg_receivexlog uses the same function >> ReceiveXLogStream, so single change for both will address the issue. >> >> Thanks, committed this one after shuffling it around the changes I >> committed yesterday. I also updated the docs to not claim that -s option >> is required to avoid timeout disconnects anymore. > > Thank you. > However I think still the issue will not be completely solved. > pg_basebackup/pg_receivexlog can still take long time to > detect network break as they don't have timeout concept. To do that I have > sent one proposal which is mentioned at end of mail chain: > http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C3828 > 53BBED@szxeml509-mbs > > Do you think there is any need to introduce such mechanism in > pg_basebackup/pg_receivexlog? Are you planning to introduce the timeout mechanism in pg_basebackup main process? Or background process? It's useful to implement both. BTW, IIRC the walsender has no timeout mechanism during sending backup data to pg_basebackup. So it's also useful to implement the timeout mechanism for the walsender during backup. Regards, -- Fujii Masao
On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: > On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> > wrote: > > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: > >> On 19.10.2012 14:42, Amit kapila wrote: > >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: > >> >> Before implementing the timeout parameter, I think that it's > better > >> to change > >> >> both pg_basebackup background process and pg_receivexlog so that > they > >> >> send back the reply message immediately when they receive the > >> keepalive > >> >> message requesting the reply. Currently, they always ignore such > >> keepalive > >> >> message, so status interval parameter (-s) in them always must be > set > >> to > >> >> the value less than replication timeout. We can avoid this > >> troublesome > >> >> parameter setting by introducing the same logic of walreceiver > into > >> both > >> >> pg_basebackup background process and pg_receivexlog. > >> > > >> > Please find the patch attached to address the modification > mentioned > >> by you (send immediate reply for keepalive). > >> > Both basebackup and pg_receivexlog uses the same function > >> ReceiveXLogStream, so single change for both will address the issue. > >> > >> Thanks, committed this one after shuffling it around the changes I > >> committed yesterday. I also updated the docs to not claim that -s > option > >> is required to avoid timeout disconnects anymore. > > > > Thank you. > > However I think still the issue will not be completely solved. > > pg_basebackup/pg_receivexlog can still take long time to > > detect network break as they don't have timeout concept. To do that I > have > > sent one proposal which is mentioned at end of mail chain: > > http://archives.postgresql.org/message- > id/6C0B27F7206C9E4CA54AE035729E9C3828 > > 53BBED@szxeml509-mbs > > > > Do you think there is any need to introduce such mechanism in > > pg_basebackup/pg_receivexlog? > > Are you planning to introduce the timeout mechanism in pg_basebackup > main process? Or background process? It's useful to implement both. By background process, you mean ReceiveXlogStream? For both. I think for background process, it can be done in a way similar to what we have done for walreceiver. But I have some doubts for how to do for main process: Logic similar to walreceiver can not be used incase network goes down during getting other database file from server. The reason for the same is to receive the data files PQgetCopyData() is called in synchronous mode, so it keeps waiting for infinite time till it gets some data. In order to solve this issue, I can think of following options: 1. Making this call also asynchronous (but now sure about impact of this). 2. In function pqWait, instead of passing hard-code value -1 (i.e. infinite wait), we can send some finite time. This time can be received as command line argument from respective utility and set the same in PGconn structure. In order to have timeout value in PGconn,we can have: a. Add new parameter in PGconn to indicate the receive timeout. b. Use the existing parameterconnect_timeout for receive timeout also but this may lead to confusion. 3. Any other better option? Apart from above issue, there is possibility that if during connect time network goes down, then it might hang, because connect_timeout by default will be NULL and connectDBComplete will start waiting inifinitely for connection to become successful. 
So shall we have a separate command line argument for this also, or is there any other way you would suggest? > BTW, IIRC the walsender has no timeout mechanism during sending > backup data to pg_basebackup. So it's also useful to implement the > timeout mechanism for the walsender during backup. Yes, it's useful, but for walsender the main problem is that it uses a blocking send call to send the data. I have tried using tcp_keepalive settings, but the send call doesn't come out in case of a network break. The only way I could get it out is: change the corresponding file /proc/sys/net/ipv4/tcp_retries2 by using the command echo "8" > /proc/sys/net/ipv4/tcp_retries2 As per the recommendation, its value should be at least 8 (equivalent to 100 sec). Do you have any idea how it can be achieved? With Regards, Amit Kapila.
On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: >> On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> >> wrote: >> > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: >> >> On 19.10.2012 14:42, Amit kapila wrote: >> >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >> >> >> Before implementing the timeout parameter, I think that it's >> better >> >> to change >> >> >> both pg_basebackup background process and pg_receivexlog so that >> they >> >> >> send back the reply message immediately when they receive the >> >> keepalive >> >> >> message requesting the reply. Currently, they always ignore such >> >> keepalive >> >> >> message, so status interval parameter (-s) in them always must be >> set >> >> to >> >> >> the value less than replication timeout. We can avoid this >> >> troublesome >> >> >> parameter setting by introducing the same logic of walreceiver >> into >> >> both >> >> >> pg_basebackup background process and pg_receivexlog. >> >> > >> >> > Please find the patch attached to address the modification >> mentioned >> >> by you (send immediate reply for keepalive). >> >> > Both basebackup and pg_receivexlog uses the same function >> >> ReceiveXLogStream, so single change for both will address the issue. >> >> >> >> Thanks, committed this one after shuffling it around the changes I >> >> committed yesterday. I also updated the docs to not claim that -s >> option >> >> is required to avoid timeout disconnects anymore. >> > >> > Thank you. >> > However I think still the issue will not be completely solved. >> > pg_basebackup/pg_receivexlog can still take long time to >> > detect network break as they don't have timeout concept. To do that I >> have >> > sent one proposal which is mentioned at end of mail chain: >> > http://archives.postgresql.org/message- >> id/6C0B27F7206C9E4CA54AE035729E9C3828 >> > 53BBED@szxeml509-mbs >> > >> > Do you think there is any need to introduce such mechanism in >> > pg_basebackup/pg_receivexlog? >> >> Are you planning to introduce the timeout mechanism in pg_basebackup >> main process? Or background process? It's useful to implement both. > > By background process, you mean ReceiveXlogStream? > For both. > > I think for background process, it can be done in a way similar to what we > have done for walreceiver. Yes. > But I have some doubts for how to do for main process: > > Logic similar to walreceiver can not be used incase network goes down during > getting other database file from server. > The reason for the same is to receive the data files PQgetCopyData() is > called in synchronous mode, so it keeps waiting for infinite time till it > gets some data. > In order to solve this issue, I can think of following options: > 1. Making this call also asynchronous (but now sure about impact of this). +1 Walreceiver already calls PQgetCopyData() asynchronously. ISTM you can solve the issue in the similar way to walreceiver's. > 2. In function pqWait, instead of passing hard-code value -1 (i.e. infinite > wait), we can send some finite time. This time can be received as command > line argument > from respective utility and set the same in PGconn structure. > In order to have timeout value in PGconn, we can have: > a. Add new parameter in PGconn to indicate the receive timeout. > b. Use the existing parameter connect_timeout for receive timeout > also but this may lead to confusion. > 3. Any other better option? 
> > Apart from above issue, there is possibility that if during connect time > network goes down, then it might hang, because connect_timeout by default > will be NULL and connectDBComplete will start waiting inifinitely for > connection to become successful. > So shall we have command line argument separately for this also or any other > way as you suugest. Yes, I think that we should add something like --conninfo option to pg_basebackup and pg_receivexlog. We can easily set not only connect_timeout but also sslmode, application_name, ... by using such option accepting conninfo string. >> BTW, IIRC the walsender has no timeout mechanism during sending >> backup data to pg_basebackup. So it's also useful to implement the >> timeout mechanism for the walsender during backup. > > Yes, its useful, but for walsender the main problem is that it uses blocking > send call to send the data. > I have tried using tcp_keepalive settings, but the send call doesn't comeout > incase of network break. > The only way I could get it out is: > change in the corresponding file /proc/sys/net/ipv4/tcp_retries2 by using > the command > echo "8" > /proc/sys/net/ipv4/tcp_retries2 > As per recommendation, its value should be at-least 8 (equivalent to 100 > sec) > > Do you have any idea, how it can be achieved? What about using pq_putmessage_noblock()? Regards, -- Fujii Masao
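[A hedged sketch of the direction being suggested for the walsender side: queue each CopyData message with pq_putmessage_noblock() and then flush in a loop that can give up after a timeout. The last_progress/send_timeout bookkeeping is an illustrative assumption, and a real implementation would presumably wait on the socket or latch instead of sleeping.]

    /* Instead of the blocking pq_putmessage('d', msgbuf, len): */
    pq_putmessage_noblock('d', msgbuf, len);        /* only copies into the output buffer */

    while (pq_is_send_pending())
    {
        if (pq_flush_if_writable() != 0)
            ereport(ERROR,
                    (errmsg("could not send data to client")));

        if (TimestampDifferenceExceeds(last_progress, GetCurrentTimestamp(),
                                       send_timeout * 1000))
            ereport(ERROR,
                    (errmsg("terminating connection because of send timeout")));

        pg_usleep(100000L);                         /* 100 ms before retrying */
    }
    last_progress = GetCurrentTimestamp();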
On Monday, November 12, 2012 8:23 PM Fujii Masao wrote: On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: >> On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> >> wrote: >> > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: >> >> On 19.10.2012 14:42, Amit kapila wrote: >> >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >> >> >> Before implementing the timeout parameter, I think that it's >> better >> >> to change >> >> >> both pg_basebackup background process and pg_receivexlog so that >>> BTW, IIRC the walsender has no timeout mechanism during sending >>> backup data to pg_basebackup. So it's also useful to implement the >> timeout mechanism for the walsender during backup. > >> Yes, its useful, but for walsender the main problem is that it uses blocking >> send call to send the data. >> I have tried using tcp_keepalive settings, but the send call doesn't comeout >> incase of network break. >> The only way I could get it out is: >> change in the corresponding file /proc/sys/net/ipv4/tcp_retries2 by using >> the command > echo "8" > /proc/sys/net/ipv4/tcp_retries2 >> As per recommendation, its value should be at-least 8 (equivalent to 100 >> sec) > >> Do you have any idea, how it can be achieved? > What about using pq_putmessage_noblock()? I will try this, but do you know why at first place in code the blocking mode is used to send files? I am asking as I am little scared that it should not break any design which was initially thought of while making send offiles as blocking. With Regards, Amit Kapila.
On Tue, Nov 13, 2012 at 1:06 PM, Amit kapila <amit.kapila@huawei.com> wrote: > On Monday, November 12, 2012 8:23 PM Fujii Masao wrote: > On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: >> On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: >>> On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> >>> wrote: >>> > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: >>> >> On 19.10.2012 14:42, Amit kapila wrote: >>> >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >>> >> >> Before implementing the timeout parameter, I think that it's >>> better >>> >> to change >>> >> >> both pg_basebackup background process and pg_receivexlog so that > >>>> BTW, IIRC the walsender has no timeout mechanism during sending >>>> backup data to pg_basebackup. So it's also useful to implement the >>> timeout mechanism for the walsender during backup. >> >>> Yes, its useful, but for walsender the main problem is that it uses blocking >>> send call to send the data. >>> I have tried using tcp_keepalive settings, but the send call doesn't comeout >>> incase of network break. >>> The only way I could get it out is: >>> change in the corresponding file /proc/sys/net/ipv4/tcp_retries2 by using >>> the command >> echo "8" > /proc/sys/net/ipv4/tcp_retries2 >>> As per recommendation, its value should be at-least 8 (equivalent to 100 >>> sec) >> >>> Do you have any idea, how it can be achieved? > >> What about using pq_putmessage_noblock()? > > I will try this, but do you know why at first place in code the blocking mode is used to send files? > I am asking as I am little scared that it should not break any design which was initially thought of while making sendof files as blocking. I'm afraid I don't know why. I guess that using non-blocking mode complicates the code, so in the first version of pg_basebackup the blocking mode was adopted. Regards, -- Fujii Masao
On Monday, November 12, 2012 8:23 PM Fujii Masao wrote: On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: >> On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> >> wrote: >> > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: >> >> On 19.10.2012 14:42, Amit kapila wrote: >> >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: >>> Are you planning to introduce the timeout mechanism in pg_basebackup >>> main process? Or background process? It's useful to implement both. > >> By background process, you mean ReceiveXlogStream? >> For both. > >> I think for background process, it can be done in a way similar to what we >> have done for walreceiver. > Yes. >> But I have some doubts for how to do for main process: > >> Logic similar to walreceiver can not be used incase network goes down during >> getting other database file from server. >> The reason for the same is to receive the data files PQgetCopyData() is >> called in synchronous mode, so it keeps waiting for infinite time till it >> gets some data. >> In order to solve this issue, I can think of following options: >> 1. Making this call also asynchronous (but now sure about impact of this). > +1 > Walreceiver already calls PQgetCopyData() asynchronously. ISTM you can > solve the issue in the similar way to walreceiver's. >> 2. In function pqWait, instead of passing hard-code value -1 (i.e. infinite >> wait), we can send some finite time. This time can be received as command >> line argument >> from respective utility and set the same in PGconn structure. > Yes, I think that we should add something like --conninfo option to > pg_basebackup > and pg_receivexlog. We can easily set not only connect_timeout but also sslmode, > application_name, ... by using such option accepting conninfo string. I have prepared an attached patch to make pg_basebackup and pg_receivexlog as non-blocking. To do so I have to add new command line parameters in pg_basebackup and pg_receivexlog for now added two more command line arguments a. "-r" for pg_basebackup and pg_receivexlog to take receive time-out value. Default value for this parameter is60 sec. b. "-t" for pg_basebackup and pg_receivexlog to take initial connection timeout value. Default value is infinitewait. We can change to accept --conninfo as well. I feel apart from above, remaining problem is for function call PQgetResult() 1. Wherever query is getting sent from BaseBackup, it calls the function PQgetResult to receive the result of query. As PQgetResult() is blocking function (it calls pqWait which can hang), so if network is down before sending the queryitself, then there will not be any result, so it will keep hanging in PQgetResult . IMO, it can be solved in below ways: a. Create one corresponding non-blocking function. But this function is being called from inside some of the other libpq function (PQexec->PQexecFinish->PQgetResult). So it can be little tricky to solve this way. b. Add the receive_timeout variable in PGconn structure and use it in pqWait for timeout whenever it is set. c. any other better way? >> BTW, IIRC the walsender has no timeout mechanism during sending >> backup data to pg_basebackup. So it's also useful to implement the >> timeout mechanism for the walsender during backup. > >What about using pq_putmessage_noblock()? I think may be some more functions also needs to be made as noblock. I am still evaluating. 
I will upload the attached patch to the upcoming CommitFest if you don't have any objections. More suggestions/comments? With Regards, Amit Kapila.
Attachment
Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From: Amit Kapila
On Thursday, November 15, 2012 7:29 PM Amit kapila wrote: > On Monday, November 12, 2012 8:23 PM Fujii Masao wrote: > On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <amit.kapila@huawei.com> > wrote: > > On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: > >> On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <amit.kapila@huawei.com> > >> wrote: > >> > On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: > >> >> On 19.10.2012 14:42, Amit kapila wrote: > >> >> > On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: > > >>> Are you planning to introduce the timeout mechanism in pg_basebackup > I feel apart from above, remaining problem is for function call > PQgetResult() 1. Wherever query is getting sent from BaseBackup, it > calls the function PQgetResult to receive the result of query. > As PQgetResult() is blocking function (it calls pqWait which can > hang), so if network is down before sending the query itself, > then there will not be any result, so it will keep hanging in > PQgetResult . > IMO, it can be solved in below ways: > a. Create one corresponding non-blocking function. But this function is > being called from inside some of the > other libpq function (PQexec->PQexecFinish->PQgetResult). So it can > be little tricky to solve this way. > b. Add the receive_timeout variable in PGconn structure and use it in > pqWait for timeout whenever it is set. > c. any other better way? > > > >> BTW, IIRC the walsender has no timeout mechanism during sending > >> backup data to pg_basebackup. So it's also useful to implement the > >> timeout mechanism for the walsender during backup. > > > > >What about using pq_putmessage_noblock()? > > I think may be some more functions also needs to be made as noblock. I > am still evaluating. Done the analysis and seems that for below API's also, we need equivalent noblock, otherwise same problem can happen as they are also used in the flow. a. pq_endmessage b. EndCommand c. pq_puttextmessage d. pq_putemptymessage e. ReadyForQuery - For this, because now walsender and normal backend are same. f. ReadCommand - For this, because now walsender and normal backend are same. It seems solution for it can be tricky as pq_getbyte is not called from first level function. Suggestions/Thoughts? With Regards, Amit Kapila.
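[As an example of what the listed wrappers might look like, here is a hypothetical pq_endmessage_noblock() that mirrors pq_endmessage() but queues the message with pq_putmessage_noblock() instead of the blocking pq_putmessage(). This is a sketch of the idea only, not an existing server function.]

    void
    pq_endmessage_noblock(StringInfo buf)
    {
        /* the message type byte was saved in the cursor field by pq_beginmessage() */
        pq_putmessage_noblock(buf->cursor, buf->data, buf->len);

        pfree(buf->data);
        buf->data = NULL;
    }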
Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From: Boszormenyi Zoltan
<div class="moz-cite-prefix">Hi,<br /><br /> 2012-11-15 14:59 keltezéssel, Amit kapila írta:<br /></div><blockquote cite="mid:6C0B27F7206C9E4CA54AE035729E9C38285499FA@szxeml509-mbx"type="cite"><pre wrap="">On Monday, November 12, 2012 8:23PM Fujii Masao wrote: On Fri, Nov 9, 2012 at 3:03 PM, Amit Kapila <a class="moz-txt-link-rfc2396E" href="mailto:amit.kapila@huawei.com"><amit.kapila@huawei.com></a>wrote: </pre><blockquote type="cite"><pre wrap="">On Thursday, November 08, 2012 10:42 PM Fujii Masao wrote: </pre><blockquote type="cite"><pre wrap="">On Thu, Nov 8, 2012 at 5:53 PM, Amit Kapila <a class="moz-txt-link-rfc2396E" href="mailto:amit.kapila@huawei.com"><amit.kapila@huawei.com></a> wrote: </pre><blockquote type="cite"><pre wrap="">On Thursday, November 08, 2012 2:04 PM Heikki Linnakangas wrote: </pre><blockquote type="cite"><pre wrap="">On 19.10.2012 14:42, Amit kapila wrote: </pre><blockquote type="cite"><pre wrap="">On Thursday, October 18, 2012 8:49 PM Fujii Masao wrote: </pre></blockquote></blockquote></blockquote></blockquote></blockquote><pre wrap=""> </pre><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><pre wrap="">Are you planning to introducethe timeout mechanism in pg_basebackup main process? Or background process? It's useful to implement both. </pre></blockquote></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">By background process, you mean ReceiveXlogStream? For both. </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">I think for background process, it can be done in a way similar to what we have done for walreceiver. </pre></blockquote></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">Yes. </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><blockquote type="cite"><pre wrap="">But I have some doubts for how to do for main process: </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">Logic similar to walreceiver can not be used incase network goes down during getting other database file from server. The reason for the same is to receive the data files PQgetCopyData() is called in synchronous mode, so it keeps waiting for infinite time till it gets some data. In order to solve this issue, I can think of following options: 1. Making this call also asynchronous (but now sure about impact of this). </pre></blockquote></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">+1 </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">Walreceiver already calls PQgetCopyData() asynchronously. ISTM you can solve the issue in the similar way to walreceiver's. </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><blockquote type="cite"><pre wrap="">2. In function pqWait, instead of passing hard-code value-1 (i.e. infinite wait), we can send some finite time. This time can be received as command line argument from respective utility and set the same in PGconn structure. </pre></blockquote></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">Yes, I think that we should add something like --conninfo option to pg_basebackup and pg_receivexlog. We can easily set not only connect_timeout but also sslmode, application_name, ... by using such option accepting conninfo string. </pre></blockquote><pre wrap=""> I have prepared an attached patch to make pg_basebackup and pg_receivexlog as non-blocking. 
To do so I have to add new command line parameters in pg_basebackup and pg_receivexlog for now added two more command line arguments a. "-r" for pg_basebackup and pg_receivexlog to take receive time-outvalue. Default value for this parameter is 60 sec. b. "-t" for pg_basebackup and pg_receivexlog to takeinitial connection timeout value. Default value is infinite wait. We can change to accept --conninfo as well. I feel apart from above, remaining problem is for function call PQgetResult() 1. Wherever query is getting sent from BaseBackup, it calls the function PQgetResult to receive the result of query. AsPQgetResult() is blocking function (it calls pqWait which can hang), so if network is down before sending the query itself, then there will not be any result, so it will keep hanging in PQgetResult . IMO, it can be solved in below ways: a. Create one corresponding non-blocking function. But this function is being called from inside some of the other libpqfunction (PQexec->PQexecFinish->PQgetResult). So it can be little tricky to solve this way. b. Add the receive_timeout variable in PGconn structure and use it in pqWait for timeout whenever it is set. c. any other better way? </pre><blockquote type="cite"><blockquote type="cite"><pre wrap="">BTW, IIRC the walsender has no timeout mechanism duringsending backup data to pg_basebackup. So it's also useful to implement the timeout mechanism for the walsender during backup. </pre></blockquote><pre wrap=""> </pre></blockquote><pre wrap=""> </pre><blockquote type="cite"><pre wrap="">What about using pq_putmessage_noblock()? </pre></blockquote><pre wrap=""> I think may be some more functions also needs to be made as noblock. I am still evaluating. I will upload the attached patch in commitfest if you don't have any objections? More Suggestions/Comments? With Regards, Amit Kapila.</pre></blockquote><br /> I am reviewing your patch.<br /><br /><ul><li> Is the patch in <a class="external text"href="http://en.wikipedia.org/wiki/Diff#Context_format" rel="nofollow" title="http://en.wikipedia.org/wiki/Diff#Context_format">contextdiff format</a>? 
</ul><br /> Yes.<br /><br /><ul><li> Doesit apply cleanly to the current git master?</ul><br /> Not quite cleanly but it doesn't produce rejects or fuzz, onlyoffset warnings:<br /><br /> [zozo@localhost postgresql]$ cat ../noblock_basebackup_and_receivexlog.patch | patch -p1<br/> patching file src/bin/pg_basebackup/pg_basebackup.c<br /> Hunk #1 succeeded at 41 (offset -6 lines).<br /> Hunk#2 succeeded at 123 (offset -6 lines).<br /> Hunk #3 succeeded at 239 (offset -6 lines).<br /> Hunk #4 succeeded at 292(offset -6 lines).<br /> Hunk #5 succeeded at 470 (offset -6 lines).<br /> Hunk #6 succeeded at 588 (offset -6 lines).<br/> Hunk #7 succeeded at 601 (offset -6 lines).<br /> Hunk #8 succeeded at 727 (offset -6 lines).<br /> Hunk #9succeeded at 779 (offset -6 lines).<br /> Hunk #10 succeeded at 797 (offset -6 lines).<br /> Hunk #11 succeeded at 811(offset -6 lines).<br /> Hunk #12 succeeded at 879 (offset -6 lines).<br /> Hunk #13 succeeded at 1080 (offset -6 lines).<br/> Hunk #14 succeeded at 1381 (offset -6 lines).<br /> Hunk #15 succeeded at 1409 (offset -6 lines).<br /> Hunk#16 succeeded at 1521 (offset -6 lines).<br /> patching file src/bin/pg_basebackup/pg_receivexlog.c<br /> Hunk #1 succeededat 35 (offset -6 lines).<br /> Hunk #2 succeeded at 65 (offset -6 lines).<br /> Hunk #3 succeeded at 224 (offset-6 lines).<br /> Hunk #4 succeeded at 281 (offset -6 lines).<br /> Hunk #5 succeeded at 314 (offset -6 lines).<br/> Hunk #6 succeeded at 341 (offset -5 lines).<br /> Hunk #7 succeeded at 379 (offset -5 lines).<br /> patchingfile src/bin/pg_basebackup/receivelog.c<br /> Hunk #1 succeeded at 181 (offset -9 lines).<br /> Hunk #2 succeededat 201 (offset -9 lines).<br /> Hunk #3 succeeded at 223 (offset -9 lines).<br /> Hunk #4 succeeded at 333 (offset-9 lines).<br /> Hunk #5 succeeded at 342 (offset -9 lines).<br /> Hunk #6 succeeded at 397 (offset -9 lines).<br/> Hunk #7 succeeded at 484 (offset -9 lines).<br /> Hunk #8 succeeded at 533 (offset -9 lines).<br /> Hunk #9succeeded at 550 (offset -9 lines).<br /> patching file src/bin/pg_basebackup/receivelog.h<br /> patching file src/bin/pg_basebackup/streamutil.c<br/> Hunk #1 succeeded at 66 (offset -6 lines).<br /> Hunk #2 succeeded at 87 (offset-6 lines).<br /> Hunk #3 succeeded at 118 (offset -6 lines).<br /> patching file src/bin/pg_basebackup/streamutil.h<br/><br /><ul><li> Does it include reasonable tests, necessary doc patches, etc? </ul><br/> The test cases are not applicable. There is no test framework for<br /> testing network outage in "make check".<br/><br /> There are no documentation patches for the new --recvtimeout=INTERVAL<br /> and --conntimeout=INTERVALoptions for either pg_basebackup or<br /> pg_receivexlog.<br /><br /><ul><li> Does the patch actuallyimplement that? </ul><p><br /> It seems so, the patch adds the connect_timeout parameter to<br /> the connectionoptions and uses PQgetCopyData(..., 1) to get<br /> the data asynchronously and uses select(2) to watch for incoming<br/> data.<br /><br /><ul><li> Do we want that? </ul><p><br /> It can speed up detecting network breakdown so yes.<br/><br /><ul><li> Do we already have it? </ul><p><br /> No.<br /><br /><ul><li> Does it follow SQL spec, or the community-agreedbehavior? </ul><p><br /> There's no such SQL spec. The behaviour is desired.<br /><br /><ul><li> Does itinclude pg_dump support (if applicable)?</ul><p><br /> Not applicable.<br /><br /><ul><li> Are there dangers? 
</ul><p><br/> The patch author researched more functions that need<br /> to be extended in a nonblocking way.<br /><a class="moz-txt-link-freetext" href="http://archives.postgresql.org/pgsql-hackers/2012-11/msg00863.php">http://archives.postgresql.org/pgsql-hackers/2012-11/msg00863.php</a><br /><br/><ul><li> Have all the bases been covered? </ul><br /> For pg_basebackup/pg_receivexlog (for PQgetCopyData and<br />PQconnect), yes.<br /><br /> Per the previous comment, no. But those are for the backend<br /> to notice network breakdownsand as such, they need a<br /> separate patch. <br /><br /><ul><li> Does the feature work as advertised?</ul><p><br/> Yes.<br /><p>I tested it between two machines and pulled the ethernet<br /> plug while pg_basebackupwas running. With "-r 2", pg_basebackup<br /> detected the timeout after 2 seconds. Without the patch, I lost<br/> patience after two minutes and pressed Ctrl-C in pg_basebackup.<br /><p>I also tested pg_receivexlog and it alsonoticed the network error<br /> in the specified timeout.<br /><br /><ul><li> Are there corner cases the author has failedto consider?</ul><p><br /> As far as I can see in the client-side libpq code flow, no.<br /><br /><ul><li> Are thereany assertion failures or crashes? </ul><br /> Not applicable, the patch is for client applicatiions.<br /><br /><ul><li>Does the patch slow down simple tests? </ul><p><br /> No.<br /><br /><ul><li> If it claims to improve performance,does it?</ul><p><br /> Not applicable, not a performance patch. But it really<br /> improves detecting networkbreakdown.<br /><br /><ul><li> Does it slow down other things? </ul><br /> No.<br /><br /><ul><li> Does it followthe project <a class="external text" href="http://developer.postgresql.org/pgdocs/postgres/source.html" rel="nofollow"title="http://developer.postgresql.org/pgdocs/postgres/source.html">coding guidelines</a>? </ul><p><br /> Yes.<br/><br /><ul><li> Are there portability issues? </ul><p><br /> No. It introduces atoi() and select() as new calls,these are portable.<br /><br /><ul><li> Will it work on Windows/BSD etc? </ul><p><br /> It should.<br /><br /><ul><li>Are the comments sufficient and accurate?</ul><p><br /> This chunk below removes a comment which seems obviousenough<br /> so it's not needed:<br /><p>***************<br /> *** 518,524 **** ReceiveXlogStream(PGconn *conn, XLogRecPtrstartpos, uint32 timeline,<br /> goto error;<br /> }<br /> <br /> ! /* Check the message type. */<br /> if (copybuf[0] == 'k')<br /> {<br /> int pos;<br /> --- 559,568 ----<br /> goto error;<br /> }<br /> <br /> ! /* Set the last reply timestamp */<br /> ! last_recv_timestamp= localGetCurrentTimestamp();<br /> ! ping_sent = false;<br /> ! <br /> if (copybuf[0] == 'k')<br /> {<br /> int pos;<br /> ***************<br/><p><br /> Other comments are sufficient and accurate.<br /><br /><ul><li> Does it do what it says, correctly?</ul><p><br /> This question is redundant with the above "Does the feature work as advertised?"<br /> So yes.<br/><br /><ul><li> Does it produce compiler warnings?</ul><p><br /> No.<br /><br /><ul><li> Can you make it crash? </ul><br/> No.<br /><br /><ul><li> Is everything done in a way that fits together coherently with other features/modules?</ul><p><br /> Yes.<br /><br /><ul><li> Are there interdependencies that can cause problems? 
</ul><br />No.<br /><br /><br /> Best regards,<br /> Zoltán Böszörményi<br /><br /><br /><pre class="moz-signature" cols="90">-- ---------------------------------- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt, Austria Web: <a class="moz-txt-link-freetext" href="http://www.postgresql-support.de">http://www.postgresql-support.de</a> <aclass="moz-txt-link-freetext" href="http://www.postgresql.at/">http://www.postgresql.at/</a> </pre>
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From: Hari Babu
On January 01, 2013 10:19 PM Boszormenyi Zoltan wrote: >I am reviewing your patch. > Is the patch in context diff format? >Yes. Thanks for reviewing the patch. > Does it apply cleanly to the current git master? >Not quite cleanly but it doesn't produce rejects or fuzz, only offset warnings: Will rebase the patch to head. > Does it include reasonable tests, necessary doc patches, etc? >The test cases are not applicable. There is no test framework for >testing network outage in "make check". > >There are no documentation patches for the new --recvtimeout=INTERVAL >and --conntimeout=INTERVAL options for either pg_basebackup or >pg_receivexlog. I will add the documentation for the same. >Per the previous comment, no. But those are for the backend >to notice network breakdowns and as such, they need a >separate patch. I also think it is better to handle it as a separate patch for walsender. > Are the comments sufficient and accurate? >This chunk below removes a comment which seems obvious enough >so it's not needed: >*************** >*** 518,524 **** ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos, uint32 timeline, > goto error; > } > >! /* Check the message type. */ > if (copybuf[0] == 'k') > { > int pos; >--- 559,568 ---- > goto error; > } > >! /* Set the last reply timestamp */ >! last_recv_timestamp = localGetCurrentTimestamp(); >! ping_sent = false; >! > if (copybuf[0] == 'k') > { > int pos; >*************** > >Other comments are sufficient and accurate. I will fix and update the patch. Please let me know if anything apart from above needs to be taken care. Regards, Hari babu.
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Hari Babu
Date:
On January 02, 2013 12:41 PM Hari Babu wrote: >On January 01, 2013 10:19 PM Boszormenyi Zoltan wrote: >>I am reviewing your patch. >> Is the patch in context diff format? >>Yes. > >Thanks for reviewing the patch. > >> Does it apply cleanly to the current git master? >>Not quite cleanly but it doesn't produce rejects or fuzz, only offset warnings: > >Will rebase the patch to head. > >> Does it include reasonable tests, necessary doc patches, etc? >>The test cases are not applicable. There is no test framework for >>testing network outage in "make check". >> >>There are no documentation patches for the new --recvtimeout=INTERVAL >>and --conntimeout=INTERVAL options for either pg_basebackup or >>pg_receivexlog. > >I will add the documentation for the same. > >>Per the previous comment, no. But those are for the backend >>to notice network breakdowns and as such, they need a >>separate patch. > >I also think it is better to handle it as a separate patch for walsender. > >> Are the comments sufficient and accurate? >>This chunk below removes a comment which seems obvious enough >>so it's not needed: >>*************** >>*** 518,524 **** ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos, uint32 timeline, >> goto error; >> } >> >>! /* Check the message type. */ >> if (copybuf[0] == 'k') >> { >> int pos; >>--- 559,568 ---- >> goto error; >> } >> >>! /* Set the last reply timestamp */ >>! last_recv_timestamp = localGetCurrentTimestamp(); >>! ping_sent = false; >>! >> if (copybuf[0] == 'k') >> { >> int pos; >>*************** >> >>Other comments are sufficient and accurate. > >I will fix and update the patch. The attached V2 patch in the mail handles all the review comments identified above. Regards, Hari babu.
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Boszormenyi Zoltan
Date:
2013-01-04 13:43 keltezéssel, Hari Babu írta: > On January 02, 2013 12:41 PM Hari Babu wrote: >> On January 01, 2013 10:19 PM Boszormenyi Zoltan wrote: >>> I am reviewing your patch. >>> • Is the patch in context diff format? >>> Yes. >> Thanks for reviewing the patch. >> >>> • Does it apply cleanly to the current git master? >>> Not quite cleanly but it doesn't produce rejects or fuzz, only offset > warnings: >> Will rebase the patch to head. >> >>> • Does it include reasonable tests, necessary doc patches, etc? >>> The test cases are not applicable. There is no test framework for >>> testing network outage in "make check". >>> >>> There are no documentation patches for the new --recvtimeout=INTERVAL >>> and --conntimeout=INTERVAL options for either pg_basebackup or >>> pg_receivexlog. >> I will add the documentation for the same. >> >>> Per the previous comment, no. But those are for the backend >>> to notice network breakdowns and as such, they need a >>> separate patch. >> I also think it is better to handle it as a separate patch for walsender. >> >>> • Are the comments sufficient and accurate? >>> This chunk below removes a comment which seems obvious enough >>> so it's not needed: >>> *************** >>> *** 518,524 **** ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos, > uint32 timeline, >>> goto error; >>> } >>> >>> ! /* Check the message type. */ >>> if (copybuf[0] == 'k') >>> { >>> int pos; >>> --- 559,568 ---- >>> goto error; >>> } >>> >>> ! /* Set the last reply timestamp */ >>> ! last_recv_timestamp = localGetCurrentTimestamp(); >>> ! ping_sent = false; >>> ! >>> if (copybuf[0] == 'k') >>> { >>> int pos; >>> *************** >>> >>> Other comments are sufficient and accurate. >> I will fix and update the patch. > The attached V2 patch in the mail handles all the review comments identified > above. > > Regards, > Hari babu. Since my other patch against pg_basebackup is now committed, this patch doesn't apply cleanly, patch rejects 2 hunks. The fixed up patch is attached. Best regards, Zoltán Böszörményi -- ---------------------------------- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt, Austria Web: http://www.postgresql-support.de http://www.postgresql.at/
Attachment
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Hari Babu
Date:
On January 07, 2013 7:53 PM Boszormenyi Zoltan wrote: >Since my other patch against pg_basebackup is now committed, >this patch doesn't apply cleanly, patch rejects 2 hunks. >The fixed up patch is attached. Patch is verified. Thanks for rebasing the patch. Regards, Hari babu.
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Abhijit Menon-Sen
Date:
Hi. This patch was marked "Needs review" with no reviewers in the ongoing CF, so I decided to take a look at it. I see that Zoltan has posted a review, so I've added him to the list. But I took a look at the latest patch in any case. Here are some comments, mostly cosmetic ones. > diff -dcrpN postgresql.orig/doc/src/sgml/ref/pg_basebackup.sgml postgresql/doc/src/sgml/ref/pg_basebackup.sgml > *** postgresql.orig/doc/src/sgml/ref/pg_basebackup.sgml 2013-01-05 17:34:30.742135371 +0100 > --- postgresql/doc/src/sgml/ref/pg_basebackup.sgml 2013-01-07 15:11:40.787007890 +0100 > *************** PostgreSQL documentation > *** 400,405 **** > --- 400,425 ---- > </varlistentry> > > <varlistentry> > + <term><option>-r <replaceable class="parameter">interval</replaceable></option></term> > + <term><option>--recvtimeout=<replaceable class="parameter">interval</replaceable></option></term> > + <listitem> > + <para> > + time that receiver waits for communication from server (in seconds). > + </para> > + </listitem> > + </varlistentry> I would reword this as "The maximum time (in seconds) to wait for data from the server (default: wait forever)". > + <varlistentry> > + <term><option>-t <replaceable class="parameter">interval</replaceable></option></term> > + <term><option>--conntimeout=<replaceable class="parameter">interval</replaceable></option></term> > + <listitem> > + <para> > + time that client wait for connection to establish with server (in seconds). > + </para> > + </listitem> > + </varlistentry> Likewise, "The maximum time (in seconds) to wait for a connection to the server to succeed (default: wait forever)". Same thing in pg_receivexlog.sgml. Also, there's trailing whitespace in various places in these files (and elsewhere in the patch), which should be fixed. > diff -dcrpN postgresql.orig/src/bin/pg_basebackup/pg_basebackup.c postgresql/src/bin/pg_basebackup/pg_basebackup.c > *** postgresql.orig/src/bin/pg_basebackup/pg_basebackup.c 2013-01-05 17:34:30.778135625 +0100 > --- postgresql/src/bin/pg_basebackup/pg_basebackup.c 2013-01-07 15:16:24.610037886 +0100 > *************** bool streamwal = false; > *** 45,50 **** > --- 45,54 ---- > bool fastcheckpoint = false; > bool writerecoveryconf = false; > int standby_message_timeout = 10 * 1000; /* 10 sec = default */ > + int standby_recv_timeout = 60*1000; /* 60 sec = default */ > + char *standby_connect_timeout = NULL; I don't really like standby_recv_timeout being an int and standby_connect_timeout being a char *. I understand that it's so that it can be assigned to "values[i]" in GetConnection(), but that reason is very distant, and not obvious from this code at all. That said, I don't know if it's really worth bothering with. > + #define NAPTIME_PER_CYCLE 100 /* max sleep time between cycles (100ms) */ This probably needs a better comment. Why are we sleeping between cycles? What cycles? > + printf(_(" -r, --recvtimeout=INTERVAL time that receiver waits for communication from\n" > + " server (in seconds)\n")); > + printf(_(" -t, --conntimeout=INTERVAL time that client wait for connection to establish\n" > + " with server (in seconds)\n")); Same comments about wording apply, but perhaps there's no need to mention the default. > ! if (r == 0 || (r < 0 && errno == EINTR)) > ! { > ! /* > ! * Got a timeout or signal. Before Continuing the loop, check for timeout. > ! */ > ! if (standby_recv_timeout > 0) > ! { > ! now = localGetCurrentTimestamp(); I'd make "now" local to this block, and get rid of the comment. The two "if"s are perfectly clear. 
This applies to the same pattern in other places in the patch as well. > ! if (localTimestampDifferenceExceeds(last_recv_timestamp, now, standby_recv_timeout)) > ! { > ! fprintf(stderr, _("%s: terminating DB File receive due to timeout\n"), Better wording? "DB File receive" is confusing. Even something like "Closing connection due to read timeout" would be better. Or perhaps you can make it like the following message, slightly lower: > ! if (PQconsumeInput(conn) == 0) > ! { > ! fprintf(stderr, > ! _("%s: could not receive data from WAL Sender: %s"), > ! progname, PQerrorMessage(conn)); …and in the former case, say "read timeout" instead of PQerrorMessage(). > ! /* Set the last reply timestamp */ > ! last_recv_timestamp = localGetCurrentTimestamp(); > ! > ! /* Some data is received, so go back read them in buffer*/ > ! continue; No need for these comments. > + /* Set the last reply timestamp */ > + last_recv_timestamp = localGetCurrentTimestamp(); Likewise (in various places). > /* > ! * Connect in replication mode to the server, Sending connect_timeout > ! * as configured, there is no need for rw_timeout. > */ > ! conn = GetConnection(standby_connect_timeout); This comment is pretty confusing. > * Connect to the server. Returns a valid PGconn pointer if connected, > * or NULL on non-permanent error. On permanent error, the function will > * call exit(1) directly. > + * Set conn_timeout to PGconn structure if their value > + * is not NULL. > */ > PGconn * > ! GetConnection(char *conn_timeout) And this comment is just wrong. The patch looks OK otherwise. Zoltan indicated that his tests were successful, so I didn't retest. Marking "Waiting on author" again. -- Abhijit
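For readers who want to follow the code flow the review talks about, below is a condensed, self-contained sketch of the non-blocking receive loop pattern. It is an illustration, not the submitted patch: plain time(2) stands in for the patch's localGetCurrentTimestamp()/localTimestampDifferenceExceeds() helpers, EINTR handling and standby-status messages are left out, and the function and constant names are invented for the example.

#include <stdio.h>
#include <time.h>
#include <sys/select.h>
#include <libpq-fe.h>

#define NAPTIME_MS 100              /* sleep at most 100 ms per select() cycle */

static int
copy_stream_with_timeout(PGconn *conn, int recv_timeout)
{
    time_t  last_recv = time(NULL);
    char   *copybuf = NULL;

    for (;;)
    {
        int     r = PQgetCopyData(conn, &copybuf, 1 /* async */ );

        if (r == 0)
        {
            /* No complete message buffered yet: nap on the socket. */
            fd_set          readfds;
            struct timeval  tv;
            int             sock = PQsocket(conn);

            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);
            tv.tv_sec = 0;
            tv.tv_usec = NAPTIME_MS * 1000;

            if (select(sock + 1, &readfds, NULL, NULL, &tv) == 0)
            {
                /* Nothing arrived during this nap; has the timeout expired? */
                if (recv_timeout > 0 && time(NULL) - last_recv >= recv_timeout)
                {
                    fprintf(stderr, "closing connection due to read timeout\n");
                    return -1;
                }
                continue;
            }

            /* Socket is readable (or select failed); let libpq read from it. */
            if (PQconsumeInput(conn) == 0)
            {
                fprintf(stderr, "could not receive data: %s", PQerrorMessage(conn));
                return -1;
            }
            continue;
        }

        if (r == -1)                /* COPY stream ended normally */
            return 0;
        if (r == -2)                /* protocol or connection error */
            return -1;

        last_recv = time(NULL);     /* a full message arrived; reset the timer */
        /* ... handle copybuf: 'k' keepalive or 'w' WAL data ... */
        PQfreemem(copybuf);
    }
}

The essential point, matching the hunks quoted above, is that the timestamp is reset only when a complete message has been received, so a silent peer eventually trips the timeout.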
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Heikki Linnakangas
Date:
On 07.01.2013 16:23, Boszormenyi Zoltan wrote: > Since my other patch against pg_basebackup is now committed, > this patch doesn't apply cleanly, patch rejects 2 hunks. > The fixed up patch is attached. Now that I look at this from a high-level perspective, why are we only worried about timeouts in the Copy-mode and when connecting? The initial checkpoint could take a long time too, and if the server turns into a black hole while the checkpoint is running, pg_basebackup will still hang. Then again, a short timeout on that phase would be a bad idea, because the checkpoint can indeed take a long time. In streaming replication, the keep-alive messages carry additional information, the timestamps and WAL locations, so a keepalive makes sense at that level. But otherwise, aren't we just trying to reimplement TCP keepalives? TCP keepalives are not perfect, but if we want to have an application level timeout, it should be implemented in the FE/BE protocol. I don't think we need to do anything specific to pg_basebackup. The user can simply specify TCP keepalive settings in the connection string, like with any libpq program. - Heikki
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Amit Kapila
Date:
On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote: > On 07.01.2013 16:23, Boszormenyi Zoltan wrote: > > Since my other patch against pg_basebackup is now committed, > > this patch doesn't apply cleanly, patch rejects 2 hunks. > > The fixed up patch is attached. > > Now that I look at this a high-level perspective, why are we only > worried about timeouts in the Copy-mode and when connecting? The > initial > checkpoint could take a long time too, and if the server turns into a > black hole while the checkpoint is running, pg_basebackup will still > hang. Then again, a short timeout on that phase would be a bad idea, > because the checkpoint can indeed take a long time. True, but IMO, if somebody want to take basebackup, he should do that when the server is not loaded. > In streaming replication, the keep-alive messages carry additional > information, the timestamps and WAL locations, so a keepalive makes > sense at that level. But otherwise, aren't we just trying to > reimplement > TCP keepalives? TCP keepalives are not perfect, but if we want to have > an application level timeout, it should be implemented in the FE/BE > protocol. > > I don't think we need to do anything specific to pg_basebackup. The > user > can simply specify TCP keepalive settings in the connection string, > like > with any libpq program. I think currently user has no way to specify TCP keepalive settings from pg_basebackup, please let me know if there is any such existing way? I think specifying TCP settings is very cumbersome for most users, that's the reason most standard interfaces (ODBC/JDBC) have such application level timeout mechanism. By implementing in FE/BE protocol (do you mean to say that make such non-blocking behavior inside Libpq or something else), it might be generic and can be used for others as well but it might need few interface changes. IMHO if by having such less impact changes for pg_basebackup, it makes pg_basebackup network sensitive, the current approach can also be considered. With Regards, Amit Kapila.
On 18.01.2013 08:50, Amit Kapila wrote: > I think currently user has no way to specify TCP keepalive settings from > pg_basebackup, please let me know if there is any such existing way? I was going to say you can just use "keepalives_idle=30" in the connection string. But there's no way to pass a connection string to pg_basebackup on the command line! The usual way to pass a connection string is to pass it as the database name, and PQconnect will expand it, but that doesn't work with pg_basebackup because it hardcodes the database name as "replication". D'oh. You could still use environment variables and a service file to do it, but it's certainly more cumbersome. It clearly should be possible to pass a full connection string to pg_basebackup, that's an obvious oversight. - Heikki
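For reference, the workaround being discussed is ordinary libpq usage. A minimal sketch of a client that requests TCP keepalives and a connect timeout through conninfo keywords might look like this (the host name and the timeout values are placeholders, and this is exactly the kind of string that cannot yet be passed to pg_basebackup on the command line):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Host name and the keepalive/timeout values below are placeholders. */
    PGconn *conn = PQconnectdb("host=master.example.com dbname=postgres "
                               "connect_timeout=10 "
                               "keepalives=1 keepalives_idle=30 "
                               "keepalives_interval=10 keepalives_count=3");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /*
     * ... use the connection; a vanished peer is detected by the OS after
     * roughly idle + interval * count seconds of silence ...
     */

    PQfinish(conn);
    return 0;
}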
On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: > On 18.01.2013 08:50, Amit Kapila wrote: > > I think currently user has no way to specify TCP keepalive settings > from > > pg_basebackup, please let me know if there is any such existing way? > > I was going to say you can just use "keepalives_idle=30" in the > connection string. But there's no way to pass a connection string to > pg_basebackup on the command line! The usual way to pass a connection > string is to pass it as the database name, and PQconnect will expand > it, > but that doesn't work with pg_basebackup because it hardcodes the > database name as "replication". D'oh. > > You could still use environment variables and a service file to do it, > but it's certainly more cumbersome. It clearly should be possible to > pass a full connection string to pg_basebackup, that's an obvious > oversight. So to solve this problem below can be done: 1. Support connection string in pg_basebackup and mention keepalives or connection_timeout 2. Support recv_timeout separately to provide a way to users who are not comfortable tcp keepalives a. 1 can be done alone b. 2 can be done alone c. both 1 and 2. With Regards, Amit Kapila.
On 18.01.2013 13:41, Amit Kapila wrote: > On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: >> On 18.01.2013 08:50, Amit Kapila wrote: >>> I think currently user has no way to specify TCP keepalive settings >> from >>> pg_basebackup, please let me know if there is any such existing way? >> >> I was going to say you can just use "keepalives_idle=30" in the >> connection string. But there's no way to pass a connection string to >> pg_basebackup on the command line! The usual way to pass a connection >> string is to pass it as the database name, and PQconnect will expand >> it, >> but that doesn't work with pg_basebackup because it hardcodes the >> database name as "replication". D'oh. >> >> You could still use environment variables and a service file to do it, >> but it's certainly more cumbersome. It clearly should be possible to >> pass a full connection string to pg_basebackup, that's an obvious >> oversight. > > So to solve this problem below can be done: > 1. Support connection string in pg_basebackup and mention keepalives or > connection_timeout > 2. Support recv_timeout separately to provide a way to users who are not > comfortable tcp keepalives > > a. 1 can be done alone > b. 2 can be done alone > c. both 1 and 2. Right. Let's do just 1 for now. An general application level, non-TCP, keepalive message at the libpq level might be a good idea, but that's a much larger patch, definitely not 9.3 material. - Heikki
On Friday, January 18, 2013 5:35 PM Heikki Linnakangas wrote: > On 18.01.2013 13:41, Amit Kapila wrote: > > On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: > >> On 18.01.2013 08:50, Amit Kapila wrote: > >>> I think currently user has no way to specify TCP keepalive settings > >> from > >>> pg_basebackup, please let me know if there is any such existing > way? > >> > >> I was going to say you can just use "keepalives_idle=30" in the > >> connection string. But there's no way to pass a connection string to > >> pg_basebackup on the command line! The usual way to pass a > connection > >> string is to pass it as the database name, and PQconnect will expand > >> it, > >> but that doesn't work with pg_basebackup because it hardcodes the > >> database name as "replication". D'oh. > >> > >> You could still use environment variables and a service file to do > it, > >> but it's certainly more cumbersome. It clearly should be possible to > >> pass a full connection string to pg_basebackup, that's an obvious > >> oversight. > > > > So to solve this problem below can be done: > > 1. Support connection string in pg_basebackup and mention keepalives > or > > connection_timeout > > 2. Support recv_timeout separately to provide a way to users who are > not > > comfortable tcp keepalives > > > > a. 1 can be done alone > > b. 2 can be done alone > > c. both 1 and 2. > > Right. Let's do just 1 for now. I shall fix it as Review comment and update the patch and change the location from CF-3 to current CF. > An general application level, non-TCP, > keepalive message at the libpq level might be a good idea, but that's a > much larger patch, definitely not 9.3 material. With Regards, Amit Kapila.
Heikki Linnakangas <hlinnakangas@vmware.com> writes: > You could still use environment variables and a service file to do it, but > it's certainly more cumbersome. It clearly should be possible to pass a full > connection string to pg_basebackup, that's an obvious oversight. FWIW, +1. I would consider it a bugfix (backpatch, etc). Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Fri, Jan 18, 2013 at 1:05 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > On 18.01.2013 13:41, Amit Kapila wrote: >> >> On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: >>> >>> On 18.01.2013 08:50, Amit Kapila wrote: >>>> >>>> I think currently user has no way to specify TCP keepalive settings >>> >>> from >>>> >>>> pg_basebackup, please let me know if there is any such existing way? >>> >>> >>> I was going to say you can just use "keepalives_idle=30" in the >>> connection string. But there's no way to pass a connection string to >>> pg_basebackup on the command line! The usual way to pass a connection >>> string is to pass it as the database name, and PQconnect will expand >>> it, >>> but that doesn't work with pg_basebackup because it hardcodes the >>> database name as "replication". D'oh. >>> >>> You could still use environment variables and a service file to do it, >>> but it's certainly more cumbersome. It clearly should be possible to >>> pass a full connection string to pg_basebackup, that's an obvious >>> oversight. >> >> >> So to solve this problem below can be done: >> 1. Support connection string in pg_basebackup and mention keepalives or >> connection_timeout >> 2. Support recv_timeout separately to provide a way to users who are not >> comfortable tcp keepalives >> >> a. 1 can be done alone >> b. 2 can be done alone >> c. both 1 and 2. > > > Right. Let's do just 1 for now. An general application level, non-TCP, > keepalive message at the libpq level might be a good idea, but that's a much > larger patch, definitely not 9.3 material. +1 for doing 1 now. But actually, I think we can just keep it that way in the future as well. If you need to specify these fairly advanced options, using a connection string really isn't a problem. I think it would be more worthwhile to go through the rest of the tools in bin/ and make sure they *all* support connection strings. And, an important point, do it the same way. --Magnus HaganderMe: http://www.hagander.net/Work: http://www.redpill-linpro.com/
On Fri, Jan 18, 2013 at 2:43 PM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote: > Heikki Linnakangas <hlinnakangas@vmware.com> writes: >> You could still use environment variables and a service file to do it, but >> it's certainly more cumbersome. It clearly should be possible to pass a full >> connection string to pg_basebackup, that's an obvious oversight. > > FWIW, +1. I would consider it a bugfix (backpatch, etc). While it's a feature I'd very much like to see, I really don't think you can consider it a bugfix. It's functionality that was left out - it's not like we tried to implement it and it didn't work. We pushed the whole implementation to "next version" (and then forgot about actually putting it in the next version, until now) --Magnus HaganderMe: http://www.hagander.net/Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes: >> FWIW, +1. I would consider it a bugfix (backpatch, etc). > > While it's a feature I'd very much like to see, I really don't think > you can consider it a bugfix. It's functionality that was left out - > it's not like we tried to implement it and it didn't work. We pushed > the whole implementation to "next version" (and then forgot about > actually putting it in the next version, until now) Thanks for reminding me about that, I completely forgot about all that. On the other hand, discrepancies in between command line arguments processing in our tools are already not helping our users (even if pg_dump -d seems to have been fixed along the years); so much so that I'm having a hard time finding any upside into having a different set of command line argument capabilities for the same tool depending on the major version. We are not talking about a new feature per se, but exposing a feature that about every other command line tool we ship have. So I think I'm standing on my position that it should get backpatched as a "fix". Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes: > On the other hand, discrepancies in between command line arguments > processing in our tools are already not helping our users (even if > pg_dump -d seems to have been fixed along the years); so much so that > I'm having a hard time finding any upside into having a different set of > command line argument capabilities for the same tool depending on the > major version. > We are not talking about a new feature per se, but exposing a feature > that about every other command line tool we ship have. So I think I'm > standing on my position that it should get backpatched as a "fix". I don't think that argument holds any water at all. There would still be differences in command line argument capabilities out there --- they'd just be between minor versions not major ones. That's not any easier for people to deal with. And what will you say to someone whose application got broken by a minor-version update? If this feature were all that critical someone would have noticed its lack before now, anyway. So I can't get excited about back-patching. regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes: > I don't think that argument holds any water at all. There would still > be differences in command line argument capabilities out there --- > they'd just be between minor versions not major ones. That's not any > easier for people to deal with. And what will you say to someone whose > application got broken by a minor-version update? Fair enough, I suppose. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sat, Jan 19, 2013 at 12:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes: >> On the other hand, discrepancies in between command line arguments >> processing in our tools are already not helping our users (even if >> pg_dump -d seems to have been fixed along the years); so much so that >> I'm having a hard time finding any upside into having a different set of >> command line argument capabilities for the same tool depending on the >> major version. > >> We are not talking about a new feature per se, but exposing a feature >> that about every other command line tool we ship have. So I think I'm >> standing on my position that it should get backpatched as a "fix". > > I don't think that argument holds any water at all. There would still > be differences in command line argument capabilities out there --- > they'd just be between minor versions not major ones. That's not any > easier for people to deal with. And what will you say to someone whose > application got broken by a minor-version update? I heartily agree. I can say from firsthand experience that when minor releases break things for customers (and they do), the customers get *really* cranky. Based on recent experience, I think we should be tightening our standards for what gets back-patched, not loosening them. (No, I don't have a specific example off-hand, sorry.) -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Magnus Hagander
Date:
On Fri, Jan 18, 2013 at 7:50 AM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote: >> On 07.01.2013 16:23, Boszormenyi Zoltan wrote: >> > Since my other patch against pg_basebackup is now committed, >> > this patch doesn't apply cleanly, patch rejects 2 hunks. >> > The fixed up patch is attached. >> >> Now that I look at this a high-level perspective, why are we only >> worried about timeouts in the Copy-mode and when connecting? The >> initial >> checkpoint could take a long time too, and if the server turns into a >> black hole while the checkpoint is running, pg_basebackup will still >> hang. Then again, a short timeout on that phase would be a bad idea, >> because the checkpoint can indeed take a long time. > > True, but IMO, if somebody want to take basebackup, he should do that when > the server is not loaded. A lot of installations don't have such an option, because there is no time when the server is not loaded. >> In streaming replication, the keep-alive messages carry additional >> information, the timestamps and WAL locations, so a keepalive makes >> sense at that level. But otherwise, aren't we just trying to >> reimplement >> TCP keepalives? TCP keepalives are not perfect, but if we want to have >> an application level timeout, it should be implemented in the FE/BE >> protocol. >> >> I don't think we need to do anything specific to pg_basebackup. The >> user >> can simply specify TCP keepalive settings in the connection string, >> like >> with any libpq program. > > I think currently user has no way to specify TCP keepalive settings from > pg_basebackup, please let me know if there is any such existing way? You can set it through environment variables. As was discussed elsewhere, it would be good to have the ability to do it natively to pg_basebackup as well. > I think specifying TCP settings is very cumbersome for most users, that's > the reason most standard interfaces (ODBC/JDBC) have such application level > timeout mechanism. > > By implementing in FE/BE protocol (do you mean to say that make such > non-blocking behavior inside Libpq or something else), it might be generic > and can be used for others as well but it might need few interface changes. If it's specifying them that is cumbersome, then that's the part we should fix, rather than modifying the protocol, no? -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Amit Kapila
Date:
On Monday, January 21, 2013 6:22 PM Magnus Hagander > On Fri, Jan 18, 2013 at 7:50 AM, Amit Kapila <amit.kapila@huawei.com> > wrote: > > On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote: > >> On 07.01.2013 16:23, Boszormenyi Zoltan wrote: > >> > Since my other patch against pg_basebackup is now committed, > >> > this patch doesn't apply cleanly, patch rejects 2 hunks. > >> > The fixed up patch is attached. > >> > >> Now that I look at this a high-level perspective, why are we only > >> worried about timeouts in the Copy-mode and when connecting? The > >> initial > >> checkpoint could take a long time too, and if the server turns into > a > >> black hole while the checkpoint is running, pg_basebackup will still > >> hang. Then again, a short timeout on that phase would be a bad idea, > >> because the checkpoint can indeed take a long time. > > > > True, but IMO, if somebody want to take basebackup, he should do that > when > > the server is not loaded. > > A lot of installations don't have such an optino, because there is no > time whe nthe server is not loaded. Good to know about it. I have always heard that customer will run background maintenance activities (Reindex, Vacuum Full, etc) when the server is less loaded. For example a. Billing applications in telecom, at night times they can be relatively less loaded. b. Any databases used for Sensex transactions, they will be relatively free once the market is closed. c. Banking solutions, because transactions are done mostly in day times. There will be many cases where Database server will be loaded all the times, if you can give some example, it will be a good learning for me. > >> In streaming replication, the keep-alive messages carry additional > >> information, the timestamps and WAL locations, so a keepalive makes > >> sense at that level. But otherwise, aren't we just trying to > >> reimplement > >> TCP keepalives? TCP keepalives are not perfect, but if we want to > have > >> an application level timeout, it should be implemented in the FE/BE > >> protocol. > >> > >> I don't think we need to do anything specific to pg_basebackup. The > >> user > >> can simply specify TCP keepalive settings in the connection string, > >> like > >> with any libpq program. > > > > I think currently user has no way to specify TCP keepalive settings > from > > pg_basebackup, please let me know if there is any such existing way? > > You can set it through environment variables. As was discussed > elsewhere, it would be good to have the ability to do it natively to > pg_basebackup as well. Sure, already modifying the existing patch to support connection string in pg_basebackup and pg_receivexlog. > > > I think specifying TCP settings is very cumbersome for most users, > that's > > the reason most standard interfaces (ODBC/JDBC) have such application > level > > timeout mechanism. > > > > By implementing in FE/BE protocol (do you mean to say that make such > > non-blocking behavior inside Libpq or something else), it might be > generic > > and can be used for others as well but it might need few interface > changes. > > If it's specifying them that is cumbersome, then that's the part we > should fix, rather than modifying the protocol, no? That can be done as part of point 2 of initial proposal (2. Support recv_timeout separately to provide a way to users who are not comfortable tcp keepalives). To achieve this there can be 2 ways. 1. 
Change in FE/BE protocol - I am not sure exactly how this can be done, but as per Heikki this is better way of implementing it. 2. Make the socket as non-blocking in pg_basebackup. Advantage of Approach-1 is that if we do in such a fashion that in lower layers (libpq) it is addressed then all other apps (pg_basebackup, etc) can use it, no need to handle separately in each application. So now as changes in Approach-1 seems to be invasive, we decided to do it later. With Regards, Amit Kapila.
On Saturday, January 19, 2013 5:49 PM Magnus Hagander wrote: >On Fri, Jan 18, 2013 at 1:05 PM, Heikki Linnakangas ><hlinnakangas@vmware.com> wrote: >> On 18.01.2013 13:41, Amit Kapila wrote: >>> >>> On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: >>>> >>>> On 18.01.2013 08:50, Amit Kapila wrote: >>> So to solve this problem below can be done: >>> 1. Support connection string in pg_basebackup and mention keepalives or >>> connection_timeout >>> 2. Support recv_timeout separately to provide a way to users who are not >>> comfortable tcp keepalives >>> >>> a. 1 can be done alone >>> b. 2 can be done alone >>> c. both 1 and 2. >> >> >> Right. Let's do just 1 for now. An general application level, non-TCP, >> keepalive message at the libpq level might be a good idea, but that's a much >> larger patch, definitely not 9.3 material. > >+1 for doing 1 now. But actually, I think we can just keep it that way >in the future as well. If you need to specify these fairly advanced >options, using a connection string really isn't a problem. > >I think it would be more worthwhile to go through the rest of the >tools in bin/ and make sure they *all* support connection strings. >And, an important point, do it the same way. Presently I am trying to implement the option-1 by adding an extra command line Option -C "connection_string" to pg_basebackup and pg_receivexlog. This option can be used with all the tools in bin folder. The existing command line options to the tools are not planned to remove as of now. To handle both options, we can follow these approaches. 1. To make the code simpler, the connection string is formed inside with the existing command line options, if the user is not provided the "connection_string" option. which is used for further processing. 2. The connection_string and existing command line options are handled separately. I feel approach-1 is better. Please provide your suggestions on the same. Regards, Hari babu.
On Tue, Jan 22, 2013 3:27 PM Hari Babu wrote: >On Saturday, January 19, 2013 5:49 PM Magnus Hagander wrote: >>On Fri, Jan 18, 2013 at 1:05 PM, Heikki Linnakangas >><hlinnakangas@vmware.com> wrote: >>> On 18.01.2013 13:41, Amit Kapila wrote: >>>> >>>> On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote: >>>>> >>>>> On 18.01.2013 08:50, Amit Kapila wrote: >>>> So to solve this problem below can be done: >>>> 1. Support connection string in pg_basebackup and mention keepalives or >>>> connection_timeout >>>> 2. Support recv_timeout separately to provide a way to users who are not >>>> comfortable tcp keepalives >>>> >>>> a. 1 can be done alone >>>> b. 2 can be done alone >>>> c. both 1 and 2. >>> >>> >>> Right. Let's do just 1 for now. An general application level, non-TCP, >>> keepalive message at the libpq level might be a good idea, but that's a much >>> larger patch, definitely not 9.3 material. >> >>+1 for doing 1 now. But actually, I think we can just keep it that way >>in the future as well. If you need to specify these fairly advanced >>options, using a connection string really isn't a problem. >> >>I think it would be more worthwhile to go through the rest of the >>tools in bin/ and make sure they *all* support connection strings. >>And, an important point, do it the same way. > >Presently I am trying to implement the option-1 by adding an extra command line >Option -C "connection_string" to pg_basebackup and pg_receivexlog. >This option can be used with all the tools in bin folder. > >The existing command line options to the tools are not planned to remove as of now. > >To handle both options, we can follow these approaches. > >1. To make the code simpler, the connection string is formed inside with the existing >command line options, if the user is not provided the "connection_string" option. >which is used for further processing. > >2. The connection_string and existing command line options are handled separately. > >I feel approach-1 is better. Please provide your suggestions on the same. Here is the patch which handles taking of connection string as an argument to pg_basebackup and pg_receivexlog. Description of changes: 1. New command line "-C connection-string"option is added for passing the connection string. 2. Used "PQconnectdb" function for connecting to server instead of existing function "PQconnectdbParams". 3. The existing command line parameters are formed in a string and passed to "PQconnectdb" function. 4. With the connection string, if user provides additional options with existing command line options, higher priority is given for the additional options. 5. "conninfo_parse" function is modified to handle of single quote in the password provided as input. please provide your suggestions. Regards, Hari babu.
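A rough sketch of the approach described above might look like the following. This is not the submitted patch: the function name, the buffer handling (no truncation checks, no quoting of values such as passwords containing single quotes) and the replication-specific keywords that the real GetConnection() adds are all simplified away, and the precedence rule shown here (explicit switches appended last, overriding the -C string) is an assumption based on libpq taking the last occurrence of a keyword in a conninfo string.

#include <stdio.h>
#include <libpq-fe.h>

static PGconn *
get_connection(const char *connstr,   /* -C argument, may be NULL */
               const char *host,      /* -h */
               const char *port,      /* -p */
               const char *user)      /* -U */
{
    char    conninfo[1024];
    int     len;
    PGconn *conn;

    len = snprintf(conninfo, sizeof(conninfo), "%s", connstr ? connstr : "");

    /* Explicit switches are appended last, so they override the -C string. */
    if (host)
        len += snprintf(conninfo + len, sizeof(conninfo) - len, " host=%s", host);
    if (port)
        len += snprintf(conninfo + len, sizeof(conninfo) - len, " port=%s", port);
    if (user)
        len += snprintf(conninfo + len, sizeof(conninfo) - len, " user=%s", user);

    conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "could not connect: %s", PQerrorMessage(conn));
    return conn;
}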
Re: Review of "pg_basebackup and pg_receivexlog to use non-blocking socket communication", was: Re: Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown
From
Magnus Hagander
Date:
On Tue, Jan 22, 2013 at 7:31 AM, Amit Kapila <amit.kapila@huawei.com> wrote: > On Monday, January 21, 2013 6:22 PM Magnus Hagander >> On Fri, Jan 18, 2013 at 7:50 AM, Amit Kapila <amit.kapila@huawei.com> >> wrote: >> > On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote: >> >> On 07.01.2013 16:23, Boszormenyi Zoltan wrote: >> >> > Since my other patch against pg_basebackup is now committed, >> >> > this patch doesn't apply cleanly, patch rejects 2 hunks. >> >> > The fixed up patch is attached. >> >> >> >> Now that I look at this a high-level perspective, why are we only >> >> worried about timeouts in the Copy-mode and when connecting? The >> >> initial >> >> checkpoint could take a long time too, and if the server turns into >> a >> >> black hole while the checkpoint is running, pg_basebackup will still >> >> hang. Then again, a short timeout on that phase would be a bad idea, >> >> because the checkpoint can indeed take a long time. >> > >> > True, but IMO, if somebody want to take basebackup, he should do that >> when >> > the server is not loaded. >> >> A lot of installations don't have such an optino, because there is no >> time whe nthe server is not loaded. > > Good to know about it. > I have always heard that customer will run background maintenance activities > (Reindex, Vacuum Full, etc) when the server is less loaded. > For example > a. Billing applications in telecom, at night times they can be relatively > less loaded. That assumes there is a nighttime.. If you're operating in enough timezones, that won't happen. > b. Any databases used for Sensex transactions, they will be relatively free > once the market is closed. > c. Banking solutions, because transactions are done mostly in day times. True. But those are definitely very very narrow usecases ;) Don't get me wrong. There are a *lot* of people who have nighttimes to do maintenance in. They are the lucky ones :) But we can't assume this scenario. > There will be many cases where Database server will be loaded all the times, > if you can give some example, it will be a good learning for me. Most internet based businesses that do business in multiple countries. Or really, any business that has customers in multiple timezones across the world. And even more to the point, any business who's *customers* have customers in multiple timezones across the world, provided they are services-based. >> >> In streaming replication, the keep-alive messages carry additional >> >> information, the timestamps and WAL locations, so a keepalive makes >> >> sense at that level. But otherwise, aren't we just trying to >> >> reimplement >> >> TCP keepalives? TCP keepalives are not perfect, but if we want to >> have >> >> an application level timeout, it should be implemented in the FE/BE >> >> protocol. >> >> >> >> I don't think we need to do anything specific to pg_basebackup. The >> >> user >> >> can simply specify TCP keepalive settings in the connection string, >> >> like >> >> with any libpq program. >> > >> > I think currently user has no way to specify TCP keepalive settings >> from >> > pg_basebackup, please let me know if there is any such existing way? >> >> You can set it through environment variables. As was discussed >> elsewhere, it would be good to have the ability to do it natively to >> pg_basebackup as well. > > Sure, already modifying the existing patch to support connection string in > pg_basebackup and pg_receivexlog. Good. 
>> > I think specifying TCP settings is very cumbersome for most users, >> that's >> > the reason most standard interfaces (ODBC/JDBC) have such application >> level >> > timeout mechanism. >> > >> > By implementing in FE/BE protocol (do you mean to say that make such >> > non-blocking behavior inside Libpq or something else), it might be >> generic >> > and can be used for others as well but it might need few interface >> changes. >> >> If it's specifying them that is cumbersome, then that's the part we >> should fix, rather than modifying the protocol, no? > > That can be done as part of point 2 of initial proposal > (2. Support recv_timeout separately to provide a way to users who are not > comfortable tcp keepalives). Looking at the bigger picture, we should in that case support those on *all* our frontend applications, and not just pg_basebackup. To me, it makes more sense to just say "use the connection string method to connect when you need to set these parameters". There are always going to be some parameters that require that. > To achieve this there can be 2 ways. > 1. Change in FE/BE protocol - I am not sure exactly how this can be done, > but as per Heikki this is better way of implementing it. > 2. Make the socket as non-blocking in pg_basebackup. > > Advantage of Approach-1 is that if we do in such a fashion that in lower > layers (libpq) it is addressed then all other apps (pg_basebackup, etc) can > use it, no need to handle separately in each application. > > So now as changes in Approach-1 seems to be invasive, we decided to do it > later. Ok - I haven't really been following the thread, but that doesn't seem unreasonable. The thing I was objecting to is putting in special parameters to pg_basebackup to deal with it, rather than just implementing the connection string option, which is consistent with other tools and will give us other parameters as well for free. --Magnus HaganderMe: http://www.hagander.net/Work: http://www.redpill-linpro.com/