Thread: Retry Cached Remote Connections for postgres_fdw in case remote backend gets killed/goes away

Hi,

Currently, with postgres_fdw remote connections cached in the local
backend, queries that use a cached connection do not check whether the
remote backend has been killed or has gone away; they simply try to
submit the query and fail if the remote backend is no longer there.

This problem was found during the discussions made in [1].

One way we could solve the above problem is this: upon firing a new
foreign query from the local backend using the cached connection
(assuming the remote backend behind that cached connection got killed
for some reason), instead of failing the query in the local backend
upon detecting the error from the remote backend, we could just delete
the stale cached entry, try to get another connection to the remote
backend, cache it, and proceed to submit the query. This has to
happen only at the beginning of the remote xact.
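
To make the idea concrete, here is a rough, hedged sketch of how the
retry could look inside postgres_fdw's GetConnection() (this is not the
attached patch itself; it reuses the existing helpers connect_pg_server(),
disconnect_pg_server() and begin_remote_xact(), and the memory/error
context handling is simplified):

    /* entry is the ConnCacheEntry looked up for the user mapping "user". */
    PG_TRY();
    {
        /* Start a new transaction or subtransaction if needed. */
        begin_remote_xact(entry);
    }
    PG_CATCH();
    {
        ForeignServer *server = GetForeignServer(user->serverid);

        /*
         * Retry only when the cached connection turned out to be broken
         * and we are at the beginning of the remote xact; otherwise
         * propagate the error as before.
         */
        if (entry->xact_depth > 0 ||
            PQstatus(entry->conn) != CONNECTION_BAD)
            PG_RE_THROW();

        FlushErrorState();

        /* Discard the stale cached connection and make a fresh one. */
        disconnect_pg_server(entry);
        entry->conn = connect_pg_server(server, user);
        begin_remote_xact(entry);
    }
    PG_END_TRY();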

This way, instead of the query failing, it succeeds if the local
backend is able to obtain a new connection to the remote backend.

Attaching the patch that implements the above retry mechanism.

The way I tested the patch:
1. select * from foreign_tbl; /* from the local backend - this results in a
remote connection being cached in the postgres_fdw connection cache
and a remote backend being opened. */
2. Kill the remote backend intentionally, just to simulate the scenario.
3. select * from foreign_tbl; /* from the local backend - without the patch
this throws "ERROR: server closed the connection unexpectedly".
With the patch, the cached connection is used at the beginning of the
remote xact; upon receiving an error from the remote postgres server,
instead of aborting the query, the cached entry is deleted and a new
connection is obtained; if that succeeds, it is cached and used to
execute the query, and the query succeeds. */

I couldn't think of a way to add a test case to the existing
postgres_fdw regression test suite that automates the scenario of the
remote backend getting killed.

I would like to thank Ashutosh Bapat (ashutosh.bapat.oss@gmail.com)
for the suggestion to fix this and the review of my initial patch
attached in [2]. I tried to address the review comments provided on my
initial patch [3].

Regarding one of Ashutosh's review comments from [3], "the fact that the
same connection may be used by multiple plan nodes": I tried a few use
cases where a single query joins two foreign tables on the same remote
server, so essentially the same connection is used for multiple plan
nodes. In this case we avoid retrying for the second GetConnection()
request for the second foreign table via the check entry->xact_depth <= 0;
xact_depth becomes 1 after the first GetConnection() and the first remote
query, so we don't hit the retry logic, and it seems we are safe here.
Please point out if I'm missing something.

Request the community to consider the patch for further review if the
overall idea seems beneficial.

[1]  https://www.postgresql.org/message-id/CAExHW5t21B_XPQy_hownm1Qq%3DhMrgOhX%2B8gDj3YEKFvpk%3DVRgw%40mail.gmail.com
[2]  https://www.postgresql.org/message-id/CALj2ACXp6DQ3iLGx5g%2BLgVtGwC4F6K9WzKQJpyR4FfdydQzC_g%40mail.gmail.com
[3]  https://www.postgresql.org/message-id/CAExHW5u3Gyv6Q1BEr6zMg0t%2B59e8c4KMfKVrV3Z%3D4UKKjJ19nQ%40mail.gmail.com

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment
Hi,

On Wed, Jul 8, 2020 at 9:40 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
> One way, we could solve the above problem is that, upon firing the new
> foreign query from local backend using the cached connection,
> (assuming the remote backend that was cached in the local backed got
> killed for some reasons), instead of failing the query in the local
> backend, upon detecting error from the remote backend, we could just
> delete the cached old entry and try getting another connection to
> remote backend, cache it and proceed to submit the query. This has to
> happen only at the beginning of remote xact.
+1.

I think this is a very useful feature.
In an environment with connection pooling on the local side, if a remote
server has a failover or switchover, this feature would prevent
unexpected errors in local queries after the remote server recovers.

I haven't looked at the code in detail yet, but here are some comments.

1. To keep the previous behavior (and performance), how about allowing
the user to specify
   whether or not to retry as a GUC parameter or in the FOREIGN SERVER OPTION?

2. How about logging a LOG message when a retry succeeds, to let us know
   that the retry feature worked, or how often the retries worked?

> I couldn't think of adding a test case to the existing postgres_fdw
> regression test suite with an automated scenario of the remote backend
> getting killed.

Couldn't you confirm this by adding a test case like the following?
===================================================
BEGIN;
-- Generate a connection to remote
SELECT * FROM ft1 LIMIT 1;

-- retrieve pid of postgres_fdw and kill it
-- could use another unique identifier (not postgres_fdw but
-- fdw_retry_check, etc.) for the application name
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
backend_type = 'client backend' AND application_name = 'postgres_fdw';

-- COMMIT, so the next query should succeed if connection retry works
COMMIT;
SELECT * FROM ft1 LIMIT 1;
===================================================

Best regards,

-- 
Tatsuhito Kasahara
kasahara.tatsuhito _at_ gmail.com



Thanks for the comments. Attaching the v2 patch.

>
> > One way, we could solve the above problem is that, upon firing the new
> > foreign query from local backend using the cached connection,
> > (assuming the remote backend that was cached in the local backed got
> > killed for some reasons), instead of failing the query in the local
> > backend, upon detecting error from the remote backend, we could just
> > delete the cached old entry and try getting another connection to
> > remote backend, cache it and proceed to submit the query. This has to
> > happen only at the beginning of remote xact.
> +1.
>
> I think this is a very useful feature.
> In an environment with connection pooling for local, if a remote
> server has a failover or switchover,
> this feature would prevent unexpected errors of local queries after
> recovery of the remote server.

Thanks for backing this feature.

>
> I haven't looked at the code in detail yet, some comments here.
>

Thanks for the comments. Please feel free to review more of the
attached v2 patch.

>
> 1. To keep the previous behavior (and performance), how about allowing
> the user to specify
>    whether or not to retry as a GUC parameter or in the FOREIGN SERVER OPTION?
>

Do we actually need this? The connection retry has little performance
impact, as we do it only at the beginning of the remote xact, i.e. only once per remote
session; if we are able to establish the connection, well and good, otherwise the query is bound to fail anyway.

If we do need one (i.e. if there is a strong reason to have the option), then the question is:
a GUC or a server option?

There's a similar discussion going on in [1] about a core-level GUC vs. a server option for
postgres_fdw.

>
> 2. How about logging a LOG message when retry was success to let us know
>    the retry feature worked or how often the retries worked ?
>

The v1 patch already added logging messages; the v2 patch additionally
logs "postgres_fdw connection retry is successful". Please note that all the
new logs are emitted at level DEBUG3, since all the existing logs in this file
are at that level.

>
> > I couldn't think of adding a test case to the existing postgres_fdw
> > regression test suite with an automated scenario of the remote backend
> > getting killed.
>
> Couldn't you confirm this by adding a test case like the following?
> ===================================================
> BEGIN;
> -- Generate a connection to remote
> SELECT * FROM ft1 LIMIT 1;
>
> -- retrieve pid of postgres_fdw and kill it
> -- could use the other unique identifier (not postgres_fdw but
> fdw_retry_check, etc ) for application name
> SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
> backend_type = 'client backend' AND application_name = 'postgres_fdw'
>
> -- COMMIT, so next query will should success if connection-retry works
> COMMIT;
> SELECT * FROM ft1 LIMIT 1;
> ===================================================
>

Yes, this way it works. Thanks for the suggestion. I added the test
case to the postgres_fdw regression test suite. The v2 patch includes
these changes as well.

[1] - https://www.postgresql.org/message-id/CALj2ACVvrp5%3DAVp2PupEm%2BnAC8S4buqR3fJMmaCoc7ftT0aD2A%40mail.gmail.com
With Regards,
Bharath Rupireddy.

Attachment
On Wed, Jul 8, 2020 at 6:10 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> I couldn't think of adding a test case to the existing postgres_fdw
> regression test suite with an automated scenario of the remote backend
> getting killed.

You could get the remote backend's PID using PQbackendPID and then
terminate it by calling pg_terminate_backend(), to automate the
scenario of the remote backend being killed.

>
> I would like to thank Ashutosh Bapat (ashutosh.bapat.oss@gmail.com)
> for the suggestion to fix this and the review of my initial patch
> attached in [2]. I tried to address the review comments provided on my
> initial patch [3].
>
> For, one of the Ashutosh's review comments from [3] "the fact that the
> same connection may be used by multiple plan nodes", I tried to have
> few use cases where there exist joins on two foreign tables on the
> same remote server, in a single query, so essentially, the same
> connection was used for multiple plan nodes. In this case we avoid
> retrying for the second GetConnection() request for the second foreign
> table, with the check entry->xact_depth <= 0 , xact_depth after the
> first GetConnection() and the first remote query will become 1 and we
> don't hit the retry logic and seems like we are safe here. Please add
> If I'm missing something here.
>
> Request the community to consider the patch for further review if the
> overall idea seems beneficial.

I think this idea will be generally useful if your work on dropping
stale connection uses idle_connection_timeout or something like that
on the remote server.

About the patch. It seems we could just catch the error from
begin_remote_xact() in GetConnection() and retry connection if the
error is "bad connection". Retrying using PQreset() might be better
than calling PQConnect* always.


>
> [1]
https://www.postgresql.org/message-id/CAExHW5t21B_XPQy_hownm1Qq%3DhMrgOhX%2B8gDj3YEKFvpk%3DVRgw%40mail.gmail.com
> [2]  https://www.postgresql.org/message-id/CALj2ACXp6DQ3iLGx5g%2BLgVtGwC4F6K9WzKQJpyR4FfdydQzC_g%40mail.gmail.com
> [3]  https://www.postgresql.org/message-id/CAExHW5u3Gyv6Q1BEr6zMg0t%2B59e8c4KMfKVrV3Z%3D4UKKjJ19nQ%40mail.gmail.com
>
> With Regards,
> Bharath Rupireddy.
> EnterpriseDB: http://www.enterprisedb.com



--
Best Wishes,
Ashutosh Bapat



>
> You could get a backend's PID using PQbackendPID and then kill it by
> calling pg_terminate_backend() to kill the remote backend to automate
> scenario of remote backend being killed.
>

I already added such a test case in the v2 patch itself (and added one
more test case in the v3 patch), using a similar approach.

>
> > For, one of the Ashutosh's review comments from [3] "the fact that the
> > same connection may be used by multiple plan nodes", I tried to have
> > few use cases where there exist joins on two foreign tables on the
> > same remote server, in a single query, so essentially, the same
> > connection was used for multiple plan nodes. In this case we avoid
> > retrying for the second GetConnection() request for the second foreign
> > table, with the check entry->xact_depth <= 0 , xact_depth after the
> > first GetConnection() and the first remote query will become 1 and we
> > don't hit the retry logic and seems like we are safe here. Please add
> > If I'm missing something here.
> >
> > Request the community to consider the patch for further review if the
> > overall idea seems beneficial.
>
> I think this idea will be generally useful if your work on dropping
> stale connection uses idle_connection_timeout or something like that
> on the remote server.

Assuming we use idle_connection_timeout or some other means (it is not
yet finalized; I will try to respond on that mail thread) to drop
stale/idle connections from the local backend, I think we have two
options: 1) delete the cached entry from the connection cache entirely,
using disconnect_pg_server() plus a hash table remove; this frees up
some space, and we don't have to deal with connection invalidations or
bother resetting the cached entry's other parameters such as xact_depth,
have_prep_stmt, etc. 2) just drop the connection using
disconnect_pg_server(), retain the hash entry, reset the other
parameters, and deal with the invalidations; this means we keep
unnecessary info in a cached entry that no longer has a connection at
all, and keep holding space for it.

IMO, option 1 would be the better choice.
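
For illustration, a minimal hedged sketch of option 1 (disconnect_pg_server()
and ConnectionHash are the existing helper and hash table in postgres_fdw's
connection.c; the exact cleanup in a real patch may differ):

    /* Drop the stale connection and remove its cache entry entirely. */
    disconnect_pg_server(entry);
    hash_search(ConnectionHash, &entry->key, HASH_REMOVE, NULL);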

Anyway, this connection retry feature has no dependency on the
idle_connection_timeout or stale-connection-dropping feature, because
there can be many reasons why remote backends go away or get killed.

If I'm not sidetracking - if we use something like
idle_session_timeout [1] on the remote server, this retry feature will
be very useful.

>
> About the patch. It seems we could just catch the error from
> begin_remote_xact() in GetConnection() and retry connection if the
> error is "bad connection". Retrying using PQreset() might be better
> than calling PQConnect* always.
>

Thanks for the comment, it made life easier. Attached is the patch with
the changes. Please take a look at the v3 patch and let me know if
something still needs to be done.

[1] - https://www.postgresql.org/message-id/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
Attachment
Has this been added to CF, possibly next CF?

On Tue, Jul 14, 2020 at 7:27 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> >
> > You could get a backend's PID using PQbackendPID and then kill it by
> > calling pg_terminate_backend() to kill the remote backend to automate
> > scenario of remote backend being killed.
> >
>
> I already added the test case in v2 patch itself(added one more test
> case in v3 patch), using the similar approach.
>
> >
> > > For, one of the Ashutosh's review comments from [3] "the fact that the
> > > same connection may be used by multiple plan nodes", I tried to have
> > > few use cases where there exist joins on two foreign tables on the
> > > same remote server, in a single query, so essentially, the same
> > > connection was used for multiple plan nodes. In this case we avoid
> > > retrying for the second GetConnection() request for the second foreign
> > > table, with the check entry->xact_depth <= 0 , xact_depth after the
> > > first GetConnection() and the first remote query will become 1 and we
> > > don't hit the retry logic and seems like we are safe here. Please add
> > > If I'm missing something here.
> > >
> > > Request the community to consider the patch for further review if the
> > > overall idea seems beneficial.
> >
> > I think this idea will be generally useful if your work on dropping
> > stale connection uses idle_connection_timeout or something like that
> > on the remote server.
>
> Assuming we use idle_connection_timeout or some other means(as it is
> not yet finalized, I will try to respond in that mail chain) to drop
> stale/idle connections from the local backend, I think we have two
> options 1) deleting that cached entry from the connection cache
> entirely using disconnect_pg_server() and hash table remove. This
> frees up some space and we don't have to deal with the connection
> invalidations and don't have to bother on resetting cached entry's
> other parameters such as xact_depth, have_prep_stmt etc. 2) or we
> could just drop the connections using disconnect_pg_server(), retain
> the hash entry, reset other parameters, and deal with the
> invalidations. This is like, we maintain unnecessary info in the
> cached entry, where we actually don't have a connection at all and
> keep holding some space for cached entry.
>
> IMO, if we go with option 1, then it will be good.
>
> Anyways, this connection retry feature will not have any dependency on
> the idle_connection_timeout or dropping stale connection feature,
> because there can be many reasons where remote backends go away/get
> killed.
>
> If I'm not sidetracking - if we use something like
> idle_session_timeout [1] on the remote server, this retry feature will
> be very useful.
>
> >
> > About the patch. It seems we could just catch the error from
> > begin_remote_xact() in GetConnection() and retry connection if the
> > error is "bad connection". Retrying using PQreset() might be better
> > than calling PQConnect* always.
> >
>
> Thanks for the comment, it made life easier. Added the patch with the
> changes. Please take a look at the v3 patch and let me know if still
> something needs to be done.
>
> [1] - https://www.postgresql.org/message-id/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com
>
> With Regards,
> Bharath Rupireddy.
> EnterpriseDB: http://www.enterprisedb.com



-- 
Best Wishes,
Ashutosh Bapat



On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:
>
> Has this been added to CF, possibly next CF?
>

I have not added it yet. I request that it be considered in this CF, as
the final patch for review (the v3 patch) is ready and shared. We can
target the next CF if there are major issues with the patch.

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com



>
> On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat
> <ashutosh.bapat.oss@gmail.com> wrote:
> >
> > Has this been added to CF, possibly next CF?
> >
>
> I have not added yet. Request to get it done in this CF, as the final
> patch for review(v3 patch) is ready and shared. We can target it to
> the next CF if there are major issues with the patch.
>

I added this feature to the next CF - https://commitfest.postgresql.org/29/2651/

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




On 2020/07/17 21:02, Bharath Rupireddy wrote:
>>
>> On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat
>> <ashutosh.bapat.oss@gmail.com> wrote:
>>>
>>> Has this been added to CF, possibly next CF?
>>>
>>
>> I have not added yet. Request to get it done in this CF, as the final
>> patch for review(v3 patch) is ready and shared. We can target it to
>> the next CF if there are major issues with the patch.
>>
> 
> I added this feature to the next CF - https://commitfest.postgresql.org/29/2651/

Thanks for the patch! Here are some comments from me.

+            PQreset(entry->conn);

Isn't using PQreset() to reconnect to the foreign server unsafe?
When new connection is established, some SET commands seem
to need to be issued like configure_remote_session() does.
But PQreset() doesn't do that at all.


Originally, when GetConnection() establishes a new connection,
it resets all transient state fields, to be sure all are clean (as the
comment says). With the patch, even when reconnecting to
the remote server, shouldn't we do the same, to be safe?


+            PGresult *res = NULL;
+            res = PQgetResult(entry->conn);
+            PQclear(res);

Are these really necessary? I was thinking they are not, because
pgfdw_get_result() and pgfdw_report_error() seem to do that
already in do_sql_command().


+        /* Start a new transaction or subtransaction if needed. */
+        begin_remote_xact(entry);

Even when there is no cached connection and a new connection is made,
if begin_remote_xact() then reports an error, yet another new connection
is attempted. This behavior looks a bit strange.

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



Thanks for the review comments. I will post a new patch soon
addressing all the comments.

On Tue, Sep 15, 2020 at 2:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> +                       PQreset(entry->conn);
>
> Isn't using PQreset() to reconnect to the foreign server unsafe?
> When new connection is established, some SET commands seem
> to need to be issued like configure_remote_session() does.
> But PQreset() doesn't do that at all.
>

This is a good catch, thanks; I missed this point. Indeed we need to
set the session params. We can do this in two ways: 1. use
configure_remote_session() after PQreset(); 2. use connect_pg_server()
instead of PQreset() and configure_remote_session(). One problem I see
with the 2nd way is that we would be doing the checks performed in
connect_pg_server() twice, having already done them before the first
attempt. The required parameters, such as keywords and values, are
still in the entry->conn structure from the first attempt, and can
safely be reused by PQreset(). So I will go with the 1st way. Thoughts?
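
For reference, a minimal hedged sketch of the 1st way (PQreset() is the
libpq call and configure_remote_session() is the existing postgres_fdw
helper; real error handling would be more involved):

    /* Re-establish the connection using the conninfo libpq already has. */
    PQreset(entry->conn);
    if (PQstatus(entry->conn) != CONNECTION_OK)
        ereport(ERROR,
                (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
                 errmsg("could not reconnect to foreign server")));

    /* Re-issue the session-level SET commands on the new connection. */
    configure_remote_session(entry->conn);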

>
> Originally when GetConnection() establishes new connection,
> it resets all transient state fields, to be sure all are clean (as the
> comment says). With the patch, even when reconnecting
> the remote server, shouldn't we do the same, for safe?
>

I guess there is no need for us to reset all the transient state
before we do begin_remote_xact() in the 2nd attempt. We retry the
connection only when entry->xact_depth <= 0, i.e. at the beginning of
the remote txn, and begin_remote_xact() doesn't modify any transient
state when entry->xact_depth <= 0, except for setting
entry->changing_xact_state = true; all other transient state is intact
in the entry structure. In the error case, we will not reach the code
after do_sql_command() in begin_remote_xact(). If needed, we can just
reset entry->changing_xact_state to false, which is set to true before
do_sql_command():

        entry->changing_xact_state = true;
        do_sql_command(entry->conn, sql);
        entry->xact_depth = 1;
        entry->changing_xact_state = false;

>
> +                       PGresult *res = NULL;
> +                       res = PQgetResult(entry->conn);
> +                       PQclear(res);
> Are these really necessary? I was just thinking that's not because
> pgfdw_get_result() and pgfdw_report_error() seem to do that
> already in do_sql_command().
>

If an error occurs in the first attempt, we jump from
pgfdw_get_result()'s if (!PQconsumeInput(conn)) branch to the catch
block we added, and pgfdw_report_error() never gets called. The code
below is reached only in the scenarios mentioned in its comments.
Removing it might cause problems if we receive errors other than
CONNECTION_BAD, or for subtransactions. We just clear off any pending
result and rethrow the error upstream. I think there is no problem
having this code here.

        else
        {
            /*
             * We are here because either some error other than CONNECTION_BAD
             * occurred, or the connection may have broken during the start of
             * a subtransaction. Just clear off any result and rethrow the
             * error, so that it will be caught appropriately.
             */
            PGresult *res = NULL;
            res = PQgetResult(entry->conn);
            PQclear(res);
            PG_RE_THROW();
        }

>
> +               /* Start a new transaction or subtransaction if needed. */
> +               begin_remote_xact(entry);
>
> Even when there is no cached connection and new connection is made,
> then if begin_remote_xact() reports an error, another new connection
> tries to be made again. This behavior looks a bit strange.
>

When there is no cached connection, we try to acquire one; if we
can't, no error is thrown to the user yet, we just retry one
more time. If we get a connection on the 2nd attempt, fine; if not, we
throw the error to the user. Say the remote server is unreachable on
the 1st attempt; we may still hope to connect on the 2nd attempt. I
think there is no problem here.


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




On 2020/09/17 15:44, Bharath Rupireddy wrote:
> Thanks for the review comments. I will post a new patch soon
> addressing all the comments.

Thanks a lot!


> 
> On Tue, Sep 15, 2020 at 2:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>> +                       PQreset(entry->conn);
>>
>> Isn't using PQreset() to reconnect to the foreign server unsafe?
>> When new connection is established, some SET commands seem
>> to need to be issued like configure_remote_session() does.
>> But PQreset() doesn't do that at all.
>>
> 
> This is a good catch. Thanks, I missed this point. Indeed we need to
> set the session params. We can do this in two ways: 1. use
> configure_remote_session() after PQreset(). 2. use connect_pg_server()
> instead of PQreset() and configure_remote_session(). One problem I see
> with the 2nd way is that we will be doing the checks that are being
> performed in connect_pg_server() twice, which we would have done for
> the first time before retrying.  The required parameters such as
> keywords, values, are still in entry->conn structure from the first
> attempt, which can safely be used by PQreset(). So, I will go with the
> 1st way. Thoughts?


With the 1st way, you may also need to call ReleaseExternalFD() when the new connection fails
to be made, as connect_pg_server() does. Also, don't we need to check that
a non-superuser has used a password to make the new connection,
as connect_pg_server() does? I'm concerned about the case where
pg_hba.conf is changed accidentally so that no password is necessary
at the remote server, and the existing connection is terminated. In this case,
if we connect to the local server as a non-superuser, the connection to
the remote server should fail because the remote server doesn't
require a password. But with your patch, we can successfully reconnect
to the remote server.

Therefore I prefer the 2nd option. Also, disconnect_pg_server() may need to
be called before that. I'm not sure how useful the 1st option is.


> 
>>
>> Originally when GetConnection() establishes new connection,
>> it resets all transient state fields, to be sure all are clean (as the
>> comment says). With the patch, even when reconnecting
>> the remote server, shouldn't we do the same, for safe?
>>
> 
> I guess there is no need for us to reset all the transient state
> before we do begin_remote_xact() in the 2nd attempt. We retry the
> connection only when entry->xact_depth <= 0 i.e. beginning of the
> remote txn and the begin_remote_xact() doesn't modify any transient
> state if entry->xact_depth <= 0, except for entry->changing_xact_state
> = true; all other transient state is intact in entry structure. In the
> error case, we will not reach the code after do_sql_command in
> begin_remote_xact(). If needed, we can only set
> entry->changing_xact_state to false which is set to true before
> do_sql_command().

What if the 2nd attempt happens with have_prep_stmt=true? I'm not sure
if this case can really happen, though. But if it can, it's strange to
start a new connection with have_prep_stmt=true even when the caller of
GetConnection() doesn't intend to create any prepared statements.

I think it's safer to do the 2nd attempt in the same way as the 1st one.
Maybe we can simplify the code by making them into a common code block
or function.


> 
>          entry->changing_xact_state = true;
>          do_sql_command(entry->conn, sql);
>          entry->xact_depth = 1;
>          entry->changing_xact_state = false;
> 
>>
>> +                       PGresult *res = NULL;
>> +                       res = PQgetResult(entry->conn);
>> +                       PQclear(res);
>> Are these really necessary? I was just thinking that's not because
>> pgfdw_get_result() and pgfdw_report_error() seem to do that
>> already in do_sql_command().
>>
> 
> If an error occurs in the first attempt, we return from
> pgfdw_get_result()'s  if (!PQconsumeInput(conn)) to the catch block we
> added and pgfdw_report_error() will never get called. And the below
> part of the code is reached only in scenarios as mentioned in the
> comments. Removing this might have problems if we receive errors other
> than CONNECTION_BAD or for subtxns. We could clear if any result and
> just rethrow the error upstream. I think no problem having this code
> here.

But the original code works without this?
Or do you mean that the original code has a bug?


> 
>          else
>          {
>              /*
>               * We are here, due to either some error other than CONNECTION_BAD
>               * occurred or connection may have broken during start of a
>               * subtransacation. Just, clear off any result, try rethrowing the
>               * error, so that it will be caught appropriately.
>               */
>              PGresult *res = NULL;
>              res = PQgetResult(entry->conn);
>              PQclear(res);
>              PG_RE_THROW();
>          }
> 
>>
>> +               /* Start a new transaction or subtransaction if needed. */
>> +               begin_remote_xact(entry);
>>
>> Even when there is no cached connection and new connection is made,
>> then if begin_remote_xact() reports an error, another new connection
>> tries to be made again. This behavior looks a bit strange.
>>
> 
> When there is no cached connection, we try to acquire one, if we
> can't, then no error will be thrown to the user, just we retry one
> more time. If we get in the 2nd attempt, fine, if not, we will throw
> the error to the user. Assume in the 1st attempt the remote server is
> unreachable, we may hope to connect in the 2nd attempt. I think there
> is no problem here.
> 
> 
> With Regards,
> Bharath Rupireddy.
> EnterpriseDB: http://www.enterprisedb.com
> 
> 

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Thu, Sep 17, 2020 at 10:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> In 1st way, you may also need to call ReleaseExternalFD() when new connection fails
> to be made, as connect_pg_server() does. Also we need to check that
> non-superuser has used password to make new connection,
> as connect_pg_server() does? I'm concerned about the case where
> pg_hba.conf is changed accidentally so that no password is necessary
> at the remote server and the existing connection is terminated. In this case,
> if we connect to the local server as non-superuser, connection to
> the remote server should fail because the remote server doesn't
> require password. But with your patch, we can successfully reconnect
> to the remote server.
>
> Therefore I like 2nd option. Also maybe disconnect_ps_server() needs to
> be called before that. I'm not sure how much useful 1st option is.
>

Thanks, the above points look sensible. +1 for the 2nd option, i.e. instead of PQreset(entry->conn), let's do disconnect_pg_server() and connect_pg_server().

>
> What if 2nd attempt happens with have_prep_stmt=true? I'm not sure
> if this case really happens, though. But if that can, it's strange to start
> new connection with have_prep_stmt=true even when the caller of
> GetConnection() doesn't intend to create any prepared statements.
>
> I think it's safer to do 2nd attempt in the same way as 1st one. Maybe
> we can simplify the code by making them into common code block
> or function.
>

I don't think have_prep_stmt will be set by the time we make the 2nd attempt, because entry->have_prep_stmt |= will_prep_stmt; is hit only after we succeed in either the 1st or the 2nd attempt. I think we don't need to reset all the transient state. No transient state other than changing_xact_state changes from the 1st attempt to the 2nd attempt (see below), so let's set only entry->changing_xact_state to false before the 2nd attempt.

1st attempt:
(gdb) p *entry
$3 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,
  have_error = false, changing_xact_state = false, invalidated = false,
  server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}

2nd attempt i.e. in retry block:
(gdb) p *entry
$4 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,
  have_error = false, changing_xact_state = true, invalidated = false,
  server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}

>>
> > If an error occurs in the first attempt, we return from
> > pgfdw_get_result()'s  if (!PQconsumeInput(conn)) to the catch block we
> > added and pgfdw_report_error() will never get called. And the below
> > part of the code is reached only in scenarios as mentioned in the
> > comments. Removing this might have problems if we receive errors other
> > than CONNECTION_BAD or for subtxns. We could clear if any result and
> > just rethrow the error upstream. I think no problem having this code
> > here.
>
> But the orignal code works without this?
> Or you mean that the original code has the bug?
>

There's no bug in the original code. Sorry, I was wrong in saying pgfdw_report_error() will never get called with this patch. Indeed it gets called even when the 1st attempt's connection fails. Since we added an extra try-catch block, we will not throw that error to the user; instead we make a 2nd attempt from the catch block.

I'm okay with removing the below part of the code:

> >> +                       PGresult *res = NULL;
> >> +                       res = PQgetResult(entry->conn);
> >> +                       PQclear(res);
> >> Are these really necessary? I was just thinking that's not because
> >> pgfdw_get_result() and pgfdw_report_error() seem to do that
> >> already in do_sql_command().

Please let me know if you are okay with the above agreed points, and I will work on the new patch.

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

On 2020/09/21 12:44, Bharath Rupireddy wrote:
> On Thu, Sep 17, 2020 at 10:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>  >
>  > In 1st way, you may also need to call ReleaseExternalFD() when new connection fails
>  > to be made, as connect_pg_server() does. Also we need to check that
>  > non-superuser has used password to make new connection,
>  > as connect_pg_server() does? I'm concerned about the case where
>  > pg_hba.conf is changed accidentally so that no password is necessary
>  > at the remote server and the existing connection is terminated. In this case,
>  > if we connect to the local server as non-superuser, connection to
>  > the remote server should fail because the remote server doesn't
>  > require password. But with your patch, we can successfully reconnect
>  > to the remote server.
>  >
>  > Therefore I like 2nd option. Also maybe disconnect_ps_server() needs to
>  > be called before that. I'm not sure how much useful 1st option is.
>  >
> 
> Thanks, the above points look sensible. +1 for the 2nd option, i.e. instead of PQreset(entry->conn), let's do
> disconnect_pg_server() and connect_pg_server().
> 
>  >
>  > What if 2nd attempt happens with have_prep_stmt=true? I'm not sure
>  > if this case really happens, though. But if that can, it's strange to start
>  > new connection with have_prep_stmt=true even when the caller of
>  > GetConnection() doesn't intend to create any prepared statements.
>  >
>  > I think it's safer to do 2nd attempt in the same way as 1st one. Maybe
>  > we can simplify the code by making them into common code block
>  > or function.
>  >
> 
> I don't think have_prep_stmt will be set by the time we make the 2nd attempt, because entry->have_prep_stmt |=
> will_prep_stmt; is hit only after we succeed in either the 1st or the 2nd attempt. I think we don't need to reset
> all the transient state. No transient state other than changing_xact_state changes from the 1st attempt to the
> 2nd attempt (see below), so let's set only entry->changing_xact_state to false before the 2nd attempt.
> 
> 1st attempt:
> (gdb) p *entry
> $3 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,
>    have_error = false, changing_xact_state = false, invalidated = false,
>    server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}
> 
> 2nd attempt i.e. in retry block:
> (gdb) p *entry
> $4 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,
>    have_error = false, changing_xact_state = true, invalidated = false,
>    server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}
> 
>  >>
>  > > If an error occurs in the first attempt, we return from
>  > > pgfdw_get_result()'s  if (!PQconsumeInput(conn)) to the catch block we
>  > > added and pgfdw_report_error() will never get called. And the below
>  > > part of the code is reached only in scenarios as mentioned in the
>  > > comments. Removing this might have problems if we receive errors other
>  > > than CONNECTION_BAD or for subtxns. We could clear if any result and
>  > > just rethrow the error upstream. I think no problem having this code
>  > > here.
>  >
>  > But the orignal code works without this?
>  > Or you mean that the original code has the bug?
>  >
> 
> There's no bug in the original code. Sorry, I was wrong in saying pgfdw_report_error() will never get called with
> this patch. Indeed it gets called even when the 1st attempt's connection fails. Since we added an extra try-catch
> block, we will not throw that error to the user; instead we make a 2nd attempt from the catch block.
> 
> I'm okay to remove below part of the code
> 
>  > >> +                       PGresult *res = NULL;
>  > >> +                       res = PQgetResult(entry->conn);
>  > >> +                       PQclear(res);
>  > >> Are these really necessary? I was just thinking that's not because
>  > >> pgfdw_get_result() and pgfdw_report_error() seem to do that
>  > >> already in do_sql_command().
> 
> Please let me know if okay with the above agreed points, I will work on the new patch.

Yes, please work on the patch! Thanks! I may revisit the above points later, though ;)

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Wed, Sep 23, 2020 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> > Please let me know if okay with the above agreed points, I will work on the new patch.
>
> Yes, please work on the patch! Thanks! I may revisit the above points later, though ;)
>

Thanks, attaching v4 patch. Please consider this for further review.

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment

On 2020/09/25 13:56, Bharath Rupireddy wrote:
> On Wed, Sep 23, 2020 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>>> Please let me know if okay with the above agreed points, I will work on the new patch.
>>
>> Yes, please work on the patch! Thanks! I may revisit the above points later, though ;)
>>
> 
> Thanks, attaching v4 patch. Please consider this for further review.

Thanks for updating the patch!

In the orignal code, disconnect_pg_server() is called when invalidation
occurs and connect_pg_server() is called when no valid connection exists.
I think that we can simplify the code by merging the connection-retry
code into them, like the attached very WIP patch (based on yours) does.
Basically I'd like to avoid duplicating disconnect_pg_server(),
connect_pg_server() and begin_remote_xact() if possible. Thought?

Your patch adds several codes into PG_CATCH() section, but it's better
  to keep that section simple enough (as the source comment for
PG_CATCH() explains). So what about moving some codes out of PG_CATCH()
section?

+            else
+                ereport(ERROR,
+                    (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
+                     errmsg("could not connect to server \"%s\"",
+                            server->servername),
+                     errdetail_internal("%s", pchomp(PQerrorMessage(entry->conn)))));

The above is not necessary? If this error occurs, connect_pg_server()
reports an error, before the above code is reached. Right?

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION

Attachment
On Fri, Sep 25, 2020 at 3:21 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> I think that we can simplify the code by merging the connection-retry
> code into them, like the attached very WIP patch (based on yours) does.
>

+1.

>
> +                       else
> +                               ereport(ERROR,
> +                                       (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
> +                                        errmsg("could not connect to server \"%s\"",
> +                                                       server->servername),
> +                                        errdetail_internal("%s", pchomp(PQerrorMessage(entry->conn)))));
>
> The above is not necessary? If this error occurs, connect_pg_server()
> reports an error, before the above code is reached. Right?
>

Removed.

Thanks for the comments.

I think we need a volatile qualifier for need_reset_conn, because of the sigsetjmp.
Instead of "need_reset_conn", "retry_conn" would be more meaningful, and also instead of the goto label name "reset:", "retry:".
I changed "closing connection %p to reestablish connection" to "closing connection %p to reestablish a new one".
I also adjusted the comments to stay under the 80-character limit.
I feel that, when we are about to close an existing connection and reestablish a new one, it would be good to have a debug3 message saying that we "could not connect to postgres_fdw connection %p" [1].

Attaching the v5 patch that has the above changes. Both the make check and make check-world regression tests pass. Please review.

[1] This would tell the user that we were not able to use the cached connection.
postgres=# SELECT * FROM foreign_tbl;
DEBUG:  starting remote transaction on connection 0x55ab0e416830
DEBUG:  could not connect to postgres_fdw connection 0x55ab0e416830
DEBUG:  closing connection 0x55ab0e416830 to reestablish a new one
DEBUG:  new postgres_fdw connection 0x55ab0e416830 for server "foreign_server" (user mapping oid 16407, userid 10)
DEBUG:  starting remote transaction on connection 0x55ab0e416830
DEBUG:  closing remote transaction on connection 0x55ab0e416830
 a1  | b1  
-----+-----
 100 | 200

Without the above message, it would look like we are starting a remote txn and then closing the connection without any reason.

postgres=# SELECT * FROM foreign_tbl;
DEBUG:  starting remote transaction on connection 0x55ab0e4c0d50
DEBUG:  closing connection 0x55ab0e4c0d50 to reestablish a new one
DEBUG:  new postgres_fdw connection 0x55ab0e4c0d50 for server "foreign_server" (user mapping oid 16389, userid 10)
DEBUG:  starting remote transaction on connection 0x55ab0e4c0d50
DEBUG:  closing remote transaction on connection 0x55ab0e4c0d50
 a1  | b1  
-----+-----
 100 | 200

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
Attachment

On 2020/09/25 21:19, Bharath Rupireddy wrote:
> On Fri, Sep 25, 2020 at 3:21 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>  >
>  > I think that we can simplify the code by merging the connection-retry
>  > code into them, like the attached very WIP patch (based on yours) does.
>  >
> 
> +1.
> 
>  >
>  > +                       else
>  > +                               ereport(ERROR,
>  > +                                       (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
>  > +                                        errmsg("could not connect to server \"%s\"",
>  > +                                                       server->servername),
>  > +                                        errdetail_internal("%s", pchomp(PQerrorMessage(entry->conn)))));
>  >
>  > The above is not necessary? If this error occurs, connect_pg_server()
>  > reports an error, before the above code is reached. Right?
>  >
> 
> Removed.
> 
> Thanks for the comments.
> 
> I think we need to have a volatile qualifier for need_reset_conn, because of the sigsetjmp.

Yes.

> Instead of "need_reset_conn", "retry_conn" would be more meaningful, and also instead of the goto label name
> "reset:", "retry:".

Sounds good.

> I changed "closing connection %p to reestablish connection" to  "closing connection %p to reestablish a new one"

OK.

> I also adjusted the comments to be under the 80char limit.
> I feel that, when we are about to close an existing connection and reestablish a new one, it would be good to have a
> debug3 message saying that we "could not connect to postgres_fdw connection %p" [1].

+1 to add a debug3 message there. But this message doesn't seem to
match the error that actually happened. What about something like
"could not start remote transaction on connection %p" instead?
Also maybe it's better to append PQerrorMessage(entry->conn)?

> 
> Attaching the v5 patch that has the above changes. Both the make check and make check-world regression tests pass.
> Please review.

Thanks for updating the patch!

+-- Generate a connection to remote. Local backend will cache it.
+SELECT * FROM ft1 LIMIT 1;

The result of this query would not be stable. Probably you need to,
for example, add ORDER BY or replace * with 1, etc.


+-- Retrieve pid of remote backend with application name fdw_retry_check
+-- and kill it intentionally here. Note that, local backend still has
+-- the remote connection/backend info in it's cache.
+SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
+backend_type = 'client backend' AND application_name = 'fdw_retry_check';

Isn't this test fragile because there is no guarantee that the target backend
has exited just after calling pg_terminate_backend()?

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



Thanks for the comments.

On Tue, Sep 29, 2020 at 7:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> +1 to add debug3 message there. But this message doesn't seem to
> match with what the error actually happened. What about something like
> "could not start remote transaction on connection %p", instead?
>

Looks better. Changed.

>
> Also maybe it's better to append PQerrorMessage(entry->conn)?
>

Added. Now the log looks like [1].

>
> +-- Generate a connection to remote. Local backend will cache it.
> +SELECT * FROM ft1 LIMIT 1;
>
> The result of this query would not be stable. Probably you need to,
> for example, add ORDER BY or replace * with 1, etc.
>

Changed to SELECT 1 FROM ft1 LIMIT 1;

>
> +-- Retrieve pid of remote backend with application name fdw_retry_check
> +-- and kill it intentionally here. Note that, local backend still has
> +-- the remote connection/backend info in it's cache.
> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';
>
> Isn't this test fragile because there is no gurantee that the target backend
> has exited just after calling pg_terminate_backend()?
>

I think this is okay, because pg_terminate_backend() sends SIGTERM to
the backend, and upon receiving SIGTERM the signal handler die() will
be called; since no query is being executed on that backend by
the time SIGTERM is received, it will exit immediately. Thoughts?

pqsignal(SIGTERM, die); /* cancel current query and exit */

And also, pg_terminate_backend() returns true if the backend is
signalled successfully, otherwise it returns false:
PG_RETURN_BOOL(r == SIGNAL_BACKEND_SUCCESS);

Attaching v6 patch, please review it further.

[1]
DEBUG:  starting remote transaction on connection 0x55cd393a66e0
DEBUG:  could not start remote transaction on connection 0x55cd393a66e0
DETAIL:  server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
DEBUG:  closing connection 0x55cd393a66e0 to reestablish a new one
DEBUG:  new postgres_fdw connection 0x55cd393a66e0 for server
"foreign_server" (user mapping oid 16398, userid 10)
DEBUG:  starting remote transaction on connection 0x55cd393a66e0
DEBUG:  closing remote transaction on connection 0x55cd393a66e0


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment

On 2020/09/30 0:50, Bharath Rupireddy wrote:
> Thanks for the comments.
> 
> On Tue, Sep 29, 2020 at 7:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>> +1 to add debug3 message there. But this message doesn't seem to
>> match with what the error actually happened. What about something like
>> "could not start remote transaction on connection %p", instead?
>>
> 
> Looks better. Changed.
> 
>>
>> Also maybe it's better to append PQerrorMessage(entry->conn)?
>>
> 
> Added. Now the log looks like [1].
> 
>>
>> +-- Generate a connection to remote. Local backend will cache it.
>> +SELECT * FROM ft1 LIMIT 1;
>>
>> The result of this query would not be stable. Probably you need to,
>> for example, add ORDER BY or replace * with 1, etc.
>>
> 
> Changed to SELECT 1 FROM ft1 LIMIT 1;
> 
>>
>> +-- Retrieve pid of remote backend with application name fdw_retry_check
>> +-- and kill it intentionally here. Note that, local backend still has
>> +-- the remote connection/backend info in it's cache.
>> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
>> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';
>>
>> Isn't this test fragile because there is no gurantee that the target backend
>> has exited just after calling pg_terminate_backend()?
>>
> 
> I think this is okay, because pg_terminate_backend() sends SIGTERM to
> the backend, and upon receiving SIGTERM the signal handler die() will
> be called and since there is no query being executed on the backend by
> the time SIGTERM is received, it will exit immediately. Thoughts?

Yeah, basically you're right. But that backend *can* still be running
when the subsequent test query starts. I'm wondering if wait_pid()
(please see regress.c and sql/dblink.sql) should be used to ensure
the target backend disappeared.

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Tue, Sep 29, 2020 at 10:01 PM Fujii Masao
<masao.fujii@oss.nttdata.com> wrote:
>
> > I think this is okay, because pg_terminate_backend() sends SIGTERM to
> > the backend, and upon receiving SIGTERM the signal handler die() will
> > be called and since there is no query being executed on the backend by
> > the time SIGTERM is received, it will exit immediately. Thoughts?
>
> Yeah, basically you're right. But that backend *can* still be running
> when the subsequent test query starts. I'm wondering if wait_pid()
> (please see regress.c and sql/dblink.sql) should be used to ensure
> the target backend disappeared.
>

I think wait_pid() is not a generic function, and I'm unable to use
that inside postgres_fdw.sql. I think I need to recreate that function
for postgres_fdw.sql. For dblink, it's being created as part of
paths.source. Could you help me in doing so?

Another way, if we don't want to use wait_pid(), is to have a
plpgsql stored procedure that in a loop keeps checking for the
backend pid in pg_stat_activity, exits when the pid is gone, and then we
proceed to issue the next foreign table query. Thoughts? Something like:

mypid = -1;
while (mypid != 0)
SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =
'client backend' AND application_name = 'fdw_retry_check';


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




On 2020/09/30 21:02, Bharath Rupireddy wrote:
> On Tue, Sep 29, 2020 at 10:01 PM Fujii Masao
> <masao.fujii@oss.nttdata.com> wrote:
>>
>>> I think this is okay, because pg_terminate_backend() sends SIGTERM to
>>> the backend, and upon receiving SIGTERM the signal handler die() will
>>> be called and since there is no query being executed on the backend by
>>> the time SIGTERM is received, it will exit immediately. Thoughts?
>>
>> Yeah, basically you're right. But that backend *can* still be running
>> when the subsequent test query starts. I'm wondering if wait_pid()
>> (please see regress.c and sql/dblink.sql) should be used to ensure
>> the target backend disappeared.
>>
> 
> I think wait_pid() is not a generic function, and I'm unable to use
> that inside postgres_fdw.sql. I think I need to recreate that function
> for postgres_fdw.sql. For dblink, it's being created as part of
> paths.source. Could you help me in doing so?
> 
> And another way, if we don't want to use wait_pid() is to have a
> plpgsql stored procedure, that in a loop keeps on checking for the
> backed pid from pg_stat_activity, exit when pid is 0. and then proceed
> to issue the next foreign table query. Thoughts?

+1 for this approach! We can use plpgsql or DO command.


> 
> mypid = -1;
> while (mypid != 0)
> SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =
> 'client backend' AND application_name = 'fdw_retry_check';

Or we can just wait for the number of processes with
appname='fdw_retry_check' to be zero rather than checking the pid.

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Wed, Sep 30, 2020 at 11:32 PM Fujii Masao
<masao.fujii@oss.nttdata.com> wrote:
>
> > And another way, if we don't want to use wait_pid() is to have a
> > plpgsql stored procedure, that in a loop keeps on checking for the
> > backed pid from pg_stat_activity, exit when pid is 0. and then proceed
> > to issue the next foreign table query. Thoughts?
>
> +1 for this approach! We can use plpgsql or DO command.
>

Used a plpgsql procedure, as we have to call it twice.

>
> >
> > mypid = -1;
> > while (mypid != 0)
> > SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =
> > 'client backend' AND application_name = 'fdw_retry_check';
>
> Or we can just wait for the number of processes with
> appname='fdw_retry_check' to be zero rather than checking the pid.
>

Done.

Attaching v7 patch with above changes. Please review it.



With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment

On 2020/10/01 21:14, Bharath Rupireddy wrote:
> On Wed, Sep 30, 2020 at 11:32 PM Fujii Masao
> <masao.fujii@oss.nttdata.com> wrote:
>>
>>> And another way, if we don't want to use wait_pid() is to have a
>>> plpgsql stored procedure, that in a loop keeps on checking for the
>>> backed pid from pg_stat_activity, exit when pid is 0. and then proceed
>>> to issue the next foreign table query. Thoughts?
>>
>> +1 for this approach! We can use plpgsql or DO command.
>>
> 
> Used plpgsql procedure as we have to use the procedure 2 times.
> 
>>
>>>
>>> mypid = -1;
>>> while (mypid != 0)
>>> SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =
>>> 'client backend' AND application_name = 'fdw_retry_check';
>>
>> Or we can just wait for the number of processes with
>> appname='fdw_retry_check' to be zero rather than checking the pid.
>>
> 
> Done.
> 
> Attaching v7 patch with above changes. Please review it.

Thanks for updating the patch!

+-- committed the txn. The entry of the terminated backend from pg_stat_activity
+-- would be removed only after the txn commit.

pg_stat_clear_snapshot() can be used to reset the entry.

+        EXIT WHEN proccnt = 0;
+    END LOOP;

Isn't it better to sleep here, to avoid the busy loop?

So what I thought was something like

CREATE OR REPLACE PROCEDURE wait_for_backend_termination()
LANGUAGE plpgsql
AS $$
BEGIN
     LOOP
         PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';
         EXIT WHEN NOT FOUND;
         PERFORM pg_sleep(1), pg_stat_clear_snapshot();
     END LOOP;
END;
$$;

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Thu, Oct 1, 2020 at 8:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> pg_stat_clear_snapshot() can be used to reset the entry.
>

Thanks, I didn't know about that.

>
> +               EXIT WHEN proccnt = 0;
> +    END LOOP;
>
> Isn't it better to sleep here, to avoid th busy loop?
>

+1.

>
> So what I thought was something like
>
> CREATE OR REPLACE PROCEDURE wait_for_backend_termination()
> LANGUAGE plpgsql
> AS $$
> BEGIN
>      LOOP
>          PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';
>          EXIT WHEN NOT FOUND;
>          PERFORM pg_sleep(1), pg_stat_clear_snapshot();
>      END LOOP;
> END;
> $$;
>

Changed.

Attaching the v8 patch, please review it. Both make check and
make check-world pass on v8.

I have another question not related to this patch: though we have the
wait_pid() function, we are not able to use it like
pg_terminate_backend() in other modules. Wouldn't it be nice if we
could make it generic under the name pg_wait_pid() and usable across
all pg modules?


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment

On 2020/10/02 0:46, Bharath Rupireddy wrote:
> On Thu, Oct 1, 2020 at 8:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>> pg_stat_clear_snapshot() can be used to reset the entry.
>>
> 
> Thanks. I didn't know about that.
> 
>>
>> +               EXIT WHEN proccnt = 0;
>> +    END LOOP;
>>
>> Isn't it better to sleep here, to avoid the busy loop?
>>
> 
> +1.
> 
>>
>> So what I thought was something like
>>
>> CREATE OR REPLACE PROCEDURE wait_for_backend_termination()
>> LANGUAGE plpgsql
>> AS $$
>> BEGIN
>>       LOOP
>>           PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';
>>           EXIT WHEN NOT FOUND;
>>           PERFORM pg_sleep(1), pg_stat_clear_snapshot();
>>       END LOOP;
>> END;
>> $$;
>>
> 
> Changed.
> 
> Attaching the v8 patch, please review it. Both make check and
> make check-world pass on v8.

Thanks for updating the patch! It basically looks good to me.
I tweaked the patch as follows.

+        if (!entry->conn ||
+            PQstatus(entry->conn) != CONNECTION_BAD ||

With the above change, if entry->conn is NULL, an error is thrown and no new
connection is reestablished. But why? IMO it's more natural to reestablish a
new connection in that case. So I removed "!entry->conn" from the above
condition.
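
To spell out the resulting shape for anyone reading along (this is only a
rough sketch, not the patch text: it lives in the error-recovery path, and
the xact_depth test, the helper calls, and the variable names are my
assumptions based on the earlier discussion and connection.c):

    if (PQstatus(entry->conn) != CONNECTION_BAD ||
        entry->xact_depth > 0)
    {
        /*
         * The cached connection still looks usable, or we are already
         * inside a remote transaction: do not retry, just let the
         * original error propagate.
         */
        PG_RE_THROW();
    }
    else
    {
        /*
         * The cached connection went bad before the remote transaction
         * started: drop it, establish a fresh connection, and retry
         * starting the remote transaction on it.
         */
        disconnect_pg_server(entry);
        entry->conn = connect_pg_server(server, user);
        begin_remote_xact(entry);
    }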

+        ereport(DEBUG3,
+                (errmsg("could not start remote transaction on connection %p",
+                 entry->conn)),

I replaced errmsg() with errmsg_internal() because the translation of
this debug message is not necessary.


+SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
+backend_type = 'client backend' AND application_name = 'fdw_retry_check';
+CALL wait_for_backend_termination();

Since we always use pg_terminate_backend() and wait_for_backend_termination()
together, I merged them into one function.

I simplified the comments on the regression test.

Attached is the updated version of the patch. If this patch is ok,
I'd like to mark it as ready for committer.


> I have another question not related to this patch: though we have
> wait_pid() function, we are not able to use it like
> pg_terminate_backend() in other modules, wouldn't be nice if we can
> make it generic under the name pg_wait_pid() and usable across all pg
> modules?

I thought that, too. But I could not come up with a good idea for a *real*
use case of that function. At least it's useful for the regression test,
though. Anyway, IMO it's worth proposing that and hearing more opinions
about it from other hackers.

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION

Attachment
On Fri, Oct 2, 2020 at 11:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> > Attaching the v8 patch, please review it. Both make check and
> > make check-world pass on v8.
>
> Thanks for updating the patch! It basically looks good to me.
> I tweaked the patch as follows.
>
> +               if (!entry->conn ||
> +                       PQstatus(entry->conn) != CONNECTION_BAD ||
>
> With the above change, if entry->conn is NULL, an error is thrown and no new
> connection is reestablished. But why? IMO it's more natural to reestablish
> new connection in that case. So I removed "!entry->conn" from the above
> condition.
>

Yeah, that makes sense.

>
> +               ereport(DEBUG3,
> +                               (errmsg("could not start remote transaction on connection %p",
> +                                entry->conn)),
>
> I replaced errmsg() with errmsg_internal() because the translation of
> this debug message is not necessary.
>

I'm okay with this as we don't have any specific strings that need
translation in the debug message. But should we also try to use
errmsg_internal() in a few other places in connection.c?

                 errmsg("could not obtain message string for remote error"),
                 errmsg("cannot PREPARE a transaction that has
operated on postgres_fdw foreign tables")));
                 errmsg("password is required"),

I see errmsg() with plain text in other places in the code base as
well. Is it that we look at the error message, and if it is plain
text (without database objects or table data), we decide not to
translate it? Or is there some other policy?

>
> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';
> +CALL wait_for_backend_termination();
>
> Since we always use pg_terminate_backend() and wait_for_backend_termination()
> together, I merged them into one function.
>
> I simplified the comments on the regression test.
>

+1. I slightly adjusted the comments in connection.c and ran pgindent to
keep them within the 80-char limit.

>
>   Attached is the updated version of the patch. If this patch is ok,
>   I'd like to mark it as ready for committer.
>

Thanks. Attaching v10 patch. Please have a look.

>
> > I have another question not related to this patch: though we have
> > wait_pid() function, we are not able to use it like
> > pg_terminate_backend() in other modules, wouldn't be nice if we can
> > make it generic under the name pg_wait_pid() and usable across all pg
> > modules?
>
> I thought that, too. But I could not come up with good idea for *real* use case
> of that function. At least that's useful for the regression test, though.
> Anyway, IMO it's worth proposing that and hearing more opinions about that
> from other hackers.
>

Yes, it will be useful for testing when coupled with
pg_terminate_backend(). I will post the idea in a separate thread soon
for more thoughts.



With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com

Attachment

On 2020/10/03 20:40, Bharath Rupireddy wrote:
> On Fri, Oct 2, 2020 at 11:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>>> Attaching the v8 patch, please review it. Both make check and
>>> make check-world pass on v8.
>>
>> Thanks for updating the patch! It basically looks good to me.
>> I tweaked the patch as follows.
>>
>> +               if (!entry->conn ||
>> +                       PQstatus(entry->conn) != CONNECTION_BAD ||
>>
>> With the above change, if entry->conn is NULL, an error is thrown and no new
>> connection is reestablished. But why? IMO it's more natural to reestablish
>> new connection in that case. So I removed "!entry->conn" from the above
>> condition.
>>
> 
> Yeah, that makes sense.
> 
>>
>> +               ereport(DEBUG3,
>> +                               (errmsg("could not start remote transaction on connection %p",
>> +                                entry->conn)),
>>
>> I replaced errmsg() with errmsg_internal() because the translation of
>> this debug message is not necessary.
>>
> 
> I'm okay with this as we don't have any specific strings that need
> translation in the debug message. But, should we also try to have
> errmsg_internal in a few other places in connection.c?
> 
>                   errmsg("could not obtain message string for remote error"),
>                   errmsg("cannot PREPARE a transaction that has
> operated on postgres_fdw foreign tables")));
>                   errmsg("password is required"),
> 
> I see the errmsg() with plain texts in other places in the code base
> as well. Is it  that we look at the error message and if it is a plain
> text(without database objects or table data), we decide to have no
> translation? Or is there any other policy?

I was thinking that elog() should basically be used to report this
debug message instead, but you used ereport() because maybe
you'd like to add a detail message about the connection error. Is this
understanding right? elog() uses errmsg_internal(). So if ereport()
is used as an alternative to elog() for some reason,
IMO errmsg_internal() should be used. Thoughts?

OTOH, the messages you mentioned are not debug ones,
so basically ereport() and errmsg() should be used, I think.
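
To make that concrete, the two forms would look roughly like this
(illustrative only, not the literal patch hunk; the errdetail content is
just an assumed example of carrying the libpq error text):

    /* elog(): untranslated, but no separate detail field */
    elog(DEBUG3, "could not start remote transaction on connection %p",
         entry->conn);

    /* ereport() as an alternative: errmsg_internal() keeps the message
     * untranslated while still allowing a detail message */
    ereport(DEBUG3,
            (errmsg_internal("could not start remote transaction on connection %p",
                             entry->conn),
             errdetail_internal("%s", pchomp(PQerrorMessage(entry->conn)))));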


> 
>>
>> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE
>> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';
>> +CALL wait_for_backend_termination();
>>
>> Since we always use pg_terminate_backend() and wait_for_backend_termination()
>> together, I merged them into one function.
>>
>> I simplified the comments on the regression test.
>>
> 
> +1. I slightly adjusted the comments in connection.c and ran pgindent to
> keep them within the 80-char limit.
> 
>>
>>    Attached is the updated version of the patch. If this patch is ok,
>>    I'd like to mark it as ready for committer.
>>
> 
> Thanks. Attaching v10 patch. Please have a look.

Thanks for updating the patch! I will mark the patch as ready for committer in CF.
Barring any objections, I will commit that.

> 
>>
>>> I have another question not related to this patch: though we have
>>> wait_pid() function, we are not able to use it like
>>> pg_terminate_backend() in other modules, wouldn't be nice if we can
>>> make it generic under the name pg_wait_pid() and usable across all pg
>>> modules?
>>
>> I thought that, too. But I could not come up with good idea for *real* use case
>> of that function. At least that's useful for the regression test, though.
>> Anyway, IMO it's worth proposing that and hearing more opinions about that
>> from other hackers.
>>
> 
> Yes it will be useful for testing when coupled with
> pg_terminate_backend(). I will post the idea in a separate thread soon
> for more thoughts.

Sounds good!
ISTM that the function should at least check that the target process is a PostgreSQL one.
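
Just to sketch what I have in mind (purely hypothetical: pg_wait_pid() does
not exist, and the name, signature, and checks below are my assumptions for
illustration, with the usual headers omitted):

    Datum
    pg_wait_pid(PG_FUNCTION_ARGS)
    {
        int         pid = PG_GETARG_INT32(0);

        /* Refuse to wait for a PID that is not a PostgreSQL backend. */
        if (BackendPidGetProc(pid) == NULL)
            ereport(ERROR,
                    (errmsg("PID %d is not a PostgreSQL backend process", pid)));

        /* Poll until the process exits, staying responsive to interrupts. */
        while (kill(pid, 0) == 0)
        {
            CHECK_FOR_INTERRUPTS();
            pg_usleep(50000L);      /* 50 ms */
        }

        PG_RETURN_VOID();
    }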

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION



On Mon, Oct 5, 2020 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
> > I see the errmsg() with plain texts in other places in the code base
> > as well. Is it  that we look at the error message and if it is a plain
> > text(without database objects or table data), we decide to have no
> > translation? Or is there any other policy?
>
> I was thinking that elog() basically should be used to report this
> debug message, instead, but you used ereport() because maybe
> you'd like to add detail message about connection error. Is this
> understanding right? elog() uses errmsg_internal().
>

Yes that's correct.

>
> So if ereport() is used as an alternative to elog() for some reason,
> IMO errmsg_internal() should be used. Thought?
>

Yes, this is apt for our case.

>
> OTOH, the messages you mentioned are not debug ones,
> so basically ereport() and errmsg() should be used, I think.
>

In the connection.c file, yes, they are of ERROR type. It looks like it's
not a standard to use errmsg_internal() with ereport() for DEBUG messages
that require no translation:

(errmsg("wrote block details for %d blocks", num_blocks)));
(errmsg("MultiXact member stop limit is now %u based on MultiXact %u
(errmsg("oldest MultiXactId member is at offset %u",

However, there are a few other places where errmsg_internal() is used for
DEBUG purposes:

(errmsg_internal("finished verifying presence of "
(errmsg_internal("%s(%d) name: %s; blockState:

Having said that, IMHO it's better to keep it the way it currently is in
the code base.

>
> > Thanks. Attaching v10 patch. Please have a look.
>
> Thanks for updating the patch! I will mark the patch as ready for committer in CF.
> Barring any objections, I will commit that.
>

Thanks a lot for the review comments.

> >>
> >>> I have another question not related to this patch: though we have
> >>> wait_pid() function, we are not able to use it like
> >>> pg_terminate_backend() in other modules, wouldn't be nice if we can
> >>> make it generic under the name pg_wait_pid() and usable across all pg
> >>> modules?
> >>
> >> I thought that, too. But I could not come up with good idea for *real* use case
> >> of that function. At least that's useful for the regression test, though.
> >> Anyway, IMO it's worth proposing that and hearing more opinions about that
> >> from other hackers.
> >>
> >
> > Yes it will be useful for testing when coupled with
> > pg_terminate_backend(). I will post the idea in a separate thread soon
> > for more thoughts.
>
> Sounds good!
> ISTM that the function should at least check that the target process is a PostgreSQL one.
>

Thanks. I will take care of this point.

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




On 2020/10/05 20:32, Bharath Rupireddy wrote:
> On Mon, Oct 5, 2020 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>>
>>> I see the errmsg() with plain texts in other places in the code base
>>> as well. Is it  that we look at the error message and if it is a plain
>>> text(without database objects or table data), we decide to have no
>>> translation? Or is there any other policy?
>>
>> I was thinking that elog() basically should be used to report this
>> debug message, instead, but you used ereport() because maybe
>> you'd like to add detail message about connection error. Is this
>> understanding right? elog() uses errmsg_internal().
>>
> 
> Yes that's correct.
> 
>>
>> So if ereport() is used as an alternative to elog() for some reason,
>> IMO errmsg_internal() should be used. Thought?
>>
> 
> Yes, this is apt for our case.
> 
>>
>> OTOH, the messages you mentioned are not debug ones,
>> so basically ereport() and errmsg() should be used, I think.
>>
> 
> In the connection.c file, yes, they are of ERROR type. It looks like it's
> not a standard to use errmsg_internal() with ereport() for DEBUG messages
> that require no translation:
> 
> (errmsg("wrote block details for %d blocks", num_blocks)));
> (errmsg("MultiXact member stop limit is now %u based on MultiXact %u
> (errmsg("oldest MultiXactId member is at offset %u",
> 
> However, there are a few other places where errmsg_internal() is used for
> DEBUG purposes:
> 
> (errmsg_internal("finished verifying presence of "
> (errmsg_internal("%s(%d) name: %s; blockState:
> 
> Having said that, IMHO it's better to keep it the way it currently is in
> the code base.

Agreed.


> 
>>
>>> Thanks. Attaching v10 patch. Please have a look.
>>
>> Thanks for updating the patch! I will mark the patch as ready for committer in CF.
>> Barring any objections, I will commit that.
>>
> 
> Thanks a lot for the review comments.

I pushed the patch. Thanks!

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION