Re: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers

From Masahiko Sawada
Subject Re: Transactions involving multiple postgres foreign servers, take 2
Date
Msg-id CA+fd4k78d2rO9z_r-qSJaqA_nRAR=tU7y7ZFf3+Rg1dRrcFr=w@mail.gmail.com
In response to RE: Transactions involving multiple postgres foreign servers, take 2  ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses RE: Transactions involving multiple postgres foreign servers, take 2  ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
List pgsql-hackers
On Thu, 24 Sep 2020 at 17:23, tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
>
> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
> > So with your idea, I think we require FDW developers to not call
> > ereport(ERROR) as much as possible. If they need to use a function
> > including palloc, lappend etc that could call ereport(ERROR), they
> > need to use PG_TRY() and PG_CATCH() and return the control along with
> > the error message to the transaction manager rather than raising an
> > error. Then the transaction manager will emit the error message at an
> > error level lower than ERROR (e.g., WARNING), and call commit/rollback
> > API again. But normally we do some cleanup on error but in this case
> > the retrying commit/rollback is performed without any cleanup. Is that
> > right? I’m not sure it’s safe though.
>
>
> Yes.  It's legitimate to require the FDW commit routine to return control, because the prepare of 2PC is a promise to
> commit successfully.  The second-phase commit should avoid doing anything that could fail.  For example, if some memory
> is needed for commit, it should be allocated in prepare or before.
>

I don't think it's always possible to avoid raising errors in advance.
Considering how postgres_fdw could implement your idea, I think
postgres_fdw would need PG_TRY() and PG_CATCH() for its connection
management. It keeps a connection cache in local memory using a HTAB.
It needs to create a new cache entry on first connection (e.g., when
prepare and commit-prepared are performed by different backend
processes), and it needs to re-connect to the foreign server when the
entry is invalidated. In both cases, an ERROR could be raised. I guess
the same is true for other FDW implementations, and some FDWs might
need even more work, for example cleanup or releasing resources. I
think the pro of your idea is that it keeps the transaction manager
simple, since we don't need resolver processes and a launcher, but
the con is that it moves that complexity into the FDW implementations
instead. Also, IMHO it is not safe for an FDW to neither re-throw the
error nor abort the transaction when an error occurs.

Regarding the performance concern you raised, I wonder if we can
largely eliminate the bottleneck if multiple resolvers are able to
run on one database in the future. For example, if we could launch as
many resolver processes as there are connections to the database,
each backend process could have its own resolver process. Since there
would still be contention and inter-process communication, some
overhead remains, but it might be negligible compared to the network
round trip.

Perhaps we can hear more opinions on that from other hackers to decide
the FDW transaction API design.

Regards,

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


