
From: Masahiko Sawada
Subject: Re: Transactions involving multiple postgres foreign servers, take 2
Date:
Msg-id: CA+fd4k6pmcsSUdNeVQ5o_u8vAgfSU7cJnTHLLMtEFOv5SaNUJw@mail.gmail.com
In response to: RE: Transactions involving multiple postgres foreign servers, take 2  ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses: RE: Transactions involving multiple postgres foreign servers, take 2
List: pgsql-hackers
On Fri, 25 Sep 2020 at 18:21, tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
>
> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
> > I don't think it's always possible to avoid raising errors in advance.
> > Considering how postgres_fdw could implement your idea, I think
> > postgres_fdw would need PG_TRY() and PG_CATCH() for its connection
> > management. It has a connection cache in local memory using an HTAB.
> > It needs to create an entry the first time it connects (e.g., when
> > PREPARE TRANSACTION and COMMIT PREPARED are performed by different
> > processes), and it needs to re-connect to the foreign server when the
> > entry is invalidated. In both cases, ERROR could happen. I guess the
> > same is true for other FDW implementations. Possibly other FDWs might
> > need more work, for example cleanup or releasing resources. I think
>
> Why does the client backend have to create a new connection cache entry during PREPARE or COMMIT PREPARED?  Doesn't
> the client backend naturally continue to use connections that it has used in its current transaction?

I think there are two cases: one where a process executes PREPARE
TRANSACTION and another process executes COMMIT PREPARED later, and
one where the coordinator has cascaded foreign servers (i.e., a
foreign server that has its own foreign server) and a temporary
connection problem happens on the intermediate node after PREPARE, so
that another process on the intermediate node ends up executing
COMMIT PREPARED on its foreign server.
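
To make the connection-management point concrete, here is a rough
sketch (not the actual postgres_fdw code; ConnCacheEntry,
ConnectionHash and connect_to_server() are simplified stand-ins) of a
cache lookup that may have to establish or re-establish the connection
before it can send COMMIT PREPARED, and can therefore raise ERROR:

/*
 * Sketch only -- not the real postgres_fdw code.  The point is that the
 * lookup itself may have to create a new cache entry and connect from
 * scratch (e.g. when COMMIT PREPARED runs in a different process than
 * PREPARE TRANSACTION did), or reconnect after an invalidation, and both
 * paths can raise ERROR.
 */
#include "postgres.h"

#include "libpq-fe.h"
#include "utils/hsearch.h"

typedef struct ConnCacheEntry
{
    Oid         umid;           /* user mapping OID, hash key */
    PGconn     *conn;           /* NULL if not connected yet */
    bool        invalidated;    /* set by a cache invalidation callback */
} ConnCacheEntry;

static HTAB *ConnectionHash;    /* assumed to be created elsewhere */

/* stand-in for the real connection routine; may ereport(ERROR) */
static PGconn *connect_to_server(Oid umid);

static PGconn *
get_connection_for_resolution(Oid umid)
{
    ConnCacheEntry *entry;
    bool        found;

    entry = (ConnCacheEntry *) hash_search(ConnectionHash, &umid,
                                           HASH_ENTER, &found);
    if (!found)
    {
        /* first use in this process, e.g. a resolver doing COMMIT PREPARED */
        entry->conn = NULL;
        entry->invalidated = false;
    }

    if (entry->conn == NULL || entry->invalidated)
    {
        PG_TRY();
        {
            entry->conn = connect_to_server(umid);  /* can ERROR */
            entry->invalidated = false;
        }
        PG_CATCH();
        {
            /* keep the cache consistent, then propagate the error */
            entry->conn = NULL;
            PG_RE_THROW();
        }
        PG_END_TRY();
    }

    return entry->conn;
}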

>
>
> > that the pros of your idea are to make the transaction manager simple
> > since we don't need resolvers and a launcher, but the cons are to bring
> > the complexity into the FDW implementation code instead. Also, IMHO I
> > don't think it's a safe way for an FDW to neither re-throw an error nor
> > abort the transaction when an error occurs.
>
> No, I didn't say the resolver is unnecessary.  The resolver takes care of terminating remote transactions when the
> client backend encountered an error during COMMIT/ROLLBACK PREPARED.

Understood. With your idea, we can remove at least the code that makes
the backend wait and the inter-process communication between backends
and resolvers.

I think we need to consider whether it's really safe and what is
needed to achieve your idea safely.
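
Something like the following is what I imagine the backend side would
look like under your idea (a hand-wavy sketch only; PendingFdwXact,
ResolveForeignXact() and LeaveForeignXactForResolver() are hypothetical
names, not from the patch): the backend tries to resolve each prepared
foreign transaction itself and only hands the leftovers to the
resolver on failure, so it never waits for the resolver on the normal
path:

/*
 * Hand-wavy sketch; all names here are hypothetical, not from the patch.
 */
#include "postgres.h"

#include "nodes/pg_list.h"

typedef struct PendingFdwXact PendingFdwXact;   /* opaque here */

extern void ResolveForeignXact(PendingFdwXact *fdwxact);            /* sends COMMIT/ROLLBACK PREPARED */
extern void LeaveForeignXactForResolver(PendingFdwXact *fdwxact);   /* mark as in-doubt */

static void
resolve_prepared_foreign_xacts(List *fdwxacts)
{
    ListCell   *lc;

    foreach(lc, fdwxacts)
    {
        PendingFdwXact *fdwxact = (PendingFdwXact *) lfirst(lc);

        PG_TRY();
        {
            ResolveForeignXact(fdwxact);        /* can ERROR */
        }
        PG_CATCH();
        {
            /*
             * The local outcome is already decided, so don't re-throw;
             * record the foreign transaction as in-doubt and let a
             * resolver retry it later.  Whether simply swallowing the
             * error here is safe (memory contexts, resource cleanup,
             * FDWs that don't follow this rule) is exactly the question
             * above.
             */
            LeaveForeignXactForResolver(fdwxact);
            FlushErrorState();
        }
        PG_END_TRY();
    }
}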

>
>
> > In terms of the performance you're concerned about, I wonder if we can
> > somewhat eliminate the bottleneck if multiple resolvers are able to run
> > on one database in the future. For example, if we could launch as many
> > resolver processes as there are connections to the database, each
> > backend process could have its own resolver process. Since there would
> > be contention and inter-process communication it would still bring some
> > overhead, but it might be negligible compared to the network round trip.
>
> Do you mean that if 200 concurrent clients each update data on two foreign servers, there are 400 resolvers?
> ...That's overuse of resources.

I think we would have 200 resolvers in this case, since there is one
resolver process per backend process. Another idea is that all
processes queue foreign transactions to resolve into a shared memory
queue and resolver processes fetch and resolve them, instead of
assigning one distributed transaction to one resolver process. Using
asynchronous execution, a resolver process can process a batch of
foreign transactions across distributed transactions, grouped by
foreign server, at once. It might be more complex than the current
approach, but having multiple resolver processes on one database would
improve throughput well, especially when combined with asynchronous
execution.
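
For illustration, a rough sketch of the shared-memory queue idea (all
structures and functions below are hypothetical, not from the patch;
shared-memory initialization and queue-overflow handling are omitted):

/*
 * Backends enqueue foreign transactions to resolve; a resolver worker
 * pulls out all entries for one server and resolves them as a batch,
 * e.g. by sending the COMMIT/ROLLBACK PREPARED commands asynchronously
 * and collecting the results afterwards.
 */
#include "postgres.h"

#include "storage/lwlock.h"

#define MAX_PENDING_FDWXACTS    1024
#define GIDSIZE                 200

typedef struct FdwXactResolveEntry
{
    Oid         serverid;           /* foreign server to resolve on */
    char        gid[GIDSIZE];       /* prepared transaction identifier */
    bool        commit;             /* commit or roll back? */
} FdwXactResolveEntry;

typedef struct FdwXactResolveQueue
{
    int         nentries;
    FdwXactResolveEntry entries[MAX_PENDING_FDWXACTS];
} FdwXactResolveQueue;

static FdwXactResolveQueue *ResolveQueue;   /* in shared memory */
static LWLock *ResolveQueueLock;            /* protects ResolveQueue */

/* Backend side: queue a foreign transaction instead of waiting for it. */
void
EnqueueFdwXactForResolution(Oid serverid, const char *gid, bool commit)
{
    LWLockAcquire(ResolveQueueLock, LW_EXCLUSIVE);
    if (ResolveQueue->nentries < MAX_PENDING_FDWXACTS)
    {
        FdwXactResolveEntry *e = &ResolveQueue->entries[ResolveQueue->nentries++];

        e->serverid = serverid;
        strlcpy(e->gid, gid, sizeof(e->gid));
        e->commit = commit;
    }
    LWLockRelease(ResolveQueueLock);
}

/* Resolver side: take everything queued for one server in one go. */
int
TakeBatchForServer(Oid serverid, FdwXactResolveEntry *batch)
{
    int         nbatch = 0;
    int         i,
                keep = 0;

    LWLockAcquire(ResolveQueueLock, LW_EXCLUSIVE);
    for (i = 0; i < ResolveQueue->nentries; i++)
    {
        if (ResolveQueue->entries[i].serverid == serverid)
            batch[nbatch++] = ResolveQueue->entries[i];
        else
            ResolveQueue->entries[keep++] = ResolveQueue->entries[i];
    }
    ResolveQueue->nentries = keep;
    LWLockRelease(ResolveQueueLock);

    return nbatch;      /* caller resolves the batch asynchronously */
}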

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


