Re: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers

From: Masahiko Sawada
Subject: Re: Transactions involving multiple postgres foreign servers, take 2
Msg-id: CA+fd4k7qurzf1WayPSdYWXyaVxoe2iUk08+CwGq4mkTyJTzmXw@mail.gmail.com
In response to: RE: Transactions involving multiple postgres foreign servers, take 2 ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses: RE: Transactions involving multiple postgres foreign servers, take 2
List: pgsql-hackers
On Fri, 11 Sep 2020 at 18:24, tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
>
> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
> > On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com
> > <tsunakawa.takay@fujitsu.com> wrote:
> > > 2. 2PC processing is queued and serialized in one background
> > > worker. That severely subdues transaction throughput. Each backend
> > > should perform 2PC.
> >
> > Not sure it's safe that each backend performs PREPARE and COMMIT
> > PREPARED since the current design is for not leading to an
> > inconsistency between the actual transaction result and the result
> > the user sees.
>
> As Fujii-san is asking, I also would like to know what situation you
> think is not safe. Are you worried that the FDW's commit function
> might call ereport(ERROR | FATAL | PANIC)?

Yes.

> If so, can't we stipulate that the FDW implementor should ensure that
> the commit function always returns control to the caller?

How can the FDW implementor ensure that? Since even palloc could call
ereport(ERROR), I guess it's hard to require that of all FDW
implementors.

> > But in the future, I think we can have multiple background workers
> > per database for better performance.
>
> Does the database in "per database" mean the local database (that
> applications connect to), or the remote database accessed via FDW?

I meant the local database. In the current patch, we launch the
resolver process per local database. My idea is to allow launching
multiple resolver processes for one local database as long as the
number of workers doesn't exceed the limit.

> I'm wondering how the FDW and background worker(s) can realize
> parallel prepare and parallel commit. That is, the coordinator
> transaction performs:
>
> 1. Issue prepare to all participant nodes, but doesn't wait for the
> reply for each issue.
> 2. Waits for replies from all participants.
> 3. Issue commit to all participant nodes, but doesn't wait for the
> reply for each issue.
> 4. Waits for replies from all participants.
>
> If we just consider PostgreSQL and don't think about FDW, we can use
> libpq async functions -- PQsendQuery, PQconsumeInput, and PQgetResult.
> pgbench uses them so that one thread can issue SQL statements on
> multiple connections in parallel.
>
> But when we consider the FDW interface, plus other DBMSs, how can we
> achieve the parallelism?

It's still a rough idea but I think we can use the TMASYNC flag and
xa_complete explained in the XA specification. The core transaction
manager calls the prepare, commit, and rollback APIs with the flag,
requiring them to execute the operation asynchronously and to return a
handler (e.g., a socket obtained by PQsocket in the postgres_fdw case)
to the transaction manager. Then the transaction manager continues
polling the handler until it becomes readable and testing for
completion using xa_complete() with no wait, until all foreign servers
return OK on the xa_complete check.

> > > 3. postgres_fdw cannot detect remote updates when the UDF executed
> > > on a remote node updates data.
> >
> > I assume that you mean pushing the UDF down to a foreign server. If
> > so, I think we can do this by improving postgres_fdw. In the current
> > patch, registering and unregistering a foreign server to a group of
> > 2PC and marking a foreign server as updated is the FDW's
> > responsibility. So perhaps if we had a way to tell postgres_fdw that
> > the UDF might update the data on the foreign server, postgres_fdw
> > could mark the foreign server as updated if the UDF is shippable.
>
> Maybe we can consider that VOLATILE functions update data. That may
> be an overreaction, though.

Sorry, I don't understand that. Volatile functions are not pushed down
to the foreign servers in the first place, no?

Regards,

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services