Re: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers
From | Robert Haas |
---|---|
Subject | Re: Transactions involving multiple postgres foreign servers, take 2 |
Date | |
Msg-id | CA+TgmoZWYaWxhMG3ZYacoG=FOED3cPw2iaAQ2DoxFj8+YivyTA@mail.gmail.com |
In response to | RE: Transactions involving multiple postgres foreign servers, take 2 ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>) |
Responses | RE: Transactions involving multiple postgres foreign servers, take 2 |
List | pgsql-hackers |
On Sun, Jun 13, 2021 at 10:04 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
> I know sending a commit request may get an error from various
> underlying functions, but we're talking about the client side, not
> Postgres's server side that could unexpectedly ereport(ERROR)
> somewhere. So, the new FDW commit routine won't lose control and can
> return an error code as its return value. For instance, the FDW
> commit routine for DBMS-X would typically be:
>
> int
> DBMSXCommit(...)
> {
>     int ret;
>
>     /* extract info from the argument to pass to xa_commit() */
>
>     ret = DBMSX_xa_commit(...);
>     /* This is the actual commit function which is exposed to the app
>      * server (e.g. Tuxedo) through the xa_commit() interface */
>
>     /* map xa_commit() return values to the corresponding return
>      * values of the FDW commit routine */
>     switch (ret)
>     {
>         case XA_RMERR:
>             ret = ...;
>             break;
>         ...
>     }
>
>     return ret;
> }

Well, we're talking about running this commit routine from within
CommitTransaction(), right? So I think it is in fact running in the
server. And if that's so, then you have to worry about how to make it
respond to interrupts. You can't just call some function
DBMSX_xa_commit() and wait indefinitely for it to return. Look at
pgfdw_get_result() for an example of what real code that does this
looks like; a rough sketch of that pattern is appended at the end of
this message.

> So, we need to design how commit behaves from the user's perspective.
> That's the functional design. We should figure out what the desirable
> response of commit is first, and then see if we can implement it or
> have to compromise in some way. I think we can reference the X/Open
> TX standard and/or the JTS (Java Transaction Service) specification
> (I haven't had a chance to read them yet, though.) Just in case we
> can't find the requested commit behavior in the volcano case from
> those specifications, ... (I'm hesitant to say this because it may be
> hard,) it's desirable to follow representative products such as
> Tuxedo and GlassFish (the reference implementation of the Java EE
> specs.)

Honestly, I am not quite sure what any specification has to say about
this. We're talking about what happens when a user does something with
a foreign table and then types COMMIT. That's all about providing a
set of behaviors that are consistent with how PostgreSQL works in
other situations. You can't negotiate away the requirement to handle
errors in a way that works with PostgreSQL's infrastructure, or the
requirement that any lengthy operation handle interrupts properly, by
appealing to a specification.

> Concurrent transactions are serialized at the resolver. I heard that
> the current patch handles 2PC like this: the TM (transaction manager
> in Postgres core) requests prepare to the resolver, the resolver
> sends prepare to the remote server and waits for the reply, the TM
> gets back control from the resolver, the TM requests commit to the
> resolver, the resolver sends commit to the remote server and waits
> for the reply, and the TM gets back control. The resolver handles one
> transaction at a time.

That sounds more like a limitation of the present implementation than
a fundamental problem. We shouldn't reject the idea of having a
resolver process handle this just because the initial implementation
might be slow. If there's no fundamental problem with the idea,
parallelism and concurrency can be improved in separate patches at a
later time. It's much more important at this stage to reject ideas
that are not theoretically sound.

--
Robert Haas
EDB: http://www.enterprisedb.com
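To make the interrupt-handling point concrete, here is a minimal sketch
of the kind of wait loop pgfdw_get_result() uses, written against libpq
and the backend latch machinery. It is only an illustration of the
pattern: the cleanup that the real code does with PG_TRY is omitted,
and my_report_remote_error() is a hypothetical helper, not anything in
postgres_fdw or the patch.

/*
 * Minimal sketch: wait for a result from a remote server while staying
 * responsive to interrupts, modeled loosely on postgres_fdw's
 * pgfdw_get_result(). Error handling is simplified and
 * my_report_remote_error() is a hypothetical helper.
 */
#include "postgres.h"

#include "libpq-fe.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "storage/latch.h"

extern void my_report_remote_error(PGconn *conn);   /* hypothetical */

static PGresult *
get_result_interruptibly(PGconn *conn)
{
    PGresult   *last_res = NULL;
    PGresult   *res;

    for (;;)
    {
        /* Wait until libpq has a complete result available. */
        while (PQisBusy(conn))
        {
            int         wc;

            /* Sleep until the latch is set or the socket is readable. */
            wc = WaitLatchOrSocket(MyLatch,
                                   WL_LATCH_SET | WL_SOCKET_READABLE |
                                   WL_EXIT_ON_PM_DEATH,
                                   PQsocket(conn),
                                   -1L, PG_WAIT_EXTENSION);
            ResetLatch(MyLatch);

            /* This is what lets a stuck commit respond to a cancel. */
            CHECK_FOR_INTERRUPTS();

            if (wc & WL_SOCKET_READABLE)
            {
                if (!PQconsumeInput(conn))
                    my_report_remote_error(conn);
            }
        }

        res = PQgetResult(conn);
        if (res == NULL)
            break;              /* no more results: command is done */

        PQclear(last_res);
        last_res = res;
    }

    return last_res;
}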