RE: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers
From | tsunakawa.takay@fujitsu.com
---|---
Subject | RE: Transactions involving multiple postgres foreign servers, take 2
Date |
Msg-id | TYAPR01MB299057AD6487997F47D204D2FE309@TYAPR01MB2990.jpnprd01.prod.outlook.com
In response to | Re: Transactions involving multiple postgres foreign servers, take 2 (Robert Haas <robertmhaas@gmail.com>)
Responses | Re: Transactions involving multiple postgres foreign servers, take 2
List | pgsql-hackers
From: Robert Haas <robertmhaas@gmail.com>

> Well, we're talking about running this commit routine from within
> CommitTransaction(), right? So I think it is in fact running in the
> server. And if that's so, then you have to worry about how to make it
> respond to interrupts. You can't just call some function
> DBMSX_xa_commit() and wait for infinite time for it to return. Look at
> pgfdw_get_result() for an example of what real code to do this looks
> like.

Postgres can do that, but other implementations cannot necessarily do it, I'm afraid. But before that, the FDW interface documentation doesn't describe anything about how to handle interrupts. Actually, odbc_fdw and possibly other FDWs don't respond to interrupts.

> Honestly, I am not quite sure what any specification has to say about
> this. We're talking about what happens when a user does something with
> a foreign table and then types COMMIT. That's all about providing a set
> of behaviors that are consistent with how PostgreSQL works in other
> situations. You can't negotiate away the requirement to handle errors
> in a way that works with PostgreSQL's infrastructure, or the
> requirement that any lengthy operation handle interrupts properly, by
> appealing to a specification.

What we're talking about here is mainly whether COMMIT should return success or failure when some participants have failed to commit in the second phase of 2PC. That's new to Postgres, isn't it? Anyway, we should respect existing relevant specifications and (well-known) implementations before we conclude that we have to devise our own behavior.

> That sounds more like a limitation of the present implementation than
> a fundamental problem. We shouldn't reject the idea of having a
> resolver process handle this just because the initial implementation
> might be slow. If there's no fundamental problem with the idea,
> parallelism and concurrency can be improved in separate patches at a
> later time.
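For what it's worth, the interruptible-wait pattern Robert points at (wait in short slices and check for cancellation between them, instead of making one indefinitely blocking call) can be sketched outside the server. The following is a minimal Python sketch, not postgres_fdw's actual C code; `wait_for_result`, `QueryInterrupted`, and the `interrupted` callback are hypothetical names standing in for CHECK_FOR_INTERRUPTS()-style processing:

```python
import select
import socket

class QueryInterrupted(Exception):
    """Raised when the user cancels while we wait for a remote result."""

def wait_for_result(sock, interrupted, poll_interval=0.1):
    # Instead of a single blocking call that could hang forever (the
    # DBMSX_xa_commit() problem above), wait in short slices and check a
    # cancellation flag between them -- the same idea as the loop in
    # postgres_fdw's pgfdw_get_result().
    while True:
        if interrupted():                      # ~ CHECK_FOR_INTERRUPTS()
            raise QueryInterrupted("cancelled while waiting for remote commit")
        readable, _, _ = select.select([sock], [], [], poll_interval)
        if readable:
            return sock.recv(4096)             # result bytes arrived
```

The point is only the shape of the loop: every wait is bounded, so a cancel request is noticed within one poll interval even if the remote server never answers.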
> It's much more important at this stage to reject ideas
> that are not theoretically sound.

We talked about that, and unfortunately, I haven't seen a good and feasible idea to enhance the current approach that involves the resolver from the beginning of 2PC processing. Honestly, I don't understand why such a "one prepare, one commit in turn" serialization approach can be allowed in PostgreSQL, where developers pursue the best performance and even try to refrain from adding an if statement in a hot path. As I showed and Ikeda-san said, other implementations have each client session send prepare and commit requests. That's a natural way to achieve reasonable concurrency and performance.

Regards
Takayuki Tsunakawa
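To illustrate the concurrency point above: the sketch below issues phase 1 and phase 2 to all participants in parallel from the session itself, rather than serializing "one prepare, one commit in turn" through a resolver. It is a Python illustration of the control flow only, not a real 2PC implementation (no resolution of in-doubt transactions, no logging); `prepare` and `commit` are hypothetical callables standing in for the per-server FDW requests.

```python
from concurrent.futures import ThreadPoolExecutor

def two_phase_commit(participants, prepare, commit):
    # Phase 1: send PREPARE TRANSACTION to every participant concurrently.
    with ThreadPoolExecutor(max_workers=max(len(participants), 1)) as pool:
        if not all(pool.map(prepare, participants)):
            # Some participant failed to prepare: the transaction must be
            # rolled back everywhere; report failure to the client.
            return False
        # Phase 2: send COMMIT PREPARED to every participant concurrently.
        # A failure here leaves an in-doubt transaction to resolve later,
        # which is exactly the "return success or failure?" question above.
        return all(pool.map(commit, participants))
```

With N participants, each phase takes roughly one network round trip instead of N sequential ones, which is the performance argument for letting each session drive its own prepares and commits.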