Re: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers

From: Masahiko Sawada
Subject: Re: Transactions involving multiple postgres foreign servers, take 2
Date:
Msg-id: CA+fd4k7PUoOXaFBEAx_j5VrFC--+U9eeVC_Mz7tVz3pi8Qdovg@mail.gmail.com
In response to: RE: Transactions involving multiple postgres foreign servers, take 2 ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses: RE: Transactions involving multiple postgres foreign servers, take 2
List: pgsql-hackers
On Wed, 30 Sep 2020 at 16:02, tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com> wrote:
>
> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
> > To avoid misunderstanding, I didn't mean to disregard the performance.
> > I mean especially for the transaction management feature it's
> > essential to work fine even in failure cases. So I hope we have a
> > safe, robust, and probably simple design for the first version that
> > might be low performance yet though but have a potential for
> > performance improvement and we will be able to try to improve
> > performance later.
>
> Yes, correctness (safety?) is a basic premise. I understand that, given the time left for PG 14, we haven't yet given up a sound design that offers practical or normally expected performance. I don't think the design has been thought through enough yet to say whether it's simple or complex. At least, I don't believe that doing the "send commit request, perform commit on a remote server, and wait for reply" sequence one transaction at a time in turn is what this community (and other DBMSs) would tolerate. A kid's tricycle is safe, but it's not safe to ride a tricycle on the road. Let's not rush to commit, and let's do our best!

Okay. I'd like to resolve a concern that I have repeatedly mentioned and for which we haven't found a good solution yet: how we handle errors raised by FDW transaction callbacks while committing or rolling back prepared foreign transactions. Actually, this has already been discussed before[1], and we concluded at that time that using a background worker to commit or roll back foreign prepared transactions is the best way.

Anyway, let me summarize the discussion on this issue so far.

With your idea, after the local commit the backend process directly calls the transaction FDW API to commit the prepared foreign transactions. However, an error (i.e. ereport(ERROR)) is likely to happen during that for various reasons: it could be an OOM during memory allocation, a connection error, or whatever.
In case an error happens while committing prepared foreign transactions, the user will get the error, but it's too late: the local transaction and possibly other prepared foreign transactions have already been committed.

You proposed a first idea to avoid such a situation: the FDW implementor writes the code so as to reduce the possibility of an error happening as much as possible, for example by using palloc_extended(MCXT_ALLOC_NO_OOM) and hash_search(HASH_ENTER_NULL). But I don't think that is a comprehensive solution. Implementors might miss a case, not know about those variants, or use other functions provided by the core that could lead to an error.

Another idea is to use PG_TRY() and PG_CATCH(). IIUC, with this idea the FDW implementor catches an error but ignores it, rather than rethrowing it with PG_RE_THROW(), in order to return control to the core after the error. I'm really not sure that is a correct usage of those macros. In addition, after returning to the core, it will retry resolving the same or other foreign transactions. That is, after ignoring an error, the core needs to continue working and possibly call the transaction callbacks of other FDW implementations.

Regards,

[1] https://www.postgresql.org/message-id/CA%2BTgmoY%3DVkHrzXD%3Djw5DA%2BPp-ePW_6_v5n%2BTJk40s5Q9VXY-Pw%40mail.gmail.com

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services