Re: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Transactions involving multiple postgres foreign servers, take 2
Date
Msg-id CA+TgmoYy7BbFrqbwC52uHj+Wo9ZO1XM81A9NjCw=1aW-HeXTsw@mail.gmail.com
In response to RE: Transactions involving multiple postgres foreign servers, take 2  ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses RE: Transactions involving multiple postgres foreign servers, take 2
List pgsql-hackers
On Fri, Jun 4, 2021 at 4:04 AM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
> Why does the user have to get an error?  Once the local transaction has been prepared, which means all remote ones
> also have been prepared, the whole transaction is determined to commit.  So, the user doesn't have to receive an error
> as long as the local node is alive.

That is completely unrealistic. As Sawada-san has pointed out
repeatedly, there are tons of things that can go wrong even after the
remote side has prepared the transaction. Preparing a transaction only
promises that the remote side will let you commit the transaction upon
request. It doesn't guarantee that you'll be able to make the request.
Like Sawada-san says, network problems, out of memory issues, or many
other things could stop that from happening. Someone could come along
in another session and run "ROLLBACK PREPARED" on the remote side, and
now the "COMMIT PREPARED" will never succeed no matter how many times
you try it. At least, not unless someone goes and creates a new
prepared transaction with the same 2PC identifier, but then you won't
be committing the correct transaction anyway. Or someone could take
the remote server and drop it in a volcano. How do you propose that we
avoid giving the user an error after the remote server has been
dropped into a volcano, even though the local node is still alive?
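To make the failure mode concrete, here is a minimal sketch (illustrative names only, not PostgreSQL internals): once another session has run ROLLBACK PREPARED, the coordinator's COMMIT PREPARED fails permanently, no matter how many times it retries.

```python
# Hypothetical model of a remote server's prepared-transaction state.
# PREPARE only guarantees the remote side *can* commit on request; it
# does not guarantee the commit request will ever succeed.

class RemoteServer:
    def __init__(self):
        self.prepared = set()  # 2PC identifiers currently prepared

    def prepare(self, gid):
        self.prepared.add(gid)

    def rollback_prepared(self, gid):
        self.prepared.discard(gid)

    def commit_prepared(self, gid):
        if gid not in self.prepared:
            raise RuntimeError(f"prepared transaction {gid!r} does not exist")
        self.prepared.remove(gid)

remote = RemoteServer()
remote.prepare("fx_1234")            # phase 1 succeeds
remote.rollback_prepared("fx_1234")  # another session intervenes

# The coordinator's commit request now fails, and retrying cannot help:
# there is no prepared transaction left to commit.
try:
    remote.commit_prepared("fx_1234")
    outcome = "committed"
except RuntimeError:
    outcome = "error"
```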

Also, leaving aside theoretical arguments, I think it's not
realistically possible for an FDW author to write code to commit a
prepared transaction that will be safe in the context of running late
in PrepareTransaction(), after we've already done
RecordTransactionCommit(). Such code can't avoid throwing errors
because it can't avoid performing operations and allocating memory.
It's already been mentioned that, if an ERROR is thrown, it would be
reported to the user in place of the COMMIT acknowledgement that they
are expecting. Now, it has also been suggested that we could downgrade
the ERROR to a WARNING and still report the COMMIT. That doesn't sound
easy to do, because when the ERROR happens, control is going to jump
to AbortTransaction(). But even if you could hack it so it works like
that, it doesn't really solve the problem. What about all of the other
servers where the prepared transaction also needs to be committed? In
the design of PostgreSQL, in all circumstances, the way you recover
from an error is to abort the transaction. That is what brings the
system back to a clean state. You can't simply ignore the requirement
to abort the transaction and keep doing more work. It will never be
reliable, and Tom will instantly demand that any code that works like
that be reverted -- and for good reason.
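A toy sketch of the multi-server problem (assumed shape, not actual FDW code): even if the first commit-phase failure were somehow downgraded and ignored, the prepared transactions left on the failing servers still have to be resolved by someone, later.

```python
# Each entry stands in for "COMMIT PREPARED" against one remote server.
def ok(gid):
    pass  # commit succeeds

def network_down(gid):
    raise ConnectionError("server unreachable")  # commit-phase failure

def commit_all(commit_fns, gid):
    """Try to commit the prepared transaction on every server; return the
    indexes of servers that could not be resolved now."""
    pending = []
    for i, commit in enumerate(commit_fns):
        try:
            commit(gid)
        except Exception:
            pending.append(i)  # cannot resolve here; must be retried later
    return pending

leftover = commit_all([ok, network_down, ok], "fx_1234")
# One server's prepared transaction remains, so the work cannot simply
# be dropped when the backend's transaction machinery unwinds.
```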

I am not sure that it's 100% impossible to find a way to solve this
problem without just having the resolver do all the work, but I think
it's going to be extremely difficult. We tried to figure out some
vaguely similar things while working on undo, and it really didn't go
very well. The later stages of CommitTransaction() and
AbortTransaction() are places where very few kinds of code are safe to
execute, and finding a way to patch around that problem is not simple
either. If the resolver performance is poor, perhaps we could try to
find a way to improve it. I don't know. But I don't think it does any
good to say, well, no errors can occur after the remote transaction is
prepared. That's clearly incorrect.
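For what it's worth, the resolver approach boils down to something like the following sketch (an assumed shape, not the actual patch): a background loop that keeps retrying COMMIT PREPARED until it succeeds, so the user's backend never has to throw after the commit record is written.

```python
import time

def resolve(commit_fn, gid, max_attempts=5, delay=0.0):
    """Retry COMMIT PREPARED for one remote transaction. Returns True once
    resolved; False means it stays queued for the next resolver cycle."""
    for _ in range(max_attempts):
        try:
            commit_fn(gid)
            return True           # resolved; forget the prepared xact
        except Exception:
            time.sleep(delay)     # transient failure: back off and retry
    return False

# A commit that fails twice with a transient error, then succeeds.
attempts = {"n": 0}
def flaky_commit(gid):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("network glitch")

resolved = resolve(flaky_commit, "fx_1234")
```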

--
Robert Haas
EDB: http://www.enterprisedb.com


