> I am aware of the fact that by definition PREPARE TRANSACTION ensures
> that a transaction will be committed with COMMIT PREPARED, just trying
> to see any corner cases with the approach proposed. The DTM approach is
> actually rather close to what a GTM in Postgres-XC does :)
Yes. I think that we should try to learn as much as possible from the XC experience, but that doesn't mean we should incorporate XC's fuzzy thinking about 2PC into PG. We should not.
"Fuzzy thinking"? Please explain.
One point I'd like to mention is that it's absolutely critical to design this in a way that minimizes network roundtrips without compromising correctness. XC's GTM proxy suggests that they failed to do that. I think we really need to look at what's going to be on the other sides of the proposed APIs and think about whether it's going to be possible to have a strong local caching layer that keeps network roundtrips to a minimum. We should consider whether the need for such a caching layer has any impact on what the APIs should look like.
You mean the caching layer that already exists in XL/XC?
For example, consider a 10-node cluster where each node has 32 cores and 32 clients, and each client is running lots of short-running SQL statements. The demand for snapshots will be intense. If every backend separately requests a snapshot for every SQL statement from the coordinator, that's probably going to be terrible. We can make it the problem of the stuff behind the DTM API to figure out a way to avoid that, but maybe that's going to result in every DTM needing to solve the same problems.
The whole purpose of the XTM API is to allow different solutions to that problem to be created. Konstantin has made a very good case for such an API to exist, based on three markedly different approaches.
Whether we accept the API into core, so that it is accessible via extensions, is a different issue; the API itself looks fine for its purpose.
--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services