Thread: eXtensible Transaction Manager API (v2)

eXtensible Transaction Manager API (v2)

From
Konstantin Knizhnik
Date:
Hi,

The Postgres Professional cluster team wants to propose a new version of
the eXtensible Transaction Manager API.
Previous discussion concerning this patch can be found here:

http://www.postgresql.org/message-id/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru

The API patch itself is small enough, but we think that it would be
strange to provide just the API without examples of its usage.

We have implemented several distributed transaction managers based on
this API:
pg_dtm (based on snapshot sharing) and pg_tsdtm (CSN based on local
system time).
On top of these two DTM implementations we have built various "cluster"
configurations:
multimaster+pg_dtm, multimaster+pg_tsdtm, pg_shard+pg_dtm,
pg_shard+pg_tsdtm, postgres_fdw+pg_dtm, postgres_fdw+pg_tsdtm, ...
Multimaster is based on logical replication and is something like BDR,
but synchronous: it provides consistency across the cluster.

But we want to keep this patch as small as possible, so we decided to
include only pg_tsdtm and a patch to postgres_fdw that allows it to be
used with pg_tsdtm.
pg_tsdtm is simpler than pg_dtm, because the latter includes an arbiter
using the RAFT protocol (a centralized service) and sockhub for
efficient multiplexing of backend connections.
Also, in theory, pg_tsdtm provides better scalability, because it is
decentralized.

The architecture of DTM and tsDTM, as well as benchmark results, is
available at the wiki page:

     https://wiki.postgresql.org/wiki/DTM

Please note that pg_tsdtm is just a reference implementation of a DTM
using this XTM API.
The primary idea of this patch is to add the XTM API to the PostgreSQL
code, allowing custom transaction managers to be implemented as Postgres
extensions. So please review first of all the XTM API itself, and not
pg_tsdtm, which is just an example of its usage.
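
To give a rough picture of what such an API amounts to, here is a minimal sketch of an overridable transaction-manager table. It is only an illustration: the struct name, field names and the _PG_init() wiring are assumptions made for this example, not the contents of the actual XTM patch; the signatures simply mirror the stock PostgreSQL functions they would shadow.

/*
 * Illustrative sketch only: not the actual XTM patch.  Core keeps a table
 * of function pointers, preset to the built-in implementations, which a
 * DTM extension can overwrite from its _PG_init().
 */
#include "postgres.h"
#include "access/clog.h"        /* XidStatus */
#include "access/xlogdefs.h"    /* XLogRecPtr */
#include "utils/snapshot.h"     /* Snapshot */

typedef struct TransactionManager
{
    TransactionId (*GetNewTransactionId) (bool isSubXact);
    Snapshot      (*GetSnapshotData) (Snapshot snapshot);
    bool          (*XidInMVCCSnapshot) (TransactionId xid, Snapshot snapshot);
    XidStatus     (*GetTransactionStatus) (TransactionId xid, XLogRecPtr *lsn);
    void          (*SetTransactionStatus) (TransactionId xid, int nsubxids,
                                           TransactionId *subxids,
                                           XidStatus status, XLogRecPtr lsn);
} TransactionManager;

/* Points at the built-in implementation unless an extension replaces it. */
extern TransactionManager *TM;

/* Hypothetical table provided by a DTM extension. */
extern TransactionManager MyDtmTransactionManager;

void
_PG_init(void)                  /* in the extension, which also declares PG_MODULE_MAGIC */
{
    TM = &MyDtmTransactionManager;
}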

The complete PostgreSQL branch with all our changes can be found here:

     https://github.com/postgrespro/postgres_cluster


-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: eXtensible Transaction Manager API (v2)

From
David Steele
Date:
On 2/10/16 12:50 PM, Konstantin Knizhnik wrote:

> The Postgres Professional cluster team wants to propose a new version of
> the eXtensible Transaction Manager API.
> Previous discussion concerning this patch can be found here:
>
> http://www.postgresql.org/message-id/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru

I see a lot of discussion on this thread but little in the way of consensus.

> The API patch itself is small enough, but we think that it would be
> strange to provide just the API without examples of its usage.

It's not all that small, though it does apply cleanly even after a few
months.  At least that indicates there is not a lot of churn in this area.

I'm concerned about the lack of response or reviewers for this patch.
It may be because everyone believes they had their say on the original
thread, or because it seems like a big change to go into the last CF, or
for other reasons altogether.

I think you should try to make it clear why this patch would be a win
for 9.6.

Is anyone willing to volunteer a review or make an argument for the
importance of this patch?

--
-David
david@pgmasters.net


Re: eXtensible Transaction Manager API (v2)

From
Robert Haas
Date:
On Fri, Mar 11, 2016 at 1:11 PM, David Steele <david@pgmasters.net> wrote:
> On 2/10/16 12:50 PM, Konstantin Knizhnik wrote:
>> The Postgres Professional cluster team wants to propose a new version of
>> the eXtensible Transaction Manager API.
>> Previous discussion concerning this patch can be found here:
>>
>> http://www.postgresql.org/message-id/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru
>
> I see a lot of discussion on this thread but little in the way of consensus.
>
>> The API patch itself is small enough, but we think that it would be
>> strange to provide just the API without examples of its usage.
>
> It's not all that small, though it does apply cleanly even after a few
> months.  At least that indicates there is not a lot of churn in this area.
>
> I'm concerned about the lack of response or reviewers for this patch.
> It may be because everyone believes they had their say on the original
> thread, or because it seems like a big change to go into the last CF, or
> for other reasons altogether.
>
> I think you should try to make it clear why this patch would be a win
> for 9.6.
>
> Is anyone willing to volunteer a review or make an argument for the
> importance of this patch?

There's been a lot of discussion on another thread about this patch.
The subject is "The plan for FDW-based sharding", but the thread kind
of got partially hijacked by this issue.  The net-net of that is that
I don't think we have a clear enough idea about where we're going with
global transaction management to make it a good idea to adopt an API
like this.  For example, if we later decide we want to put the
functionality in core, will we keep the hooks around for the sake of
alternative non-core implementations?  I just don't believe this
technology is nearly mature enough to commit to at this point.

Konstantin does not agree with my assessment, perhaps unsurprisingly.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: eXtensible Transaction Manager API (v2)

From
David Steele
Date:
On 3/11/16 1:30 PM, Robert Haas wrote:

> There's been a lot of discussion on another thread about this patch.
> The subject is "The plan for FDW-based sharding", but the thread kind
> of got partially hijacked by this issue.  The net-net of that is that
> I don't think we have a clear enough idea about where we're going with
> global transaction management to make it a good idea to adopt an API
> like this.  For example, if we later decide we want to put the
> functionality in core, will we keep the hooks around for the sake of
> alternative non-core implementations?  I just don't believe this
> technology is nearly mature enough to commit to at this point.

Ah yes, I forgot about the related discussion on that thread.  Pasting
here for reference:

http://www.postgresql.org/message-id/20160223164335.GA11285@momjian.us

> Konstantin does not agree with my assessment, perhaps unsurprisingly.

I'm certainly no stranger to feeling strongly about a patch!

--
-David
david@pgmasters.net


Re: eXtensible Transaction Manager API (v2)

From
Oleg Bartunov
Date:


On Fri, Mar 11, 2016 at 7:11 PM, David Steele <david@pgmasters.net> wrote:
> On 2/10/16 12:50 PM, Konstantin Knizhnik wrote:

>> The Postgres Professional cluster team wants to propose a new version of
>> the eXtensible Transaction Manager API.
>> Previous discussion concerning this patch can be found here:
>>
>> http://www.postgresql.org/message-id/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru

> I see a lot of discussion on this thread but little in the way of consensus.

>> The API patch itself is small enough, but we think that it would be
>> strange to provide just the API without examples of its usage.

> It's not all that small, though it does apply cleanly even after a few
> months.  At least that indicates there is not a lot of churn in this area.
>
> I'm concerned about the lack of response or reviewers for this patch.
> It may be because everyone believes they had their say on the original
> thread, or because it seems like a big change to go into the last CF, or
> for other reasons altogether.

We'll prepare an easy setup to play with our solutions, so that any developer can see how it works.  We hope to post something about it this weekend.

 

> I think you should try to make it clear why this patch would be a win
> for 9.6.

It looks like the discussion shifted to a different thread; we'll answer here.

 

> Is anyone willing to volunteer a review or make an argument for the
> importance of this patch?
>
> --
> -David
> david@pgmasters.net


Re: eXtensible Transaction Manager API (v2)

From
David Steele
Date:
On 3/11/16 2:00 PM, Oleg Bartunov wrote:
> On Fri, Mar 11, 2016 at 7:11 PM, David Steele <david@pgmasters.net> wrote:

>     I'm concerned about the lack of response or reviewers for this patch.
>     It may be because everyone believes they had their say on the original
>     thread, or because it seems like a big change to go into the last CF, or
>     for other reasons altogether.
>
>
> We'll prepare an easy setup to play with our solutions, so that any developer
> can see how it works.  We hope to post something about it this weekend.

OK, then for now I'm marking this "waiting for author."  You can switch
it back to "needs review" once you have posted additional material.

--
-David
david@pgmasters.net


Re: eXtensible Transaction Manager API (v2)

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Mar 11, 2016 at 1:11 PM, David Steele <david@pgmasters.net> wrote:
>> Is anyone willing to volunteer a review or make an argument for the
>> importance of this patch?

> There's been a lot of discussion on another thread about this patch.
> The subject is "The plan for FDW-based sharding", but the thread kind
> of got partially hijacked by this issue.  The net-net of that is that
> I don't think we have a clear enough idea about where we're going with
> global transaction management to make it a good idea to adopt an API
> like this.  For example, if we later decide we want to put the
> functionality in core, will we keep the hooks around for the sake of
> alternative non-core implementations?  I just don't believe this
> technology is nearly mature enough to commit to at this point.
> Konstantin does not agree with my assessment, perhaps unsurprisingly.

I re-read the original thread,

http://www.postgresql.org/message-id/flat/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru

I think there is no question that this is an entirely immature patch.
Not coping with subtransactions is alone sufficient to make it not
credible for production.

Even if the extension API were complete and clearly stable, I have doubts
that there's any great value in integrating it into 9.6, rather than some
later release series.  The above thread makes it clear that pg_dtm is very
much WIP and has easily a year's worth of work before anybody would think
of wanting to deploy it.  So end users don't need this patch in 9.6, and
developers working on pg_dtm shouldn't really have much of a problem
applying the patch locally --- how likely is it that they'd be using a
perfectly stock build of the database apart from this patch?

But my real takeaway from that thread is that there's no great reason
to believe that this API definition *is* stable.  The single existing
use-case is very far from completion, and it's hardly unlikely that
what it needs will change.


I also took a very quick look at the patch itself:

1. No documentation.  For something that purports to be an API
specification, really the documentation should have been written *first*.

2. As noted in the cited thread, it's not clear that
Get/SetTransactionStatus are a useful cutpoint; they don't provide any
real atomicity guarantees.

3. Uh, how can you hook GetNewTransactionId but not ReadNewTransactionId?

4. There seems to be an intention to encapsulate snapshots, but surely
wrapping hooks around GetSnapshotData and XidInMVCCSnapshot is not nearly
enough for that.  Look at all the knowledge snapmgr.c has about snapshot
representation, for example.  And is a function like GetOldestXmin even
meaningful with a different notion of what snapshots are?  (For that
matter, is TransactionId == uint32 still tenable for any other notion
of snapshots?)

5. BTW, why would you hook at XidInMVCCSnapshot rather than making use of
the existing capability to have a custom SnapshotSatisfiesFunc snapshot
checker function?


IMO this is not committable as-is, and I don't think that it's something
that will become committable during this 'fest.  I think we'd be well
advised to boot it to the 2016-09 CF and focus our efforts on other stuff
that has a better chance of getting finished this month.
        regards, tom lane



Re: eXtensible Transaction Manager API (v2)

From
Konstantin Knizhnik
Date:
On 03/11/2016 11:35 PM, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Fri, Mar 11, 2016 at 1:11 PM, David Steele <david@pgmasters.net> wrote:
>>> Is anyone willing to volunteer a review or make an argument for the
>>> importance of this patch?
>> There's been a lot of discussion on another thread about this patch.
>> The subject is "The plan for FDW-based sharding", but the thread kind
>> of got partially hijacked by this issue.  The net-net of that is that
>> I don't think we have a clear enough idea about where we're going with
>> global transaction management to make it a good idea to adopt an API
>> like this.  For example, if we later decide we want to put the
>> functionality in core, will we keep the hooks around for the sake of
>> alternative non-core implementations?  I just don't believe this
>> technology is nearly mature enough to commit to at this point.
>> Konstantin does not agree with my assessment, perhaps unsurprisingly.
> I re-read the original thread,
>
> http://www.postgresql.org/message-id/flat/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru
>
> I think there is no question that this is an entirely immature patch.
> Not coping with subtransactions is alone sufficient to make it not
> credible for production.

The lack of subtransaction support is not a limitation of the XTM API.
It is a limitation of the current pg_dtm implementation. The other DTM implementation, pg_tsdtm, does support subtransactions.

>
> Even if the extension API were complete and clearly stable, I have doubts
> that there's any great value in integrating it into 9.6, rather than some
> later release series.  The above thread makes it clear that pg_dtm is very
> much WIP and has easily a year's worth of work before anybody would think
> of wanting to deploy it.  So end users don't need this patch in 9.6, and
> developers working on pg_dtm shouldn't really have much of a problem
> applying the patch locally --- how likely is it that they'd be using a
> perfectly stock build of the database apart from this patch?

I agree with you that pg_dtm is very far from production.
But I want to note two things:

1. pg_dtm and pg_tsdtm are not complete cluster solutions; they are just
one (relatively small) part of such solutions.
pg_tsdtm seems to be even more "mature", maybe because it is simpler and
does not have many of the limitations that pg_dtm has (like the lack of
subtransaction support).

2. They can be quite easily integrated with other (existing) cluster
solutions. We have integrated both of them with postgres_fdw and pg_shard.

postgres_fdw is also not a ready solution, but just a mechanism which can
also be used for sharding.
But pg_shard and CitusDB are quite popular solutions for distributed
query execution which provide good performance for analytic and
single-node OLTP queries.

Integration with the DTMs adds ACID semantics for distributed
transactions and makes it possible to support more complex OLTP and OLAP
queries involving multiple nodes.

Such integration is already done and its performance has been evaluated,
so it is not quite correct to say that we need a year or more to make
pg_dtm/pg_tsdtm ready to deploy.


>
> But my real takeaway from that thread is that there's no great reason
> to believe that this API definition *is* stable.  The single existing
> use-case is very far from completion, and it's hardly unlikely that
> what it needs will change.
>
Sorry, maybe I am completely wrong, but I do not think that it is
possible to develop a stable API if nobody is using it.
It is like "filling the pool with water only after you learn how to swim".


> I also took a very quick look at the patch itself:
>
> 1. No documentation.  For something that purports to be an API
> specification, really the documentation should have been written *first*.

Sorry, that was my fault. I have already written documentation, and it
will be included in the next version of the patch.
But please note that when we started work on the DTM we did not have a
good understanding of which PostgreSQL TM features would have to be
changed.
Only during the work on pg_dtm, pg_tsdtm and multimaster did the current
view of XTM take shape.

And one more point: we have not introduced new abstractions in XTM.
We just override existing PostgreSQL functions.
Certainly, when some internal functions become part of an API, they
should be much better documented.


> 2. As noted in the cited thread, it's not clear that
> Get/SetTransactionStatus are a useful cutpoint; they don't provide any
> real atomicity guarantees.

I wonder how such guarantees can be provided at the API level?
Atomicity means that all other transactions see this transaction either
as committed or as uncommitted.
So transaction commit has to be coordinated with the visibility check.
In the case of pg_dtm, atomicity is simply enforced by the fact that the
decision whether to commit a transaction is taken by the central
coordinator.
When it decides that a transaction is committed, it marks it as committed
in all subsequently obtained snapshots.

In the case of pg_tsdtm there is no central arbiter, so we had to
introduce an "in-doubt" transaction state, in which it is not yet known
whether the transaction is committed or aborted; any other transaction
accessing tuples updated by this transaction has to wait while its status
is "in-doubt".
The main challenge for pg_tsdtm is to make this period as short as
possible...

But these are details of a particular implementation which IMHO have no
relation to the API itself.

>
> 3. Uh, how can you hook GetNewTransactionId but not ReadNewTransactionId?

Uh-uh-uh :)
ReadNewTransactionId just reads the value of ShmemVariableCache->nextXid,
but unfortunately that is not the only place where nextXid is used -
there are about a hundred occurrences of nextXid in the Postgres core.
This is why we decided that GetNewTransactionId should still update
ShmemVariableCache->nextXid, so that there is no need to rewrite all this
code.
Sorry, but IMHO that is a problem of Postgres design and not of XTM ;)
We just want to find a compromise which allows XTM to be flexible enough
while minimizing changes to the core code.
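
A minimal sketch of that compromise (again not the real pg_dtm/pg_tsdtm code; DtmAllocateXid() is a hypothetical call into the distributed manager) could look like this: the overriding implementation still advances ShmemVariableCache->nextXid, so the many core call sites that read nextXid keep working unmodified.

/*
 * Sketch only.  DtmAllocateXid() is an invented call into the DTM; the
 * point is that the custom GetNewTransactionId keeps the shared nextXid
 * counter ahead of every XID it hands out.
 */
#include "postgres.h"
#include "access/transam.h"
#include "storage/lwlock.h"

extern TransactionId DtmAllocateXid(bool isSubXact);    /* hypothetical */

TransactionId
DtmGetNewTransactionId(bool isSubXact)
{
    TransactionId xid = DtmAllocateXid(isSubXact);

    LWLockAcquire(XidGenLock, LW_EXCLUSIVE);
    if (TransactionIdFollowsOrEquals(xid, ShmemVariableCache->nextXid))
    {
        /* keep the shared counter ahead of any XID handed out */
        ShmemVariableCache->nextXid = xid;
        TransactionIdAdvance(ShmemVariableCache->nextXid);
    }
    LWLockRelease(XidGenLock);

    return xid;
}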

> 4. There seems to be an intention to encapsulate snapshots, but surely
> wrapping hooks around GetSnapshotData and XidInMVCCSnapshot is not nearly
> enough for that.  Look at all the knowledge snapmgr.c has about snapshot
> representation, for example.  And is a function like GetOldestXmin even
> meaningful with a different notion of what snapshots are?  (For that
> matter, is TransactionId == uint32 still tenable for any other notion
> of snapshots?)

The XTM encapsulation of snapshots allows us to implement pg_dtm.
It does almost the same as the Postgres-XL GTM, but without a huge amount
of #ifdefs.

The representation of XIDs is yet another compromise point: we do not
want to change the tuple header format.
So an XID is still 32 bits and has the same meaning as in PostgreSQL. If
a custom TM implementation wants to use some other transaction
identifiers, like the CSN in pg_tsdtm, it has to provide a mapping
between them and XIDs.
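
For illustration only (the structures below are invented, not pg_tsdtm's actual ones), such a mapping can be as simple as a shared-memory hash table keyed by XID:

/*
 * Sketch only: a shared hash table that maps each 32-bit XID to the CSN
 * the distributed manager assigned to it.  Names are invented.
 */
#include "postgres.h"
#include "storage/shmem.h"
#include "utils/hsearch.h"

typedef uint64 CSN;             /* hypothetical commit sequence number */

typedef struct Xid2CsnEntry
{
    TransactionId xid;          /* hash key */
    CSN           csn;          /* CSN assigned at commit */
} Xid2CsnEntry;

static HTAB *xid2csn;

static void
DtmInitXidMap(void)
{
    HASHCTL info;

    MemSet(&info, 0, sizeof(info));
    info.keysize = sizeof(TransactionId);
    info.entrysize = sizeof(Xid2CsnEntry);

    xid2csn = ShmemInitHash("dtm xid->csn map",
                            1024, 1024,     /* init and max size */
                            &info,
                            HASH_ELEM | HASH_BLOBS);
}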


>
> 5. BTW, why would you hook at XidInMVCCSnapshot rather than making use of
> the existing capability to have a custom SnapshotSatisfiesFunc snapshot
> checker function?

The HeapTupleSatisfies* routines in utils/time/tqual.c implement a lot
of logic for handling different kinds of snapshots, checking/setting hint
bits in tuples, caching, ... We do not want to replace or copy-and-paste
all this code in a DTM implementation.
And XidInMVCCSnapshot is the common function finally used by most
HeapTupleSatisfies* functions once all other checks have passed.
So it is really the most convenient place to plug in custom visibility
checking rules. And as far as I remember, a similar approach was used in
Postgres-XL.
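
Roughly, the plug-in point could be wired like this (the hook name and the Default fallback are invented for illustration; the actual patch may differ): the tqual.c logic stays untouched and only the final snapshot-membership test goes through a replaceable pointer.

/*
 * Sketch only -- illustrative hook wiring, not the actual patch.
 */
#include "postgres.h"
#include "utils/snapshot.h"

typedef bool (*XidInSnapshot_hook_type) (TransactionId xid, Snapshot snapshot);

XidInSnapshot_hook_type XidInSnapshot_hook = NULL;      /* set by a DTM extension */

/* The stock binary-search implementation, renamed here for the example. */
extern bool XidInMVCCSnapshotDefault(TransactionId xid, Snapshot snapshot);

bool
XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot)
{
    if (XidInSnapshot_hook)
        return XidInSnapshot_hook(xid, snapshot);       /* e.g. a CSN comparison */

    return XidInMVCCSnapshotDefault(xid, snapshot);
}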
 

>
>
> IMO this is not committable as-is, and I don't think that it's something
> that will become committable during this 'fest.  I think we'd be well
> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
> that has a better chance of getting finished this month.
>             regards, tom lane


-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: eXtensible Transaction Manager API (v2)

From
Michael Paquier
Date:
On Fri, Mar 11, 2016 at 9:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> IMO this is not committable as-is, and I don't think that it's something
> that will become committable during this 'fest.  I think we'd be well
> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
> that has a better chance of getting finished this month.

Yeah, I would believe that a good first step would be to discuss
deeply about that directly at PGCon for folks that will be there and
interested in the subject. It seems like a good timing to brainstorm
things F2F at the developer unconference for example, a couple of
months before the 1st CF of 9.7. We may perhaps (or not) get to
cleaner picture of what kind of things are wanted in this area.
-- 
Michael



Re: eXtensible Transaction Manager API (v2)

From
Tom Lane
Date:
Michael Paquier <michael.paquier@gmail.com> writes:
> On Fri, Mar 11, 2016 at 9:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> IMO this is not committable as-is, and I don't think that it's something
>> that will become committable during this 'fest.  I think we'd be well
>> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
>> that has a better chance of getting finished this month.

> Yeah, I would believe that a good first step would be to discuss
> deeply about that directly at PGCon for folks that will be there and
> interested in the subject. It seems like a good timing to brainstorm
> things F2F at the developer unconference for example, a couple of
> months before the 1st CF of 9.7. We may perhaps (or not) get to
> cleaner picture of what kind of things are wanted in this area.

Yeah, the whole area seems like a great topic for some unconference
sessions.
        regards, tom lane



Re: eXtensible Transaction Manager API (v2)

From
Robert Haas
Date:
On Sat, Mar 12, 2016 at 11:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Michael Paquier <michael.paquier@gmail.com> writes:
>> On Fri, Mar 11, 2016 at 9:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> IMO this is not committable as-is, and I don't think that it's something
>>> that will become committable during this 'fest.  I think we'd be well
>>> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
>>> that has a better chance of getting finished this month.
>
>> Yeah, I would believe that a good first step would be to discuss
>> deeply about that directly at PGCon for folks that will be there and
>> interested in the subject. It seems like a good timing to brainstorm
>> things F2F at the developer unconference for example, a couple of
>> months before the 1st CF of 9.7. We may perhaps (or not) get to
>> cleaner picture of what kind of things are wanted in this area.
>
> Yeah, the whole area seems like a great topic for some unconference
> sessions.

I agree.  I think this is a problem we really need to solve, and I
think talking about it will help us figure out the best solution.  I'd
also be interested in hearing Kevin Grittner's thoughts about
serializability in a distributed environment, since he's obviously
thought about the topic of serializability quite a bit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: eXtensible Transaction Manager API (v2)

From
Kevin Grittner
Date:
On Sat, Mar 12, 2016 at 11:21 AM, Robert Haas <robertmhaas@gmail.com> wrote:

> I'd also be interested in hearing Kevin Grittner's thoughts about
> serializability in a distributed environment, since he's obviously
> thought about the topic of serializability quite a bit.

I haven't done a thorough search of the academic literature on
this, and I wouldn't be comfortable taking a really solid position
without that; but in thinking about it it seems like there are at
least three distinct problems which may have distinct solutions.

*Physical replication* may be best handled by leveraging the "safe
snapshot" idea already implemented in READ ONLY DEFERRABLE
transactions, and passing through information in the WAL stream to
allow the receiver to identify points where a snapshot can be taken
which cannot see an anomaly.  There should probably be an option to
use the last known safe snapshot or wait for a point in the stream
where one next appears.  This might take as little as a bit or two
per WAL commit record.  It's not clear what the processing overhead
would be -- it wouldn't surprise me if it was "in the noise", nor
would it surprise me if it wasn't.  We would need some careful
benchmarking, and, if performance was an issue, a GUC to control
whether the information was passed along (and, thus, whether
SERIALIZABLE transactions were allowed on the replica).

*Logical replication* (considered for the moment in a
unidirectional context) might best be handled by some reordering of
application of the commits on the replica into "apparent order of
execution" -- which is pretty well defined on the primary based on
commit order adjusted by read-write dependencies.  Basically, the
"simple" implementation would be that WAL is applied normally
unless you receive a commit record which is flagged in some way to
indicate that it is for a serializable transaction which wrote data
and at the time of commit was concurrent with at least one other
serializable transaction which had not completed and was not READ
ONLY.  Such a commit would await information in the WAL stream to
tell it when all such concurrent transactions completed, and would
indicate when such a transaction had a read-write dependency *in*
to the transaction with the suspended commit; commits for any such
transactions must be moved ahead of the suspended commit.  This
would allow logical replication, with all the filtering and such,
to avoid ever showing a state on the replica which contained
serialization anomalies.

*Logical replication with cycles* (where there is some path for
cluster A to replicate to cluster B, and some other path for
cluster B to replicate the same or related data to cluster A) has a
few options.  You could opt for "eventual consistency" --
essentially giving up on the I in ACID and managing the anomalies.
In practice this seems to lead to some form of S2PL at the
application coding level, which is very bad for performance and
concurrency, so I tend to think it should not be the only option.
Built-in S2PL would probably perform better than having it pasted
on at the application layer through some locking API, but for most
workloads is still inferior to SSI in both concurrency and
performance.  Unless a search of the literature turns up some new
alternative, I'm inclined to think that if you want to distribute a
"logical" database over multiple clusters and still manage race
conditions through use of SERIALIZABLE transactions, a distributed
SSI implementation may be the best bet.  That requires the
transaction manager (or something like it) to track non-blocking
predicate "locks" (what the implementation calls a SIReadLock)
across the whole environment, as well as tracking rw-conflicts (our
short name for read-write dependencies) across the whole
environment.  Since SSI also looks at the MVCC state, handling
checks of that without falling victim to race conditions would also
need to be handled somehow.

If I remember correctly, the patches to add the SSI implementation
of SERIALIZABLE transactions were about ten times the size of the
patches to remove S2PL and initially replace it with MVCC.  I don't
have even a gut feel as to how much bigger the distributed form is
likely to be.  On the one hand the *fundamental logic* is all there
and should not need to change; on the other hand the *mechanism*
for acquiring the data to be *used* in that logic would be
different and potentially complex.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: eXtensible Transaction Manager API (v2)

From
Stas Kelvich
Date:
On 12 Mar 2016, at 13:19, Michael Paquier <michael.paquier@gmail.com> wrote:
>
> On Fri, Mar 11, 2016 at 9:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> IMO this is not committable as-is, and I don't think that it's something
>> that will become committable during this 'fest.  I think we'd be well
>> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
>> that has a better chance of getting finished this month.
>
> Yeah, I would believe that a good first step would be to discuss
> deeply about that directly at PGCon for folks that will be there and
> interested in the subject. It seems like a good timing to brainstorm
> things F2F at the developer unconference for example, a couple of
> months before the 1st CF of 9.7. We may perhaps (or not) get to
> cleaner picture of what kind of things are wanted in this area.

To give an overview of XTM coupled with postgres_fdw from the user's perspective, I've packaged the patched Postgres with Docker
and provided a test case in which it is easy to spot a violation of the READ COMMITTED isolation level without XTM.

This test fills a database with users across two shards connected by postgres_fdw and inheriting the same table, then
starts to concurrently transfer money between users in different shards:

begin;
update t set v = v - 1 where u=%d; -- this is user from t_fdw1, first shard
update t set v = v + 1 where u=%d; -- this is user from t_fdw2, second shard
commit;

The test also simultaneously runs a reader thread that counts all the money in the system:

select sum(v) from t;

So in a transactional system we expect the sum to always be constant (zero in our case, as we initialize each user with
a zero balance).
But we can see that without tsdtm the total amount of money fluctuates around zero.

https://github.com/kelvich/postgres_xtm_docker

---
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: eXtensible Transaction Manager API (v2)

From
David Steele
Date:
On 3/16/16 7:59 AM, Stas Kelvich wrote:
> On 12 Mar 2016, at 13:19, Michael Paquier <michael.paquier@gmail.com> wrote:
>>
>> On Fri, Mar 11, 2016 at 9:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> IMO this is not committable as-is, and I don't think that it's something
>>> that will become committable during this 'fest.  I think we'd be well
>>> advised to boot it to the 2016-09 CF and focus our efforts on other stuff
>>> that has a better chance of getting finished this month.
>>
>> Yeah, I would believe that a good first step would be to discuss
>> deeply about that directly at PGCon for folks that will be there and
>> interested in the subject. It seems like a good timing to brainstorm
>> things F2F at the developer unconference for example, a couple of
>> months before the 1st CF of 9.7. We may perhaps (or not) get to
>> cleaner picture of what kind of things are wanted in this area.
> 
> To give an overview of XTM coupled with postgres_fdw from the user's perspective, I've packaged the patched Postgres with Docker
> and provided a test case in which it is easy to spot a violation of the READ COMMITTED isolation level without XTM.
> 
> This test fills a database with users across two shards connected by postgres_fdw and inheriting the same table, then
> starts to concurrently transfer money between users in different shards:
> 
> begin;
> update t set v = v - 1 where u=%d; -- this is user from t_fdw1, first shard
> update t set v = v + 1 where u=%d; -- this is user from t_fdw2, second shard
> commit;
> 
> The test also simultaneously runs a reader thread that counts all the money in the system:
> 
> select sum(v) from t;
> 
> So in a transactional system we expect the sum to always be constant (zero in our case, as we initialize each user with
> a zero balance).
> But we can see that without tsdtm the total amount of money fluctuates around zero.
> 
> https://github.com/kelvich/postgres_xtm_docker

This is an interesting example but I don't believe it does much to
address the concerns that were raised in this thread.

As far as I can see the consensus is that this patch should not be
considered for the current CF so I have marked it "returned with feedback".

If possible please follow Michael's advice and create a session at the
PGCon unconference in May.  I'm certain there will be a lot of interest.

-- 
-David
david@pgmasters.net