RE: Transactions involving multiple postgres foreign servers, take 2 - Mailing list pgsql-hackers

From tsunakawa.takay@fujitsu.com
Subject RE: Transactions involving multiple postgres foreign servers, take 2
Msg-id TYAPR01MB2990419BD8F4E66D01E01CDFFE360@TYAPR01MB2990.jpnprd01.prod.outlook.com
In response to Re: Transactions involving multiple postgres foreign servers, take 2  (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>)
Responses Re: Transactions involving multiple postgres foreign servers, take 2  (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>)
List pgsql-hackers
From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
> I don't think it's always possible to avoid raising errors in advance.
> Considering how postgres_fdw could implement your idea, I think
> postgres_fdw would need PG_TRY() and PG_CATCH() for its connection
> management. It has a connection cache in local memory using an HTAB.
> It needs to create an entry the first time it connects (e.g., when
> PREPARE and COMMIT PREPARED of a transaction are performed by different
> processes), and it needs to re-connect to the foreign server when the
> entry is invalidated. In both cases, an ERROR could happen. I guess the
> same is true for other FDW implementations. Possibly other FDWs might
> need more work, for example cleanup or releasing resources. I think

Why does the client backend have to create a new connection cache entry during PREPARE or COMMIT PREPARED?  Doesn't the
client backend naturally continue to use the connections it has used in its current transaction?

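For reference, the pattern being described above would look roughly like the following.  This is only a sketch, not the
actual postgres_fdw code: the ConnCacheEntry layout, the omitted hash_create() setup, and the fdw_get_connection()
helper are made up for illustration; only PG_TRY()/PG_CATCH(), hash_search() and the libpq calls are real APIs.

#include "postgres.h"
#include "utils/hsearch.h"
#include "libpq-fe.h"

/* hypothetical cache entry; the real postgres_fdw one holds more state */
typedef struct ConnCacheEntry
{
    Oid         serverid;   /* hash key: foreign server OID */
    PGconn     *conn;       /* NULL if not connected or invalidated */
} ConnCacheEntry;

/* assumed to have been created with hash_create() elsewhere (omitted) */
static HTAB *ConnectionHash = NULL;

static PGconn *
fdw_get_connection(Oid serverid, const char *conninfo)
{
    ConnCacheEntry *entry;
    bool        found;

    /* Creating the cache entry is cheap; connecting is what can fail. */
    entry = hash_search(ConnectionHash, &serverid, HASH_ENTER, &found);
    if (!found)
        entry->conn = NULL;

    if (entry->conn == NULL)
    {
        /*
         * (Re)connect.  An ERROR raised here must not leave the cache
         * entry pointing at a half-set-up connection, hence the
         * PG_TRY()/PG_CATCH() block.
         */
        PG_TRY();
        {
            entry->conn = PQconnectdb(conninfo);
            if (entry->conn == NULL || PQstatus(entry->conn) != CONNECTION_OK)
                ereport(ERROR,
                        (errmsg("could not connect to foreign server: %s",
                                entry->conn ? PQerrorMessage(entry->conn) : "out of memory")));
        }
        PG_CATCH();
        {
            if (entry->conn)
                PQfinish(entry->conn);
            entry->conn = NULL;
            PG_RE_THROW();
        }
        PG_END_TRY();
    }

    return entry->conn;
}

The extra care needed in the PG_CATCH() block is the kind of FDW-side complexity being weighed here against the
resolver-based design.
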
> that the pros of your idea are that it makes the transaction manager simple,
> since we don't need the resolvers and the launcher, but the cons are that it
> brings the complexity to the FDW implementation code instead. Also, IMHO I
> don't think it's safe for an FDW to neither re-throw an error nor abort the
> transaction when an error occurs.

No, I didn't say the resolver is unnecessary.  The resolver takes care of terminating remote transactions when the
client backend encounters an error during COMMIT/ROLLBACK PREPARED.

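To make that division of labor concrete, here is a minimal sketch of what I mean (hypothetical names:
fdw_xact_enqueue_for_resolver() does not exist, and only the libpq calls are real):

#include "postgres.h"
#include "libpq-fe.h"

/* hypothetical: hand a leftover remote prepared transaction to the resolver */
extern void fdw_xact_enqueue_for_resolver(Oid serverid, const char *gid);

static void
commit_prepared_on_server(Oid serverid, PGconn *conn, const char *gid)
{
    char        sql[256];
    PGresult   *res;

    snprintf(sql, sizeof(sql), "COMMIT PREPARED '%s'", gid);

    res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        /*
         * The local transaction has already committed, so we must not
         * throw an ERROR here; instead leave the remote prepared
         * transaction for the resolver, which retries COMMIT PREPARED
         * in the background.
         */
        fdw_xact_enqueue_for_resolver(serverid, gid);
    }
    PQclear(res);
}

That is, the client backend commits the remote prepared transactions itself on the normal path, and the resolver only
has to clean up the ones left behind by a failure.
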
> Regarding the performance you're concerned about, I wonder if we can somewhat
> eliminate the bottleneck if multiple resolvers are able to run on one
> database in the future. For example, if we could launch as many resolver
> processes as there are connections on the database, each backend
> process could have its own resolver process. Since there would be
> contention and inter-process communication, it would still bring some
> overhead, but that might be negligible compared to the network round trip.

Do you mean that if 200 concurrent clients each update data on two foreign servers, there are 400 resolvers?  ...That's
an overuse of resources.

Regards
Takayuki Tsunakawa

    
