Re: Global temporary tables - Mailing list pgsql-hackers
From: Pavel Stehule
Subject: Re: Global temporary tables
Msg-id: CAFj8pRCMmU7XPYpfA7FZS4pFBPwHZ=+B6wF6Z72LTXHEzfkDFA@mail.gmail.com
In response to: Re: Global temporary tables (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
List: pgsql-hackers
On Tue, Aug 20, 2019 at 6:42 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
On 20.08.2019 19:06, Pavel Stehule wrote:

As I wrote at the beginning of this thread, one of the problems with temporary tables is that it is not possible to use them at a replica.
Global temp tables allow sharing metadata between master and replica.

I am not sure I understand the last sentence. Global temp tables should be replicated on replica servers, but their content should not be replicated -- it should be session specific.
Obviously.
When we run OLAP queries at a replica, it would be great if we could do
insert into temp_table (select ...);
With local temp tables this is not possible, simply because you cannot create a temp table at a replica.
But a global temp table can be created at the master and populated with data at the replica.
yes
I performed a small investigation of how difficult it would be to support inserts into temp tables at a replica.
My first impression was that it could be done in a tricky but simple way,
by changing just three places:
1. Prohibit non-select statements in read-only transactions
2. Xid assignment (return FrozenTransactionId)
3. Transaction commit/abort
I managed to make global temp tables work normally at a replica.
But there is one problem with this approach: it is not possible to undo changes in temp tables, so rollback doesn't work.
I tried another solution: assigning dummy XIDs to standby transactions.
But this approach requires many more changes:
- Initialize a CLOG page for such transactions
- Mark transactions as committed/aborted in CLOG
- Change the snapshot check in the visibility functions
And I still didn't find a safe way to clean up CLOG space.
An alternative solution is to implement a "local CLOG" for such transactions.
The straightforward implementation is a hash table, but it may cause memory overflow if a long-lived backend performs a huge number of transactions.
In this case we also need to change the visibility check functions.
So I have implemented the simplest solution: frozen XIDs plus forced backend termination in case of transaction rollback (so the user will not see inconsistent behavior).
Please find attached global_private_temp_replica.patch, which implements this approach.
It would be nice if somebody could suggest a better solution for temporary tables at a replica.

This is another hard issue. Backend termination is probably an acceptable solution. I don't understand this area well, but if a replica allows writing (to global temp tables), then the replica has to have a local CLOG.
There are several problems:
1. How to choose an XID for a write transaction at the standby. The simplest solution is to just add 0x7fffffff to the current XID.
It eliminates the possibility of conflict with normal XIDs (received from the master),
but it requires changes in the visibility functions: a visibility check function does not know the OID of a tuple's owner, only the XID stored in the tuple header, so it has to make its decision based on this XID alone.
2. How to clean up XIDs that are no longer needed. Right now the logic for freeing CLOG pages is quite complex.
3. How to implement visibility rules for such XIDs.
In theory, every session can have its own CLOG. When the session finishes, you can simply truncate this file.

I thought about implementing a special table access method for temporary tables.

A CLOG for global temp tables can be simpler than the standard CLOG: the data are not shared, and the lifetime of the data (and the number of transactions) can be low. Another solution is to wait for ZHeap storage, so that a replica can have its own UNDO log.
+1
I am now trying to understand whether this is the only possible approach or whether there are simpler solutions.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company