Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT - Mailing list pgsql-hackers

From Gurjeet Singh
Subject Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT
Date
Msg-id CABwTF4UgtAgX=xWrz+VL-Hj6K8XopnFTzq-rT62=mmWVNuqORA@mail.gmail.com
List pgsql-hackers
TODO Item: Prevent temporary tables created with ON COMMIT DELETE ROWS from repeatedly truncating the table on every commit if the table is already empty

Please find attached two patches, implementing two different approaches to fix the issue of COMMIT truncating empty TEMP tables that have the ON COMMIT DELETE ROWS attribute.

v2.patch: This approach introduces a boolean 'rd_rows_inserted' in the RelationData struct, and sets this flag to true for every TEMP table into which a row is inserted. During commit, we avoid truncating those OCDR temp tables that haven't been inserted into in this transaction.

v3.patch: This is the original suggestion by Robert Haas, where we keep a global variable to indicate whether any TEMP table has been a target of INSERT, and if not, we skip truncating all OCDR temp tables. The downside I see in this approach is that if a transaction inserts a row into even one of the OCDR temp tables, we end up attempting to truncate all OCDR temp tables, even those that are empty.

I am attaching the test case, a psql script I used to measure the timing of the BEGIN and COMMIT operations. I executed the test like this:

$ for (( i = 1 ; i <= 4; ++i )) ; do psql -f ~/empty_temp_tables_test.psql | tee post_patchv2_run${i}.log; done

And then extracted the timing info of BEGIN and COMMITs using this pipeline:

$ grep -A 1 -E 'BEGIN|COMMIT' post_patchv2_run4.log | grep Time: | cut -d : -f 2 | cut -d ' ' -f 2

Also attached is a PDF of the test runs. It includes the times, their averages, and the '% Change' across the averages. The '% Change' column is computed as round((pre_patch_avg - post_patch_avg)/pre_patch_avg*100, 2).

The tests start with a VACUUM FULL of the database. This ensures that there are no dead rows in pg_class and other system tables left over from a previous run. It also brings all the database tables into shared_buffers, which helps reduce the variability of the test runs.

I tried quite hard to eliminate any variability in the test environment: I disabled autovacuum, increased checkpoint_segments, increased shared_buffers, etc. I then isolated each type of test into a session of its own, by disconnecting and reconnecting. But during the last test I realized that disconnection is not instantaneous: the backend process from the previous session lingered around for a few seconds, for as long as 7-8 seconds, consuming nearly 100% CPU. And during this period the next connection running the test was also consuming about 100% CPU.

So even though I tried to isolate the tests, this delay in backend exit, and the CPU consumed by the dying process, must be interfering with the results. The test results therefore need to be taken with a pinch of salt.
--
