Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward - Mailing list pgsql-hackers
From: Chao Li
Subject: Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward
Msg-id: 25A25667-AC82-46FD-A5FC-0B2AA0CC7DB9@gmail.com
In response to: Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Oct 14, 2025, at 05:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I've pushed the parts of that patch set that I thought were
> uncontroversial. What's left is the business about increasing
> DEFAULT_IO_BUFFER_SIZE and then adjusting the tests appropriately.
>
> So, v4-0001 attached is the previous v3-0002 to increase
> DEFAULT_IO_BUFFER_SIZE, plus additions in compress_none.c to make
> --compress=none also produce predictably large data blocks.
> I decided that if we're going to rely on that behavior as part
> of the solution for this thread's original problem, we'd better
> make it happen for all compression options.
>
> 0002 adds a test case in 002_pg_dump.pl to exercise --compress=none,
> because without that we don't have any coverage of the new code
> 0001 added in compress_none.c. That makes for a small increase
> in the runtime of 002_pg_dump.pl, but I'm inclined to think it's
> worth doing.
>
> 0003 modifies the existing test cases that manually compress
> blobs.toc files so that they also compress toc.dat. I feel
> like it's mostly an oversight that that wasn't done to begin
> with; if it had been done, we'd have caught the Gzip_read bug
> right away. Also, AFAICT, this doesn't cost anything measurable
> in test runtime.
>
> 0004 increases the row width in the existing test case that says
> it's trying to push more than DEFAULT_IO_BUFFER_SIZE through
> the compressors. While I agree with the premise, this solution
> is hugely expensive: it adds about 12% to the already-long runtime
> of 002_pg_dump.pl. I'd like to find a better way, but ran out of
> energy for today. (I think the reason this costs so much is that
> it's effectively iterated hundreds of times because of
> 002_pg_dump.pl's more or less cross-product approach to testing
> everything. Maybe we should pull it out of that structure?)
In the v4 patch, the code changes are straightforward. 0001 changes compress_none.c to write data into a 128K buffer first and flush it only when it fills up. 0002, 0003 and 0004 add more test cases. I have no comments on the code diff.
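For readers following along, the buffering behavior 0001 introduces can be sketched roughly as below. This is an illustrative mock-up, not the actual compress_none.c code: the struct, function names, and the tiny BUF_SIZE (standing in for DEFAULT_IO_BUFFER_SIZE) are all invented for demonstration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of write buffering; buffer size shrunk
 * from 128K to 16 bytes so the flush behavior is easy to see. */
#define BUF_SIZE 16

typedef struct
{
	char		buf[BUF_SIZE];
	size_t		used;			/* bytes currently buffered */
	size_t		flushed;		/* bytes emitted downstream so far */
} WriteState;

static void
flush_buffer(WriteState *st)
{
	/* In pg_dump this would emit one data block to the archive. */
	st->flushed += st->used;
	st->used = 0;
}

static void
buffered_write(WriteState *st, const char *data, size_t len)
{
	while (len > 0)
	{
		size_t		room = BUF_SIZE - st->used;
		size_t		chunk = (len < room) ? len : room;

		memcpy(st->buf + st->used, data, chunk);
		st->used += chunk;
		data += chunk;
		len -= chunk;

		/*
		 * Flush only when the buffer is completely full, so the archive
		 * sees predictably large, uniform data blocks instead of one
		 * block per caller-supplied write.
		 */
		if (st->used == BUF_SIZE)
			flush_buffer(st);
	}
}
```

A short write stays buffered; only once 16 bytes accumulate does a block go out, which is the "predictably large data blocks" property the quoted message relies on.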
I tested DEFAULT_IO_BUFFER_SIZE at 4K, 32K, 64K, 128K and 256K. Increasing the buffer size doesn't improve performance significantly; in particular, the results for 64K, 128K and 256K are very close. I tested with both lz4 and none compression. I am not suggesting tuning the buffer size; these numbers are only for your reference.
To run the test, I created a test database and loaded several GB of data.
```
256K ===
% time pg_dump -Fd --compress=lz4 -f dump_A.dir evantest
pg_dump -Fd --compress=lz4 -f dump_A.dir evantest 3.37s user 0.82s system 57% cpu 7.249 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.24s user 0.19s system 43% cpu 0.991 total
% time pg_dump -Fd --compress=none -f dump_A.dir evantest
pg_dump -Fd --compress=none -f dump_A.dir evantest 2.34s user 1.72s system 68% cpu 5.949 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.02s user 0.19s system 22% cpu 0.921 total
128K ===
% time pg_dump -Fd --compress=lz4 -f dump_A.dir evantest
pg_dump -Fd --compress=lz4 -f dump_A.dir evantest 3.38s user 0.85s system 64% cpu 6.525 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.28s user 0.21s system 47% cpu 1.042 total
% time pg_dump -Fd --compress=none -f dump_A.dir evantest
pg_dump -Fd --compress=none -f dump_A.dir evantest 2.34s user 1.67s system 68% cpu 5.835 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.03s user 0.22s system 22% cpu 1.118 total
64K ===
% time pg_dump -Fd --compress=lz4 -f dump_A.dir evantest
pg_dump -Fd --compress=lz4 -f dump_A.dir evantest 3.39s user 0.92s system 63% cpu 6.761 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.33s user 0.24s system 40% cpu 1.420 total
% time pg_dump -Fd --compress=none -f dump_A.dir evantest
pg_dump -Fd --compress=none -f dump_A.dir evantest 2.35s user 1.74s system 69% cpu 5.849 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.04s user 0.22s system 27% cpu 0.939 total
32K ===
% time pg_dump -Fd --compress=lz4 -f dump_A.dir evantest
pg_dump -Fd --compress=lz4 -f dump_A.dir evantest 3.43s user 0.94s system 58% cpu 7.416 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.34s user 0.22s system 56% cpu 0.983 total
% time pg_dump -Fd --compress=none -f dump_A.dir evantest
pg_dump -Fd --compress=none -f dump_A.dir evantest 2.34s user 1.75s system 67% cpu 6.070 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.05s user 0.23s system 29% cpu 0.926 total
4K ===
% time pg_dump -Fd --compress=lz4 -f dump_A.dir evantest
pg_dump -Fd --compress=lz4 -f dump_A.dir evantest 3.45s user 0.94s system 60% cpu 7.298 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.37s user 0.29s system 64% cpu 1.016 total
% time pg_dump -Fd --compress=none -f dump_A.dir evantest
pg_dump -Fd --compress=none -f dump_A.dir evantest 2.33s user 1.78s system 69% cpu 5.947 total
% time pg_restore -f /dev/null dump_A.dir
pg_restore -f /dev/null dump_A.dir 0.12s user 0.29s system 40% cpu 1.009 total
```
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/