Re: Patch: dumping tables data in multiple chunks in pg_dump - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: Patch: dumping tables data in multiple chunks in pg_dump
Msg-id CAMT0RQQAH1a8kY-mx7B07Uzn3T_zeaU9detqFFtW36_k67Su+A@mail.gmail.com
In response to Re: Patch: dumping tables data in multiple chunks in pg_dump  (Hannu Krosing <hannuk@google.com>)
Responses Re: Patch: dumping tables data in multiple chunks in pg_dump
List pgsql-hackers
Going up to 16 workers did not improve performance, but this is
expected, as the disk behind the database can only do 4TB/hour of
reads, which is now the bottleneck (408 GB / 352 s * 3600 s/h ≈ 4172 GB/h).
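
(For reference, and not part of the test commands themselves: the on-disk
size that goes into this estimate can be checked with a query along the
lines of the one below, where largedb_table stands in for the actual test
table name.)

-- Heap size of the table (main fork plus TOAST, FSM and VM), excluding indexes.
SELECT pg_size_pretty(pg_table_size('largedb_table'));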

$ time ./pg_dump --format=directory -h 10.58.80.2 -U postgres
--huge-table-chunk-pages=131072 -j 16 -f /tmp/parallel16.dump largedb
real    5m44.900s
user    53m50.491s
sys     5m47.602s

And 4 workers showed a near-linear speedup over the single-worker run
(39m55s / 10m32s ≈ 3.8x)

hannuk@pgn2:~/work/postgres/src/bin/pg_dump$ time ./pg_dump
--format=directory -h 10.58.80.2 -U postgres
--huge-table-chunk-pages=131072 -j 4 -f /tmp/parallel4.dump largedb
real    10m32.074s
user    38m54.436s
sys     2m58.216s

The database runs on a 64 vCPU VM with 128GB RAM, so most of the table
has to be read in from disk rather than from cache.
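
With the default 8 kB block size, --huge-table-chunk-pages=131072 works out
to 1 GiB per chunk (131072 * 8192 bytes). Whether a given table is large
enough to be split at that threshold can be checked with a query like the
following (largedb_table is again a placeholder name):

-- Pages in the table's main fork; the table is dumped in chunks only if
-- this exceeds the --huge-table-chunk-pages value.
SELECT pg_relation_size('largedb_table', 'main')
       / current_setting('block_size')::bigint AS main_fork_pages;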






On Thu, Nov 13, 2025 at 7:02 PM Hannu Krosing <hannuk@google.com> wrote:
>
> I just ran a test by generating a 408GB table and then dumping it both ways
>
> $ time pg_dump --format=directory -h 10.58.80.2 -U postgres -f
> /tmp/plain.dump largedb
>
> real    39m54.968s
> user    37m21.557s
> sys     2m32.422s
>
> $ time ./pg_dump --format=directory -h 10.58.80.2 -U postgres
> --huge-table-chunk-pages=131072 -j 8 -f /tmp/parallel8.dump largedb
>
> real    5m52.965s
> user    40m27.284s
> sys     3m53.339s
>
> So parallel dump with 8 workers using 1GB (128k pages) chunks runs
> almost 7 times faster than the sequential dump.
>
> This was a table that had no TOAST part. I will run some more tests
> with TOASTed tables next and expect similar or better improvements.
>
>
>
> On Wed, Nov 12, 2025 at 1:59 PM Ashutosh Bapat
> <ashutosh.bapat.oss@gmail.com> wrote:
> >
> > Hi Hannu,
> >
> > On Tue, Nov 11, 2025 at 9:00 PM Hannu Krosing <hannuk@google.com> wrote:
> > >
> > > Attached is a patch that adds the ability to dump table data in multiple chunks.
> > >
> > > Looking for feedback at this point:
> > >  1) what have I missed
> > >  2) should I implement something to avoid single-page chunks
> > >
> > > The flag --huge-table-chunk-pages tells the directory-format dump to
> > > dump tables whose main fork has more pages than this in multiple
> > > chunks of the given number of pages.
> > >
> > > The main use case is speeding up parallel dumps in case of one or a
> > > small number of HUGE tables so parts of these can be dumped in
> > > parallel.
> >
> > Have you measured speed up? Can you please share the numbers?
> >
> > --
> > Best Wishes,
> > Ashutosh Bapat
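
Purely as an illustration of the page-range reads described above (a sketch
of the general idea, not necessarily the SQL the patch itself generates), a
single 131072-page chunk of a table could be read out with a TID range
condition, again using largedb_table as a placeholder name:

-- Copy out only the rows stored in heap pages 0 .. 131071 (about 1 GiB).
-- PostgreSQL 14+ can satisfy the ctid range condition with a TID Range Scan.
COPY (
    SELECT *
    FROM largedb_table
    WHERE ctid >= '(0,0)'::tid
      AND ctid <  '(131072,0)'::tid
) TO STDOUT;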


