From: Tom Lane
Subject: Re: Dump/Restore performance improvement
Date:
Msg-id: 29496.1094404063@sss.pgh.pa.us
In response to: Dump/Restore performance improvement (Adi Alurkar <adi@sf.net>)
List: pgsql-performance
Adi Alurkar <adi@sf.net> writes:
> 1) Add a new config parameter, e.g. work_maintenance_max_mem; this will be
> the max memory PostgreSQL *can* claim if need be.

> 2) During the dump phase of the DB, PostgreSQL estimates the
> "work_maintenance_mem" that would be required to create the index in
> memory (if possible) and adds a
> SET work_maintenance_mem="the calculated value" (if this value is less
> than work_maintenance_max_mem).

This seems fairly pointless to me.  How is this different from just
setting maintenance_work_mem as large as you can stand before importing
the dump?
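
For instance, whoever runs the restore can raise the setting for just that
session before replaying the dump.  A sketch (the database name mydb and
dump file mydb.sql are made up, and the unit suffix assumes a server
version that accepts one):

    -- Raise the ceiling for this session only; it is a maximum, so small
    -- index builds won't actually allocate the whole amount.
    SET maintenance_work_mem = '1GB';
    -- Replay the dump; CREATE INDEX steps can now sort in memory if it fits.
    \i mydb.sql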

Making any decisions at dump time seems wrong to me in the first place;
pg_dump should not be expected to know what conditions the restore will
be run under.  I'm not sure that's what you're proposing, but I don't
see what the point is in practice.  It's already the case that
maintenance_work_mem is treated as the maximum memory you can use,
rather than what you will use even if you don't need it all.
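
In other words, the dump file needs no embedded SET at all: the person
doing the restore can pass the setting in from outside, e.g. through
libpq's PGOPTIONS (same hypothetical names as above):

    PGOPTIONS="-c maintenance_work_mem=1GB" psql -d mydb -f mydb.sql

That keeps the larger value scoped to the one session doing the restore,
without pg_dump having to guess anything.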

            regards, tom lane
