Hannu Krosing wrote:
> > Our very extensibility is our weakness for upgrades. Can it be worked around?
> > Anyone have any ideas?
>
> Perhaps we can keep an old postgres binary + old backend around and then
> use it in single-user mode to do a pg_dump into our running backend.
That brings up an interesting idea. Right now we dump the entire
database out to a file, delete the old database, and load in the file.
What if we could move over one table at a time? Copy out the table,
load it into the new database, then delete the old table and move on to
the next. That would allow us to upgrade with only as much free space
as the largest table needs. Another idea would be to record and remove
all indexes in the old database, then recreate them after the load.
That would certainly save disk space during the upgrade.
However, the limiting factor is that we currently don't have a
mechanism to run both databases at the same time. Seems this may be
the direction to head in.
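Just to make the idea concrete, here is a rough sketch of what a
per-table move could look like once an old and a new server can run
side by side. This is illustration only, not a proposal for the actual
tool; the ports, database name, and table list are made up, and a real
version would read the table list from pg_class:

    import subprocess

    # Assumed setup: old server on port 5432, new server on port 5433,
    # both with a database called "mydb". The table list here is a
    # placeholder.
    tables = ["accounts", "orders"]

    for t in tables:
        # Dump a single table from the old server.
        dump = subprocess.run(
            ["pg_dump", "-p", "5432", "-t", t, "mydb"],
            capture_output=True, check=True,
        )
        # Load it into the new server.
        subprocess.run(
            ["psql", "-p", "5433", "mydb"],
            input=dump.stdout, check=True,
        )
        # Drop the table from the old server to free its disk space
        # before moving on to the next table.
        subprocess.run(
            ["psql", "-p", "5432", "-c", f"DROP TABLE {t};", "mydb"],
            check=True,
        )

Peak extra disk usage then stays bounded by the largest single table
plus its dump, instead of the whole database.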
> BTW, how hard would it be to move pg_dump inside the backend (perhaps
> using a dynamically loaded function to save space when not used) so that
> it could be used like COPY ?
>
> pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;
>
> pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;
Interesting idea, but I am not sure what that buys us. Having pg_dump
separate makes maintenance easier.
--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026