Re: Practical maximums (was Re: PostgreSQL theoretical - Mailing list pgsql-general

From Jeff Davis
Subject Re: Practical maximums (was Re: PostgreSQL theoretical
Date
Msg-id 1154970001.12968.17.camel@dogma.v10.wvs
In response to Re: Practical maximums (was Re: PostgreSQL theoretical  (Ron Johnson <ron.l.johnson@cox.net>)
Responses Re: Practical maximums (was Re: PostgreSQL theoretical  (Ron Johnson <ron.l.johnson@cox.net>)
List pgsql-general
On Mon, 2006-07-31 at 09:53 -0500, Ron Johnson wrote:

> > The evasive answer is that you probably don't run regular full pg_dump
> > on such databases.
>
> Hmmm.
>

You might want to use PITR for incremental backups, or maintain a standby
system with Slony-I (www.slony.info).
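For the PITR route, the archiving side amounts to one setting; a minimal sketch, assuming a reachable backup mount (the path below is hypothetical):

```
# postgresql.conf — enable continuous WAL archiving for PITR
# /mnt/backup/wal is an example destination, not a recommendation
archive_command = 'cp %p /mnt/backup/wal/%f'
```

With that in place, a periodic base backup plus the archived WAL segments gives you incremental recovery to any point in time, without re-dumping the whole database.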

> >> Are there any plans of making a multi-threaded, or even
> >> multi-process pg_dump?
> >
> > What do you hope to accomplish by that?  pg_dump is not CPU bound.
>
> Write to multiple tape drives at the same time, thereby reducing the
> total wall time of the backup process.

pg_dump just produces a single output stream, so you could stripe that output
across multiple devices fairly easily with a script. Just make sure you also
write a script that can reassemble the pieces when you need to restore. You
don't need a multi-threaded pg_dump, only a wrapper that produces multiple
output streams. Multi-threaded design is only useful for CPU-bound
applications.
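As a rough sketch of that wrapper idea (the database name, chunk size, and /mnt/tape* mount points are all hypothetical stand-ins for real tape drives):

```shell
#!/bin/sh
# Sketch: chunk pg_dump's single output stream, then alternate the chunks
# between two directories standing in for two tape drives.
pg_dump mydb | split -b 1024m - chunk.

i=0
for f in chunk.*; do                    # round-robin the chunks over the drives
    mv "$f" "/mnt/tape$(( i % 2 ))/"
    i=$(( i + 1 ))
done

# To restore, gather the chunks back into one place so the shell glob
# returns them in name (i.e. original) order, then feed them to psql:
#   cp /mnt/tape[01]/chunk.* /restore/ && cat /restore/chunk.* | psql mydb
```

In practice you would write each stream to its drive directly rather than staging files, but the principle is the same: the parallelism lives in the script, not in pg_dump.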

Doing full backups of that much data is always a challenge, and I don't
think PostgreSQL has any limitations here that other databases don't.

Regards,
    Jeff Davis

