From: Scott Marlowe
Subject: Re: Performance of pg_dump on PGSQL 8.0
Msg-id: 1150303450.26538.9.camel@state.g2switchworks.com
In response to: Performance of pg_dump on PGSQL 8.0  ("John E. Vincent" <pgsql-performance@lusis.org>)
List: pgsql-performance
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,

BUNCHES SNIPPED

> work_mem = 1048576 ( I know this is high but you should see some of our
> sorts and aggregates)

Ummm.  That's REALLY high.  You might want to consider lowering the
global value here, and then cranking it up on a case-by-case basis, like
during nighttime report generation.  Since work_mem is allocated per sort
or hash operation, not per connection, just one or two big queries could
theoretically run your machine out of memory right now.  Put a "set
work_mem=1000000" in your script before the big query runs.
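
Something like this, for example (the database and query names here are
made up, just to show the shape of it):

    #!/bin/sh
    # Hypothetical nightly report job.  work_mem is raised for this
    # session only; the global postgresql.conf value stays low.
    psql -d reporting -c "
        SET work_mem = 1000000;   -- in KB; reverts when the session ends
        SELECT region, sum(amount) FROM fact_sales GROUP BY region;
    "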

> We're inserting around 3mil rows a night if you count staging, info, dim
> and fact tables. The vacuum issue is a whole other problem but right now
> I'm concerned about just the backup on the current hardware.
>
> I've got some space to burn so I could go to an uncompressed backup and
> compress it later during the day.

That's exactly what we do.  We just do a normal (uncompressed) backup,
and have a script that gzips anything in the backup directory that
doesn't end in .gz...  If you've got space to burn, as you say, then use
it for at least a few days to see how it affects backup speeds.
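
Roughly, it looks something like this (the paths and database name are
placeholders, not our actual setup):

    #!/bin/sh
    # Nightly: dump uncompressed into the backup directory.
    BACKUPDIR=/var/backups/pgsql
    pg_dump -f $BACKUPDIR/mydb-$(date +%Y%m%d).sql mydb

    # Later in the day (separate cron entry): compress anything
    # that isn't already gzipped.
    find $BACKUPDIR -type f ! -name '*.gz' -exec gzip {} \;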

Seeing as how you're CPU bound, the bottleneck is most likely just the
compression being done during the backup.
