Re: PostgreSQL 8.4 performance tuning questions - Mailing list pgsql-performance

From: Kevin Grittner
Subject: Re: PostgreSQL 8.4 performance tuning questions
Date:
Msg-id: 4A71C3230200002500029162@gw.wicourts.gov
In response to: Re: PostgreSQL 8.4 performance tuning questions (Scott Carey <scott@richrelevance.com>)
List: pgsql-performance

Scott Carey <scott@richrelevance.com> wrote:

> Now, what needs to be known with the pg_dump is not just how fast
> compression can go (assuming it's gzip) but also what the duty cycle
> time of the compression is.  If it is single threaded, there is all
> the network and disk time to cut out of this, as well as all the CPU
> time that pg_dump does without compression.

Well, I established a couple of messages back on this thread that
pg_dump piped to psql to a database on the same machine writes the 70GB
database to disk in two hours, while pg_dump to a custom-format file at
the default compression level on the same machine writes the 50GB file
in six hours.  No network involved, and less disk space written.  I'll
try it tonight at -Z0.
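
For concreteness, the two runs being compared look roughly like this
(database and file names here are placeholders, not the actual ones):

    # dump piped straight into psql on the same box -- no compression,
    # ~70GB written to the target database in about two hours
    pg_dump sourcedb | psql -d targetdb

    # dump to a custom-format file at the default compression level --
    # 50GB file in about six hours
    pg_dump -Fc sourcedb > db.dump

    # tonight's test: custom format with compression turned off
    pg_dump -Fc -Z0 sourcedb > db.dump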

One thing I've been wondering about is what, exactly, is compressed in
the custom format.  Is it like a .tar.gz file, where the compression is
a layer over the top of the whole archive, or are individual entries
compressed separately?  If the latter, what's the overhead of setting
up each compression stream?  Is there some minimum entry size before
compression kicks in?  (I know, I should go check the code myself.
Maybe in a bit.  Of course, if someone already knows, it would be
quicker...)
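
To make the two possibilities concrete, here's a rough gzip analogy on
some scratch files (datadir is a placeholder); if custom format works
the second way, every entry pays the stream-setup cost separately:

    # tar.gz-style: one compression stream layered over the whole archive
    tar cf - datadir | gzip > whole.tar.gz

    # per-entry: a separate gzip stream (header, fresh dictionary) for
    # each member; concatenated streams are still valid gzip
    for f in datadir/*; do gzip -c "$f"; done > per_entry.gz

    # compare sizes -- lots of small entries should make the second
    # noticeably worse
    ls -l whole.tar.gz per_entry.gz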

-Kevin
