From: Greg Smith
Subject: Re: Very poor performance loading 100M of sql data using copy
Date:
Msg-id: Pine.GSO.4.64.0804281404410.14083@westnet.com
In response to: Very poor performance loading 100M of sql data using copy (John Rouillard <rouilj@renesys.com>)
Responses: Re: Very poor performance loading 100M of sql data using copy
List: pgsql-performance

On Mon, 28 Apr 2008, John Rouillard wrote:

>    2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)
> so I changed:
>   checkpoint_segments = 30
>   checkpoint_warning = 150

That's good, but you might go higher than 30 for a bulk loading operation
like this, particularly on 8.1, where checkpoints are no fun: each one
dumps every dirty buffer to disk at once.  Using 100 is not unreasonable.
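
As a rough sketch (the segment count is just the suggestion above, not
something I've benchmarked on your hardware), the relevant
postgresql.conf lines would be:

   # Bulk-load checkpoint settings (illustrative)
   checkpoint_segments = 100   # each segment is 16MB of WAL
   checkpoint_warning = 150    # the value you already set

Both of those take effect with a reload (pg_ctl reload); no server
restart needed.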

> shared_buffers = 3000
> I don't see any indication in the docs that increasing shared memory
> would help speed up a copy operation.

The index blocks use buffer space, and if there isn't enough of it they
get written out to disk more often than they need to be (and with your
I/O hardware you need to avoid writes unless absolutely necessary).  In
theory the OS cache papers over that, but it's better to avoid the extra
writes in the first place.  You didn't say how much RAM you have, but
start with a factor-of-10 increase to 30,000 and see if that helps; if it
does, try making it large enough to use 1/4 of total server memory.  3000
is only giving the server 24MB of RAM to work with, and it's unfair to
expect it to work well in that situation.
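
To make the arithmetic explicit (shared_buffers on 8.1 is a count of 8kB
pages), here's a sketch assuming a hypothetical 4GB server, since you
didn't give your total RAM:

   3000 buffers   * 8kB =  24MB   (your current setting)
   30000 buffers  * 8kB = 234MB   (suggested starting point)
   131072 buffers * 8kB =   1GB   (1/4 of an assumed 4GB of RAM)

   # postgresql.conf (illustrative)
   shared_buffers = 30000

Note that changing shared_buffers requires a full server restart, and on
most systems you'll also have to raise the kernel's SHMMAX to allow that
much shared memory.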

While it's not relevant to this exercise, you'll need to set
effective_cache_size to a useful value one day as well.
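
For reference, effective_cache_size is also measured in 8kB pages on 8.1,
and it only influences the query planner's cost estimates, so it's safe
to set generously; a common rule of thumb is 1/2 to 3/4 of RAM.  A
sketch, again assuming a hypothetical 4GB server:

   # postgresql.conf (illustrative)
   effective_cache_size = 262144   # 262144 * 8kB = 2GB, half of 4GB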

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
