Re: are there any methods to disable updating index before inserting large number tuples? - Mailing list pgsql-general

From Andres Freund
Subject Re: are there any methods to disable updating index before inserting large number tuples?
Date
Msg-id 201111221953.36954.andres@anarazel.de
In response to Re: are there any methods to disable updating index before inserting large number tuples?  (John R Pierce <pierce@hogranch.com>)
Responses Re: are there any methods to disable updating index before inserting large number tuples?  (John R Pierce <pierce@hogranch.com>)
List pgsql-general
Hi,

On Tuesday 22 Nov 2011 19:01:02 John R Pierce wrote:
> On 11/22/11 7:52 AM, Andrew Sullivan wrote:
> > But I think performance on that table is going to be pretty bad.  I
> > suspect that COPY is going to be your friend here.
>
> indeed.  20M rows/hour is 5500 rows/second.  you'd better have a
> seriously fast disk system, say, 20 15k RPM SAS drives in a RAID10 with
> a decent SAS raid controller that has 1GB of writeback battery-or-flash
> backed cache.
20M rows inserted inside one transaction don't cause *that* many writes. I
guess the bigger problem won't be the actual disk throughput from heap/WAL
writes but the index size once the table grows: as soon as the indexes get
bigger than the available shared_buffers, performance will suffer quite a bit.
For that you probably need a sensible partitioning strategy... which is likely
to be important anyway so you can throw away old data efficiently.
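
As a rough illustration (not something from this thread; the table and column
names are made up), a time-based layout using the inheritance-based
partitioning scheme available in PostgreSQL at the time could look like this:

    -- parent table; children carry the actual data
    CREATE TABLE events (
        id       bigserial,
        ts       timestamptz NOT NULL,
        payload  text
    );

    -- one child table per month, with a CHECK constraint describing its range
    CREATE TABLE events_2011_11 (
        CHECK (ts >= '2011-11-01' AND ts < '2011-12-01')
    ) INHERITS (events);

    CREATE INDEX events_2011_11_ts_idx ON events_2011_11 (ts);

    -- getting rid of old data is cheap DDL instead of a huge DELETE + VACUUM
    DROP TABLE events_2011_08;

With constraint_exclusion = partition the planner can skip partitions whose
CHECK constraint rules them out, and each partition's indexes stay small
enough to have a chance of fitting in shared_buffers.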

Using COPY is advantageous compared to plain INSERTs because it can do some
operations in bulk which INSERT cannot.
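
For example (again just a sketch, reusing the made-up "events" table and a
placeholder file path), a server-side COPY and the client-side \copy
equivalent look like this:

    -- server-side COPY: the file is read by the server process, so the path
    -- must be visible on the server; /tmp/batch.csv is only a placeholder
    COPY events (ts, payload) FROM '/tmp/batch.csv' WITH (FORMAT csv);

    -- from psql, \copy streams the file over the client connection instead:
    -- \copy events (ts, payload) FROM 'batch.csv' WITH (FORMAT csv)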

How wide will those rows be, how long do you plan to store the data, and how
are you querying it?
Andres
