Re: Very poor performance loading 100M of sql data using copy - Mailing list pgsql-performance

From John Rouillard
Subject Re: Very poor performance loading 100M of sql data using copy
Date
Msg-id 20080429150432.GO6622@renesys.com
In response to Re: Very poor performance loading 100M of sql data using copy  (Shane Ambler <pgsql@Sheeky.Biz>)
List pgsql-performance
On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:
> John Rouillard wrote:
>
> >We can't do this as we are backfilling a couple of months of data
> >into tables with existing data.
>
> Is this a one off data loading of historic data or an ongoing thing?

Yes, it's a one-off bulk data load of many days of data. The daily
loads will also take 3 hours, but that is ok since we only do those
once a day, so we have 21 hours of slack in the schedule 8-).

> >>>The only indexes we have to drop are the ones on the primary keys
> >>> (there is one non-primary key index in the database as well).
>
> If this amount of data importing is ongoing then one thought I would try
> is partitioning (this could be worthwhile anyway with the amount of data
> you appear to have).
> Create an inherited table for the month being imported, load the data
> into it, then add the check constraints, indexes, and modify the
> rules/triggers to handle the inserts to the parent table.

Hmm, interesting idea, worth considering if we have to do this again
(I hope not).
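
For anyone following the thread, a minimal sketch of the suggested
approach, using constraint-exclusion partitioning via inheritance
(the table and column names below are hypothetical, purely for
illustration):

```sql
-- Create a bare child table for the month being imported.
-- No indexes or constraints yet, so COPY runs at full speed.
CREATE TABLE metrics_2008_03 () INHERITS (metrics);

-- Bulk-load the month's data directly into the child table.
COPY metrics_2008_03 FROM '/path/to/march_data.csv' WITH CSV;

-- Only after the load: add the CHECK constraint so the planner
-- can exclude this partition from out-of-range queries...
ALTER TABLE metrics_2008_03 ADD CONSTRAINT metrics_2008_03_check
    CHECK (observed_at >= DATE '2008-03-01'
       AND observed_at <  DATE '2008-04-01');

-- ...and build the index once over the loaded data, which is far
-- cheaper than maintaining it row-by-row during the COPY.
ALTER TABLE metrics_2008_03 ADD PRIMARY KEY (id);
```

Inserts arriving at the parent table would still need a rule or
trigger redirecting rows to the right child, as Shane notes above.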

Thanks for the reply.

--
                -- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111
