Re: Very poor performance loading 100M of sql data using copy - Mailing list pgsql-performance

From Shane Ambler
Subject Re: Very poor performance loading 100M of sql data using copy
Msg-id 48162A67.3090500@Sheeky.Biz
In response to Re: Very poor performance loading 100M of sql data using copy  (John Rouillard <rouilj@renesys.com>)
List pgsql-performance
John Rouillard wrote:

> We can't do this as we are backfilling a couple of months of data
> into tables with existing data.

Is this a one-off load of historic data or an ongoing thing?


>>> The only indexes we have to drop are the ones on the primary keys
>>>  (there is one non-primary key index in the database as well).

If this amount of data importing is ongoing, then one approach worth trying
is partitioning (which could be worthwhile anyway given the amount of data
you appear to have).
Create an inherited child table for the month being imported, load the data
into it, then add the check constraint and indexes, and adjust the
rules/triggers on the parent table so that inserts for that month are
routed to the child.
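
A rough sketch of what I mean (the table, column names and date range are
made up for the example, so adjust to your schema; a trigger would work in
place of the rule):

  -- assuming a parent table something like:
  --   CREATE TABLE measurements (ts timestamptz NOT NULL, value integer);

  -- 1. child table for the month being backfilled
  CREATE TABLE measurements_2008_02 () INHERITS (measurements);

  -- 2. bulk load while the child has no indexes or constraints
  COPY measurements_2008_02 FROM '/path/to/feb_2008.csv' WITH CSV;

  -- 3. add the check constraint and index after the load
  ALTER TABLE measurements_2008_02
    ADD CONSTRAINT measurements_2008_02_ts_check
    CHECK (ts >= '2008-02-01' AND ts < '2008-03-01');
  CREATE INDEX measurements_2008_02_ts_idx
    ON measurements_2008_02 (ts);

  -- 4. route new inserts for that month from the parent to the child
  CREATE RULE measurements_insert_2008_02 AS
    ON INSERT TO measurements
    WHERE (NEW.ts >= '2008-02-01' AND NEW.ts < '2008-03-01')
    DO INSTEAD
      INSERT INTO measurements_2008_02 VALUES (NEW.*);

The point is that the COPY runs against a small, index-free table, and the
constraint/index work is done once at the end instead of row by row.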



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz
