Re: optimising data load - Mailing list pgsql-novice

From John Taylor
Subject Re: optimising data load
Date
Msg-id 02052216350900.03723@splash.hq.jtresponse.co.uk
In response to optimising data load  (John Taylor <postgres@jtresponse.co.uk>)
List pgsql-novice
On Wednesday 22 May 2002 16:29, Patrick Hatcher wrote:
> Dump the records from the other dbase to a text file and then use the COPY
> command for Pg.  I update tables nightly with 400K+ records and it only
> takes 1-2 mins.  You should drop and re-add your indexes and then do a
> vacuum analyze
>

I'm looking into that at the moment, and I'm getting some very variable results.
Some tables are easy to do this for.
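For those the load is basically what you describe -- something along these lines
(the table, file and index names here are just made up for illustration):

    -- drop the index so the bulk load doesn't have to maintain it
    DROP INDEX orders_custid_idx;

    -- bulk load the dumped text file
    COPY orders FROM '/tmp/orders.txt';

    -- rebuild the index and refresh the planner statistics
    CREATE INDEX orders_custid_idx ON orders (custid);
    VACUUM ANALYZE orders;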

However, for some tables the data doesn't arrive in the right format, so I need to
perform some queries to get the right values to use when populating.

In this situation I'm not sure whether I should drop the indexes to make the inserts faster,
or keep them to make the selects faster.
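To give an idea of what I mean (again, all the names below are invented): the raw feed
goes into a holding table first, and then the real table is populated with a join that
looks up the values I actually need to store. The lookup selects want the indexes on the
existing tables, but the final insert would go faster without the indexes on the target:

    -- load the raw feed into a temporary holding table
    CREATE TEMP TABLE feed (extref text, code text, qty integer);
    COPY feed FROM '/tmp/feed.txt';

    -- resolve internal ids with a join, then populate the real table
    INSERT INTO orderlines (orderid, stockid, qty)
    SELECT o.orderid, s.stockid, f.qty
    FROM feed f, orders o, stock s
    WHERE o.extref = f.extref
      AND s.code = f.code;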


Thanks
JohnT
