Tuning Postgres for large data import (using COPY FROM) - Mailing list pgsql-performance

Hello,


I'd like to tune Postgres for large data import (using COPY FROM).


Here are a few steps I have already taken:



1) use 3 different disks for:

    - disk 1: source data
    - disk 2: index tablespaces
    - disk 3: data tablespaces
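
For illustration, the layout is roughly as follows (paths, tablespace, table and column names are just placeholders):

    -- tablespaces on the dedicated disks
    CREATE TABLESPACE data_space  LOCATION '/disk3/pg_data';
    CREATE TABLESPACE index_space LOCATION '/disk2/pg_index';

    -- table data and its indexes end up on different spindles
    CREATE TABLE import_table (
        id      integer,
        ts      timestamp,
        payload text
    ) TABLESPACE data_space;

    CREATE INDEX import_table_ts_idx
        ON import_table (ts) TABLESPACE index_space;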


2) define all foreign keys as initially deferred
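
i.e. something along these lines (table, column and constraint names are placeholders); the DEFERRABLE INITIALLY DEFERRED clause postpones the FK checks until COMMIT instead of checking every row during the COPY:

    ALTER TABLE import_table
        ADD CONSTRAINT import_table_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent_table (id)
        DEFERRABLE INITIALLY DEFERRED;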


3) tune some parameters:



    max_connections = 20
    shared_buffers = 30000
    work_mem = 8192
    maintenance_work_mem = 32768
    checkpoint_segments = 12

    (I also modified the kernel accordingly)
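
(Assuming the default 8 kB block size, shared_buffers is a count of 8 kB buffers, so 30000 is roughly 234 MB; work_mem and maintenance_work_mem are in kB. The effective values can be checked from psql:)

    SHOW shared_buffers;          -- count of 8 kB buffers
    SHOW work_mem;                -- in kB
    SHOW maintenance_work_mem;    -- in kB
    SHOW checkpoint_segments;     -- number of 16 MB WAL segments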




4) run VACUUM regularly
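
For instance, after each bulk load (placeholder table name):

    -- reclaim dead rows and refresh planner statistics after a load
    VACUUM ANALYZE import_table;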


The server runs Red Hat and has 1 GB of RAM.

In production (which may run on a better server), I plan to:

- import a few million rows per day,
- keep up to ca. 100 million rows in the database,
- delete older data
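
The deletion of old data would be roughly like this (table and column names are placeholders, the retention period is just an example), followed by a VACUUM to reclaim the dead rows:

    -- purge rows older than the retention window
    DELETE FROM import_table
     WHERE ts < now() - interval '30 days';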




I've seen a few postings on hash/btree indexes which say that hash indexes do
not work very well in Postgres; currently, I only use btree indexes. Could I
gain performance by using hash indexes as well?
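
Concretely, the question is whether the second form below would ever beat the first for equality lookups (index, table and column names are placeholders):

    -- current approach: default btree index
    CREATE INDEX import_table_id_btree ON import_table USING btree (id);

    -- alternative under consideration: hash index (supports equality lookups only)
    CREATE INDEX import_table_id_hash ON import_table USING hash (id);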

How does Postgres handle concurrent COPY FROM on the same table / on different
tables?
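
i.e. if several sessions each run something like the following at the same time (file path and table name are placeholders), will they block each other when targeting the same table?

    -- server-side bulk load from a file on the source-data disk
    COPY import_table FROM '/disk1/source/batch.dat' WITH DELIMITER ';';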


I'd be glad to hear any further suggestions on how to increase performance.




Marc




