Configuration tips for very large database - Mailing list pgsql-performance

From Nico Sabbi
Subject Configuration tips for very large database
Msg-id 54DD2872.3070603@poste.it
Responses Re: Configuration tips for very large database  (Kevin Grittner <kgrittn@ymail.com>)
Re: Configuration tips for very large database  ("ktm@rice.edu" <ktm@rice.edu>)
List pgsql-performance
Hello,
I've been away from postgres for several years, so please forgive me if
I've forgotten nearly everything :-)

I've just inherited a database collecting environmental data. There's a
background process continually inserting records (not very often, truth
be told) and a web interface for querying the data.
At the moment the db holds 250M rows and is growing all the time. The 3
main tables have just 3 columns each.

Queries execute very, very slowly, taking around 20 minutes. The most
evident problem I see is that I/O wait is almost always above 90% while
querying data, and 30-40% when "idle" (so to speak).
Obviously disk access is to blame, but I'm a bit surprised, because the
cluster this db runs on is not old iron at all: it's a VMware VM with
16GB RAM, 4 CPUs at 2.2GHz, and a 128GB disk (half of which is used).
The storage system underlying VMware is quite powerful; this Postgres
instance is the only system in the cluster that runs slowly.
I can increase resources if necessary, but...
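
A first step in pinning down where the time goes (table and column names
here are hypothetical stand-ins; substitute the real ones) is to look at
a query plan with buffer statistics:

```sql
-- Sketch only: 'measurements' stands in for one of the three main tables.
-- BUFFERS shows how many pages were served from shared_buffers ("shared hit")
-- versus read from disk ("read"), which maps directly onto high I/O wait.
EXPLAIN (ANALYZE, BUFFERS)
SELECT station_id, avg(value)
FROM measurements
WHERE recorded_at >= now() - interval '30 days'
GROUP BY station_id;
```

A plan full of sequential scans over 250M rows, with "read" counts in the
millions, points at missing indexes or an undersized cache rather than
slow disks as such.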

Even before analyzing the queries (which I did), I'd like to know
whether anyone has succeeded in running postgres with 200-300M rows with
queries running much faster than this. I'd like to compare the current
configuration with a well-optimized one to identify the parameters that
need changing.
Any link to a working configuration would be much appreciated.
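
For what it's worth, the commonly cited community starting points for a
dedicated 16GB server look roughly like the sketch below. These are
baseline guesses, not a tuned configuration; the right values depend on
the actual workload, and shared_buffers needs a restart to take effect.

```sql
-- Hypothetical starting values for a dedicated 16GB machine; measure, then adjust.
ALTER SYSTEM SET shared_buffers = '4GB';          -- ~25% of RAM
ALTER SYSTEM SET effective_cache_size = '12GB';   -- ~75% of RAM; planner hint only
ALTER SYSTEM SET work_mem = '64MB';               -- per sort/hash node, per query
ALTER SYSTEM SET maintenance_work_mem = '1GB';    -- speeds up VACUUM and index builds
ALTER SYSTEM SET random_page_cost = 1.5;          -- lower on fast/SSD-backed storage
```

With only three columns per table, whether these help at all depends far
more on indexing and query shape than on memory settings alone.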

Thanks for any help,
   Nico

