Re: Postgre Eating Up Too Much RAM - Mailing list pgsql-admin

From Kevin Grittner
Subject Re: Postgre Eating Up Too Much RAM
Date
Msg-id 20121114104902.90160@gmx.com
In response to Postgre Eating Up Too Much RAM  (Aaron Bono <aaron.bono@aranya.com>)
Responses Re: Postgre Eating Up Too Much RAM
List pgsql-admin
Aaron Bono wrote:

> (there are currently a little over 200 active connections to the
> database):

How many cores do you have on the system? What sort of storage
system? What, exactly, are the symptoms of the problem? Are there
200 active connections when the problem occurs? By "active", do you
mean that a user is connected or that they are actually running
something?

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
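For example, to see what "active" really means on your system, a query
along these lines against pg_stat_activity can help (the state column
exists in 9.2 and later; older releases expose current_query instead):

```sql
-- Break down backends by what they are doing (PostgreSQL 9.2+).
-- "active" here means actually executing a statement; sessions that
-- are merely connected show up as idle or idle in transaction.
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```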

> max_connections = 1000

If you want to handle a large number of clients concurrently, this is
probably the wrong way to go about it. You will probably get better
performance with a connection pool.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
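As a rough illustration, a pooler such as pgbouncer running in
transaction mode can funnel hundreds of client connections into a few
dozen real backend connections. The values below are placeholders, not
recommendations; tune them for your workload:

```ini
; pgbouncer.ini sketch -- hypothetical values
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; server connection is reused per transaction
max_client_conn = 1000    ; many clients...
default_pool_size = 20    ; ...share a small pool of real backends
```

Clients then connect to port 6432 instead of talking to PostgreSQL
directly, and max_connections in postgresql.conf can come way down.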

> shared_buffers = 256MB

Depending on your workload, a Linux machine with 32GB RAM should
probably have this set somewhere between 1GB and 8GB.

> vacuum_cost_delay = 20ms

Making VACUUM less aggressive usually backfires and causes
unacceptable performance, although that might not happen for days or
weeks after you make the configuration change.
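For reference, vacuum_cost_delay defaults to 0, so setting it to 20ms
throttles every manual VACUUM. If autovacuum is the concern, the usual
advice runs the other way: make it more aggressive, not less. A sketch
(values are illustrative, defaults as of the 9.x releases):

```ini
# postgresql.conf
#vacuum_cost_delay = 0                 # default: manual VACUUM runs full speed
#autovacuum_vacuum_cost_delay = 20ms   # default; lowering it, e.g. to 10ms,
                                       # lets autovacuum keep up under load
```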

By the way, the software is called PostgreSQL. It is often shortened
to Postgres, but "Postgre" is just wrong.

-Kevin

