Re: How to keep queries low latency as concurrency increases - Mailing list pgsql-performance

From Kevin Grittner
Subject Re: How to keep queries low latency as concurrency increases
Date
Msg-id 20121030115554.306900@gmx.com
In response to How to keep queries low latency as concurrency increases  (Catalin Iacob <iacobcatalin@gmail.com>)
Responses Re: How to keep queries low latency as concurrency increases  (Shaun Thomas <sthomas@optionshouse.com>)
List pgsql-performance
Catalin Iacob wrote:

> Hardware:
> Virtual machine running on top of VMWare
> 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
> 4GB of RAM

You should carefully test transaction-based pools limited to around 8
DB connections. Experiment with different size limits.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
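The wiki page above gives a starting-point heuristic for the pool size: roughly ((core count * 2) + effective spindle count). A minimal sketch of that arithmetic (the function name and the spindle estimate are illustrative, not from the wiki itself):

```python
def pool_size(core_count, effective_spindle_count):
    # Heuristic from the wiki page: use it as a starting point for
    # experimentation, not a hard rule.
    return core_count * 2 + effective_spindle_count

# The 4-core VM in this thread, treating the opaque shared storage
# as contributing no extra spindles:
print(pool_size(4, 0))  # 8 - matching the ~8 connections suggested above
```

With a fully cached database the effective spindle count tends toward zero, which is why the suggestion here lands at about 8 for a 4-core machine.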

> Disk that is virtual enough that I have no idea what it is, I know
> that there's some big storage shared between multiple virtual
> machines. Filesystem is ext4 with default mount options.

Can you change to noatime?
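With noatime, reads no longer trigger metadata writes for access-time updates. A hypothetical /etc/fstab line (the device and mount point are placeholders for your actual data volume):

```
# /etc/fstab - mount the PostgreSQL data filesystem with noatime
/dev/sda1  /var/lib/postgresql  ext4  noatime,errors=remount-ro  0  2
```

You can also test it without a reboot via `mount -o remount,noatime <mountpoint>`.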

> pgbouncer 1.4.2 installed from Ubuntu's packages on the same
> machine as Postgres. Django connects via TCP/IP to pgbouncer (it
> does one connection and one transaction per request) and pgbouncer
> keeps connections open to Postgres via Unix socket. The Python
> client is self compiled psycopg2-2.4.5.

Is there a good transaction-based connection pooler in Python? You're
better off with a good pool built into the client application than
with one running as a separate process between the client and the
database, IMO.
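The core of an in-process, transaction-scoped pool is small. A minimal sketch (the class, its names, and the connect-factory argument are illustrative; psycopg2 also ships its own `psycopg2.pool` module you could evaluate instead):

```python
import queue
from contextlib import contextmanager

class TransactionPool:
    """Hand one connection to each transaction; return it to the pool
    on commit or rollback. Callers block when all connections are busy,
    which caps database concurrency at the pool size."""

    def __init__(self, connect, size=8):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    @contextmanager
    def transaction(self):
        conn = self._pool.get()  # blocks until a connection is free
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            self._pool.put(conn)
```

Used as `with pool.transaction() as conn: ...` per request, this gives the same transaction-level multiplexing as pgbouncer without the extra process hop.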

>  random_page_cost | 2

For fully cached databases I recommend random_page_cost = 1, and I
always recommend cpu_tuple_cost = 0.03.
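In postgresql.conf, or per session for testing:

```sql
-- For a fully cached database, random reads cost the same as
-- sequential ones, so drop random_page_cost from its default of 4:
SET random_page_cost = 1;
-- Raise cpu_tuple_cost from its 0.01 default:
SET cpu_tuple_cost = 0.03;
```

Run your problem queries with EXPLAIN ANALYZE before and after to confirm the plans actually improve.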

-Kevin

