Re: performance problem - 10.000 databases

From scott.marlowe
Subject Re: performance problem - 10.000 databases
Msg-id Pine.LNX.4.33.0310311143320.25769-100000@css120.ihs.com
In response to performance problem - 10.000 databases  (Marek Florianczyk <franki@tpi.pl>)
List pgsql-admin
On 31 Oct 2003, Marek Florianczyk wrote:

> Hi all
>
> We are building hosting with apache + php (our own mod_virtual module)
> with about 10.000 virtual domains + PostgreSQL.
> PostgreSQL is on a different machine (2 x intel xeon 2.4GHz, 1GB RAM,
> scsi raid 1+0)

Tom's right, you need more memory, period, and probably want a very large
RAID1+0 (with like 10 or more disks).


> Does anyone have an idea how to tune postgres to accept connections faster?

PostgreSQL will take the amount of time it needs.  Each connection forks a
whole new backend process, so connections, especially on a busy, contended
machine, aren't cheap.
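
If you want to put a number on it, here's a rough way to time the
connect-plus-trivial-query overhead from the web box (hostname, user, and
database below are placeholders, not your actual names):

  $ time psql -h dbhost -U someuser somedb -c "SELECT 1;"

Most of what you see there is backend startup cost, not the query itself.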

> Maybe some other settings to speed up the server?
> My settings:
> PostgreSQL:
> max_connections = 512
> shared_buffers = 8192
> max_fsm_relations = 10000
> max_fsm_pages = 100000
> max_locks_per_transaction = 512
> wal_buffers = 32
> sort_mem = 327681
-------------^^^^^^-- THIS IS WAY TOO HIGH. sort_mem is in kilobytes, so
that's ~320 Meg PER SORT, and a single query can run more than one sort.
Drop this down to something reasonable like 8192 (i.e. 8 meg).  If lots of
big sorts were going on across all 300 users, that's 300*320 Meg of memory
that could get used up.  I.e. swap storm.
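
For what it's worth, the postgresql.conf line I'd start from (8192 is just a
sane starting point, not a magic number; tune from there for your workload):

  sort_mem = 8192        # in kilobytes, per sort operation: 8192 kB = 8 meg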

Have you adjusted random_page_cost to reflect your I/O setup?  While the
default of 4 is a good number for a single-drive server, it's kinda high
for a machine with 4 or more drives in an array.  Figures from 1.2 to 2.0
seem common; my databases under 7.2.4 run best with random_page_cost set
to about 1.4.
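
You can try values per session before committing anything to
postgresql.conf; something like this (table and column names made up):

  SET random_page_cost = 1.4;
  EXPLAIN ANALYZE SELECT * FROM yourtable WHERE somecol = 42;

then compare the plans and timings you get at different settings.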

