Re: performance problem - 10.000 databases - Mailing list pgsql-admin

From Mike Rylander
Subject Re: performance problem - 10.000 databases
Msg-id 200310311051.24898.miker@n2bb.com
In response to Re: performance problem - 10.000 databases  (Marek Florianczyk <franki@tpi.pl>)
Responses Re: performance problem - 10.000 databases
List pgsql-admin
On Friday 31 October 2003 09:59 am, Marek Florianczyk wrote:
> In a message of Fri, 31-10-2003, at 15:23, Tom Lane wrote:
> > Marek Florianczyk <franki@tpi.pl> writes:
> > > We are building hosting with apache + php (our own mod_virtual
> > > module) with about 10.000 virtual domains + PostgreSQL.
> > > PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
> > > scsi raid 1+0 )
> > > I've made some tests - 3000 databases and 400 clients connected at
> > > the same time.
> >
> > You are going to need much more serious iron than that if you want to
> > support 10000 active databases.  The required working set per database
> > is a couple hundred K just for system catalogs (I don't have an exact
> > figure in my head, but it's surely of that order of magnitude).
>
> it's about 3.6M
>
> > So the
> > system catalogs alone would require 2 gig of RAM to keep 'em swapped in;
> > never mind caching any user data.
> >
> > The recommended way to handle this is to use *one* database and create
> > 10000 users each with his own schema.  That should scale a lot better.
> >
> > Also, with a large max_connections setting, you have to beware that your
> > kernel settings are adequate --- particularly the open-files table.
> > It's pretty easy for Postgres to eat all your open files slots.  PG
> > itself will usually survive this condition just fine, but everything
> > else you run on the machine will start falling over :-(.  For safety
> > you should make sure that max_connections * max_files_per_process is
> > comfortably less than the size of the kernel's open-files table.
>
> Yes, I have made some updates: number of processes, semaphores, and file
> descriptors. I'm aware of this limitation. On this machine there will be
> only PostgreSQL, nothing else.
> This idea with one database and 10.000 schemas is very interesting; I
> never thought about that. I will make some tests on Monday and send the
> results to the list.

Following this logic, if you are willing to place the authentication in front
of the database instead of inside it, you can use a connection pool and simply
change the search_path each time a new user accesses the database.
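
A rough sketch of what I mean, in Python (untested; psycopg2, the DSN, the
pool sizes, and the helper names are just illustrative assumptions, not
anything taken from your setup):

# Sketch only: one database, one schema per hosted user, and a shared
# connection pool that flips search_path per request.  Assumes the web
# tier has already authenticated the user.
import psycopg2
import psycopg2.pool
from psycopg2 import sql

DSN = "dbname=hosting user=webpool host=db.example.com"  # placeholder DSN

def create_user_schema(admin_conn, username):
    """One-time setup per hosted user, run as the database owner."""
    with admin_conn.cursor() as cur:
        # Each user gets a schema of the same name; no per-user database.
        cur.execute(sql.SQL("CREATE SCHEMA {}").format(sql.Identifier(username)))
    admin_conn.commit()

# One shared pool for all virtual domains instead of thousands of backends.
pool = psycopg2.pool.ThreadedConnectionPool(minconn=5, maxconn=50, dsn=DSN)

def query_for_user(username, query, params=None):
    """Run a SELECT on behalf of an already-authenticated user."""
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            # Point this pooled connection at the user's schema only.
            cur.execute(sql.SQL("SET search_path TO {}")
                        .format(sql.Identifier(username)))
            cur.execute(query, params)
            rows = cur.fetchall()
        conn.commit()
        return rows
    except Exception:
        conn.rollback()
        raise
    finally:
        # Make sure the next borrower does not inherit this user's path.
        with conn.cursor() as cur:
            cur.execute("RESET search_path")
        conn.commit()
        pool.putconn(conn)

The pool stays small and shared; the only per-user work is the SET/RESET of
search_path around each request.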

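On Tom's note about the kernel open-files table, a quick sanity check could
look something like this (again just a sketch: the /proc path is
Linux-specific, and the DSN and the 80% margin are arbitrary placeholders):

# Rough check of Tom's rule of thumb: max_connections * max_files_per_process
# should stay comfortably below the kernel's open-files table (fs.file-max).
import psycopg2

def check_fd_budget(dsn="dbname=hosting user=postgres"):  # placeholder DSN
    with open("/proc/sys/fs/file-max") as f:
        file_max = int(f.read().strip())

    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW max_connections")
            max_connections = int(cur.fetchone()[0])
            cur.execute("SHOW max_files_per_process")
            max_files_per_process = int(cur.fetchone()[0])
    finally:
        conn.close()

    budget = max_connections * max_files_per_process
    print("PostgreSQL may use up to %d file descriptors; fs.file-max is %d"
          % (budget, file_max))
    if budget > 0.8 * file_max:
        print("Warning: not much headroom left for anything else on the box.")

if __name__ == "__main__":
    check_fd_budget()
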
>
> greetings
> Marek

--
Mike Rylander

