Re: performance problem - 10.000 databases - Mailing list pgsql-admin

From: Marek Florianczyk
Subject: Re: performance problem - 10.000 databases
Date:
Msg-id: 1068056378.28827.172.camel@franki-laptop.tpi.pl
In response to: Re: performance problem - 10.000 databases (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: performance problem - 10.000 databases
List: pgsql-admin
On Wed, 05-11-2003 at 18:59, Tom Lane wrote:
> Marek Florianczyk <franki@tpi.pl> writes:
> > Each client was doing:
>
> > 10 x connect,"select * from table[rand(1-4)] where
> > number=[rand(1-1000)]",disconnect--(fetch one row)
>
> Seems like this is testing the cost of connect and disconnect to the
> exclusion of nearly all else.  PG is not designed to process just one
> query per connection --- backend startup is too expensive for that.
> Consider using a connection-pooling module if your application wants
> short-lived connections.

You're right, a typical php page will run more queries "per view".
But how good is a connection-pooling module when the connection for
each virtual site is unique? Different user and password, and different
schemas and permissions, so this connection-pooling module would have
to switch between users without reconnecting to the database? Impossible?
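
The only way I can imagine it working (just a sketch of an idea, not
something I have tested) is a pool that logs in once as a superuser and
switches the effective user with SET SESSION AUTHORIZATION instead of
reconnecting. The dbname and user names here are placeholders, and the
pool owner having to be a superuser is an obvious security problem:

#!/usr/bin/perl
# Idea only: one pooled superuser connection, switching users per
# request with SET SESSION AUTHORIZATION instead of a fresh connect.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:Pg:dbname=testdb", "postgres", "",
                       { RaiseError => 1 });

sub run_as {
    my ($user, $sql) = @_;
    $dbh->do("SET SESSION AUTHORIZATION '$user'");  # become the site user
    my $rows = $dbh->selectall_arrayref($sql);
    $dbh->do("RESET SESSION AUTHORIZATION");        # back to the pool owner
    return $rows;
}

# e.g. a query for virtual site test42, with no reconnect:
my $rows = run_as("test42", "SELECT * FROM test42.table1 LIMIT 1");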

>
> > I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,
>
> I thought maybe you'd uncovered a performance issue with lots of
> schemas, but I can't reproduce it here.  I made 10000 schemas each
> containing a table "mytab", which is about the worst case for an
> unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
> about a quarter second to return the one mytab that's actually in my
> search path.  In realistic conditions where the users aren't all using
> the exact same table names, I don't think there's an issue.

But did you try that under some database load, e.g. with 100 clients
connected, like in my example? When I run these "\d" queries with no
clients connected, and after ANALYZE, they are fast, but just 100
connected clients is enough to stretch the query time to 30 sec. :(

I have 3000 schemas named test[1-3000] and 3000 users named
test[1-3000]. In each schema there are four tables (table1, table2,
table3, table4); each table has 3 columns (int, text, int), and some of
them also have indexes.
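
For reference, the whole layout can be recreated with something like
this (a sketch: the column names, the password scheme, and which tables
get an index are my placeholders; only the counts and types are exact):

#!/usr/bin/perl
# Recreate the test layout: 3000 users and schemas test1..test3000,
# four (int, text, int) tables in each schema, an index on some tables.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:Pg:dbname=testdb", "postgres", "",
                       { RaiseError => 1 });

for my $i (1 .. 3000) {
    $dbh->do("CREATE USER test$i WITH PASSWORD 'test$i'");
    $dbh->do("CREATE SCHEMA test$i AUTHORIZATION test$i");
    for my $t (1 .. 4) {
        $dbh->do("CREATE TABLE test$i.table$t (number int, data text, other int)");
        $dbh->do("GRANT ALL ON test$i.table$t TO test$i");
    }
    # only some of the tables get an index
    $dbh->do("CREATE INDEX table1_number_idx ON test$i.table1 (number)");
}
$dbh->disconnect;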

If you want, I can send the perl script that forks into 100 processes
and performs my queries.
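
It boils down to something like this (a stripped-down sketch from
memory; the dbname, the password scheme, and the assumption that the
first int column is called "number" are placeholders):

#!/usr/bin/perl
# Sketch of the benchmark: fork 100 clients, each doing
# 10 x connect, fetch one random row, disconnect.
use strict;
use warnings;
use DBI;

my @pids;
for my $client (1 .. 100) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # each client is one virtual site: user testN, schema testN
        my $user = "test$client";
        for (1 .. 10) {
            my $dbh = DBI->connect("dbi:Pg:dbname=testdb", $user, $user,
                                   { RaiseError => 1, PrintError => 0 });
            my $table = "table" . (int(rand(4)) + 1);
            my $num   = int(rand(1000)) + 1;
            # search_path defaults to "$user,public", so the bare
            # table name resolves to the client's own schema
            my $row = $dbh->selectrow_arrayref(
                "SELECT * FROM $table WHERE number = ?", undef, $num);
            $dbh->disconnect;
        }
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;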

greetings
Marek

>
>             regards, tom lane

