Re: performance problem - 10.000 databases - Mailing list pgsql-admin

From: Tom Lane
Subject: Re: performance problem - 10.000 databases
Msg-id: 3453.1068055189@sss.pgh.pa.us
In response to: Re: performance problem - 10.000 databases (Marek Florianczyk <franki@tpi.pl>)
Responses: Re: performance problem - 10.000 databases (Marek Florianczyk <franki@tpi.pl>)
List: pgsql-admin
Marek Florianczyk <franki@tpi.pl> writes:
> Each client was doing:

> 10 x connect,"select * from table[rand(1-4)] where
> number=[rand(1-1000)]",disconnect--(fetch one row)

Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else.  PG is not designed to process just one
query per connection --- backend startup is too expensive for that.
Consider using a connection-pooling module if your application wants
short-lived connections.
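
For illustration, a rough sketch of client-side pooling in Python with
psycopg2 (the DSN, pool bounds, and table/column names are placeholders,
not anything taken from your setup):

    import random
    import psycopg2.pool

    # Keep a handful of long-lived backends around instead of paying
    # backend startup for every single-row SELECT.
    pool = psycopg2.pool.SimpleConnectionPool(
        1, 10, "dbname=test user=app")

    def fetch_one():
        conn = pool.getconn()      # borrow an already-started backend
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT * FROM table1 WHERE number = %s",
                            (random.randint(1, 1000),))
                return cur.fetchone()
        finally:
            pool.putconn(conn)     # hand it back for the next caller

With something like that, the per-query cost is dominated by the query
itself rather than by backend startup.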

> I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,

I thought maybe you'd uncovered a performance issue with lots of
schemas, but I can't reproduce it here.  I made 10000 schemas each
containing a table "mytab", which is about the worst case for an
unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
about a quarter second to return the one mytab that's actually in my
search path.  In realistic conditions where the users aren't all using
the exact same table names, I don't think there's an issue.
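
If you want to try reproducing it, something along these lines (a sketch;
the connection parameters and schema count are placeholders) builds that
worst case and then times roughly the catalog lookup an unqualified
"\d mytab" performs, namely finding every relation named mytab and
keeping the ones visible in the current search path:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    conn.autocommit = True
    cur = conn.cursor()

    # Worst case: many schemas, each holding a table named mytab.
    for i in range(10000):
        cur.execute("CREATE SCHEMA s%d" % i)
        cur.execute("CREATE TABLE s%d.mytab (id int)" % i)

    # \d mytab boils down to scanning pg_class for that relname and
    # filtering by search_path visibility.
    start = time.time()
    cur.execute("""
        SELECT n.nspname, c.relname
        FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relname = 'mytab' AND pg_table_is_visible(c.oid)
    """)
    print(len(cur.fetchall()), "visible, %.3f sec" % (time.time() - start))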

            regards, tom lane
