Re: Reply: response time is very long in PG9.5.5 using psql or jdbc - Mailing list pgsql-bugs

From Tom Lane
Subject Re: Reply: response time is very long in PG9.5.5 using psql or jdbc
Date
Msg-id 30157.1518548280@sss.pgh.pa.us
Whole thread Raw
In response to Reply: response time is very long in PG9.5.5 using psql or jdbc  (石勇虎 <SHIYONGHU651@pingan.com.cn>)
Responses Re: Reply: response time is very long in PG9.5.5 using psql or jdbc  (Andres Freund <andres@anarazel.de>)
Re: response time is very long in PG9.5.5 using psql or jdbc  (David Gould <daveg@sonic.net>)
List pgsql-bugs
石勇虎 <SHIYONGHU651@pingan.com.cn> writes:
> Yes, we have more than 500 thousand objects, and the total size of the database is almost 10 TB. Just as you said, we
> may need to reduce the number of objects, or do you have any better solution?

Hmph.  I tried creating 500000 tables in a test database, and couldn't
detect any obvious performance problem in session startup.  So there's
something very odd about your results.  You might try looking at the
sizes of the system catalogs, e.g.:
    select pg_size_pretty(pg_total_relation_size('pg_attribute'));
(In my test database, pg_class is about 80MB and pg_attribute about
800MB.)
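
To extend that single-catalog check, a query along these lines lists the largest system catalogs in one pass (a sketch, not from the thread; `regnamespace` requires PostgreSQL 9.5 or later, which matches the version under discussion):

```sql
-- Sketch: show the ten largest system catalogs by total size
-- (heap + indexes + TOAST), largest first.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relnamespace = 'pg_catalog'::regnamespace
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

If pg_attribute or pg_class comes back dramatically larger than the figures above, catalog bloat (e.g. from heavy temp-table churn without aggressive autovacuum) is a plausible suspect.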

> And I also have a question: are the backend's internal catalog caches shared with other users or sessions? And is
> pgbouncer useful here?

pgbouncer or some other connection pooler would help, yes.  But I don't
think the underlying performance ought to be this bad to begin with.
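
For reference, a pooler helps here because each backend's catalog caches survive across client connections instead of being rebuilt on every new session. A minimal pgbouncer configuration might look like the following (all names and values are illustrative assumptions, not from this thread):

```ini
; Minimal pgbouncer.ini sketch -- hypothetical database name and paths.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling reuses a warm backend per transaction,
; so its relcache/syscache stay populated across clients
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```

Note that transaction pooling is incompatible with session-level features such as prepared statements held across transactions; `pool_mode = session` avoids that at the cost of less backend reuse.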

            regards, tom lane

