Re: Postgres is not able to handle more than 4k tables!? - Mailing list pgsql-hackers

From Stephen Frost
Subject Re: Postgres is not able to handle more than 4k tables!?
Msg-id 20200709152924.GB12375@tamriel.snowman.net
In response to Re: Postgres is not able to handle more than 4k tables!?  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Greetings,

* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:
> >> So, that's really the core of your problem.  We don't promise that
> >> you can run several thousand backends at once.  Usually it's recommended
> >> that you stick a connection pooler in front of a server with (at most)
> >> a few hundred backends.
>
> > Sure, but that doesn't mean things should completely fall over when we
> > do get up to larger numbers of backends, which is definitely pretty
> > common in larger systems.
>
> As I understood the report, it was not "things completely fall over",
> it was "performance gets bad".  But let's get real.  Unless the OP
> has a machine with thousands of CPUs, trying to run this way is
> counterproductive.

Right, the issue is that performance gets bad (or, really, more like
terrible...), and regardless of whether it's ideal, lots of folks
actually do run PG with thousands of connections, and we know that at
start-up time because they've set max_connections to a sufficiently high
value to support doing exactly that.
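As a concrete illustration of the pooler setup recommended upthread, here is a minimal sketch assuming PgBouncer in transaction-pooling mode. The host, database name, file paths, and pool sizes are illustrative placeholders, not values from this thread:

```ini
; pgbouncer.ini -- accept thousands of client connections,
; but keep only a few hundred actual Postgres backends open.

[databases]
; "mydb" and the host/port are placeholders for a real database.
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; release the backend at transaction end
max_client_conn = 5000       ; client connections the pooler will accept
default_pool_size = 200      ; Postgres backends actually held per database
```

With something like this in front, the server's own max_connections can stay in
the low hundreds while applications still open thousands of client connections
to port 6432.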

> Perhaps in a decade or two such machines will be common enough that
> it'll make sense to try to tune Postgres to run well on them.  Right
> now I feel no hesitation about saying "if it hurts, don't do that".

I disagree that we should completely ignore these use-cases.

Thanks,

Stephen
