Re: Skylake-S warning

From Andres Freund
Subject Re: Skylake-S warning
Date 2018-10-03 22:55:33
Msg-id 20181003225533.heawu7436cuqxipw@alap3.anarazel.de
In response to Skylake-S warning  (Daniel Wood <hexexpert@comcast.net>)
List pgsql-hackers
Hi,


On 2018-10-03 14:29:39 -0700, Daniel Wood wrote:
> If you are running benchmarks, or are a customer currently impacted
> by GetSnapshotData() on high-end multi-socket systems, be wary of
> Skylake-S.

> Performance differences of nearly 2X can be seen on select-only
> pgbench due to nothing else but unlucky choices for max_connections.
> Scale 1000, 192 local clients on a 2-socket, 48-core Skylake-S (Xeon
> Platinum 8175M @ 2.50GHz) system.  pgbench -S

FWIW, I've seen performance differences of that magnitude going back to
at least Nehalem. But not on every larger system, interestingly.


> If this is indeed just disadvantageous placement of structures/arrays
> in memory then you might also find that after upgrading a previous
> good choice for max_connections becomes a bad choice if things move
> around.

In the thread around https://www.postgresql.org/message-id/20160411214029.ce3fw6zxim5k6a2r@alap3.anarazel.de
I'd found doing more aggressive padding helped a lot.  Unfortunately I
didn't pursue this further :(
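
For illustration, here is a minimal sketch of that kind of padding,
assuming 64-byte cache lines; the struct and field names below are
invented for the example and are not PostgreSQL's actual PGXACT layout.
Forcing each per-backend entry onto its own cache line keeps one
backend's writes from invalidating the line holding a neighbor's entry
(false sharing):

#include <stdint.h>

#define CACHE_LINE_SIZE 64

/* hypothetical per-backend fields, not the real layout */
typedef struct BackendStatus
{
    uint32_t xid;
    uint32_t xmin;
    uint8_t  vacuumFlags;
} BackendStatus;

/* pad each entry to a full cache line, in the style of LWLockPadded */
typedef union PaddedBackendStatus
{
    BackendStatus status;
    char          pad[CACHE_LINE_SIZE];
} PaddedBackendStatus;

The tradeoff is the one this thread keeps circling: padding avoids
false sharing between neighboring entries, but it also spreads the data
a GetSnapshotData() scan must walk across more cache lines.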


> NOTE2: It is unclear why PG needs to support over 64K sessions.  At
> about 10MB per backend (at the low end), the empty backends alone
> would consume 640GB of memory!  Changing pgprocnos from int to short
> gives me the following results.

I've argued this before. After that we reduced MAX_BACKENDS to 0x3FFFF -
I personally think we could go to 16 bits without it being a practical
problem.
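
To make the footprint argument concrete, here is a small standalone
illustration; the 192 comes from the client count in the benchmark
above, and a 64-byte cache line is assumed:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    size_t nactive = 192;   /* active backends, per the benchmark above */

    /* bytes and cache lines a dense array of backend indexes spans */
    printf("int32 array: %zu bytes, %zu cache lines\n",
           nactive * sizeof(int32_t),
           (nactive * sizeof(int32_t) + 63) / 64);
    printf("int16 array: %zu bytes, %zu cache lines\n",
           nactive * sizeof(int16_t),
           (nactive * sizeof(int16_t) + 63) / 64);
    return 0;
}

Halving the element size halves the number of cache lines a
GetSnapshotData()-style scan has to pull in: 768 bytes / 12 lines for
int32 versus 384 bytes / 6 lines for int16.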

I really suspect we're going to have to change the layout of PGXACT data
in a way that makes a contiguous scan possible. That'll probably require
some ugliness, because a naive "use first free slot" scheme obviously is
sensitive to there being holes.  But I suspect that a scheme where
backends would occasionally try to move themselves to an earlier slot
wouldn't be too hard to implement.
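
A minimal sketch of that idea, under invented names and assuming the
slot array is protected by an exclusive lock; a real scheme would
additionally have to coordinate with concurrent snapshot scans:

#include <stdbool.h>

#define MAX_SLOTS 1024          /* stand-in for MaxBackends */

typedef struct SlotArray
{
    bool in_use[MAX_SLOTS];     /* protected by an exclusive lock */
} SlotArray;

/*
 * Called occasionally by a backend that owns 'myslot'; returns the
 * slot it owns afterwards.  Holes left by exited backends migrate
 * toward the tail, so scans stay mostly contiguous.
 */
static int
maybe_move_earlier(SlotArray *a, int myslot)
{
    for (int i = 0; i < myslot; i++)
    {
        if (!a->in_use[i])
        {
            a->in_use[i] = true;        /* claim the earlier hole */
            a->in_use[myslot] = false;  /* release the old slot */
            /* real code: copy this backend's entry over, and publish
             * the move so a concurrent scan sees it exactly once */
            return i;
        }
    }
    return myslot;              /* already as early as possible */
}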

Greetings,

Andres Freund

