Hi,
On 2020-07-31 13:39:37 -0400, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > Unfortunately, I don't have time for detailed review of this. I am
> > suspicious that there are substantial performance regressions that you
> > just haven't found yet. I would not take the position that this is a
> > completely hopeless approach, or anything like that, but neither would
> > I conclude that the tests shown so far are anywhere near enough to be
> > confident that there are no problems.
>
> I took a quick look through the v8 patch, since it's marked RFC, and
> my feeling is about the same as Robert's: it is just about impossible
> to believe that doubling (or more) the amount of hashtable manipulation
> involved in allocating a buffer won't hurt common workloads. The
> offered pgbench results don't reassure me; we've so often found that
> pgbench fails to expose performance problems, except maybe when it's
> used just so.

Indeed. The buffer mapping hashtable is already visible as a major
bottleneck in a number of workloads - even in read-only pgbench, once
s_b is large enough that the hashtable is bigger than the CPU cache,
not to speak of things like a cached sequential scan with a cheap qual
and wide rows.
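(As an illustrative invocation only - the scale and client counts are
assumptions, not measurements: a select-only run such as
pgbench -S -M prepared -c 64 -j 64, with a dataset sized to fit into a
many-GB shared_buffers, is the kind of setup where the mapping table
shows up in profiles.)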
> Robert again:
> > Also, systems with very large shared_buffers settings are becoming
> > more common, and probably will continue to become more common, so I
> > don't think we can dismiss that as an edge case any more. People don't
> > want to run with an 8GB cache on a 1TB server.
>
> I do agree that it'd be great to improve this area. Just not convinced
> that this is how.

Wonder if the temporary fix is just to do explicit hashtable probes for
all of the relation's pages iff the size of the relation is < s_b / 500
or so (roughly the sketch below). That would address the case where
small tables are frequently dropped - and dropping large relations is
more expensive from the OS and data loading perspective anyway, so it's
not going to happen as often.
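
Very roughly - a sketch only, not an actual patch: it assumes
DropRelFileNodeBuffers() can get at an SMgrRelation for smgrnblocks()
(which its current signature doesn't provide), handles a single fork,
ignores the truncation case (firstDelBlock), and the /500 threshold is
just the hand-waved number from above:

    BlockNumber nblocks;
    BlockNumber blkno;

    nblocks = smgrnblocks(smgr_reln, forkNum);

    if (nblocks < (BlockNumber) (NBuffers / 500))
    {
        /* small relation: probe the mapping table per block */
        for (blkno = 0; blkno < nblocks; blkno++)
        {
            BufferTag   tag;
            uint32      hash;
            LWLock     *partLock;
            int         buf_id;
            BufferDesc *bufHdr;
            uint32      buf_state;

            INIT_BUFFERTAG(tag, rnode.node, forkNum, blkno);
            hash = BufTableHashCode(&tag);
            partLock = BufMappingPartitionLock(hash);

            LWLockAcquire(partLock, LW_SHARED);
            buf_id = BufTableLookup(&tag, hash);
            LWLockRelease(partLock);

            if (buf_id < 0)
                continue;       /* block not in shared_buffers */

            /*
             * Re-check the tag under the buffer header spinlock - the
             * buffer could have been evicted and reused after we
             * dropped the partition lock.
             */
            bufHdr = GetBufferDescriptor(buf_id);
            buf_state = LockBufHdr(bufHdr);
            if (BUFFERTAGS_EQUAL(bufHdr->tag, tag))
                InvalidateBuffer(bufHdr);   /* releases spinlock */
            else
                UnlockBufHdr(bufHdr, buf_state);
        }
    }
    else
    {
        /* existing behaviour: scan all NBuffers buffer headers */
    }

That turns the drop into O(nblocks) partition-lock probes instead of an
O(NBuffers) scan of the buffer headers, which is a tradeoff that only
pays off for small relations - hence the threshold.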
Greetings,
Andres Freund