Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Date
Msg-id 20180208233919.vrbkbcbfh5buzo3h@alvherre.pgsql
In response to Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (Claudio Freire <klaussfreire@gmail.com>)
Responses Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (Claudio Freire <klaussfreire@gmail.com>)
List pgsql-hackers
Claudio Freire wrote:

> I don't like looping, though; it seems overly cumbersome. What's worse:
> maintaining that fragile, weird loop that might break (by causing
> unexpected output), or a slight slowdown of the test suite?
>
> I don't know how long it might take on slow machines, but on my
> machine, which isn't a great machine, the vacuum test, while not fast,
> is just a tiny fraction of what a simple "make check" takes.  So it's
> not a huge slowdown in any case.

Well, what about a machine running tests under valgrind, or with the
infuriatingly slow cache-clobbering code?  Or buildfarm members running
on really slow hardware?  These days, a few people have spent a lot of
time trying to reduce the total test time, and it'd be bad to lose those
improvements for no good reason.

I grant you that the looping I proposed is more complicated, but I don't
see any reason why it would break.
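
Roughly, the loop I have in mind is something like this -- just a
sketch, not the exact patch; the condition to poll depends on what the
test is waiting for, and checking pg_stat_activity.backend_xmin is only
one plausible choice:

  do $$
  begin
    -- poll (with a timeout) until no other session holds a snapshot
    -- that could prevent VACUUM from removing the dead tuples
    for i in 1 .. 600 loop
      exit when not exists (
        select 1 from pg_stat_activity
        where backend_xmin is not null and pid <> pg_backend_pid()
      );
      perform pg_sleep(0.1);
    end loop;
  end $$;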

Another argument against the LOCK pg_class idea is that it creates an
unnecessary contention point across the whole parallel test group --
with possible weird side effects.  What if it causes a deadlock?

Other than the wait loop I proposed, I think we can make a few very
simple improvements to this test case to avoid a slowdown:

1. This DELETE takes about a quarter of the time and removes about the
   same number of rows as the one using the IN clause:
  delete from vactst where random() < 3.0 / 4;

2. Use a new temp table rather than vactst.  Everything is then faster.

3. Figure out the minimum size for the table that triggers the behavior
   you want.  Right now you use 400k tuples -- maybe 100k are sufficient?
   Don't know.
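
Putting these together, the test could look roughly like this -- a
sketch only; the table name and the 100k row count are placeholders to
experiment with, not measured values:

  create temp table vactst2 (i int);
  insert into vactst2 select generate_series(1, 100000);
  delete from vactst2 where random() < 3.0 / 4;
  vacuum vactst2;

Being a temp table, it's invisible to the other tests running in the
same parallel group, so no LOCK or wait loop is needed for isolation.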

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

