Thread: Result Cache node shows per-worker info even for workers not launched

From: Amit Khandekar

Hi,

If planned parallel workers do not get launched, the Result Cache plan
node shows all-0 stats for each of those workers:

tpch=# set max_parallel_workers TO 0;
SET
tpch=# explain analyze
select avg(l_discount) from orders, lineitem
where
    l_orderkey = o_orderkey
    and o_orderdate < date '1995-03-09'
    and l_shipdate > date '1995-03-09';


QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=315012.87..315012.88 rows=1 width=32)
(actual time=27533.482..27533.598 rows=1 loops=1)
   ->  Gather  (cost=315012.44..315012.85 rows=4 width=32) (actual
time=27533.471..27533.587 rows=1 loops=1)
         Workers Planned: 4
         Workers Launched: 0
         ->  Partial Aggregate  (cost=314012.44..314012.45 rows=1
width=32) (actual time=27533.177..27533.178 rows=1 loops=1)
               ->  Nested Loop  (cost=0.44..309046.68 rows=1986303
width=4) (actual time=0.400..27390.835 rows=748912 loops=1)
                     ->  Parallel Seq Scan on lineitem
(cost=0.00..154513.66 rows=4120499 width=12) (actual
time=0.044..7910.399 rows=16243662 loops=1)
                           Filter: (l_shipdate > '1995-03-09'::date)
                           Rows Removed by Filter: 13756133
                     ->  Result Cache  (cost=0.44..0.53 rows=1
width=4) (actual time=0.001..0.001 rows=0 loops=16243662)
                           Cache Key: lineitem.l_orderkey
                           Hits: 12085749  Misses: 4157913  Evictions:
3256424  Overflows: 0  Memory Usage: 65537kB
                           Worker 0:  Hits: 0  Misses: 0  Evictions: 0
 Overflows: 0  Memory Usage: 0kB
                           Worker 1:  Hits: 0  Misses: 0  Evictions: 0
 Overflows: 0  Memory Usage: 0kB
                           Worker 2:  Hits: 0  Misses: 0  Evictions: 0
 Overflows: 0  Memory Usage: 0kB
                           Worker 3:  Hits: 0  Misses: 0  Evictions: 0
 Overflows: 0  Memory Usage: 0kB
                           ->  Index Scan using orders_pkey on orders
(cost=0.43..0.52 rows=1 width=4) (actual time=0.002..0.002 rows=0
loops=4157913)
                                 Index Cond: (o_orderkey = lineitem.l_orderkey)
                                 Filter: (o_orderdate < '1995-03-09'::date)
                                 Rows Removed by Filter: 1
 Planning Time: 0.211 ms
 Execution Time: 27553.477 ms
(22 rows)

Looking at other cases such as show_sort_info() or the printing of
per-worker JIT info, I could see that the general policy is to skip
printing info for workers that were not launched. Attached is a patch
that does the same for Result Cache.

I had earlier considered using (instrument[n].nloops == 0) to check
for not-launched workers. But we are already using "if
(rcstate->stats.cache_misses == 0)" for the leader process, so for
consistency I used the same check for the workers.

-- 
Thanks,
-Amit Khandekar
Huawei Technologies


Re: Result Cache node shows per-worker info even for workers not launched

From: David Rowley

On Wed, 28 Apr 2021 at 00:39, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> If planned parallel workers do not get launched, the Result Cache plan
> node shows all-0 stats for each of those workers:

Thanks for reporting this and for the patch.

You're right that there is a problem here. I did in fact have code to
skip workers that didn't have any cache misses right up until v18 of
the patch [1], but I removed it because I was getting some test
instability in the partition_prune regression tests. That was
highlighted by the CFbot machines. I mentioned that in the final
paragraph of [2]. I didn't mention the exact test there, but I was
talking about the test in partition_prune.sql.

By the time it came to b6002a796, I did end up changing the
partition_prune tests. However, I had to back that version out again
because of some problems with force_parallel_mode = regress buildfarm
animals.  By the time I re-committed Result Cache in 9eacee2e6, I had
changed the partition_prune tests so they did SET enable_resultcache =
0 before running that parallel test. I'd basically decided that the
test was never going to be stable on the buildfarm.

The problem there was that the results would vary depending on if the
parallel worker managed to do anything before the main process did all
the work. Because the tests are pretty small scale, many machines
wouldn't manage to get their worker(s) in gear and running before the
main process had finished the test. This was the reason I removed the
cache_misses == 0 test in explain.c. I'd thought that I could make
that test stable by just always outputting the cache stats line from
workers, even if they didn't assist during execution.

So, given that I removed the parallel test in partition_prune.sql and
don't have any EXPLAIN ANALYZE output for parallel tests in
resultcache.sql, it should be safe enough to put that cache_misses ==
0 test back into explain.c.

I've attached a patch to do this. The explain.c part is pretty similar
to your patch, I just took my original code and comment.

The patch also removes the SET force_parallel_mode = off in
resultcache.sql.  That's no longer needed due to adding this check in
explain.c again.  I also removed the changes I made to the
explain_parallel_append function in partition_prune.sql.  I shouldn't
have included those in 9eacee2e6.

I plan to push this in the next 24 hours or so.

David

[1] https://postgr.es/m/CAApHDvoOmTtNPoF-+Q1dAOMa8vWFsFbyQb_A0iUKTS5nf2DuLw@mail.gmail.com
[2] https://postgr.es/m/CAApHDvrz4f+i1wu-8hyqJ=pxYDroGA5Okgo5rWPOj47RZ6QTmQ@mail.gmail.com


Re: Result Cache node shows per-worker info even for workers not launched

From: Bharath Rupireddy

On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:
> I plan to push this in the next 24 hours or so.

I happened to see explain_resultcache in resultcache.sql; it seems two
of the tests still have numbers for cache hits and misses - Hits: 980
Misses: 20. Won't these make the tests unstable? Will these numbers be
the same across machines? Or is it just that no buildfarm animal has
caught these? The comment below says that the hits and misses are not
the same across machines:
-- Ensure we get some evictions.  We're unable to validate the hits and misses
-- here as the number of entries that fit in the cache at once will vary
-- between different machines.

Should we remove the hide_hitmiss parameter in explain_resultcache and
always print N for non-zero and Zero for 0 hits and misses? That would
clearly show whether we have zero or non-zero hits or misses.

Am I missing something?

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com



Re: Result Cache node shows per-worker info even for workers not launched

From: Amit Khandekar

On Wed, 28 Apr 2021 at 15:08, Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:
> > I plan to push this in the next 24 hours or so.
>
> I happened to see explain_resultcache in resultcache.sql; it seems two
> of the tests still have numbers for cache hits and misses - Hits: 980
> Misses: 20. Won't these make the tests unstable? Will these numbers be
> the same across machines? Or is it just that no buildfarm animal has
> caught these? The comment below says that the hits and misses are not
> the same across machines:
> -- Ensure we get some evictions.  We're unable to validate the hits and misses
> -- here as the number of entries that fit in the cache at once will vary
> -- between different machines.
>
> Should we remove the hide_hitmiss parameter in explain_resultcache and
> always print N for non-zero and Zero for 0 hits and misses? That would
> clearly show whether we have zero or non-zero hits or misses.
>
> Am I missing something?

I believe the assumption here is that, with no workers involved, the
cache misses and hits are guaranteed to be exactly the same everywhere.



Re: Result Cache node shows per-worker info even for workers not launched

From: David Rowley

On Wed, 28 Apr 2021 at 21:38, Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:
> > I plan to push this in the next 24 hours or so.
>
> I happened to see explain_resultcache in resultcache.sql; it seems two
> of the tests still have numbers for cache hits and misses - Hits: 980
> Misses: 20. Won't these make the tests unstable? Will these numbers be
> the same across machines? Or is it just that no buildfarm animal has
> caught these? The comment below says that the hits and misses are not
> the same across machines:
> -- Ensure we get some evictions.  We're unable to validate the hits and misses
> -- here as the number of entries that fit in the cache at once will vary
> -- between different machines.

The only reason it would be unstable is if there are cache evictions.
Evictions will only happen if the cache fills up and we need to make
way for new entries.  A 32-bit machine, for example, will use slightly
less memory for caching items, so the number of evictions is going to
be a bit lower on those machines.  Having an unstable number of
evictions will cause the hits and misses to be unstable too.
Otherwise, the number of misses is predictable: it'll be the number of
distinct sets of parameters that we look up in the cache.  Any repeats
will be a hit.  So hits plus misses should just add up to the number
of times that a normal parameterized nested loop would execute the
inner side, and that's predictable too. It would only change if you
change the query or the data in the table.

> Should we remove the hide_hitmiss parameter in explain_resultcache and
> always print N for non-zero and Zero for 0 hits and misses? That would
> clearly show whether we have zero or non-zero hits or misses.

I added that because if there are no evictions then the hits and
misses should be perfectly stable, provided the test is small enough
not to exceed work_mem and fill the cache.  If someone were to run the
tests with a small work_mem, then there would be no shortage of other
tests that would fail due to plan changes.  These tests were designed
to be small enough so there's no danger of getting close to work_mem
and filling the cache.

However, I did add 1 test that sets work_mem down to 64kB to ensure
the eviction code does get some exercise. You'll notice that I pass
"true" to explain_resultcache() to hide the hits and misses there.  We
can't test the exact number of hits/misses/evictions there, but we can
at least tell zero apart from non-zero, since I coded
explain_resultcache() to replace the numbers with Zero or N depending on
whether they were zero or above zero.

David



Re: Result Cache node shows per-worker info even for workers not launched

From: Amit Khandekar

On Wed, 28 Apr 2021 at 16:14, David Rowley <dgrowleyml@gmail.com> wrote:
> However, I did add 1 test that sets work_mem down to 64kB to ensure
> the eviction code does get some exercise. You'll notice that I pass
> "true" to explain_resultcache() to hide the hits and misses there.  We
> can't test the exact number of hits/misses/evictions there, but we can
> at least tell zero apart from non-zero, since I coded
> explain_resultcache() to replace the numbers with Zero or N depending on
> whether they were zero or above zero.

Thanks for the explanation. I did realize, after replying to Bharath
upthread, that I was wrong in assuming that the cache misses and cache
hits are always stable for non-parallel scans.



Re: Result Cache node shows per-worker info even for workers not launched

From: Amit Khandekar

On Wed, 28 Apr 2021 at 13:54, David Rowley <dgrowleyml@gmail.com> wrote:

> So, given that I removed the parallel test in partition_prune.sql and
> don't have any EXPLAIN ANALYZE output for parallel tests in
> resultcache.sql, it should be safe enough to put that cache_misses ==
> 0 test back into explain.c.
>
> I've attached a patch to do this. The explain.c part is pretty similar
> to your patch, I just took my original code and comment.

Sounds good. And thanks for the cleanup patch, and the brief history.
Patch looks ok to me.



Re: Result Cache node shows per-worker info even for workers not launched

From: David Rowley

On Wed, 28 Apr 2021 at 23:05, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>
> On Wed, 28 Apr 2021 at 13:54, David Rowley <dgrowleyml@gmail.com> wrote:
> > I've attached a patch to do this. The explain.c part is pretty similar
> > to your patch, I just took my original code and comment.
>
> Sounds good. And thanks for the cleanup patch, and the brief history.
> Patch looks ok to me.

Thanks for the review.  I pushed the patch with a small additional
change to further tidy up show_resultcache_info().

David