[PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
Date: Tue, 9 May 2006 22:37:04 +0200
This was a suggestion made back in March that would dramatically reduce
the overhead of EXPLAIN ANALYZE on queries that loop continuously over
the same nodes.

http://archives.postgresql.org/pgsql-hackers/2006-03/msg01114.php

What it does is behave normally for the first 50 tuples of any node, but
after that it starts sampling at ever-increasing intervals, with the
interval growing as the cube root of the tuple count. So for a node
iterating over 1 million tuples it takes around 15,000 samples. The
result is that EXPLAIN ANALYZE has a much reduced effect on the total
execution time.
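
For illustration, here is a minimal C sketch of that sampling rule. The
names are made up and this is not the patch's actual instrument.c code,
but it follows the scheme described above (the cube-root interval matches
the little Perl check further down the thread):

#include <math.h>
#include <stdbool.h>

typedef struct SampleState
{
    double tuplecount;   /* tuples seen so far in the current loop */
    double next_sample;  /* next tuple count at which to take a sample */
} SampleState;

static bool
sample_this_tuple(SampleState *s)
{
    s->tuplecount += 1;
    if (s->tuplecount <= 50)
        return true;                /* always time the first 50 tuples */
    if (s->tuplecount > s->next_sample)
    {
        /* widen the gap between samples as the count grows */
        s->next_sample += cbrt(s->tuplecount);
        return true;
    }
    return false;
}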

Without EXPLAIN ANALYZE:

postgres=# select count(*) from generate_series(1,1000000);
  count
---------
 1000000
(1 row)

Time: 2303.599 ms

EXPLAIN ANALYZE without patch:

postgres=# explain analyze select count(*) from generate_series(1,1000000);
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=15.00..15.01 rows=1 width=0) (actual time=8022.070..8022.073 rows=1 loops=1)
   ->  Function Scan on generate_series  (cost=0.00..12.50 rows=1000 width=0) (actual time=1381.762..4873.369 rows=1000000 loops=1)
 Total runtime: 8042.472 ms
(3 rows)

Time: 8043.401 ms

EXPLAIN ANALYZE with patch:

postgres=# explain analyze select count(*) from generate_series(1,1000000);
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=15.00..15.01 rows=1 width=0) (actual time=2469.491..2469.494 rows=1 loops=1)
   ->  Function Scan on generate_series  (cost=0.00..12.50 rows=1000 width=0) (actual time=1405.002..2187.417 rows=1000000 loops=1)
 Total runtime: 2496.529 ms
(3 rows)

Time: 2497.488 ms

As you can see, the overhead goes from 5.7 seconds to 0.2 seconds.
Obviously this is an extreme case, but it will probably help in a lot
of other cases people have been complaining about.

- To get this close, it needs an estimate of the sampling overhead.
It does this with a little calibration loop that is run once per backend.
If you don't do this, you end up assuming all tuples take the same time
as tuples with the overhead, resulting in nodes apparently taking
longer than their parent nodes. Incidentally, I measured the overhead to
be about 3.6us per tuple per node on my (admittedly slightly old)
machine.

Note that the resulting times still include the overhead actually
incurred; I didn't filter it out. I want the times to keep reflecting
reality as closely as possible.
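
As a rough sketch of the calibration idea (names made up, details differ
from the actual patch; the half-millisecond cap is mentioned later in the
thread): time a burst of paired clock reads and divide by the count.

#include <sys/time.h>

static double sampling_overhead;    /* estimated seconds per timed tuple */

static void
calibrate_sampling_overhead(void)
{
    struct timeval start, now, t1, t2;
    long    iters = 0;
    double  elapsed;

    gettimeofday(&start, NULL);
    do
    {
        /* an instrumented tuple costs roughly one pair of clock reads */
        gettimeofday(&t1, NULL);
        gettimeofday(&t2, NULL);
        iters++;
        gettimeofday(&now, NULL);
        elapsed = (now.tv_sec - start.tv_sec) +
                  (now.tv_usec - start.tv_usec) * 1e-6;
    } while (elapsed < 500e-6);     /* cap the calibration at ~0.5 ms */

    sampling_overhead = elapsed / iters;
}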

- I also removed InstrStopNodeMulti and made InstrStopNode take a tuple
count parameter instead. This is much clearer all round.
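
Call sites that used to pick between the two functions would then look
something like this (a fragment, not a complete example; node->instrument
and ntuples stand in for whatever the surrounding executor code provides):

/* node that returned a single tuple */
InstrStopNode(node->instrument, 1.0);

/* node that returned a whole batch (previously InstrStopNodeMulti) */
InstrStopNode(node->instrument, ntuples);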

- I also didn't make it optional. I'm unsure about whether it should be
optional or not, given that the number of cases where it will make a
difference is likely to be very few.

- The tuple counter for sampling restarts every loop. Thus a node that is
called repeatedly, only returning one value each time, will still measure
every tuple, though its parent node won't. We'll need some field
testing to see if that remains a significant effect.

- I don't let the user know anywhere how many samples it took. Is this
something users should care about?

Any comments?
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Simon Riggs
On Tue, 2006-05-09 at 22:37 +0200, Martijn van Oosterhout wrote:
> This was a suggestion made back in March that would dramatically reduce
> the overhead of EXPLAIN ANALYZE on queries that loop continuously over
> the same nodes.
>
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg01114.php
>
> As you can see, the overhead goes from 5.7 seconds to 0.2 seconds.
> Obviously this is an extreme case, but it will probably help in a lot
> of other cases people have been complaining about.

This seems much more useful behaviour than the current one. Running an
EXPLAIN ANALYZE on a large query can be a real pain, especially on a
production box in live use - so tuning the test tool has a meaningful
effect on other users' performance too.

There's a lot of thought gone into this, so I'd vote yes, though without
having done a detailed code review.

--
  Simon Riggs
  EnterpriseDB          http://www.enterprisedb.com


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Rocco Altier"
Date: Tue, 9 May 2006 17:16:57 -0400
> - To get this close, it needs an estimate of the sampling overhead.
> It does this with a little calibration loop that is run once per
> backend. If you don't do this, you end up assuming all tuples take
> the same time as tuples with the overhead, resulting in nodes
> apparently taking longer than their parent nodes. Incidentally, I
> measured the overhead to be about 3.6us per tuple per node on my
> (admittedly slightly old) machine.

Could this be deferred until the first explain analyze?  So that we
aren't paying the overhead of the calibration in all backends, even the
ones that won't be explaining?

    -rocco

Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
Date: Tue, 9 May 2006
On Tue, May 09, 2006 at 05:16:57PM -0400, Rocco Altier wrote:
> > - To get this close, it needs an estimate of the sampling
> > overhead. It does this with a little calibration loop that is run
> > once per backend. If you don't do this, you end up assuming all
> > tuples take the same time as tuples with the overhead, resulting in
> > nodes apparently taking longer than their parent nodes. Incidentally,
> > I measured the overhead to be about 3.6us per tuple per node on my
> > (admittedly slightly old) machine.
>
> Could this be deferred until the first explain analyze?  So that we
> aren't paying the overhead of the calibration in all backends, even the
> ones that won't be explaining?

If you look, it's only done on the first call to InstrAlloc(), which
should be when you run EXPLAIN ANALYZE for the first time.

In any case, the calibration is limited to half a millisecond (that's
500 microseconds), and it'll be less on fast machines.

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Luke Lonergan"
Date: Wed, 10 May 2006 21:16:43 -0700
Nice one Martijn - we have immediate need for this, as one of our sizeable
queries under experimentation took 3 hours without EXPLAIN ANALYZE, then
over 20 hours with it...

- Luke


On 5/9/06 2:38 PM, "Martijn van Oosterhout" <kleptog@svana.org> wrote:

> On Tue, May 09, 2006 at 05:16:57PM -0400, Rocco Altier wrote:
>>> - To get this close, it needs an estimate of the sampling
>>> overhead. It does this with a little calibration loop that is run
>>> once per backend. If you don't do this, you end up assuming all
>>> tuples take the same time as tuples with the overhead, resulting in
>>> nodes apparently taking longer than their parent nodes. Incidentally,
>>> I measured the overhead to be about 3.6us per tuple per node on my
>>> (admittedly slightly old) machine.
>>
>> Could this be deferred until the first explain analyze?  So that we
>> aren't paying the overhead of the calibration in all backends, even the
>> ones that won't be explaining?
>
> If you look, it's only done on the first call to InstrAlloc(), which
> should be when you run EXPLAIN ANALYZE for the first time.
>
> In any case, the calibration is limited to half a millisecond (that's
> 500 microseconds), and it'll be less on fast machines.
>
> Have a nice day,



Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
Date: Thu, 11 May 2006
On Wed, May 10, 2006 at 09:16:43PM -0700, Luke Lonergan wrote:
> Nice one Martijn - we have immediate need for this, as one of our sizeable
> queries under experimentation took 3 hours without EXPLAIN ANALYZE, then
> over 20 hours with it...

Did you test it? There are some cases where this might still leave some
noticeable overhead (high loop count). I'm just not sure if they occur
all that often in practice...

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Luke Lonergan"
Martijn,

On 5/11/06 12:17 AM, "Martijn van Oosterhout" <kleptog@svana.org> wrote:

> Did you test it? There are some cases where this might still leave some
> noticeable overhead (high loop count). I'm just not sure if they occur
> all that often in practice...

I've sent it to our team for testing; let's see if we get some info to
forward.

We're running the 10TB TPC-H case, and I'm asking for EXPLAIN ANALYZE runs
that might take days to complete, so we certainly have some test cases for
this ;-)

- Luke



Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Jim C. Nasby"
Date: Thu, 11 May 2006 18:37:03 -0500
On Tue, May 09, 2006 at 10:37:04PM +0200, Martijn van Oosterhout wrote:
> Note that the resulting times still include the overhead actually
> incurred; I didn't filter it out. I want the times to keep reflecting
> reality as closely as possible.

If we actually know the overhead I think it'd be very useful at times to
be able to remove it, especially if you're actually trying to compare to
the planner estimates. Maybe worth adding an option to the command?

> - I also didn't make it optional. I'm unsure about whether it should be
> optional or not, given that the number of cases where it will make a
> difference is likely to be very few.

The real question is how important it is to have the real data in the
cases where it would make a difference, and I suspect we can't answer
that until this is out in the field. It *might* be worth a #define or
some other way to disable it that doesn't require patching code, but
probably not.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Qingqing Zhou"
"Martijn van Oosterhout" <kleptog@svana.org> wrote
>
> What it does is behave normally for the first 50 tuples of any node, but
> after that it starts sampling at ever-increasing intervals, with the
> interval growing as the cube root of the tuple count.
>

I got two questions after scanning the patch:

(1) For a node with 50 loops and another one with 50+10^3 loops, the first
one will be measured 50 times and the second one will be measured 50+10
times? I am not sure if this is reasonable.

(2) Will this patch instrument multinodes without any interval? I ask
because we always use ntuples=0 for multinodes, so the tuplecount will not
change.

Maybe another way is to measure the cost of timing, then subtract it
from the result - but this is only hand-waving so far ...

Regards,
Qingqing



Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
Date: Fri, 12 May 2006 12:22:54 +0200
On Thu, May 11, 2006 at 06:37:03PM -0500, Jim C. Nasby wrote:
> On Tue, May 09, 2006 at 10:37:04PM +0200, Martijn van Oosterhout wrote:
> > Note that the resulting times still include the overhead actually
> > incurred; I didn't filter it out. I want the times to keep reflecting
> > reality as closely as possible.
>
> If we actually know the overhead I think it'd be very useful at times to
> be able to remove it, especially if you're actually trying to compare to
> the planner estimates. Maybe worth adding an option to the command?

It's not quite as easy as that, unfortunately. Each node can estimate how
much overhead was incurred on that node. However, each node also
includes as part of its timing the overhead of all its descendant nodes.
So to really remove the overhead, the top level would have to recurse
through the whole tree to decide what to remove.

What I'm hoping is that this patch will make the overhead so low in
normal operation that we don't need to go to that kind of effort.
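
If someone did want to attempt it, the shape of the computation would be
something like this sketch (hypothetical structs, not the actual executor
types):

#include <stddef.h>

typedef struct Node
{
    double       total_time;    /* measured inclusive time, in seconds */
    double       overhead_est;  /* estimated sampling overhead on this node */
    struct Node *left, *right;  /* child nodes, NULL if absent */
} Node;

/* total sampling overhead charged to a node, including all descendants */
static double
subtree_overhead(const Node *n)
{
    if (n == NULL)
        return 0.0;
    return n->overhead_est
         + subtree_overhead(n->left)
         + subtree_overhead(n->right);
}

static double
corrected_time(const Node *n)
{
    /* a node's inclusive time contains its own overhead plus the
     * overhead incurred by every descendant, so subtract all of it */
    return n->total_time - subtree_overhead(n);
}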

> > - I also didn't make it optional. I'm unsure about whether it should be
> > optional or not, given that the number of cases where it will make a
> > difference is likely to be very few.
>
> The real question is how important it is to have the real data in the
> cases where it would make a difference, and I suspect we can't answer
> that until this is out in the field. It *might* be worth a #define or
> some other way to disable it that doesn't require patching code, but
> probably not.

A #define is doable, though messy. The code isn't all that long anyway
so a few #ifdefs might make it confusing. But I'll see what I can do.

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Simon Riggs
On Fri, 2006-05-12 at 12:22 +0200, Martijn van Oosterhout wrote:
> On Thu, May 11, 2006 at 06:37:03PM -0500, Jim C. Nasby wrote:
> > On Tue, May 09, 2006 at 10:37:04PM +0200, Martijn van Oosterhout wrote:
> > > Note that the resulting times still include the overhead actually
> > > incurred; I didn't filter it out. I want the times to keep reflecting
> > > reality as closely as possible.
> >
> > If we actually know the overhead I think it'd be very useful at times to
> > be able to remove it, especially if you're actually trying to compare to
> > the planner estimates. Maybe worth adding an option to the command?
>
> It's not quite as easy as that, unfortunately. Each node can estimate how
> much overhead was incurred on that node. However, each node also
> includes as part of its timing the overhead of all its descendant nodes.
> So to really remove the overhead, the top level would have to recurse
> through the whole tree to decide what to remove.

Agreed

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
[Sorry for the delay; I'm not subscribed, so I didn't see your message
till I checked the archive. Please CC me for a quicker response.]

> I got two questions after scanning the patch:
>
> (1) For a node with 50 loops and another one with 50+10^3 loops, the first
> one will be measured 50 times and the second one will be measured 50+10
> times? I am not sure if this is reasonable.

You're miscalculating. For N tuples it samples approximately 1.5*N^(2/3),
so that would be a bit less than 50+150 samples (my little script
suggests 197 samples).

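For reference, the 1.5*N^(2/3) figure follows from the cube-root interval
used in the script below: with the gap between samples at tuple i being
about i^(1/3),

\[
\mathrm{samples}(N) \;\approx\; 50 + \int_{50}^{N} i^{-1/3}\,di
 \;=\; 50 + \tfrac{3}{2}\bigl(N^{2/3} - 50^{2/3}\bigr)
 \;\approx\; 1.5\,N^{2/3} \quad \text{for large } N,
\]

which gives about 185 for N=1050; the script counts a few more because
sampling stays dense for a short stretch after tuple 50.
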
$ perl -MMath::Complex -e '
for $i (1..1050) {
   if( $i < 50 ) { $s++ }                       # first 50 tuples: always sample
   else {
    if( $i > $t ) { $s++; $t += cbrt($i); }     # then sample at cube-root intervals
   }
}; print "$s\n"; '
197

> (2) Will this patch instrument multinodes without any interval? I ask
> because we always use ntuples=0 for multinodes, so the tuplecount will
> not change.

Well, if the tuple count always stays under 50, then it will always
sample. At the time it decides whether to sample or not (the beginning
of the node), it obviously has no idea what will be returned.

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: "Jim C. Nasby"
Date: Mon, 15 May 2006 00:09:37 -0500
On Fri, May 12, 2006 at 12:22:54PM +0200, Martijn van Oosterhout wrote:
> > > - I also didn't make it optional. I'm unsure about whether it should be
> > > optional or not, given that the number of cases where it will make a
> > > difference is likely to be very few.
> >
> > The real question is how important it is to have the real data in the
> > cases where it would make a difference, and I suspect we can't answer
> > that until this is out in the field. It *might* be worth a #define or
> > some other way to disable it that doesn't require patching code, but
> > probably not.
>
> A #define is doable, though messy. The code isn't all that long anyway
> so a few #ifdefs might make it confusing. But I'll see what I can do.

If it proves messy, it's probably not worth doing. Presumably anyone
able to tweak a #define could probably apply a patch as well. If you are
going to go through the effort it probably makes the most sense to just
add the remaining syntax to make it dynamic.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
On Mon, May 15, 2006 at 12:09:37AM -0500, Jim C. Nasby wrote:
> On Fri, May 12, 2006 at 12:22:54PM +0200, Martijn van Oosterhout wrote:
> > A #define is doable, though messy. The code isn't all that long anyway
> > so a few #ifdefs might make it confusing. But I'll see what I can do.
>
> If it proves messy, it's probably not worth doing. Presumably anyone
> able to tweak a #define could probably apply a patch as well. If you are
> going to go through the effort it probably makes the most sense to just
> add the remaining syntax to make it dynamic.

Making it configurable via a GUC would be much easier than making it
optional at compile time, because then you just need to skip the
'to sample or not' tests. To make it optional at compile time you'd need
to actually take out all the code relating to sampling.

Maybe:

enable_explain_sample (default: yes)
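
A minimal sketch of how that flag would gate the sampling test (the
variable name follows the suggestion above; the actual guc.c registration
is omitted, and the field names are illustrative):

#include <stdbool.h>

/* proposed GUC, default on; registered in guc.c in a real patch */
static bool enable_explain_sample = true;

/* decide whether to time this tuple (sketch) */
static bool
should_time_tuple(double tuplecount, double next_sample)
{
    if (!enable_explain_sample)
        return true;            /* sampling disabled: time every tuple */
    return tuplecount <= 50 || tuplecount > next_sample;
}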

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Bruce Momjian
Date: Tue, 30 May 2006 10:01:49 -0400
Patch applied.  Thanks.

---------------------------------------------------------------------------


Martijn van Oosterhout wrote:
-- Start of PGP signed section.
> This was a suggestion made back in March that would dramatically reduce
> the overhead of EXPLAIN ANALYZE on queries that loop continuously over
> the same nodes.
>
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg01114.php
>
> What it does is behave normally for the first 50 tuples of any node, but
> after that it starts sampling at ever-increasing intervals, with the
> interval growing as the cube root of the tuple count. So for a node
> iterating over 1 million tuples it takes around 15,000 samples. The
> result is that EXPLAIN ANALYZE has a much reduced effect on the total
> execution time.
>
> Without EXPLAIN ANALYZE:
>
> postgres=# select count(*) from generate_series(1,1000000);
>   count
> ---------
>  1000000
> (1 row)
>
> Time: 2303.599 ms
>
> EXPLAIN ANALYZE without patch:
>
> postgres=# explain analyze select count(*) from generate_series(1,1000000);
>                                                       QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=15.00..15.01 rows=1 width=0) (actual time=8022.070..8022.073 rows=1 loops=1)
>    ->  Function Scan on generate_series  (cost=0.00..12.50 rows=1000 width=0) (actual time=1381.762..4873.369 rows=1000000 loops=1)
>  Total runtime: 8042.472 ms
> (3 rows)
>
> Time: 8043.401 ms
>
> EXPLAIN ANALYZE with patch:
>
> postgres=# explain analyze select count(*) from generate_series(1,1000000);
>                                                       QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=15.00..15.01 rows=1 width=0) (actual time=2469.491..2469.494 rows=1 loops=1)
>    ->  Function Scan on generate_series  (cost=0.00..12.50 rows=1000 width=0) (actual time=1405.002..2187.417 rows=1000000 loops=1)
>  Total runtime: 2496.529 ms
> (3 rows)
>
> Time: 2497.488 ms
>
> As you can see, the overhead goes from 5.7 seconds to 0.2 seconds.
> Obviously this is an extreme case, but it will probably help in a lot
> of other cases people have been complaining about.
>
> - To get this close, it needs an estimate of the sampling overhead.
> It does this with a little calibration loop that is run once per backend.
> If you don't do this, you end up assuming all tuples take the same time
> as tuples with the overhead, resulting in nodes apparently taking
> longer than their parent nodes. Incidentally, I measured the overhead to
> be about 3.6us per tuple per node on my (admittedly slightly old)
> machine.
>
> Note that the resulting times still include the overhead actually
> incurred; I didn't filter it out. I want the times to keep reflecting
> reality as closely as possible.
>
> - I also removed InstrStopNodeMulti and made InstrStopNode take a tuple
> count parameter instead. This is much clearer all round.
>
> - I also didn't make it optional. I'm unsure about whether it should be
> optional or not, given that the number of cases where it will make a
> difference is likely to be very few.
>
> - The tuple counter for sampling restarts every loop. Thus a node that is
> called repeatedly, only returning one value each time, will still measure
> every tuple, though its parent node won't. We'll need some field
> testing to see if that remains a significant effect.
>
> - I don't let the user know anywhere how many samples it took. Is this
> something users should care about?
>
> Any comments?
> --
> Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.

[ Attachment, skipping... ]
-- End of PGP section, PGP failed!

--
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Martijn van Oosterhout
On Tue, May 30, 2006 at 10:01:49AM -0400, Bruce Momjian wrote:
>
> Patch applied.  Thanks.

I note Tom made some changes to this patch after it went in. For the
record, it was always my intention that samplecount count the number of
_tuples_ returned while sampling, rather than the number of
_iterations_. I'll admit the comment in the header was wrong.

While my original patch had a small error in the case of multiple
tuples returned, it would've been correctable by counting the actual
number of samples. The way it is now, it will show a bias if the number
of tuples returned increases after the first sampled 50 tuples.
However, my knowledge of statistics isn't good enough to determine if
this is an actual problem or not, since the way it is now will sample
more initially...

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.


Re: [PATCH] Improve EXPLAIN ANALYZE overhead by sampling

From: Tom Lane
Martijn van Oosterhout <kleptog@svana.org> writes:
> I note Tom made some changes to this patch after it went in. For the
> record, it was always my intention that samplecount count the number of
> _tuples_ returned while sampling, rather than the number of
> _iterations_. I'll admit the comment in the header was wrong.

> While my original patch had a small error in the case of multiple
> tuples returned, it would've been correctable by counting the actual
> number of samples. The way it is now, it will show a bias if the number
> of tuples returned increases after the first sampled 50 tuples.

How so?  The number of tuples doesn't enter into it at all.  What the
code is now assuming is that the time per node iteration is constant.
More importantly, it's subtracting off an overhead estimate that's
measured per iteration.  In the math you had before, the overhead was
effectively assumed to be per tuple, which is clearly wrong.

For nodes that return a variable number of tuples, it might be sensible
to presume that the node iteration time is roughly linear in the number
of tuples returned, but I find that debatable.  In any case the sampling
overhead is certainly not dependent on how many tuples an iteration
returns.
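
In other words, the accounting Tom describes works roughly like this
sketch (illustrative names, not the actual instrument.c fields): sampled
iterations keep their measured time, overhead included, and unsampled
ones are extrapolated from the overhead-free per-iteration average.

/* estimate a node's total time from its sampled iterations (sketch) */
static double
estimate_total_time(double sampled_time,  /* sum of timed iterations */
                    double sampled_iters, /* number of timed iterations */
                    double total_iters,   /* all iterations of the node */
                    double overhead)      /* timing cost per timed iteration */
{
    /* true per-iteration time, with the measurement overhead removed */
    double per_iter = sampled_time / sampled_iters - overhead;

    /* untimed iterations incurred no overhead, so extrapolate without it */
    return sampled_time + per_iter * (total_iters - sampled_iters);
}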

This is all really moot at the moment, since we have only two kinds of
nodes: those that always return 1 tuple (until done) and those that
return all their tuples in a single iteration.  If we ever get into
nodes that return varying numbers of tuples per iteration --- say,
exposing btree's page-at-a-time behavior at the plan node level ---
we'd have to rethink this.  But AFAICS we'd need to count both tuples
and iterations to have a model that made any sense at all, so the
extra counter I added is needed anyway.

            regards, tom lane