Regarding EXPLAIN and width calculations

From: Jon Nelson
Date:
What influences the calculation of the 'width' value in query plans?
Specifically, I have two queries that both query the same set of
tables via either UNION or UNION ALL, depending on the presence (or
absence) of an aggregate function.

Like this:

SELECT a, b FROM foo_1 WHERE a = 'bar'
UNION
SELECT a, b FROM foo_2 WHERE a = 'bar'
UNION
SELECT a, b FROM foo_3 WHERE a = 'bar'
UNION
SELECT a, b FROM foo_4 WHERE a = 'bar'

or this:

SELECT SUB.a, sum(SUB.b) FROM
(SELECT a, b FROM foo_1 WHERE a = 'bar'
UNION ALL
SELECT a, b FROM foo_2 WHERE a = 'bar'
UNION ALL
SELECT a, b FROM foo_3 WHERE a = 'bar'
UNION ALL
SELECT a, b FROM foo_4 WHERE a = 'bar'
) AS SUB GROUP BY SUB.a

The query plans for both queries have an identical inner portion, which
consists of an index scan on each table followed by an append
operation. The widths of the individual subqueries are identical. However,
in the case where the aggregate function is used, the calculated width of
the "Append" operation is larger:


[UNION ALL/aggregate present] Append  (cost=0.00..271845.68 rows=27180665 width=36)

versus:

[UNION/no aggregate present]  Append  (cost=0.00..271845.68 rows=27180665 width=11)

The UNION ALL variation uses a HashAggregate that "guesses" a row count
of 200 (which is crazy talk) and inherits the Append width of 36.
The UNION variation uses a Sort/Unique with a reasonable row count and
the same width as the Append (11).

What's going on there? Why did the UNION ALL Append operation get a
considerably larger (more than 3x) row width when, at that stage of the
query execution, its contents should be identical to those of the UNION
variation? And why did the row count estimate jump to 200?

--
Jon

Re: Regarding EXPLAIN and width calculations

From: Tom Lane
Date: Fri, Nov 19, 2010 12:14 PM
Jon Nelson <jnelson+pgsql@jamponi.net> writes:
> What influences the calculation of the 'width' value in query plans?

It's generally the sum of the estimated column widths for all the
columns needed at that particular level of the plan.
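
As a rough illustration (a sketch only, using the foo_1 table and the a/b
columns from the test case posted later in this thread), the reported width
can be compared against the per-column avg_width values that ANALYZE stores
in pg_stats:

  -- per-column average widths as recorded by ANALYZE
  SELECT attname, avg_width
  FROM pg_stats
  WHERE tablename = 'foo_1' AND attname IN ('a', 'b');

  EXPLAIN SELECT a, b FROM foo_1;
  -- the width reported for the scan should be roughly the sum of the
  -- avg_width values for columns a and b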

> Specifically, I have two queries which both query the same set of
> tables via either UNION or UNION ALL based on the presence (or
> absence) of an aggregate function.

Hard to comment about this with such an incomplete view of the situation
--- in particular, data types would be a critical factor, and I also
wonder if you're admitting to all the columns involved.

            regards, tom lane

Re: Regarding EXPLAIN and width calculations

From: Jon Nelson
Date:
On Fri, Nov 19, 2010 at 12:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jon Nelson <jnelson+pgsql@jamponi.net> writes:
>> What influences the calculation of the 'width' value in query plans?
>
> It's generally the sum of the estimated column widths for all the
> columns needed at that particular level of the plan.
>
>> Specifically, I have two queries which both query the same set of
>> tables via either UNION or UNION ALL based on the presence (or
>> absence) of an aggregate function.
>
> Hard to comment about this with such an incomplete view of the situation
> --- in particular, data types would be a critical factor, and I also
> wonder if you're admitting to all the columns involved.

Here is an example that, while super ugly, does show the problem:


begin;
create temporary table foo_1 as
  SELECT
    CAST(CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) || '.'
         || CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) AS inet) as a,
    (x % 1000) as b
    FROM generate_series( 1, 1000000 ) AS x;
create temporary table foo_2 as
  SELECT
    CAST(CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) || '.'
         || CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) AS inet) as a,
    (x % 1000) as b
    FROM generate_series( 1, 1000000 ) AS x;
create temporary table foo_3 as
  SELECT
    CAST(CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) || '.'
         || CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) AS inet) as a,
    (x % 1000) as b
    FROM generate_series( 1, 1000000 ) AS x;
create temporary table foo_4 as
  SELECT
    CAST(CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) || '.'
         || CAST(x % 255 AS text) || '.' || CAST(x % 255 AS text) AS inet) as a,
    (x % 1000) as b
    FROM generate_series( 1, 1000000 ) AS x;
create index foo_1_a_idx on foo_1 (a);
create index foo_2_a_idx on foo_2 (a);
create index foo_3_a_idx on foo_3 (a);
create index foo_4_a_idx on foo_4 (a);
analyze foo_1;
analyze foo_2;
analyze foo_3;
analyze foo_4;
explain analyze verbose
 select a, b from foo_1 where a = '1.2.3.4'::inet
 UNION
 select a, b from foo_2 where a = '1.2.3.4'::inet
 UNION
 select a, b from foo_3 where a = '1.2.3.4'::inet
 UNION
 select a, b from foo_4 where a = '1.2.3.4'::inet
 ;

explain analyze verbose
SELECT SUB.a, SUM(SUB.b) AS b FROM
(
 select a, b from foo_1 where a = '1.2.3.4'::inet
 UNION ALL
 select a, b from foo_2 where a = '1.2.3.4'::inet
 UNION ALL
 select a, b from foo_3 where a = '1.2.3.4'::inet
 UNION ALL
 select a, b from foo_4 where a = '1.2.3.4'::inet
 ) AS SUB GROUP BY a;
rollback;

--
Jon

Re: Regarding EXPLAIN and width calculations

From: Tom Lane
Date: Fri, Nov 19, 2010 1:09 PM
Jon Nelson <jnelson+pgsql@jamponi.net> writes:
> On Fri, Nov 19, 2010 at 12:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Hard to comment about this with such an incomplete view of the situation
>> --- in particular, data types would be a critical factor, and I also
>> wonder if you're admitting to all the columns involved.

> Here is an example that, while super ugly, does show the problem:

Hm.  In the UNION case, we have a measured average width for the inet
column from ANALYZE for each table, and we just use that.  In the
sub-select case, it seems to be falling back to a default estimate for
the datatype, which surprises me a bit --- at least in HEAD it seems
like it should be smarter.  I'll go look at that.
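
As a rough cross-check (a sketch, using the column names from the test case
above), the measured widths can be read straight out of pg_stats:

  SELECT tablename, attname, avg_width
  FROM pg_stats
  WHERE tablename IN ('foo_1', 'foo_2', 'foo_3', 'foo_4')
    AND attname IN ('a', 'b');
  -- for these IPv4 values the inet column's avg_width typically comes out
  -- around 7 bytes, plus 4 for the integer column, which lines up with the
  -- Append width of 11; the 36 in the sub-select case is consistent with a
  -- generic fallback for a variable-length type (apparently around 32 bytes)
  -- plus 4 for the integer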

As for the rowcount estimates, these aren't the same query so there's no
reason for them to be the same.  In the UNION case, it's basically
taking the pessimistic assumption that there are no duplicate rows;
given the lack of cross-column stats there's no way to be much smarter.
In the GROUP BY case, the question is how many distinct values of 'a'
you suppose there are.  This is also pretty hard to be rigorous about
--- we have an idea of that for each table, but no idea how many
cross-table duplications there are.  The 200 is just a default estimate
when it has no idea.  We could probably come up with some better though
not rigorous estimate, but nobody's worked on it.
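
A minimal way to see that fallback in isolation (a sketch; any grouping
column the planner has no statistics for will do):

  EXPLAIN
  SELECT g, count(*)
  FROM (SELECT (x % 1000) AS g FROM generate_series(1, 100000) AS x) AS sub
  GROUP BY g;
  -- with no statistics available for sub.g, the aggregate node is typically
  -- estimated at rows=200, the planner's default guess for the number of
  -- distinct values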

            regards, tom lane

Re: Regarding EXPLAIN and width calculations

From: Jon Nelson
Date:
On Fri, Nov 19, 2010 at 1:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jon Nelson <jnelson+pgsql@jamponi.net> writes:
>> On Fri, Nov 19, 2010 at 12:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Hard to comment about this with such an incomplete view of the situation
>>> --- in particular, data types would be a critical factor, and I also
>>> wonder if you're admitting to all the columns involved.
>
>> Here is an example that, while super ugly, does show the problem:
>
> Hm.  In the UNION case, we have a measured average width for the inet
> column from ANALYZE for each table, and we just use that.  In the
> sub-select case, it seems to be falling back to a default estimate for
> the datatype, which surprises me a bit --- at least in HEAD it seems
> like it should be smarter.  I'll go look at that.
>
> As for the rowcount estimates, these aren't the same query so there's no
> reason for them to be the same.  In the UNION case, it's basically
> taking the pessimistic assumption that there are no duplicate rows;
> given the lack of cross-column stats there's no way to be much smarter.
> In the GROUP BY case, the question is how many distinct values of 'a'
> you suppose there are.  This is also pretty hard to be rigorous about
> --- we have an idea of that for each table, but no idea how many
> cross-table duplications there are.  The 200 is just a default estimate
> when it has no idea.  We could probably come up with some better though
> not rigorous estimate, but nobody's worked on it.

I've run into this '200' issue a *lot* (see the "HashAggregate
consumes all memory before crashing" issue I asked about earlier).  If
I might suggest, a couple of alternatives seem more reasonable to me:

1. calculate the value as a percentage of the total rows (like, say, 15%)
2. make the value a variable, so that it could be set globally or even
per-query. It would be really nice to be able to *tell* the planner
"there is an expected 30% overlap between these tables".

Could you point me in the general direction in the source as to where
the 200 value comes from? With tables of hundreds of millions or
billions of rows I see the value 40,000. Both 200 and 40,000 seem like
arbitrary values - perhaps they are calculated similarly?

--
Jon