Thread: Understanding histograms

Understanding histograms

From
Len Shapiro
Date:
I hope I am posting to the right list.
I am running PostgreSQL 8.1.9 and don't understand the behavior of
histograms for data items not in the MCV list.  I teach databases and
want to use Postgres as an example.  I will appreciate any help that
anyone can provide.

Here is the data I am using. I am interested only in the "rank" attribute.

CREATE TABLE Sailors (
   sid Integer NOT NULL,
   sname varchar(20),
   rank integer,
   age real,
   PRIMARY KEY (sid));

I insert 30 sailor rows:

INSERT INTO Sailors VALUES (3, 'Andrew', 10, 30.0);
INSERT INTO Sailors VALUES (17, 'Bart', 5, 30.2);
INSERT INTO Sailors VALUES (29, 'Beth', 3, 30.4);
INSERT INTO Sailors VALUES (28, 'Bryant', 3, 30.6);
INSERT INTO Sailors VALUES (4, 'Cynthia', 9, 30.8);
INSERT INTO Sailors VALUES (16, 'David', 9, 30.9);
INSERT INTO Sailors VALUES (27, 'Fei', 3, 31.0);
INSERT INTO Sailors VALUES (12, 'James', 3, 32.0);
INSERT INTO Sailors VALUES (30, 'Janice', 3, 33.0);
INSERT INTO Sailors VALUES (2, 'Jim', 8, 34.5);
INSERT INTO Sailors VALUES (15, 'Jingke', 10, 35.0);
INSERT INTO Sailors VALUES (26, 'Jonathan', 9, 36.0);
INSERT INTO Sailors VALUES (24, 'Kal', 3, 36.6);
INSERT INTO Sailors VALUES (14, 'Karen', 8, 37.8);
INSERT INTO Sailors VALUES (8, 'Karla', 7, 39.0);
INSERT INTO Sailors VALUES (25, 'Kristen', 10, 39.5);
INSERT INTO Sailors VALUES (19, 'Len', 8, 40.0);
INSERT INTO Sailors VALUES (7, 'Lois', 8, 41.0);
INSERT INTO Sailors VALUES (13, 'Mark', 7, 43.0);
INSERT INTO Sailors VALUES (18, 'Melanie', 1, 44.0);
INSERT INTO Sailors VALUES (5, 'Niru', 5, 46.0);
INSERT INTO Sailors VALUES (23, 'Pavel', 3, 48.0);
INSERT INTO Sailors VALUES (1, 'Sergio', 7, 50.0);
INSERT INTO Sailors VALUES (6, 'Suhui', 1, 51.0);
INSERT INTO Sailors VALUES (22, 'Suresh', 9, 52.0);
INSERT INTO Sailors VALUES (20, 'Tim', 7, 54.0);
INSERT INTO Sailors VALUES (21, 'Tom', 10, 56.0);
INSERT INTO Sailors VALUES (11, 'Warren', 3, 58.0);
INSERT INTO Sailors VALUES (10, 'WuChang', 9, 59.0);
INSERT INTO Sailors VALUES (9, 'WuChi', 10, 60.0);

After running ANALYZE, I query the pg_stats view with

SELECT  n_distinct, most_common_vals,
        most_common_freqs,   histogram_bounds
FROM pg_stats WHERE tablename = 'sailors' AND attname = 'rank';

and I get:

 n_distinct | most_common_vals | most_common_freqs                              | histogram_bounds
------------+------------------+------------------------------------------------+------------------
 -0.233333  | {3,9,10,7,8}     | {0.266667,0.166667,0.166667,0.133333,0.133333} | {1,5}

I have two questions.  I'd appreciate any info you can provide,
including pointers to the source code.

1. Why does Postgres come up with a negative n_distinct?  It
apparently thinks that the number of rank values will increase as the
number of sailors increases.  What/where is the algorithm that decides
that?

2. The most_common_vals and their frequencies make sense.  They say
that the values {3,9,10,7,8} occur a total of 26 times, so other
values occur a total of 4 times.  The other, less common, values are 1
and 5, each occurring twice, so the histogram {1,5} is appropriate.
If I run the query
EXPLAIN SELECT * from sailors where rank = const;
for any const not in the MCV list, I get the plan

Seq Scan on sailors  (cost=0.00..1.38 rows=2 width=21)
  Filter: (rank = const)

The "rows=2" estimate makes sense when const = 1 or 5, but it makes no
sense to me for other values of const not in the MCV list.
For example, if I run the query
EXPLAIN SELECT * from sailors where rank = -1000;
Postgres still gives an estimate of "rows=2".
Can someone please explain?

Thanks,

Len Shapiro
Portland State University

Re: Understanding histograms

From
Tom Lane
Date:
Len Shapiro <len@cs.pdx.edu> writes:
> 1. Why does Postgres come up with a negative n_distinct?

It's a fractional representation.  Per the docs:

> stadistinct    float4         The number of distinct nonnull data values in the column. A value greater than zero is
> the actual number of distinct values. A value less than zero is the negative of a fraction of the number of rows in
> the table (for example, a column in which values appear about twice on the average could be represented by
> stadistinct = -0.5). A zero value means the number of distinct values is unknown.
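As a back-of-the-envelope illustration (my own sketch, not code from the server), the fractional form converts to a distinct-value count like this:

```python
def estimated_n_distinct(stadistinct, reltuples):
    """Convert stadistinct to an estimated count of distinct values,
    following the documented convention quoted above."""
    if stadistinct < 0:
        # Negative: a fraction of the row count, so the estimate
        # scales up automatically as the table grows.
        return -stadistinct * reltuples
    # Positive: the literal number of distinct values.
    return stadistinct

# For the sailors example: stadistinct = -0.233333 with 30 rows.
print(round(estimated_n_distinct(-0.233333, 30)))  # 7
```

The seven distinct ranks (1, 3, 5, 7, 8, 9, 10) match the data exactly in this case.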

> The "rows=2" estimate makes sense when const = 1 or 5, but it makes no
> sense to me for other values of const not in the MCV list.
> For example, if I run the query
> EXPLAIN SELECT * from sailors where rank = -1000;
> Postgres still gives an estimate of "rows=2".

I'm not sure what estimate you'd expect instead?  The code has a built-in
assumption that no value not present in the MCV list can be more
frequent than the last member of the MCV list, so it's definitely not
gonna guess *more* than 2.
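A rough sketch of that clamp (mine, not the actual selfuncs.c code; the 0.0667 selectivity below is the even-split estimate for a non-MCV rank value in the sailors table):

```python
mcv_freqs = [0.266667, 0.166667, 0.166667, 0.133333, 0.133333]
reltuples = 30

def clamp_non_mcv(selectivity):
    # A value outside the MCV list is assumed to be no more frequent
    # than the least common MCV entry, so cap the selectivity there.
    return min(selectivity, min(mcv_freqs))

# The uncapped non-MCV estimate here is ~0.0667 (about 2 rows), already
# below the cap of 0.133333 (about 4 rows), so EXPLAIN shows rows=2.
print(round(clamp_non_mcv(0.0667) * reltuples))  # 2
```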

            regards, tom lane

Re: Understanding histograms

From
"Len Shapiro"
Date:
Tom,

Thank you for your prompt reply.

On Tue, Apr 29, 2008 at 10:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Len Shapiro <len@cs.pdx.edu> writes:
>  > 1. Why does Postgres come up with a negative n_distinct?
>
>  It's a fractional representation.  Per the docs:
>
>  > stadistinct   float4          The number of distinct nonnull data values in the column. A value greater than zero
>  > is the actual number of distinct values. A value less than zero is the negative of a fraction of the number of rows
>  > in the table (for example, a column in which values appear about twice on the average could be represented by
>  > stadistinct = -0.5). A zero value means the number of distinct values is unknown.

I asked about n_distinct, whose documentation reads in part "The
negated form is used when ANALYZE believes that the number of distinct
values is likely to increase as the table grows", and I asked why
ANALYZE believes that the number of distinct values is likely to
increase.  I'm unclear why you quoted the documentation on
stadistinct to me.
>
>
>  > The "rows=2" estimate makes sense when const = 1 or 5, but it makes no
>  > sense to me for other values of const not in the MCV list.
>  > For example, if I run the query
>  > EXPLAIN SELECT * from sailors where rank = -1000;
>  > Postgres still gives an estimate of "rows=2".
>
>  I'm not sure what estimate you'd expect instead?

Instead I would expect an estimate of "rows=0" for values of const
that are not in the MCV list and not in the histogram.  When the
histogram has fewer than the maximum number of entries, implying (I am
guessing here) that all non-MCV values are in the histogram list, this
seems like a simple strategy and has the virtue of being accurate.

Where in the source is the code that manipulates the histogram?

> The code has a built-in
>  assumption that no value not present in the MCV list can be more
>  frequent than the last member of the MCV list, so it's definitely not
>  gonna guess *more* than 2.

That's interesting.  Where is this in the source code?

Thanks for all your help.

All the best,

Len Shapiro

>                         regards, tom lane
>

Re: Understanding histograms

From
Tom Lane
Date:
"Len Shapiro" <lenshap@gmail.com> writes:
> I asked about n_distinct, whose documentation reads in part "The
> negated form is used when ANALYZE believes that the number of distinct
> values is likely to increase as the table grows".  and I asked about
> why ANALYZE believes that the number of distinct values is likely to
> increase.  I'm unclear why you quoted to me the documentation on
> stadistinct.

n_distinct is just a view of stadistinct.  I assumed you'd poked around
in the code enough to know that ...

>>> The "rows=2" estimate makes sense when const = 1 or 5, but it makes no
>>> sense to me for other values of const not in the MCV list.
>>
>> I'm not sure what estimate you'd expect instead?

> Instead I would expect an estimate of "rows=0" for values of const
> that are not in the MCV list and not in the histogram.

Surely that's not very sane?  The MCV list plus histogram generally
don't include every value in the table.  IIRC the estimate for values
not present in the MCV list is (1 - sum(MCV frequencies)) divided by
(n_distinct - number of MCV entries), which amounts to assuming that
all values not present in the MCV list occur equally often.  The weak
spot of course is that the n_distinct estimate may be pretty inaccurate.
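Plugging the sailors numbers into that formula (a back-of-the-envelope check of the arithmetic, not the planner's actual code path):

```python
mcv_freqs = [0.266667, 0.166667, 0.166667, 0.133333, 0.133333]
n_distinct = 0.233333 * 30   # ~7 distinct ranks (fractional form times rows)
reltuples = 30

# (1 - sum of MCV frequencies), spread evenly over the distinct values
# not in the MCV list:
selectivity = (1 - sum(mcv_freqs)) / (n_distinct - len(mcv_freqs))
print(round(selectivity * reltuples))  # 2 -- matches EXPLAIN's rows=2
```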

> Where in the source is the code that manipulates the histogram?

commands/analyze.c builds it, and most of the estimation with it
happens in utils/adt/selfuncs.c.

            regards, tom lane

Re: Understanding histograms

From
Jeff Davis
Date:
On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:
> > Instead I would expect an estimate of "rows=0" for values of const
> > that are not in the MCV list and not in the histogram.
>
> Surely that's not very sane?  The MCV list plus histogram generally
> don't include every value in the table.  IIRC the estimate for values
> not present in the MCV list is (1 - sum(MCV frequencies)) divided by
> (n_distinct - number of MCV entries), which amounts to assuming that
> all values not present in the MCV list occur equally often.  The weak
> spot of course is that the n_distinct estimate may be pretty inaccurate.

My understanding of Len's question is that, although the MCV list plus
the histogram don't include every distinct value in the general case,
they do include every value in the specific case where the histogram is
not full.

Essentially, this seems like using the histogram to extend the MCV list
such that, together, they represent all distinct values. This idea only
seems to help when the number of distinct values is greater than the
max size of MCVs, but less than the max size of MCVs plus histogram
bounds.

I'm not sure how much of a gain this is, because right now that could
be accomplished by increasing the statistics target for that column
(so that all of your distinct values would fit in the MCV list).  Also,
the statistics aren't guaranteed to be perfectly up-to-date, so an
estimate of zero might be risky.
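For reference, raising the per-column target looks like this (standard syntax; the default target in 8.1 is 10):

```sql
-- Allow up to 100 MCV entries / histogram bounds for sailors.rank,
-- then re-gather statistics.
ALTER TABLE sailors ALTER COLUMN rank SET STATISTICS 100;
ANALYZE sailors;
```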

Regards,
    Jeff Davis


Re: Understanding histograms

From
Tom Lane
Date:
Jeff Davis <pgsql@j-davis.com> writes:
> On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:
>> Surely that's not very sane?  The MCV list plus histogram generally
>> don't include every value in the table.

> My understanding of Len's question is that, although the MCV list plus
> the histogram don't include every distinct value in the general case,
> they do include every value in the specific case where the histogram is
> not full.

I don't believe that's true.  It's possible that a small histogram means
that you are seeing every value that was in ANALYZE's sample, but it's
a mighty long leap from that to the assumption that there are no other
values in the table.  In any case that seems more an artifact of the
implementation than a property the histogram would be guaranteed to
have.

> ... the statistics aren't guaranteed to be perfectly up-to-date, so an
> estimate of zero might be risky.

Right.  As a matter of policy we never estimate less than one matching
row; and I've seriously considered pushing that up to at least two rows
except when we see that the query condition matches a unique constraint.
You can get really bad join plans from overly-small estimates.

            regards, tom lane

Re: Understanding histograms

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Right.  As a matter of policy we never estimate less than one matching
> row; and I've seriously considered pushing that up to at least two rows
> except when we see that the query condition matches a unique constraint.
> You can get really bad join plans from overly-small estimates.

This is something that needs some serious thought, though.  In the case of
partitioned tables I've seen someone get badly messed-up plans because they
had a couple hundred partitions, each of which was estimated to return 1 row.
In fact, of course, they all returned 0 rows except the correct partition.
(This was in a join, so no constraint exclusion applied.)

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

Re: Understanding histograms

From
Tom Lane
Date:
Gregory Stark <stark@enterprisedb.com> writes:
> This is something that needs some serious thought though. In the case of
> partitioned tables I've seen someone get badly messed up plans because they
> had a couple hundred partitions each of which estimated to return 1 row. In
> fact of course they all returned 0 rows except the correct partition. (This
> was in a join so no constraint exclusion)

Yeah, one of the things we need for a "serious" partitioning
solution is to get the planner's estimation code to understand
what's happening there.

            regards, tom lane