Re: - Mailing list pgsql-performance

From Tom Lane
Subject Re:
Date
Msg-id 5644.1173199253@sss.pgh.pa.us
In response to Re:  (Jeff Cole <cole.jeff@gmail.com>)
Responses Re:
List pgsql-performance
Jeff Cole <cole.jeff@gmail.com> writes:
> Hi Tom, thanks for the response.  Here are the pg_stats.  I think I
> understand what the stats say, but I don't know what to conclude from
> them.

OK, the symptom_id row claims there are only 80 distinct values of
symptom_id in symptom_reports.  This is a bit low (looks like the true
state of affairs is that all but 2 of the 108 entries in symptoms are
represented in symptom_reports), but it's not horridly off considering
that you're using the rather low default statistics target.  What
happens is that the planner expects that on average only 80 rows of the
inner join will need to be scanned to find a match for a given symptoms.id,
and this makes the nestloop look cheap.  However, per your previous
EXPLAIN ANALYZE:

>     ->  Nested Loop IN Join  (cost=149.05..586.26 rows=85 width=0)  (actual time=54.517..3441.115 rows=106 loops=1)
>           Join Filter: ("outer".id = "inner".symptom_id)
>           ->  Seq Scan on symptoms  (cost=0.00..3.08 rows=108  width=4) (actual time=0.007..0.273 rows=108 loops=1)
>           ->  Hash IN Join  (cost=149.05..603.90 rows=13074 width=4)  (actual time=0.078..24.503 rows=3773 loops=108)


the *actual* average number of rows scanned is 3773.  I'm not sure why
this should be --- is it possible that the distribution of keys in
symptom_reports is wildly uneven?  This could happen if all of the
physically earlier rows in symptom_reports contain the same small set
of symptom_ids, but the stats don't seem to indicate such a skew.
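One way to check that directly (just a sketch, using the table and column
names quoted above) is to look at the per-key counts:

    -- a handful of symptom_ids accounting for most of the rows would
    -- explain why the nestloop scans far more rows than expected
    SELECT symptom_id, count(*) AS n
      FROM symptom_reports
     GROUP BY symptom_id
     ORDER BY n DESC
     LIMIT 20;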
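Independently of that, raising the statistics target for that column and
re-analyzing should tighten the n_distinct estimate that's making the
nestloop look cheap (100 here is only an example value):

    ALTER TABLE symptom_reports ALTER COLUMN symptom_id SET STATISTICS 100;
    ANALYZE symptom_reports;

    -- then re-check what the planner now believes about the column
    SELECT n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'symptom_reports' AND attname = 'symptom_id';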

            regards, tom lane
