Re: Shouldn't we have a way to avoid "risky" plans? - Mailing list pgsql-performance

From: Josh Berkus
Subject: Re: Shouldn't we have a way to avoid "risky" plans?
Date:
Msg-id: 4D8A8AB8.7040401@agliodbs.com
In response to: Re: Shouldn't we have a way to avoid "risky" plans?  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Shouldn't we have a way to avoid "risky" plans?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
> If the planner starts operating on the basis of worst case rather than
> expected-case performance, the complaints will be far more numerous than
> they are today.

Yeah, I don't think that's the way to go.  The other thought I had was
to accumulate a "risk" stat the same as we accumulate a "cost" stat.
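
To be concrete about that, here's a toy sketch -- none of this is real
planner code, and the struct, functions, and risk_weight knob are all made
up for illustration -- of what I mean by accumulating a risk figure
bottom-up next to the cost figure and weighing it when comparing paths:

/*
 * Toy sketch only: nothing here is actual PostgreSQL planner code.
 * The idea: carry a "risk" figure (a rough worst-case cost if the row
 * estimates are wrong) upward alongside total cost, then let path
 * comparison penalize plans whose downside is huge.
 */
#include <stdio.h>

typedef struct ToyPath
{
    double      total_cost;     /* expected cost, as the planner has today */
    double      risk;           /* invented stat: plausible worst-case cost */
} ToyPath;

/* Build a join path from two children, accumulating both stats upward. */
static ToyPath
toy_join(ToyPath outer, ToyPath inner, double join_cost, double join_risk)
{
    ToyPath     result;

    result.total_cost = outer.total_cost + inner.total_cost + join_cost;
    result.risk = outer.risk + inner.risk + join_risk;
    return result;
}

/* Return <0 if a is preferred, >0 if b is, using a risk-weighted score. */
static int
toy_compare(ToyPath a, ToyPath b, double risk_weight)
{
    double      score_a = a.total_cost + risk_weight * a.risk;
    double      score_b = b.total_cost + risk_weight * b.risk;

    if (score_a < score_b)
        return -1;
    if (score_a > score_b)
        return 1;
    return 0;
}

int
main(void)
{
    /* Illustrative numbers: a nestloop that is cheap if the estimates hold
     * but catastrophic if they don't, vs. a hash join with a bounded cost. */
    ToyPath     outer_scan = {100.0, 200.0};
    ToyPath     inner_scan = {900.0, 999800.0};     /* risky index probes */
    ToyPath     nestloop = toy_join(outer_scan, inner_scan, 0.0, 0.0);
    ToyPath     hashjoin = {5000.0, 20000.0};

    printf("preferred: %s\n",
           toy_compare(nestloop, hashjoin, 0.01) < 0 ? "nestloop" : "hashjoin");
    return 0;
}

With those made-up numbers, the slightly dearer hash join with a bounded
worst case wins over the nominally cheaper but high-risk nestloop.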

However, I'm thinking that I'm overengineering what seems to be a fairly
isolated problem, in that we might simply need to adjust the costing for
this kind of plan.

Also, can I say that the cost figures in this plan are extremely
confusing?  Is it really necessary to show them the way we do?

Merge Join  (cost=29.16..1648.00 rows=382 width=78) (actual time=57215.167..57215.216 rows=1 loops=1)
   Merge Cond: (rn.node_id = device_nodes.node_id)
   ->  Nested Loop  (cost=0.00..11301882.40 rows=6998 width=62) (actual time=57209.291..57215.030 rows=112 loops=1)
         Join Filter: (node_ep.node_id = rn.node_id)
         ->  Nested Loop  (cost=0.00..11003966.85 rows=90276 width=46) (actual time=0.027..52792.422 rows=90195 loops=1)

The first time I saw the above, I thought we had some kind of glibc math
bug on the host system.  Costs are supposed to accumulate upwards.

--
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
