Re: bad plan - Mailing list pgsql-performance

From: Richard Huxton
Subject: Re: bad plan
Msg-id: 422E0A1E.1080804@archonet.com
In response to: Re: bad plan (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: bad plan (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Tom Lane wrote:
>
> Putting ORDER BYs in views that you intend to use as components of other
> views is a bad practice from a performance perspective...
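
Just to make concrete what that warns against - a minimal sketch with
made-up view names, not the real schema here:

  CREATE VIEW v_orders_sorted AS
    SELECT * FROM orders ORDER BY created_at;   -- sort baked into the component view

  CREATE VIEW v_report AS
    SELECT o.id, c.name
    FROM v_orders_sorted o
    JOIN customers c ON c.id = o.customer_id;
  -- every query on v_report now drags that ORDER BY (and its sort) along,
  -- even when nobody needs the rows in that order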

There are also a lot of views involved here for very few output columns.
Tom - is the planner smart enough to optimise out unneeded columns from
a SELECT * view if it's part of a join/subquery and you only use one or
two columns?
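
The sort of thing I mean (hypothetical names, purely to illustrate the
question):

  CREATE VIEW v_wide AS
    SELECT * FROM big_table;            -- dozens of columns

  SELECT w.id                           -- only one column actually used
  FROM other_table o
  JOIN v_wide w ON w.id = o.ref_id;

i.e. when v_wide is pulled up into the outer query, do the unused columns
get trimmed away, or are they all dragged through the join?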

Secondly, in the original plan we have:
->  Nested Loop Left Join  (cost=1478.82..1716.37 rows=1 width=201) (actual time=3254.483..52847.064 rows=31 loops=1)

Now, we've got 31 rows instead of 1 here. One side of the join ends
up as:
->  Subquery Scan vsp  (cost=985.73..1016.53 rows=1103 width=12) (actual time=25.328..1668.754 rows=493 loops=31)
      ->  Merge Join  (cost=985.73..1011.01 rows=1103 width=130) (actual time=25.321..1666.666 rows=493 loops=31)

Would I be right in thinking the planner doesn't materialise the
subquery because it's expecting 1 loop, not 31? At roughly 1.7s per
pass, those 31 executions of the subquery account for nearly all of the
52.8s. If there really were only 1 row, the plan would seem OK to me.
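
If that is what's going on, the obvious band-aid (just a sketch - the
table and column names below are invented, with "vsp" standing in for
whatever view/subquery that Subquery Scan is reading) would be to
evaluate it once into a temp table and join against that instead:

  -- evaluate vsp exactly once rather than once per outer row
  CREATE TEMP TABLE vsp_once AS
    SELECT * FROM vsp;
  ANALYZE vsp_once;    -- give the planner real row counts for the join

  SELECT o.id, v.some_value
  FROM outer_stuff o
  LEFT JOIN vsp_once v ON v.outer_id = o.id;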

Is there any mileage in the idea of a "lazy" planner that keeps some
alternative paths around in case they're needed? Or a reactive one that
can re-plan nodes when assumptions turn out to be wrong?

--
   Richard Huxton
   Archonet Ltd
