Re: Slow 3 Table Join with v bad row estimate - Mailing list pgsql-performance

From David Osborne
Subject Re: Slow 3 Table Join with v bad row estimate
Date
Msg-id CAKmpXCf4JN6T0y202YOO6Lbw-Bi8NWsSyyPaegF81vHZpkjthg@mail.gmail.com
In response to Re: Slow 3 Table Join with v bad row estimate  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Slow 3 Table Join with v bad row estimate
List pgsql-performance
Ok - wow.

Adding that index, I get the same row estimate of 1, but the runtime drops to ~450ms, roughly a 23,000ms improvement.


This is great. So as a general rule of thumb: if I see a Join Filter removing an excessive number of rows, I can check whether that filter condition could instead be added to an index on the table that is already being scanned, so it is applied during the scan rather than after the join.
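
As a sketch of what that check looks like in practice (the outer table name below is an assumption; only branch_purchase_order_products, its columns, and the aliases appear in this thread):

    -- Run the slow query with ANALYZE (actually executes it) and
    -- BUFFERS (adds shared-buffer counts) to see where rows are
    -- being formed and then discarded.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.po_id, p.product_code
    FROM   branch_purchase_order o            -- assumed outer table name
    JOIN   branch_purchase_order_products p
           ON p.po_id = o.po_id;

    -- The symptom to look for in the output (from the plan quoted below):
    --   Join Filter: (o.po_id = p.po_id)
    --   Rows Removed by Join Filter: 23006061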

Thanks for this!

On 10 November 2015 at 17:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:

But taking a step back, it seems like the core problem in your explain
output is here:

>>    ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual time=0.196..23799.930 rows=53595 loops=1)
>>          Join Filter: (o.po_id = p.po_id)
>>          Rows Removed by Join Filter: 23006061
>>          Buffers: shared hit=23217993 dirtied=1

That's an awful lot of rows being formed by the join only to be rejected.
You should try creating an index on
branch_purchase_order_products(po_id, product_code)
so that the po_id condition could be enforced at the inner indexscan
instead of the join.
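
In concrete terms, the suggested index would be something like the following (the index name is illustrative, not from the thread; po_id comes first so the nested loop's inner index scan can match the outer row's po_id directly, with product_code available in the same index):

    -- Composite index letting the inner index scan apply the po_id
    -- join condition itself, instead of filtering po_id after the
    -- join rows have already been formed.
    CREATE INDEX branch_po_products_po_id_product_code_idx
        ON branch_purchase_order_products (po_id, product_code);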



