Re: Odd sorting behaviour - Mailing list pgsql-performance

From Tom Lane
Subject Re: Odd sorting behaviour
Date
Msg-id 16637.1089867158@sss.pgh.pa.us
In response to Re: Odd sorting behaviour  (Josh Berkus <josh@agliodbs.com>)
Responses Re: Odd sorting behaviour  ("Steinar H. Gunderson" <sgunderson@bigfoot.com>)
List pgsql-performance
Josh Berkus <josh@agliodbs.com> writes:
>> - The "subquery scan o12" phase outputs 1186 rows, yet 83792 are sorted. Where
>> do the other ~82000 rows come from?

> I'm puzzled by the "83792" rows as well.  I've a feeling that Explain
> Analyze is failing to output a step.

No, it's not missing anything.  The number being reported here is the
number of rows pulled from the plan node --- but this plan node is on
the inside of a merge join, and one of the properties of merge join is
that it will do partial rescans of its inner input in the presence of
equal keys in the outer input.  If you have, say, 10 occurrences of
"42" in the outer input, then any "42" rows in the inner input have to
be rescanned 10 times.  EXPLAIN ANALYZE will count each of them as 10
rows returned by the input node.
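The rescan behaviour Tom describes can be sketched in a few lines. This is a toy simulation, not PostgreSQL's executor: made-up key lists stand in for the outer and inner inputs, and a counter stands in for the row count EXPLAIN ANALYZE would attribute to the inner plan node.

```python
def merge_join(outer, inner):
    """Merge-join two sorted lists of join keys.

    Returns (matches, inner_rows_pulled), where inner_rows_pulled
    models the row count EXPLAIN ANALYZE reports for the inner node:
    every rescan of a run of equal inner keys counts again.
    """
    matches = 0
    rows_pulled = 0   # rows "returned" by the inner node, rescans included
    i = 0             # start of the inner run matching the current key
    for key in outer:
        # advance past inner rows smaller than the current outer key
        while i < len(inner) and inner[i] < key:
            i += 1
            rows_pulled += 1
        # rescan the run of equal inner rows for every duplicate outer key
        j = i
        while j < len(inner) and inner[j] == key:
            matches += 1
            rows_pulled += 1
            j += 1
    return matches, rows_pulled

outer = [42] * 10             # ten occurrences of "42" in the outer input
inner = [41, 42, 42, 43]      # only four inner rows, two of them matching

matches, pulled = merge_join(outer, inner)
print(matches, pulled)        # 20 matches; 21 rows pulled from a 4-row input
```

The inner input holds only 4 rows, but because the two "42" rows are rescanned once per duplicate outer key, the inner node is charged with 21 rows pulled. Scale the duplication up and you get the kind of 80-to-one inflation seen in the original plan.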

The large multiple here (80-to-one overscan) says that you've got a lot
of duplicate values in the outer input.  This is generally a good
situation to *not* use a mergejoin in ;-).  We do have some logic in the
planner that attempts to estimate the extra cost involved in such
rescanning, but I'm not sure how accurate the cost model is.

> Most of your time is spent in that merge join.   Why don't you try doubling
> sort_mem temporarily to see how it does?  Or even raising shared_buffers?

Raising shared_buffers seems unlikely to help.  I do agree with raising
sort_mem --- not so much to make the merge faster as to encourage the
thing to try a hash join instead.

            regards, tom lane
