Re: two table join just not fast enough.

From: Brian Fehrle
Subject: Re: two table join just not fast enough.
Msg-id: 4EB1E14C.4010808@consistentstate.com
In response to: Re: two table join just not fast enough. (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Thanks, Tom,
And it looks like I pasted an older explain plan, which is almost exactly
the same as the one with 50MB work_mem, except for the hash join
'buckets' part, which used more memory and ran in only one batch, so to speak.

Running with 50MB of work_mem instead of 1MB, the query went from an
average of 190 ms to 169 ms, so it did help some, but it wasn't a game
changer (I even found that for this specific query, 6MB of work_mem was
the most that actually helped; raising it further made no difference).
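
For anyone following along, the per-session experiment looks roughly like
this (a sketch; the query below is a hypothetical stand-in for my real one):

    -- Try a larger work_mem for just this session; no server restart needed.
    SET work_mem = '6MB';

    -- Re-run the query and check the Hash node in the output, e.g.:
    --   Buckets: 4096  Batches: 1  Memory Usage: 2048kB
    -- Batches > 1 means the hash spilled to disk; once it reports
    -- Batches: 1, more work_mem buys nothing for this plan.
    EXPLAIN ANALYZE
    SELECT a.id, b.label
    FROM   big_table a
    JOIN   lookup_table b ON b.id = a.lookup_id
    WHERE  b.label = 'active';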

I have other plans for getting this thing running faster; I'll be
exploring them tomorrow, as well as looking at using an enum type.
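
For reference, the enum conversion Tom suggests might look something like
this (a minimal sketch; the type, table, and column names are hypothetical
stand-ins for my actual schema):

    -- Enum values compare as small integers internally, so the filter
    -- avoids locale-aware string collation entirely.
    CREATE TYPE row_status AS ENUM ('pending', 'active', 'retired');

    -- Rewrites the table; fails if any existing value isn't in the enum.
    ALTER TABLE lookup_table
        ALTER COLUMN status TYPE row_status
        USING status::row_status;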

- Brian F

On 11/02/2011 05:53 PM, Tom Lane wrote:
> Brian Fehrle <brianf@consistentstate.com> writes:
>> I've got a query that I need to squeeze as much speed out of as I can.
> Hmm ... are you really sure this is being run with work_mem = 50MB?
> The hash join is getting "batched", which means the executor thinks it's
> working under a memory constraint significantly less than the size of
> the filtered inner relation, which should be no more than a couple
> megabytes according to this.
>
> I'm not sure how much that will save, since the hashjoin seems to be
> reasonably speedy anyway, but there's not much other fat to trim here.
>
> One minor suggestion is to think whether you really need string
> comparisons here or could convert that to use of an enum type.
> String compares ain't cheap, especially not in non-C locales.
>
>             regards, tom lane

