Re: Optimizing a huge_table/tiny_table join - Mailing list pgsql-performance

From: Mark Kirkwood
Subject: Re: Optimizing a huge_table/tiny_table join
Msg-id: 44763D4D.6090305@paradise.net.nz
In response to: Re: Optimizing a huge_table/tiny_table join (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance

Tom Lane wrote:
> <kynn@panix.com> writes:
>>  Limit  (cost=19676.75..21327.99 rows=6000 width=84)
>>    ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
>>          Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
>>          ->  Seq Scan on huge_table h  (cost=0.00..51292.43 rows=2525543 width=46)
>>          ->  Hash  (cost=19676.00..19676.00 rows=300 width=38)
>>                ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)
>
> Um, if huge_table is so much bigger than tiny_table, why are the cost
> estimates for seqscanning them only about 2.5x different?  There's
> something wacko about your statistics, methinks.
>

This suggests that tiny_table is either very wide (i.e., it has many
more columns than huge_table) or else is carrying thousands of dead
tuples: a seqscan cost of ~19,676 for only 300 rows means the planner
thinks the table occupies roughly 19,600 pages.
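
A quick way to tell which it is (assuming you can query the system
catalogs) is to compare the page and row counts the planner is
working from:

    -- relpages = pages on disk, reltuples = planner's row estimate
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('huge_table', 'tiny_table');

If tiny_table shows on the order of 19,000 pages against ~300 tuples,
that's bloat rather than wide rows.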

Do you want to post the descriptions for these tables?
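
In psql that's simply:

    \d huge_table
    \d tiny_table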

If you are running 8.1.x, then the output of 'ANALYZE VERBOSE
tiny_table' is of interest too.

If you are running a pre-8.1 release, then let's see 'VACUUM VERBOSE
tiny_table'.

Note that after either of these your plans may change (ANALYZE will
recompute the stats for tiny_table, and VACUUM may truncate pages of
dead tuples at the end of the table)!
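
If it does turn out to be dead-tuple bloat, something along these lines
should shrink the table and give the planner sane numbers (just a
sketch - note that VACUUM FULL takes an exclusive lock on the table):

    VACUUM FULL VERBOSE tiny_table;  -- compact the table and report what it finds
    ANALYZE tiny_table;              -- refresh the statistics afterwards

then re-run your EXPLAIN and see whether the tiny_table estimates (and
the join plan) change.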

