Tom Lane wrote:
> I would suggest bumping up the statistics target for usertable.podkey
> (see ALTER TABLE SET STATISTICS). Since it's only an int4 column you
> could make the target 100 (instead of the default 10) without much
> cost. That should give substantially finer-grain detail and hopefully
> bring this estimate down out of the stratosphere. Re-analyze the table
> and see what it gets you.
>
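Following that advice, what I ran was roughly this (a sketch from memory, using the ALTER TABLE SET STATISTICS form you pointed me at, on the usertable.podkey column from your note):

    -- raise the per-column statistics target from the default 10 to 100
    ALTER TABLE usertable ALTER COLUMN podkey SET STATISTICS 100;
    -- regenerate the statistics for the table under the new target
    ANALYZE usertable;
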
Thanks, that seems to have helped. Now I have this other query:
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=46167.67..81194.20 rows=175858 width=13) (actual time=8807.10..16097.85 rows=249551 loops=1)
   Hash Cond: ("outer".userkey = "inner".userkey)
   ->  Seq Scan on u  (cost=0.00..25886.79 rows=175858 width=7) (actual time=0.10..3329.93 rows=249551 loops=1)
         Filter: (podkey = 260)
   ->  Hash  (cost=20167.55..20167.55 rows=637355 width=6) (actual time=8806.36..8806.36 rows=0 loops=1)
         ->  Seq Scan on d  (cost=0.00..20167.55 rows=637355 width=6) (actual time=28.13..7317.19 rows=638045 loops=1)
Total runtime: 16926.00 msec
(7 rows)
How do I read that? Is it building a hash table from the rows in d, then scanning u and probing that hash to do the join?