* Ksenia Marasanova (ksenia.marasanova@gmail.com) wrote:
> test=# explain analyze select * from user_ left join church on
> user_.church_id = church.id;
[...]
> Total runtime: 2025.946 ms
> (6 rows)
>
> test=# set enable_seqscan='false';
> SET
> test=# explain analyze select * from user_ left join church on
> user_.church_id = church.id;
>
[...]
> Total runtime: 2131.364 ms
> (5 rows)
>
>
> I followed some tips on the web and vacuum-ed database, I think the
> query is faster now, almost acceptable, but still interesting to know
> if it possible to optimize it...
I have to say that it does seem a bit slow for only 12,000 rows..
What's the datatype of user_.church_id and church.id? Are you sure you
really want all 12,000 rows every time you run that query? Perhaps
there's a 'where' clause you could apply with an associated index to
limit the query to just what you actually need?
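For example, something like this (just a sketch — the column and the value in the WHERE clause are made up, since I don't know your schema):

```sql
-- Hypothetical: back church_id with an index and restrict the query,
-- instead of pulling all 12,000 rows every time.
CREATE INDEX user_church_id_idx ON user_ (church_id);

EXPLAIN ANALYZE
SELECT * FROM user_ LEFT JOIN church ON user_.church_id = church.id
WHERE user_.church_id = 42;  -- made-up example value
```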
You'll notice from the output above, though, that the non-index scan is faster.
That's what I'd expect for a query like this: an open left join has to
read the entire table, and a sequential scan is going to be the best
way to do that. The fact that the planner hashed church.id makes the
plan it came up with almost certainly the best one possible.
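If you want to convince yourself, you can steer the planner away from the hash join for your session and compare the EXPLAIN ANALYZE output, the same way you did with enable_seqscan:

```sql
-- Temporarily disable hash joins, re-run the plan, then restore the default.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT * FROM user_ LEFT JOIN church ON user_.church_id = church.id;
RESET enable_hashjoin;
```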
Are you sure a left join is what you want? It sounds like maybe you've
moved (for some reason) from a regular join to a left join with the
filtering done in the application, which is probably a bad move... If
you can do at least some of the filtering in the database, I expect
that'd help.
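That is, something along these lines (again just a sketch — the WHERE condition is hypothetical):

```sql
-- A plain inner join with a database-side filter, instead of a left
-- join followed by filtering in the application:
SELECT *
FROM user_ JOIN church ON user_.church_id = church.id
WHERE church.name = 'some church';  -- made-up filter
```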
Thanks,
Stephen