Re: Join query on 1M row table slow - Mailing list pgsql-general

From scott.marlowe
Subject Re: Join query on 1M row table slow
Date
Msg-id Pine.LNX.4.33.0402101416410.29472-100000@css120.ihs.com
In response to Join query on 1M row table slow  (CSN <cool_screen_name90001@yahoo.com>)
Responses Re: Join query on 1M row table slow
Re: Join query on 1M row table slow
List pgsql-general
On Tue, 10 Feb 2004, CSN wrote:

> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_id
> order by p.title
> limit 25
> offset $offset
>
> The query usually takes about five seconds to execute
> (all other PG queries perform fast enough). I have
> indexes on everything needed, and EXPLAIN shows
> they're being used. Is there anything else I can do to
> improve performance - such as tweaking some settings
> in the config?

The problem is that in order to apply an offset / limit to such a set,
PostgreSQL is going to have to generate the first $offset + limit rows of the
joined set, sorted by p.title.

So, it's quite likely that it's building and sorting the whole joined set
before the limit is ever applied.
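
You can check that with EXPLAIN ANALYZE (assuming a reasonably recent
PostgreSQL; same placeholders as in your query) and look at the actual row
counts on the sort node:

explain analyze
select p.*
from product_categories pc
inner join products p
on pc.product_id = p.id
where pc.category_id = $category_id
order by p.title
limit 25
offset $offset;

If the sort node shows it handling every row for that category instead of
just $offset + 25, that's where the time is going.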

It also looks odd to do a select p.* from product_categories pc and never
select anything from the pc table; the join is really only there as a filter.

Could this be rewritten to something like

select p.*
from products p
where p.id in (
    select product_id
    from product_categories pc
    where pc.category_id = $category_id
)
order by p.title
limit 25
offset $offset

?  Or is that equivalent?
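
One thing to watch out for: on versions before 7.4, IN (subselect) tends to
plan poorly, so an EXISTS form may be worth trying as well (just a sketch,
same tables and columns as above):

select p.*
from products p
where exists (
    select 1
    from product_categories pc
    where pc.product_id = p.id
    and pc.category_id = $category_id
)
order by p.title
limit 25
offset $offset;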

