Re: Sequential scan instead of index scan - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: Sequential scan instead of index scan
Msg-id: 10966.1344267248@sss.pgh.pa.us
In response to: Re: Sequential scan instead of index scan (Ioannis Anagnostopoulos <ioannis@anatec.com>)
Responses: Re: Sequential scan instead of index scan (Ioannis Anagnostopoulos <ioannis@anatec.com>)
List: pgsql-performance
Ioannis Anagnostopoulos <ioannis@anatec.com> writes:
>         I think this is a pretty good plan and quite quick given the
>         size of the table (88 million rows at present). However, in real
>         life the set of msg_id values I search for is not an array of
>         3 ids but of 300,000 or more. It is then that the query abandons
>         the plan and goes to a sequential scan. Is there any way around it?

If you've got that many, any(array[....]) is a bad choice.  I'd try
putting the IDs into a VALUES(...) list, or even a temporary table, and
then writing the query as a join.  It is a serious mistake to think that
a seqscan is evil when you're dealing with joining that many rows, btw.
What you should probably be looking for is a hash join plan.
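For illustration, a minimal sketch of that rewrite (the table name "messages",
the temp table "wanted_ids", the bigint type, and the file path are assumptions;
only the msg_id column comes from the original query):

    -- Load the target ids into a temporary table
    -- (hypothetical names; adapt to the real schema)
    CREATE TEMP TABLE wanted_ids (msg_id bigint PRIMARY KEY);
    COPY wanted_ids (msg_id) FROM '/tmp/msg_ids.csv' WITH (FORMAT csv);

    -- Give the planner row estimates for the temp table
    ANALYZE wanted_ids;

    -- Join instead of msg_id = ANY(ARRAY[...]); with this many ids
    -- the planner will typically choose a hash join
    SELECT m.*
    FROM   messages m
    JOIN   wanted_ids w USING (msg_id);

    -- The VALUES-list variant of the same idea:
    SELECT m.*
    FROM   messages m
    JOIN   (VALUES (1), (2), (3)) AS v(msg_id) ON m.msg_id = v.msg_id;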

            regards, tom lane
