Thread: Sequential scan instead of index scan

Sequential scan instead of index scan

From
Ioannis Anagnostopoulos
Date:
Hi, my query is very simple:

select
            msg_id,
            msg_type,
            ship_pos_messages.pos_georef1,
            ship_pos_messages.pos_georef2,
            ship_pos_messages.pos_georef3,
            ship_pos_messages.pos_georef4,
            obj_id,
            ship_speed,
            ship_heading,
            ship_course,
            pos_point
        from
            feed_all_y2012m08.ship_pos_messages
        where
            extract('day' from msg_date_rec) = 1
            AND msg_id = any(ARRAY[7294724,14174174,22254408]);

The msg_id is the pkey on the ship_pos_messages table and in this example it is working fast as it uses the pkey (primary key index) to make the selection. The explain analyze follows:
"Result  (cost=0.00..86.16 rows=5 width=117) (actual time=128.734..163.319 rows=3 loops=1)"
"  ->  Append  (cost=0.00..86.16 rows=5 width=117) (actual time=128.732..163.315 rows=3 loops=1)"
"        ->  Seq Scan on ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.001..0.001 rows=0 loops=1)"
"              Filter: ((msg_id = ANY ('{7294724,14174174,22254408}'::integer[])) AND (date_part('day'::text, msg_date_rec) = 1::double precision))"
"        ->  Seq Scan on ship_a_pos_messages ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 loops=1)"
"              Filter: ((msg_id = ANY ('{7294724,14174174,22254408}'::integer[])) AND (date_part('day'::text, msg_date_rec) = 1::double precision))"
"        ->  Bitmap Heap Scan on ship_b_std_pos_messages ship_pos_messages  (cost=13.41..25.42 rows=1 width=128) (actual time=49.127..49.127 rows=0 loops=1)"
"              Recheck Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"              Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"              ->  Bitmap Index Scan on ship_b_std_pos_messages_pkey  (cost=0.00..13.41 rows=3 width=0) (actual time=49.125..49.125 rows=0 loops=1)"
"                    Index Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"        ->  Bitmap Heap Scan on ship_b_ext_pos_messages ship_pos_messages  (cost=12.80..24.62 rows=1 width=128) (actual time=0.029..0.029 rows=0 loops=1)"
"              Recheck Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"              Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"              ->  Bitmap Index Scan on ship_b_ext_pos_messages_pkey  (cost=0.00..12.80 rows=3 width=0) (actual time=0.027..0.027 rows=0 loops=1)"
"                    Index Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"        ->  Bitmap Heap Scan on ship_a_pos_messages_wk0 ship_pos_messages  (cost=24.08..36.12 rows=1 width=128) (actual time=79.572..114.152 rows=3 loops=1)"
"              Recheck Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"              Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"              ->  Bitmap Index Scan on ship_a_pos_messages_wk0_pkey  (cost=0.00..24.08 rows=3 width=0) (actual time=67.441..67.441 rows=3 loops=1)"
"                    Index Cond: (msg_id = ANY ('{7294724,14174174,22254408}'::integer[]))"
"Total runtime: 180.146 ms"


I think this is a pretty good plan and quite quick given the size of the table (88 million rows at present). However, in real life the parameter where I search for msg_id is not an array of 3 ids but of 300,000 or more. It is then that the planner abandons this plan and goes to a sequential scan. Is there any way around this? Or is this the best I can have?

Kind Regards
Yiannis

Re: Sequential scan instead of index scan

From
David Barton
Date:
Hi Yiannis,

Is there anything linking these ids together, or are they relatively random?  If they are relatively random, the rows are likely to be sprinkled amongst many blocks and so a seq scan is the fastest.  I've seen similar problems with indexed queries in a multi-tenant database where the data is so fragmented that once the record volume hits a certain threshold, Postgres decides to table scan rather than use an index.

The query optimiser is unlikely to be able to determine the disk locality of 300k rows and so it just takes a punt on a seq scan.

If you added another filter condition on something indexed e.g. last week or last month or location or something, you might do better if the data does exhibit disk locality.  If the data really is scattered, then a seq scan really will be quicker.
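For example (purely illustrative; this assumes an index that covers msg_date_rec, or an expression over it, and the dates are just placeholders):

select msg_id, pos_point
    from feed_all_y2012m08.ship_pos_messages
    where
        msg_date_rec >= '2012-08-01'      -- extra, indexable range condition (example dates)
        AND msg_date_rec <  '2012-08-02'
        AND msg_id = any(ARRAY[7294724,14174174,22254408]);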

Regards, David


Re: Sequential scan instead of index scan

From
Ioannis Anagnostopoulos
Date:
They are random, as the data are coming from multiple threads that are inserting into the database. I see what you say about "linking them", and I may give it a try with the date. The other thing that "links" them together is the 4 georef fields; however, at this stage I am trying to collect statistics on the georef population of these msg_ids, so I don't know beforehand the values to limit my query on them... Do you think an index on "date, msg_id" might do something?
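Something like this, perhaps, created on each child table (just a sketch; the index name is made up and date_trunc is one reading of "date"):

CREATE INDEX idx_ship_a_pos_messages_wk0_day_msgid
    ON ship_a_pos_messages_wk0 (date_trunc('day', msg_date_rec), msg_id);   -- repeat per partition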

Yiannis



Re: Sequential scan instead of index scan

From
Tom Lane
Date:
Ioannis Anagnostopoulos <ioannis@anatec.com> writes:
>         I think this is a pretty good plan and quite quick given the
>         size of the table (88 million rows at present). However in real
>         life the parameter where I search for msg_id is not an array of
>         3 ids but of 300,000 or more. It is then that the planner abandons
>         this plan and goes to a sequential scan. Is there any way around this?

If you've got that many, any(array[....]) is a bad choice.  I'd try
putting the IDs into a VALUES(...) list, or even a temporary table, and
then writing the query as a join.  It is a serious mistake to think that
a seqscan is evil when you're dealing with joining that many rows, btw.
What you should probably be looking for is a hash join plan.
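For example, something along these lines (a sketch only, reusing the names from the original query, with three ids standing in for the full list):

select p.*
    from feed_all_y2012m08.ship_pos_messages p
    join (values (7294724), (14174174), (22254408)) as ids(msg_id)
        on p.msg_id = ids.msg_id
    where extract('day' from p.msg_date_rec) = 1;

The temporary-table variant is the same join with the VALUES list replaced by the temp table, which should be ANALYZEd after it is loaded.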

            regards, tom lane

Re: Sequential scan instead of index scan

From
Ioannis Anagnostopoulos
Date:
On 06/08/2012 16:34, Tom Lane wrote:
If you've got that many, any(array[....]) is a bad choice.  I'd try
putting the IDs into a VALUES(...) list, or even a temporary table, and
then writing the query as a join.  It is a serious mistake to think that
a seqscan is evil when you're dealing with joining that many rows, btw.
What you should probably be looking for is a hash join plan.
		regards, tom lane
OK, in that scenario we are back to square one. Following your suggestion, my resultant query is this (the temporary table is tmp_tbl_messages):
select
            ship_pos_messages.*
        from
            feed_all_y2012m08.ship_pos_messages join tmp_tbl_messages on (ship_pos_messages.msg_id = tmp_tbl_messages.msg_id)
        where
            extract('day' from msg_date_rec) = 1
            AND date_trunc('day', msg_date_rec) = '2012-08-01';

which gives us the following explain analyse:

"Merge Join  (cost=1214220.48..3818359.46 rows=173574357 width=128) (actual time=465036.958..479089.731 rows=341190 loops=1)"
"  Merge Cond: (feed_all_y2012m08.ship_pos_messages.msg_id = tmp_tbl_messages.msg_id)"
"  ->  Sort  (cost=1178961.70..1179223.51 rows=104725 width=128) (actual time=464796.971..476579.208 rows=19512873 loops=1)"
"        Sort Key: feed_all_y2012m08.ship_pos_messages.msg_id"
"        Sort Method:  external merge  Disk: 1254048kB"
"        ->  Append  (cost=0.00..1170229.60 rows=104725 width=128) (actual time=0.033..438682.971 rows=19512883 loops=1)"
"              ->  Seq Scan on ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 loops=1)"
"                    Filter: ((date_part('day'::text, msg_date_rec) = 1::double precision) AND (date_trunc('day'::text, msg_date_rec) = '2012-08-01 00:00:00'::timestamp without time zone))"
"              ->  Seq Scan on ship_a_pos_messages ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 loops=1)"
"                    Filter: ((date_part('day'::text, msg_date_rec) = 1::double precision) AND (date_trunc('day'::text, msg_date_rec) = '2012-08-01 00:00:00'::timestamp without time zone))"
"              ->  Index Scan using idx_ship_b_std_pos_messages_date_trunc on ship_b_std_pos_messages ship_pos_messages  (cost=0.00..58657.09 rows=5269 width=128) (actual time=0.032..799.171 rows=986344 loops=1)"
"                    Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-01 00:00:00'::timestamp without time zone)"
"                    Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"              ->  Index Scan using idx_ship_b_ext_pos_messages_date_trunc on ship_b_ext_pos_messages ship_pos_messages  (cost=0.00..1694.64 rows=141 width=128) (actual time=0.026..20.661 rows=26979 loops=1)"
"                    Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-01 00:00:00'::timestamp without time zone)"
"                    Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"              ->  Index Scan using idx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0 ship_pos_messages  (cost=0.00..1109877.86 rows=99313 width=128) (actual time=0.029..435784.376 rows=18499560 loops=1)"
"                    Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-01 00:00:00'::timestamp without time zone)"
"                    Filter: (date_part('day'::text, msg_date_rec) = 1::double precision)"
"  ->  Sort  (cost=35258.79..36087.50 rows=331486 width=8) (actual time=239.908..307.576 rows=349984 loops=1)"
"        Sort Key: tmp_tbl_messages.msg_id"
"        Sort Method:  quicksort  Memory: 28694kB"
"        ->  Seq Scan on tmp_tbl_messages  (cost=0.00..4863.86 rows=331486 width=8) (actual time=0.047..55.227 rows=349984 loops=1)"
"Total runtime: 479336.869 ms"



Which is a merge join and not a hash join. Any ideas on how to make it a hash join?

Kind Regards
Yiannis

Re: Sequential scan instead of index scan

From
Tom Lane
Date:
Ioannis Anagnostopoulos <ioannis@anatec.com> writes:
> On 06/08/2012 16:34, Tom Lane wrote:
>> What you should probably be looking for is a hash join plan.

> ...
> Which is a Merge join and not a hash. Any ideas how to make it a hash join?

You might need to ANALYZE the temp table, if you didn't already.  Also
it might be that you need to increase work_mem enough to fit the temp
table into memory.
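For example (the work_mem figure is only a guess, and it is a per-session setting):

ANALYZE tmp_tbl_messages;        -- refresh planner statistics for the temp table
SET work_mem = '256MB';          -- example value; large enough to hash the id list
-- then re-run the EXPLAIN ANALYZE of the join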

Another thing that's bothering me is that the rowcount estimates are so
far off, particularly this one:

> "              ->  Index Scan using
> idx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0
> ship_pos_messages  (cost=0.00..1109877.86 rows=99313 width=128) (actual
> time=0.029..435784.376 rows=18499560 loops=1)"
> "                    Index Cond: (date_trunc('day'::text, msg_date_rec)
> = '2012-08-01 00:00:00'::timestamp without time zone)"
> "                    Filter: (date_part('day'::text, msg_date_rec) =
> 1::double precision)"

Offhand I'd have thought that ANALYZE would gather stats on the
date_trunc expression (because it is indexed) and then you should get
something reasonably accurate for a comparison to a constant.
"Reasonably accurate" meaning "not off by two orders of magnitude".
Practically all of your runtime is going into this one indexscan,
and TBH it seems likely you'd be better off with a seqscan there.

            regards, tom lane

Re: Sequential scan instead of index scan

From
Ioannis Anagnostopoulos
Date:
You were right: after running ANALYZE on the temp table I eventually got the HASH JOIN we were talking about. Here is the plan:

"Hash Join  (cost=379575.54..1507341.18 rows=95142 width=128) (actual time=3128.940..634179.270 rows=10495795 loops=1)"
"  Hash Cond: (feed_all_y2012m08.ship_pos_messages.msg_id = tmp_tbl_messages.msg_id)"
"  ->  Append  (cost=0.00..1073525.24 rows=95142 width=128) (actual time=37.157..599002.314 rows=18891614 loops=1)"
"        ->  Seq Scan on ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.001..0.001 rows=0 loops=1)"
"              Filter: ((date_part('day'::text, msg_date_rec) = 2::double precision) AND (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone))"
"        ->  Seq Scan on ship_a_pos_messages ship_pos_messages  (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 loops=1)"
"              Filter: ((date_part('day'::text, msg_date_rec) = 2::double precision) AND (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone))"
"        ->  Index Scan using idx_ship_b_std_pos_messages_date_trunc on ship_b_std_pos_messages ship_pos_messages  (cost=0.00..48111.95 rows=4323 width=128) (actual time=37.156..23782.030 rows=808692 loops=1)"
"              Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone)"
"              Filter: (date_part('day'::text, msg_date_rec) = 2::double precision)"
"        ->  Index Scan using idx_ship_b_ext_pos_messages_date_trunc on ship_b_ext_pos_messages ship_pos_messages  (cost=0.00..1844.30 rows=154 width=128) (actual time=42.042..1270.104 rows=28656 loops=1)"
"              Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone)"
"              Filter: (date_part('day'::text, msg_date_rec) = 2::double precision)"
"        ->  Index Scan using idx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0 ship_pos_messages  (cost=0.00..1023568.99 rows=90663 width=128) (actual time=51.181..571590.415 rows=18054266 loops=1)"
"              Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone)"
"              Filter: (date_part('day'::text, msg_date_rec) = 2::double precision)"
"  ->  Hash  (cost=177590.46..177590.46 rows=12311446 width=8) (actual time=3082.762..3082.762 rows=12311446 loops=1)"
"        Buckets: 524288  Batches: 4  Memory Usage: 120316kB"
"        ->  Seq Scan on tmp_tbl_messages  (cost=0.00..177590.46 rows=12311446 width=8) (actual time=0.022..1181.376 rows=12311446 loops=1)"
"Total runtime: 634764.596 ms
"

The time looks reasonable but still quite high for the overnight job I need it for (I have to run around 30 of those). So, since the join has been sorted out, I think I need to do something about the difference between the actual and expected rows in:

"        ->  Index Scan using idx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0 ship_pos_messages  (cost=0.00..1023568.99 rows=90663 width=128) (actual time=51.181..571590.415 rows=18054266 loops=1)"
"              Index Cond: (date_trunc('day'::text, msg_date_rec) = '2012-08-02 00:00:00'::timestamp without time zone)"
"              Filter: (date_part('day'::text, msg_date_rec) = 2::double precision)"

From what I understand, a possible solution is to increase the stats target for the particular column(?). Any suggestions there? I assume we are talking about msg_date_rec, which the index is built upon.
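e.g. would something along these lines be the right direction (the 1000 is just a guess)?

ALTER TABLE ship_a_pos_messages_wk0
    ALTER COLUMN msg_date_rec SET STATISTICS 1000;   -- raise the per-column target
SET default_statistics_target = 1000;                -- ANALYZE also uses this for indexed expressions
ANALYZE ship_a_pos_messages_wk0;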
Finally, I do understand what you say about the seq scan. However, in this case I consistently get about 10 min per execution, while the seq scan was giving me almost nothing at best, and usually it was running for so long that it eventually caused problems on my server...

Kind Regards
Yiannis


Re: Sequential scan instead of index scan

From
Jeff Janes
Date:
On Mon, Aug 6, 2012 at 8:08 AM, Ioannis Anagnostopoulos
<ioannis@anatec.com> wrote:
> Hi, my query is very simple:
>
> select
>             msg_id,
>             msg_type,
>             ship_pos_messages.pos_georef1,
>             ship_pos_messages.pos_georef2,
>             ship_pos_messages.pos_georef3,
>             ship_pos_messages.pos_georef4,
>             obj_id,
>             ship_speed,
>             ship_heading,
>             ship_course,
>             pos_point
>         from
>             feed_all_y2012m08.ship_pos_messages
>         where
>             extract('day' from msg_date_rec) = 1
>             AND msg_id = any(ARRAY[7294724,14174174,22254408]);
>
> The msg_id is the pkey on the ship_pos_messages table and in this example it
> is working fast as it uses the pkey (primary key index) to make the
> selection. The explain analyze follows:
...
>
> I think this is a pretty good plan and quite quick given the size of the
> table (88 million rows at present). However in real life the parameter where
> I search for msg_id is not an array of 3 ids but of 300,000 or more. It is
> then that the planner abandons this plan and goes to a sequential scan. Is there
> any way around this? Or is this the best I can have?

What happens if you set "enable_seqscan=off" and run the query with
the very large list?  (This is an experiment, not a recommendation for
production use)
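i.e. something like this (with the array abbreviated, of course):

set enable_seqscan = off;        -- session-local, for the experiment only
explain analyze
select msg_id, pos_point
    from feed_all_y2012m08.ship_pos_messages
    where extract('day' from msg_date_rec) = 1
        AND msg_id = any(ARRAY[7294724,14174174,22254408]);   -- ...plus the other ~300,000 ids
reset enable_seqscan;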


Cheers,

Jeff

Re: Sequential scan instead of index scan

From
Ioannis Anagnostopoulos
Date:
On 07/08/2012 17:00, Jeff Janes wrote:
> What happens if you set "enable_seqscan=off" and run the query with
> the very large list?  (This is an experiment, not a recommendation for
> production use)
>
>
> Cheers,
>
> Jeff
As Tom said, the actual question is not valid. Seq scans are not bad; we
just need to understand the way around them instead of forcing them off.
In my case, the problem was the ARRAY as a parameter (which altogether
is not that great for holding so much data). By converting it into a
temporary table and performing an inner join in the query (after
analysing the temp table) you get a nice hash join (or a merge join if
you don't analyse the temp table).
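Roughly, the sequence that worked for me looks like this (the loading step is shown schematically, and the id type is assumed):

create temporary table tmp_tbl_messages (msg_id bigint);
-- load the ~300,000 ids here, e.g. with COPY or a batched INSERT
analyze tmp_tbl_messages;                    -- without this the planner picked a merge join

select ship_pos_messages.*
    from feed_all_y2012m08.ship_pos_messages
        join tmp_tbl_messages using (msg_id)
    where extract('day' from msg_date_rec) = 1
        AND date_trunc('day', msg_date_rec) = '2012-08-01';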

cheers Yiannis

Re: Sequential scan instead of index scan

From
Jeff Janes
Date:
On Tue, Aug 7, 2012 at 9:06 AM, Ioannis Anagnostopoulos
<ioannis@anatec.com> wrote:
> On 07/08/2012 17:00, Jeff Janes wrote:
>>
>> What happens if you set "enable_seqscan=off" and run the query with
>> the very large list?  (This is an experiment, not a recommendation for
>> production use)
>>
>>
>> Cheers,
>>
>> Jeff
>
> As Tom said, the actual question is not valid. Seq scans are not bad,

Right, that is why I proposed it as an experiment, not for production use.

> we just
> need to understand the way around it instead of forcing them off.

I think the first step to understanding the way around it is to force
it off, and see what the planner thinks its next best option is, and
why it thinks that.


> In my
> case, the problem was the ARRAY as a parameter (which altogether is not
> that great for holding so much data).

I think the only thing that is great for holding that much data is a
query against live permanent tables which returns it.  Given the
choice between stuffing it in an ARRAY and stuffing it in a temp table
and then manually analyzing it, neither one of those seems
fundamentally better than the other at the scale of 300,000.


> By converting it into a temporary
> table and performing an inner join in the query (after analysing the temp
> table) you get a nice Hash join (or Merge Join if you don't analyse the temp
> table).

I don't see those as being very good.  The "primary key" part of the
query is far more selective than the date part, so what you are doing
is fetching a huge number of rows only to throw out the vast majority
of them.

I think the optimal plan would be a bitmap scan on the indexes of the
"primary key" column.  This should automatically take advantage of the
sequential read nature of the table data to the extent the results are
well clustered, and if they aren't clustered it should benefit from
effective_io_concurrency if that is set appropriately.
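For example, effective_io_concurrency is an ordinary session-settable parameter, so it is easy to experiment with (the value below is only an example and should roughly match how many concurrent I/Os the storage can service):

set effective_io_concurrency = 4;   -- helps bitmap heap scans prefetch scattered blocks
-- then re-run the EXPLAIN ANALYZE and compare the bitmap heap scan timings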

Cheers,

Jeff