Unused index influencing sequential scan plan - Mailing list pgsql-performance

From: Thom Brown
Subject: Unused index influencing sequential scan plan
Date:
Msg-id: CAA-aLv4aukjF=03XNV4o2UKWqweGd+XaHK0y1aTsgGGXmzVcDA@mail.gmail.com
Responses: Re: Unused index influencing sequential scan plan (Thom Brown <thom@linux.com>)
           Re: Unused index influencing sequential scan plan (Peter Geoghegan <peter@2ndquadrant.com>)
           Re: Unused index influencing sequential scan plan (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Hi all,

I've created a test table containing 21 million random dates and
times, but the planner's estimates change wildly when I introduce a
functional index and then ANALYSE again, even though the resulting
plan doesn't use the index:

postgres=# CREATE TABLE test (id serial, sampledate timestamp);
CREATE TABLE
postgres=# INSERT INTO test (sampledate) SELECT '1970-01-01
00:00:00'::timestamp + (random()*1350561945 || ' seconds')::interval
FROM generate_series(1,21000000);
INSERT 0 21000000
postgres=# VACUUM;
VACUUM
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
                              QUERY PLAN
----------------------------------------------------------------------
 HashAggregate  (cost=481014.00..481016.50 rows=200 width=8)
   ->  Seq Scan on test  (cost=0.00..376014.00 rows=21000000 width=8)
(2 rows)

postgres=# ANALYSE;
ANALYZE
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
                                 QUERY PLAN
----------------------------------------------------------------------------
 GroupAggregate  (cost=4078473.42..4498473.90 rows=21000024 width=8)
   ->  Sort  (cost=4078473.42..4130973.48 rows=21000024 width=8)
         Sort Key: (date_part('month'::text, sampledate))
         ->  Seq Scan on test  (cost=0.00..376014.30 rows=21000024 width=8)
(4 rows)

postgres=# CREATE INDEX idx_test_sampledate_month ON test
(extract(month FROM sampledate));
CREATE INDEX
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
                                 QUERY PLAN
----------------------------------------------------------------------------
 GroupAggregate  (cost=4078470.03..4498470.03 rows=21000000 width=8)
   ->  Sort  (cost=4078470.03..4130970.03 rows=21000000 width=8)
         Sort Key: (date_part('month'::text, sampledate))
         ->  Seq Scan on test  (cost=0.00..376014.00 rows=21000000 width=8)
(4 rows)

postgres=# ANALYSE;
ANALYZE
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
                              QUERY PLAN
----------------------------------------------------------------------
 HashAggregate  (cost=481012.85..481013.00 rows=12 width=8)
   ->  Seq Scan on test  (cost=0.00..376013.17 rows=20999934 width=8)
(2 rows)


The plan's total cost estimate is down to roughly a tenth of what it was before,
and the expected number of groups has dropped from 21 million to 12.  What's going on?
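
(For what it's worth, one way to see what that second ANALYSE actually changed
would be to look at the statistics it gathers for the expression index itself;
a minimal sketch, assuming the expression column is reported under the index's
name in pg_stats:)

-- Assumed follow-up query, not part of the test run above: ANALYSE stores
-- per-expression statistics keyed by the index name in pg_stats; the attname
-- may show up as "date_part" (or "expr" on some versions).
SELECT tablename, attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'idx_test_sampledate_month';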

And as a side note, why can't I get the planner to use an index-only
scan to satisfy the query, even after disabling sequential and
regular index scans?
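
(The experiment referred to above is roughly the following, assuming
session-level enable_* settings are what's meant; a bitmap scan may still be
chosen unless enable_bitmapscan is also turned off:)

-- Hypothetical reconstruction of the "disable other scan types" experiment;
-- these are standard planner GUCs, set only for the current session.
SET enable_seqscan = off;
SET enable_indexscan = off;
SET enable_bitmapscan = off;
EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
RESET enable_seqscan;
RESET enable_indexscan;
RESET enable_bitmapscan;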

--
Thom

