Thanks for these suggestions.
Unfortunately, I don't have a lot of memory available (65 connections,
work_mem = 64MB in postgresql.conf).
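For what it's worth, work_mem can be raised for a single session (or a single transaction) just for testing, without touching the postgresql.conf default that all 65 connections share. A sketch — the 256MB value is only an illustration:

```sql
-- Raise work_mem only for the current session (hypothetical test value);
-- other connections keep the 64MB default.
SET work_mem = '256MB';

-- Or scope it to one transaction:
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the GROUP BY query here ...
COMMIT;
```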
>> I think index will be of no help here, as (1) you're reading whole table
>> anyway and (2) `amount` is not part of your index.
I did not realize that the field used in the aggregate function should
also be part of the index.
I'll try this and check the result.
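If it helps, the quoted suggestion would look something like this — a sketch only, assuming the column is named `amount` and the index name is made up. With `amount` as a trailing column, the whole query can potentially be answered from the index alone (an index-only scan, PostgreSQL 9.2+):

```sql
-- Sketch: same key columns as ix_1, with amount appended as a trailing
-- column so the aggregate can be computed from the index alone.
CREATE INDEX ix_ddetail_grp_amount
    ON ddetail (co_id, client_id, doc_no, line_id, batch_no, amount);
```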
My problem is that dropping / adding indexes on this table takes a LOT of
time, so I'm stuck either testing with the indexes as they are, or testing
on a smaller dataset.
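One option that might ease the pain (a sketch, not tested on your setup): CREATE INDEX CONCURRENTLY still takes time, but it doesn't lock out writes while the index builds, and a larger maintenance_work_mem for the building session can speed the build up. The 512MB value below is hypothetical:

```sql
-- Hypothetical value; larger maintenance_work_mem speeds up index builds.
SET maintenance_work_mem = '512MB';

-- Build without blocking writes on the table (slower overall, but the
-- table stays fully usable during the build):
CREATE INDEX CONCURRENTLY ix_test
    ON ddetail (co_id, client_id, doc_no, line_id, batch_no);
```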
On the smaller dataset (1.5 million records in that table) the planner did
not take the index into account, even when I omit the amount column:
CREATE INDEX ix_1
    ON ddetail
    USING btree
    (co_id, client_id, doc_no, line_id, batch_no);

SELECT co_id, client_id, doc_no, line_id, batch_no
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;
HashAggregate (cost=54695.74..56064.49 rows=136875 width=22)
-> Seq Scan on debfdetail (cost=0.00..37586.44 rows=1368744 width=22)
It still does a seq scan instead of an index scan.
I guess it is possible that at 1.4 million records a seq scan is actually
faster?
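One way to check whether the planner *can* use the index at all, and what cost it assigns to that plan, is to disable seq scans for the session (a testing trick only, never for production):

```sql
-- Testing only: make seq scans look prohibitively expensive so the
-- planner falls back to the index plan if one exists.
SET enable_seqscan = off;

EXPLAIN
SELECT co_id, client_id, doc_no, line_id, batch_no
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;

RESET enable_seqscan;
```

If the index plan's estimated cost comes out higher than the seq-scan cost above, the planner's choice was probably right for this table size.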
So I guess I'll have to try this on the 10 million row table and check the
result there.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Performance-issue-index-not-used-on-GROUP-BY-tp5816702p5816715.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.