Re: Slow query performance on large table - Mailing list pgsql-performance
From | Paul McKay
---|---
Subject | Re: Slow query performance on large table
Date |
Msg-id | 000001c2e2fc$4a13c200$0c64a8c0@paulspc
In response to | Re: Slow query performance on large table (Tomasz Myrta <jasiek@klaster.net>)
Responses | Re: Slow query performance on large table
List | pgsql-performance
Hopefully you guys can help me with another query I've got that's running slow. This time it's across two tables I have.

clearview=# \d panconversation
      Table "panconversation"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 assessment  | integer | not null
 interface   | integer |
 source      | integer |
 destination | integer |
 protocol    | integer |
Indexes: idx_panconversation_destination,
         idx_panconversation_interface,
         idx_panconversation_protocol,
         idx_panconversation_source
Primary key: panconversation_pkey
Unique keys: unq_panconversation
Triggers: RI_ConstraintTrigger_52186648,
          RI_ConstraintTrigger_52186654,
          RI_ConstraintTrigger_52186660,
          RI_ConstraintTrigger_52186666

The primary key is assessment.

Along with the table I was dealing with before, now with the index I'd mislaid put back in:

clearview=# \d measurement
        Table "measurement"
   Column   |         Type          | Modifiers
------------+-----------------------+-----------
 assessment | integer               |
 time       | integer               |
 value      | character varying(50) |
Indexes: idx_measurement_assessment,
         idx_measurement_time,
         ind_measurement_ass_time

The 'explain analyse' of the query I am running is rather evil.

clearview=# explain analyse select source,value
clearview-# from measurement, PANConversation
clearview-# where PANConversation.assessment = measurement.assessment
clearview-# and Interface = 11
clearview-# and Time > 1046184261 and Time < 1046335461
clearview-# ;
NOTICE:  QUERY PLAN:

Hash Join  (cost=1532.83..345460.73 rows=75115 width=23) (actual time=1769.84..66687.11 rows=16094 loops=1)
  ->  Seq Scan on measurement  (cost=0.00..336706.07 rows=418859 width=15) (actual time=1280.11..59985.47 rows=455788 loops=1)
  ->  Hash  (cost=1498.21..1498.21 rows=13848 width=8) (actual time=253.49..253.49 rows=0 loops=1)
        ->  Seq Scan on panconversation  (cost=0.00..1498.21 rows=13848 width=8) (actual time=15.64..223.18 rows=13475 loops=1)
Total runtime: 66694.82 msec

EXPLAIN

Can anybody shed any light on why the indexes I created aren't being used, leaving me with these nasty sequential scans?

Thanks in advance,

Paul.

======================================
Paul Mckay
Consultant Partner
Servicing Division
Clearwater-IT
e: paul_mckay@clearwater-it.co.uk
t: 0161 877 6090
m: 07713 510946
======================================

-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Tomasz Myrta
Sent: 04 March 2003 17:21
To: Andreas Pflug
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow query performance on large table

Andreas Pflug wrote:
> Tom,
>
> does this mean that a primary key alone might not be enough? As far as I
> understood Paul, the PK looks much the same as the newly created index, so
> "create index ind_meas on measurement (assessment,time)" should perform
> the same as "... primary key(assessment,time)".
> Do possibly non-optimal indexes (only assessment, only time, as Paul
> described earlier) screw up the optimizer, ignoring the better option of
> using the PK? Obviously, the index used should combine (assessment,time),
> but IMHO a PK should be enough.
>
> regards,
>
> Andreas

You are right - the primary key should be OK, but Paul lost it. psql \d shows primary key indexes, and in this case there was no such primary key.

Regards,
Tomasz Myrta
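[Editor's sketch, not part of the original thread.] A minimal diagnostic sequence for a plan like the one above, using only standard PostgreSQL commands of that era: refresh the planner statistics on both tables, then discourage sequential scans for the current session only to see whether the planner can use the existing indexes at all, and whether the resulting plan is actually faster.

VACUUM ANALYZE measurement;
VACUUM ANALYZE panconversation;

-- Session-local: make sequential scans look expensive so the planner
-- reveals the index-based plan it would otherwise pass over.
SET enable_seqscan TO off;

EXPLAIN ANALYSE
SELECT source, value
  FROM measurement, panconversation
 WHERE panconversation.assessment = measurement.assessment
   AND interface = 11
   AND time > 1046184261
   AND time < 1046335461;

-- Restore the default afterwards.
SET enable_seqscan TO on;

If the forced plan is no faster, the sequential scan over measurement is probably the right choice for this date range and the combined (assessment, time) index discussed in the quoted reply is the thing to pursue instead.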