need some help understanding slow query - Mailing list pgsql-sql
From:    Esger Abbink
Subject: need some help understanding slow query
Msg-id:  01112918455100.01788@McIntosh
List:    pgsql-sql
Hi,

I have a little performance problem. The db (simplified):

    table current:  current_update_id, ...
    table datasets: set_id, update_id, ...
    table ents:     e_id, set_id, ...
    table qtys:     set_id, e_id, ...

Indexes are defined on all the set_id columns and on datasets.update_id.

An update consists of several sets, which in turn consist of several ents; for a specific ent in a set, multiple qtys may exist. (Normal case: 1 update - 1 set - a few hundred ents - 1 qty per ent.)

Now I want to get some specific qty values for the ents of the last update only, so I run a query like:

    select some_other_fields
    from ents e, qtys q
    where e.set_id = q.set_id
      and e.e_id = q.e_id
      and e.set_id in (select set_id from datasets
                       where update_id in (select cur_update_id from current))
      and q.other_field = some_const;

This query takes ages :( The query plan looks like this:

    Merge Join  (cost=0.00..69979.50 rows=252 width=20)
      ->  Index Scan using qtys_set_id_idx on qtys q  (cost=0.00..2054.57 rows=30653 width=8)
      ->  Index Scan using ents_set_id_idx on ents e  (cost=0.00..66847.20 rows=41196 width=12)
            SubPlan
              ->  Materialize  (cost=1.52..1.52 rows=1 width=4)
                    ->  Seq Scan on datasets  (cost=0.00..1.52 rows=1 width=4)
            SubPlan
              ->  Seq Scan on current  (cost=0.00..1.01 rows=1 width=4)

After I created an index on the e_id fields the cost went up even higher (to about 90000 on the first and third plan lines). This performance isn't acceptable, so I started a different approach: instead of joining the larger tables and then checking for interesting set_id's, I'd first select the appropriate set_id's (into a temp table or view).
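For comparison, the same restriction can also be written with the nested IN subqueries flattened into one explicit join — a sketch only, using the simplified schema above (some_other_fields and some_const are the same placeholders as in the original query):

```sql
-- Flattened-join form of the query above: current and datasets are tiny,
-- so they can drive the lookups into ents and qtys via the set_id indexes.
select some_other_fields
from current c,
     datasets d,
     ents e,
     qtys q
where d.update_id   = c.cur_update_id
  and e.set_id      = d.set_id
  and q.set_id      = e.set_id
  and q.e_id        = e.e_id
  and q.other_field = some_const;
```

Whether the planner actually picks a better plan for this form would need to be checked with explain on the real data.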
So I tried something like:

    select set_id, e_id, some_fields
    from ents
    where set_id in (select set_id from datasets
                     where update_id in (select cur_update_id from current));

This query is planned as follows:

    Seq Scan on ents  (cost=0.00..43331.89 rows=41197 width=136)
      SubPlan
        ->  Materialize  (cost=1.02..1.02 rows=1 width=4)
              ->  Seq Scan on datasets  (cost=0.00..1.02 rows=1 width=4)
      InitPlan
        ->  Seq Scan on current  (cost=0.00..1.01 rows=1 width=4)

This query isn't using the created index (!?), and yes, vacuum analyze was done. The second IN can safely be changed to = as that subquery always returns exactly one row. The first IN sub-select returns one row about 95% if not 100% of the time, but that can't be guaranteed. If I ignore that and use = for that one as well, I get this:

    Index Scan using ents_set_id_idx on ents  (cost=0.00..50.03 rows=37 width=136)
      InitPlan
        ->  Seq Scan on datasets  (cost=0.00..1.02 rows=1 width=4)
      InitPlan
        ->  Seq Scan on current  (cost=0.00..1.01 rows=1 width=4)

which is more like the performance we want. Using exists instead of in doesn't speed up the first query and results in the same plan.

So now I'm stumped (and stuck): the first approach uses the indices but is way too slow (likely because of the large join?), and the second one doesn't use the indices and is (therefore?) way too slow as well. Clearly I'm doing something stupid here, but what?

These tests were done on a db with 30k-40k rows in the ents and qtys tables, but the production db must be able to run with millions of rows and not take minutes (or hours) to cough up a specific set.

Any help appreciated.

Esger Abbink

PS. This was on postgres 7.0.3.
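The temp-table variant mentioned above (materialize the interesting set_id's first, then join the big tables against them) could be sketched like this — tmp_sets is a made-up name, everything else is the schema from the post:

```sql
-- Step 1: capture the set_id's of the last update in a temp table.
-- The = on current is safe since that table always holds exactly one row;
-- the temp table itself can still hold several set_id's, so this does not
-- rely on the last update having only one set.
select set_id
into temp tmp_sets
from datasets
where update_id = (select cur_update_id from current);

-- Step 2: join the large tables against the tiny temp table, so the
-- planner can drive ents and qtys through their set_id indexes.
select some_other_fields
from tmp_sets t, ents e, qtys q
where e.set_id      = t.set_id
  and q.set_id      = e.set_id
  and q.e_id        = e.e_id
  and q.other_field = some_const;
```

This keeps the index-scan behaviour of the = version while still handling the case where an update contains more than one set.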