Re: Postgres slowdown on large table joins - Mailing list pgsql-general
From | Dave Edmondson
Subject | Re: Postgres slowdown on large table joins
Date |
Msg-id | 20010219105836.A57504@verdi.jlc.net
In response to | Postgres slowdown on large table joins (Dave Edmondson <david@jlc.net>)
Responses | Re: Re: Postgres slowdown on large table joins
List | pgsql-general
Ack! I just timed it at 74 seconds. Added two indexes, here's the query
plan... it doesn't seem to be using the indexes at all. I'm sure I'm
doing something wrong here...

NOTICE:  QUERY PLAN:

Sort  (cost=6707.62..6707.62 rows=10596 width=170)
  ->  Merge Join  (cost=1.34..5492.29 rows=10596 width=170)
        ->  Nested Loop  (cost=0.00..4943.38 rows=36493 width=154)
              ->  Index Scan using config_pkey on config c  (cost=0.00..2.01 rows=1 width=113)
              ->  Seq Scan on data d  (cost=0.00..3116.72 rows=145972 width=41)
        ->  Sort  (cost=1.34..1.34 rows=12 width=16)
              ->  Seq Scan on prefs p  (cost=0.00..1.12 rows=12 width=16)

EXPLAIN

I actually didn't know anything about indexes before now. Know of anywhere
with a good explanation of them? The Postgres user manual wasn't too
helpful, it just explained the syntax of the CREATE INDEX command.

Actual commands I entered:

create index data_index ON data using btree (conf_id);
create index prefs_index ON prefs using btree (conf_id);

On Fri, Feb 16, 2001 at 03:27:36PM -0500, Mitch Vincent wrote:
> NOTICE:  QUERY PLAN:
>
> Sort  (cost=6707.62..6707.62 rows=10596 width=170)
>   ->  Merge Join  (cost=1.34..5492.29 rows=10596 width=170)
>     ->  Nested Loop  (cost=0.00..4943.38 rows=36493 width=154)
>       ->  Index Scan using config_pkey on config c  (cost=0.00..2.01
> rows=1 width=113)
>       ->  Seq Scan on data d  (cost=0.00..3116.72 rows=145972
> width=41)
>     ->  Sort  (cost=1.34..1.34 rows=12 width=16)
>       ->  Seq Scan on prefs p  (cost=0.00..1.12 rows=12 width=16)
>
> EXPLAIN
>
> OK, well, an index on data.conf_id and prefs.conf_id will help.
>
> After you make the indexes do a VACUUM ANALYZE
>
> Then send me that query plan :-)
>
> The Join is hurting you but that's to be expected with a larger table, there
> isn't much we can do there..
>
> The indexes should speed things up a lot though.. Let me know..
>
> A link to some info on EXPLAIN :
> http://postgresql.readysetnet.com/users-lounge/docs/7.0/user/c4884.htm
>
> -Mitch
>
> > There you go...
> > I'll be searching around for how to interpret that.
> > (I've never done an EXPLAIN before now...)
> >
> > > > SELECT c.unit_id,c.name,c.auxenable,c.bias,c.feedback,c.gain,c.igain,
> > > >        c.mode,c.reverse,c.setpoint,c.switch1,c.switch2,c.timeout,
> > > >        c.valvetype,d.active,d.drive_1_s,d.drive_1_f,d.drive_2_s,
> > > >        d.drive_2_f,d.mval,d.m4val,d.sw1,d.sw2,d.cycle,d.itemp,
> > > >        d.error,d.aval,d.ts,c.ts,p.degree,c.outputa,c.outputb,
> > > >        c.outputc,c.rawtemp
> > > > FROM config c, data d, prefs p
> > > > WHERE c.conf_id = '4'
> > > >   AND d.conf_id = c.conf_id
> > > >   AND p.conf_id = c.conf_id
> > > > ORDER BY d.ts DESC
> > > > LIMIT 1
> > > >
> > > > ...it takes an astounding 50 seconds to complete, CPU usage goes to
> > > > about 85%. Now, a simple...
> > > >
> > > > SELECT *
> > > > FROM data
> > > > ORDER BY ts desc
> > > > LIMIT 1
> > > >
> > > > ...takes about 16-26 seconds - still sloooow, but not as bad as with
> > > > the table join. What's really causing the slowdown? ...should I just
> > > > execute the command differently? I'm trying to get the latest data in
> > > > all three tables.
> > > >
> > > > Once the server has 768MB+ of RAM, is it possible to load the entire
> > > > table into memory? It should speed things up considerably.
> > > >
> > > > Thanks,
> >
> > --
> > David Edmondson <david@jlc.net>

--
David Edmondson <david@jlc.net>
GMU/FA d-(--) s+: a18>? C++++$ UB++++$ P+>+++++ L- E--- W++ N- o K-> w--
O? M-(--) V? PS+ PE+ Y? PGP t 5 X R+ tv-->! b DI+++ D+ G(--) e>* h!>+
r++ y+>++
ICQ: 79043921 AIM: AbsintheXL   #music,#hellven on irc.esper.net
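[Editor's sketch, not part of the original thread: the plan above most likely ignores the new indexes because the planner's statistics were never refreshed after CREATE INDEX, which is exactly what Mitch's VACUUM ANALYZE advice addresses. The sequence below combines the commands from the thread with one hypothetical extra index on data.ts; the name data_ts_index is an assumption, and whether a 7.0-era planner actually uses an index for a descending ORDER BY is version-dependent.]

```sql
-- Indexes on the join columns, as entered above
CREATE INDEX data_index ON data USING btree (conf_id);
CREATE INDEX prefs_index ON prefs USING btree (conf_id);

-- Hypothetical extra index (not from the thread): could let
-- "ORDER BY ts DESC LIMIT 1" stop after one index entry instead
-- of sorting all ~146,000 rows of data
CREATE INDEX data_ts_index ON data USING btree (ts);

-- Refresh planner statistics so the new indexes are considered;
-- without this step the planner may keep choosing Seq Scans
VACUUM ANALYZE data;
VACUUM ANALYZE prefs;

-- Re-check the plan; the Seq Scan on data may now become an Index Scan
EXPLAIN
SELECT d.ts
FROM config c, data d, prefs p
WHERE c.conf_id = '4'
  AND d.conf_id = c.conf_id
  AND p.conf_id = c.conf_id
ORDER BY d.ts DESC
LIMIT 1;
```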