2 machines, same database, same query, 10 times slower? - Mailing list pgsql-general
From      | Antonio Goméz Soto
Subject   | 2 machines, same database, same query, 10 times slower?
Msg-id    | 4FA93251.5020409@gmail.com
Responses | Re: 2 machines, same database, same query, 10 times slower?
List      | pgsql-general
Hi,

I am running PostgreSQL 8.1 on CentOS 5. I have two machines with the same
hardware and the same database layout (they have different data), and the
same query runs 10 times as slowly on one machine as on the other. I have
run ANALYZE on both machines; here are the query plans for both. Can
someone explain this to me? I have trouble reading EXPLAIN output...

Query:

explain select company.name as cname, call_flags,
       bill_duration as call_duration,
       date_part('epoch', start_time) as start_time,
       src_name, src_type, src_num, src_id,
       dial_name, dial_type, dial_num, dial_id,
       ans_name, ans_type, ans_num, ans_id, sessionid
  from cdr, company, phoneline, contact
 where (src_id = contact.id or dial_id = contact.id or ans_id = contact.id)
   and contact.id = '2'
   and phoneline.function = 'contact'
   and phoneline.lookupid = contact.id
   and phoneline.status != 'deleted'
   and (src_company = company.id or dial_company = company.id)
   and company.id > 2
 order by start_time DESC limit 10;

This is the query plan on machine #1 (query takes 2 seconds):

 Limit  (cost=106128.33..106128.36 rows=10 width=160)
   ->  Sort  (cost=106128.33..106166.98 rows=15458 width=160)
         Sort Key: date_part('epoch'::text, cdr.start_time)
         ->  Nested Loop  (cost=49.38..104275.65 rows=15458 width=160)
               ->  Nested Loop  (cost=2.10..103880.57 rows=7729 width=164)
                     Join Filter: (("outer".src_company = "inner".id) OR ("outer".dial_company = "inner".id))
                     ->  Nested Loop  (cost=0.00..103054.09 rows=6595 width=163)
                           Join Filter: (("inner".src_id = "outer".id) OR ("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
                           ->  Index Scan using contact_pkey on contact  (cost=0.00..5.94 rows=1 width=4)
                                 Index Cond: (id = 2)
                           ->  Seq Scan on cdr  (cost=0.00..77039.87 rows=1486187 width=159)
                     ->  Materialize  (cost=2.10..2.16 rows=5 width=13)
                           ->  Seq Scan on company  (cost=0.00..2.10 rows=5 width=13)
                                 Filter: (id > 2)
               ->  Materialize  (cost=47.28..47.30 rows=2 width=4)
                     ->  Seq Scan on phoneline  (cost=0.00..47.28 rows=2 width=4)
                           Filter: ((("function")::text = 'contact'::text) AND ((status)::text <> 'deleted'::text) AND (lookupid = 2))
(17 rows)

This is the query plan on machine #2 (query takes 38 seconds):

 Limit  (cost=424555.76..424555.79 rows=10 width=170)
   ->  Sort  (cost=424555.76..424574.34 rows=7432 width=170)
         Sort Key: date_part('epoch'::text, cdr.start_time)
         ->  Nested Loop  (cost=422353.60..424077.90 rows=7432 width=170)
               ->  Nested Loop  (cost=422064.10..423621.19 rows=3716 width=174)
                     Join Filter: (("inner".src_company = "outer".id) OR ("inner".dial_company = "outer".id))
                     ->  Bitmap Heap Scan on company  (cost=2.09..49.23 rows=26 width=21)
                           Recheck Cond: (id > 2)
                           ->  Bitmap Index Scan on company_pkey  (cost=0.00..2.09 rows=26 width=0)
                                 Index Cond: (id > 2)
                     ->  Materialize  (cost=422062.01..422085.24 rows=2323 width=165)
                           ->  Nested Loop  (cost=0.00..422059.69 rows=2323 width=165)
                                 Join Filter: (("inner".src_id = "outer".id) OR ("inner".dial_id = "outer".id) OR ("inner".ans_id = "outer".id))
                                 ->  Index Scan using contact_pkey on contact  (cost=0.00..6.01 rows=1 width=4)
                                       Index Cond: (id = 2)
                                 ->  Seq Scan on cdr  (cost=0.00..408379.70 rows=781370 width=161)
               ->  Materialize  (cost=289.50..289.52 rows=2 width=4)
                     ->  Seq Scan on phoneline  (cost=0.00..289.50 rows=2 width=4)
                           Filter: ((("function")::text = 'contact'::text) AND ((status)::text <> 'deleted'::text) AND (lookupid = 2))
(19 rows)

Thanks,
Antonio
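[Editor's aside, not part of the original mail: in both plans the dominant cost is the Seq Scan on cdr, and its estimated cost is roughly proportional to the number of pages the table occupies. Machine #2's cdr scan costs about five times more (408379 vs. 77039) while holding about half as many rows (781370 vs. 1486187), which hints that the cdr table on machine #2 may be bloated. A sketch of how one might compare the planner's page and tuple statistics on both machines, using only the standard pg_class catalog:]

-- Run on each machine and compare: relpages is the on-disk size in 8 kB
-- pages, reltuples is the estimated row count (both refreshed by ANALYZE).
-- A much larger relpages for a smaller reltuples suggests dead-tuple bloat,
-- typically addressed in 8.1 with VACUUM FULL or CLUSTER.
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('cdr', 'company', 'phoneline', 'contact');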