tuning SQL - Mailing list pgsql-admin

From Zhang, Anna
Subject tuning SQL
Date
Msg-id 5511D658682A7740BA295CCF1E1233A635A83F@vsvapostal2.bkup3
Responses Re: tuning SQL  ("Peter Darley" <pdarley@Kinesis-CEM.com>)
Re: tuning SQL  ("Ross J. Reedstrom" <reedstrm@rice.edu>)
Re: tuning SQL  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-admin
Hi,
I am running a query on PostgreSQL 7.1.3 on Red Hat 7.2 (2 CPUs, 1.5 GB RAM, a
2-drive disk array):

select count(*) from contact a, contact_discard b
where a.contacthandle <> b.contacthandle;

Table contact has over 9 million rows and contact_discard has around 259,000
rows; both tables define contacthandle as their primary key.
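
For reference, a minimal sketch of the relevant schema (only the contacthandle
primary keys matter here; the column type and the other columns are
illustrative guesses, not the real definitions):

create table contact (
    contacthandle  integer primary key
    -- plus the other contact columns
);

create table contact_discard (
    contacthandle  integer primary key
    -- plus the other discard columns
);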
Here is the query execution plan:
Aggregate  (cost=732021.97..732021.97 rows=1 width=24)
  ->  Hash Join  (cost=10035.10..731397.95 rows=249608 width=24)
        ->  Seq Scan on contact a  (cost=0.00..345002.95 rows=9330995 width=12)
        ->  Hash  (cost=9411.08..9411.08 rows=249608 width=12)
              ->  Seq Scan on contact_discard b  (cost=0.00..9411.08 rows=249608 width=12)
I started running this query at 5:00 pm yesterday and it is still running! My
question is: why doesn't the query plan use an index scan for the join? Can we
force it to use an index? Or does anyone have an idea to improve the
performance? We have more tables bigger than contact that we will need to join
as well. Am I pushing PostgreSQL to its limit? Help!
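
A sketch of one way to test whether the planner will consider an index plan at
all, using the standard enable_seqscan setting to discourage sequential scans
for the current session (shown only as an illustration, not something that has
been run here):

set enable_seqscan to off;
explain
select count(*) from contact a, contact_discard b
where a.contacthandle <> b.contacthandle;
set enable_seqscan to on;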
Current settings:
shared_buffers = 65536
sort_mem = 32768

Anna Zhang

