Is there somthing I need to do on my production server? - Mailing list pgsql-performance

From Joel Fradkin
Subject Is there somthing I need to do on my production server?
Date
Msg-id 002c01c53eb9$ee07af00$797ba8c0@jfradkin
Responses Re: Is there somthing I need to do on my production server?
List pgsql-performance
I am running 8.0.1 on a desktop XP system and an AS4 Red Hat system.
The Red Hat box will be my production server in a week or so, and it is
returning results slower than my desktop.
I understand about the PERC cards on the Dell (the Red Hat box), but my
two-processor Dell box runs MSSQL much faster than my desktop, so I am
wondering if I messed up Linux or have a PostgreSQL config issue.
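For reference, here is a minimal sketch of what I could compare on the two
boxes to rule out a config difference (these are standard 8.0 parameters;
the values would just be whatever each server is actually running with, not
anything from my config files):

-- Sketch: run on both the XP desktop and the AS4 box and compare the output.
select version();
show shared_buffers;
show work_mem;
show effective_cache_size;
show random_page_cost;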

On my desktop (1 proc, 2 gigs of memory) I get:
"Merge Join  (cost=7135.56..7296.25 rows=7906 width=228) (actual
time=5281.000..6266.000 rows=160593 loops=1)"
"  Merge Cond: ("outer".locationid = "inner".locationid)"
"  ->  Sort  (cost=955.78..957.07 rows=514 width=79) (actual
time=0.000..0.000 rows=441 loops=1)"
"        Sort Key: l.locationid"
"        ->  Index Scan using ix_location on tbllocation l 
(cost=0.00..932.64 rows=514 width=79) (actual time=0.000..0.000 rows=441
loops=1)"
"              Index Cond: ('SAKS'::text = (clientnum)::text)"
"  ->  Sort  (cost=6179.77..6187.46 rows=3076 width=173) (actual
time=5281.000..5424.000 rows=160594 loops=1)"
"        Sort Key: a.locationid"
"        ->  Merge Left Join  (cost=154.41..6001.57 rows=3076 width=173)
(actual time=94.000..2875.000 rows=177041 loops=1)"
"              Merge Cond: ((("outer".clientnum)::text =
"inner"."?column4?") AND ("outer".jobtitleid = "inner".id))"
"              ->  Index Scan using ix_tblassoc_jobtitleid on tblassociate
a  (cost=0.00..5831.49 rows=3076 width=134) (actual time=0.000..676.000
rows=177041 loops=1)"
"                    Index Cond: ((clientnum)::text = 'SAKS'::text)"
"              ->  Sort  (cost=154.41..154.50 rows=34 width=67) (actual
time=78.000..204.000 rows=158255 loops=1)"
"                    Sort Key: (jt.clientnum)::text, jt.id"
"                    ->  Seq Scan on tbljobtitle jt  (cost=0.00..153.55
rows=34 width=67) (actual time=0.000..31.000 rows=6603 loops=1)"
"                          Filter: (1 = presentationid)"
"Total runtime: 6563.000 ms"
On my production box (4 procs, 8 gigs of memory) I get:
"Merge Join  (cost=69667.87..70713.46 rows=15002 width=113) (actual
time=12140.091..12977.841 rows=160593 loops=1)"
"  Merge Cond: ("outer".locationid = "inner".locationid)"
"  ->  Sort  (cost=790.03..791.11 rows=433 width=49) (actual
time=2.936..3.219 rows=441 loops=1)"
"        Sort Key: l.locationid"
"        ->  Index Scan using ix_location on tbllocation l 
(cost=0.00..771.06 rows=433 width=49) (actual time=0.062..1.981 rows=441
loops=1)"
"              Index Cond: ('SAKS'::text = (clientnum)::text)"
"  ->  Sort  (cost=68877.84..69320.17 rows=176933 width=75) (actual
time=12137.081..12305.125 rows=160594 loops=1)"
"        Sort Key: a.locationid"
"        ->  Merge Right Join  (cost=46271.48..48961.53 rows=176933
width=75) (actual time=9096.623..10092.311 rows=177041 loops=1)"
"              Merge Cond: ((("outer".clientnum)::text =
"inner"."?column10?") AND ("outer".id = "inner".jobtitleid))"
"              ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt 
(cost=0.00..239.76 rows=6604 width=37) (actual time=0.068..12.157 rows=5690
loops=1)"
"                    Filter: (1 = presentationid)"
"              ->  Sort  (cost=46271.48..46713.81 rows=176933 width=53)
(actual time=9081.546..9295.495 rows=177041 loops=1)"
"                    Sort Key: (a.clientnum)::text, a.jobtitleid"
"                    ->  Seq Scan on tblassociate a  (cost=0.00..30849.25
rows=176933 width=53) (actual time=543.931..1674.518 rows=177041 loops=1)"
"                          Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 13101.402 ms"
 
I am at a bit of a loss, as I would have thought my soon-to-be production box
would be blowing away my desktop.
 
Also, a stupid newbie question: I am a bit confused looking at the results of
explain analyze.
I would have thought that
explain analyze select * from viwassoclist where clientnum = 'SAKS'
would first limit the result set by clientnum = 'SAKS'. Is this the bottom
line?
"                    ->  Seq Scan on tblassociate a  (cost=0.00..30849.25
rows=176933 width=53) (actual time=543.931..1674.518 rows=177041 loops=1)"
"                          Filter: ((clientnum)::text = 'SAKS'::text)"
which, if I understand this (not saying I do), is taking actual
time=543.931..1674.518 rows=177041 loops=1.
Does this mean one loop takes between 543 and 1674 milliseconds to return
177041 rows?
And the planner estimated it would take cost=0.00..30849.25?
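If it helps isolate that step, I could time just that scan on its own with
something like the following (same table and filter as in the plan above;
this is only a sketch):

-- Sketch: run only the filtered scan of tblassociate to see how long it
-- takes by itself, apart from the joins and sorts above it.
explain analyze
select *
from tblassociate a
where a.clientnum = 'SAKS';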
 
I am just trying to understand whether I can write the SQL differently to get
a faster result.
I am going to try to eliminate my left outer joins and the aggregates in
selects throughout the app, as well as eliminate some unions that exist.
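For example, one kind of rewrite I have in mind (the table names below are
made up for illustration only, not from the real app) is turning a left outer
join that always matches into a plain join:

-- Hypothetical example; assoc_example and title_example are not real tables.
-- Before: left outer join even though every row has a matching title.
--   select a.id, t.description
--   from assoc_example a
--   left outer join title_example t on t.id = a.titleid;
-- After: plain inner join, which leaves the planner more freedom to reorder.
select a.id, t.description
from assoc_example a
join title_example t on t.id = a.titleid;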
 


Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
jfradkin@wazagua.com
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004 WAZAGUA, Inc. All rights reserved.
 This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information.  Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and delete and destroy
all copies of the original message, including attachments.
 

 


