Re: speed of querry? - Mailing list pgsql-performance

From Joel Fradkin
Subject Re: speed of querry?
Msg-id 000001c541bc$c538fab0$797ba8c0@jfradkin
In response to Re: speed of querry?  ("Dave Held" <dave.held@arrayservicesgrp.com>)
Responses Re: speed of querry?
List pgsql-performance
It is still slower on the Linux box. (Included at the end of this message is the
EXPLAIN ANALYZE output with SET enable_seqscan = off for
select * from viwassoclist where clientnum = 'SAKS'.)
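
Spelled out, that is (the last SET just restores the default and was not part of
the timing):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';
    SET enable_seqscan = on;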

I did a few other tests (changing drive arrays helped by 1 second; it was slower
on my RAID 10 on the PowerVault).

Pulling just raw data is much faster on the Linux box.

Linux box:
"Seq Scan on tblresponse_line  (cost=100000000.00..100089717.78 rows=4032078 width=67) (actual time=0.028..4600.431 rows=4032078 loops=1)"
"Total runtime: 6809.399 ms"

Windows box:
"Seq Scan on tblresponse_line  (cost=0.00..93203.68 rows=4031968 width=67) (actual time=16.000..11316.000 rows=4031968 loops=1)"
"Total runtime: 16672.000 ms"

I am going to reload the databases, just to see what I get.
I am thinking I may have to flatten the files for postgres (eliminate joins of any
kind for reporting etc.). That might make a good deal more data, but I think from
the app's point of view it is a good idea anyway; I am just not sure how to handle
editing.
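
If I do flatten it, one rough sketch (table name made up) is a denormalized copy
of the view that gets rebuilt after edits:

    CREATE TABLE tblassoc_flat AS SELECT * FROM viwassoclist;
    CREATE INDEX ix_assoc_flat_clientnum ON tblassoc_flat (clientnum);

    -- after edits to the underlying tables, rebuild (or update only the affected rows)
    TRUNCATE tblassoc_flat;
    INSERT INTO tblassoc_flat SELECT * FROM viwassoclist;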

Joel Fradkin

"Merge Join  (cost=49697.60..50744.71 rows=14987 width=113) (actual
time=11301.160..12171.072 rows=160593 loops=1)"
"  Merge Cond: ("outer".locationid = "inner".locationid)"
"  ->  Sort  (cost=788.81..789.89 rows=432 width=49) (actual
time=3.318..3.603 rows=441 loops=1)"
"        Sort Key: l.locationid"
"        ->  Index Scan using ix_location on tbllocation l
(cost=0.00..769.90 rows=432 width=49) (actual time=0.145..2.283 rows=441
loops=1)"
"              Index Cond: ('SAKS'::text = (clientnum)::text)"
"  ->  Sort  (cost=48908.79..49352.17 rows=177352 width=75) (actual
time=11297.774..11463.780 rows=160594 loops=1)"
"        Sort Key: a.locationid"
"        ->  Merge Right Join  (cost=26247.95..28942.93 rows=177352
width=75) (actual time=8357.010..9335.362 rows=177041 loops=1)"
"              Merge Cond: ((("outer".clientnum)::text =
"inner"."?column10?") AND ("outer".id = "inner".jobtitleid))"
"              ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt
(cost=0.00..243.76 rows=6604 width=37) (actual time=0.122..12.049 rows=5690
loops=1)"
"                    Filter: (1 = presentationid)"
"              ->  Sort  (cost=26247.95..26691.33 rows=177352 width=53)
(actual time=8342.271..8554.943 rows=177041 loops=1)"
"                    Sort Key: (a.clientnum)::text, a.jobtitleid"
"                    ->  Index Scan using ix_associate_clientnum on
tblassociate a  (cost=0.00..10786.17 rows=177352 width=53) (actual
time=0.166..1126.052 rows=177041 loops=1)"
"                          Index Cond: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 12287.502 ms"


This is above and beyond toying with the column statistics.  You
are basically telling the planner to use an index.  Try this,
and post the EXPLAIN ANALYZE for the seqscan = off case on the
slow box if it doesn't speed things up for you.
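
By "toying with the column statistics" I mean things like raising the per-column
statistics target and re-analyzing; purely as an illustration (pick your own
column and target):

    ALTER TABLE tblassociate ALTER COLUMN clientnum SET STATISTICS 250;
    ANALYZE tblassociate;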

__
David B. Held
Software Engineer/Array Services Group
200 14th Ave. East,  Sartell, MN 56377
320.534.3637 320.253.7800 800.752.8129


