Query performance with small data base - Mailing list pgsql-general

From: pginfo
Subject: Query performance with small data base
Date:
Msg-id: 3DAEE3FA.BD1DA73@t1.unisoftbg.com
Responses: Re: Query performance with small data base  (Stephan Szabo <sszabo@megazone23.bigpanda.com>)
           Re: Query performance with small data base  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Hi all,

For the past couple of days I have been running some performance tests on PostgreSQL.
So far the results are not encouraging.
The point of the tests is to see whether Postgres is a good fit for our new project.

The test platform:

RH 7.3 on a dual P III 1 GHz, 1 GB RAM.

The test query:

explain analyze
select S.IDS_NUM, S.OP, S.KOL, S.OTN_MED, S.CENA, S.DDS, S.KURS, S.TOT,
       S.DTO, S.PTO, S.DTON, MED.MNAME AS MEDNAME, N.MNAME AS NOMENNAME,
       N.NUM AS NNUM, S.PART, S.IZV
from A_DOC D left outer join A_KLIENTI KL ON (D.IDS_KO = KL.IDS),
     A_SKLAD S left outer join A_NOMEN N ON (S.IDS_NUM = N.IDS)
               left outer join A_MED MED ON (N.OSN_MED = MED.IDS)
where S.FID = 0
  and S.IDS_DOC = D.IDS
  and D.DATE_OP >= 8353 and D.DATE_OP <= 9983
order by S.IDS_NUM, S.PART, S.OP;
NOTICE:  QUERY PLAN:

Sort  (cost=14730.62..14730.62 rows=66390 width=236) (actual time=8544.01..8588.09 rows=66095 loops=1)
  ->  Hash Join  (cost=1459.51..9413.19 rows=66390 width=236) (actual time=341.04..5225.99 rows=66095 loops=1)
        ->  Hash Join  (cost=460.88..7077.17 rows=67666 width=202) (actual time=70.67..3702.48 rows=67666 loops=1)
              ->  Hash Join  (cost=456.79..5719.77 rows=67666 width=186) (actual time=68.62..2650.36 rows=67666 loops=1)
                    ->  Seq Scan on a_sklad s  (cost=0.00..4078.82 rows=67666 width=108) (actual time=0.17..1349.74 rows=67666 loops=1)
                    ->  Hash  (cost=440.03..440.03 rows=6703 width=78) (actual time=68.01..68.01 rows=0 loops=1)
                          ->  Seq Scan on a_nomen n  (cost=0.00..440.03 rows=6703 width=78) (actual time=0.19..54.32 rows=6703 loops=1)
              ->  Hash  (cost=3.67..3.67 rows=167 width=16) (actual time=0.98..0.98 rows=0 loops=1)
                    ->  Seq Scan on a_med med  (cost=0.00..3.67 rows=167 width=16) (actual time=0.21..0.69 rows=167 loops=1)
        ->  Hash  (cost=969.10..969.10 rows=11813 width=34) (actual time=269.81..269.81 rows=0 loops=1)
              ->  Hash Join  (cost=195.77..969.10 rows=11813 width=34) (actual time=30.41..247.55 rows=12016 loops=1)
                    ->  Seq Scan on a_doc d  (cost=0.00..566.60 rows=11813 width=23) (actual time=0.17..92.28 rows=12016 loops=1)
                    ->  Hash  (cost=184.42..184.42 rows=4542 width=11) (actual time=29.07..29.07 rows=0 loops=1)
                          ->  Seq Scan on a_klienti kl  (cost=0.00..184.42 rows=4542 width=11) (actual time=0.23..21.03 rows=4542 loops=1)
Total runtime: 8649.46 msec


I have run VACUUM ANALYZE.
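That is, roughly this on each of the tables in the query (or a single database-wide VACUUM ANALYZE):

-- table names taken from the plan above
VACUUM ANALYZE a_sklad;
VACUUM ANALYZE a_doc;
VACUUM ANALYZE a_nomen;
VACUUM ANALYZE a_med;
VACUUM ANALYZE a_klienti;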

I have indexes on all the references used in the query, and all the x.IDS columns are
varchar(20).
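Roughly like this (the index names below are only illustrative, the real ones differ, but they cover the same columns):

-- illustrative index definitions on the join/filter columns used above
CREATE INDEX a_sklad_ids_doc_idx ON A_SKLAD (IDS_DOC);
CREATE INDEX a_sklad_ids_num_idx ON A_SKLAD (IDS_NUM);
CREATE INDEX a_doc_date_op_idx   ON A_DOC (DATE_OP);
CREATE INDEX a_doc_ids_ko_idx    ON A_DOC (IDS_KO);
CREATE INDEX a_klienti_ids_idx   ON A_KLIENTI (IDS);
CREATE INDEX a_nomen_ids_idx     ON A_NOMEN (IDS);
CREATE INDEX a_nomen_osn_med_idx ON A_NOMEN (OSN_MED);
CREATE INDEX a_med_ids_idx       ON A_MED (IDS);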

I tested exactly the same data on Oracle and got the query running in 1.5 sec!

Can I fine-tune the server or the database to get a better result out of Postgres, or is
this normal?
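For example, is postgresql.conf the right place to start? The values below are just guesses on my part, not something I have tested:

# postgresql.conf -- untested guesses, only to show what I mean
shared_buffers = 8192          # 8192 x 8 kB pages = 64 MB of shared buffer cache
sort_mem = 16384               # 16 MB per sort, for the big ORDER BY
effective_cache_size = 65536   # ~512 MB, roughly what the OS should cache with 1 GB RAM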

In the real database the result set will be 600,000 - 700,000 rows.

Many thanks,
Ivan.

