Re: ported application having performance issues - Mailing list pgsql-performance

From Tom Lane
Subject Re: ported application having performance issues
Date
Msg-id 22503.1120171414@sss.pgh.pa.us
In response to ported application having performance issues  (John Mendenhall <john@surfutopia.net>)
List pgsql-performance
John Mendenhall <john@surfutopia.net> writes:
>  Merge Join  (cost=4272.84..4520.82 rows=1230 width=21) (actual time=3998.771..4603.739 rows=699 loops=1)
>    Merge Cond: ("outer".contact_id = "inner".id)
>    ->  Index Scan using lead_requests_contact_id_idx on lead_requests lr  (cost=0.00..74.09 rows=1435 width=21) (actual time=0.070..22.431 rows=1430 loops=1)
>    ->  Sort  (cost=4272.84..4352.28 rows=31775 width=11) (actual time=3998.554..4130.067 rows=32640 loops=1)
>          Sort Key: c.id
>          ->  Seq Scan on contacts c  (cost=0.00..1896.77 rows=31775 width=11) (actual time=0.040..326.135 rows=32501 loops=1)
>                Filter: (partner_id IS NULL)
>  Total runtime: 4611.323 ms

Hmm ... even on a SPARC, it doesn't seem like it should take 4 seconds
to sort 30000 rows.  You can certainly see that the planner is not
expecting that (it's estimating a sort cost comparable to the scan cost,
which if true would put this in the sub-second ballpark).

Does increasing sort_mem help?
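For instance, you could try something on the order of

    SET sort_mem = 16384;   -- value in KB, session-local; 16384 is only a
                            -- starting point, size it to your memory budget

and then re-run the EXPLAIN ANALYZE to see whether the Sort step speeds
up, or whether the planner switches to a different plan altogether.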

Have you considered using some other datatype than "numeric" for your
keys?  Numeric may be fast on Oracle but it's not amazingly fast on
Postgres.  bigint would be better, if you don't really need 38 digits;
if you do, I'd be inclined to think about plain char or varchar keys.
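If you're on 8.0 or later, the key columns could be converted in place with
something along these lines (table and column names taken from your plan;
older releases would need an add-new-column-and-copy dance instead):

    ALTER TABLE contacts ALTER COLUMN id TYPE bigint;
    ALTER TABLE lead_requests ALTER COLUMN contact_id TYPE bigint;
    -- any other tables referencing these keys would need the same change,
    -- and you'd want to ANALYZE the tables again afterwards
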

            regards, tom lane
