Re: General performance questions about postgres on Apple - Mailing list pgsql-performance

From: Sean Shanny
Subject: Re: General performance questions about postgres on Apple
Date:
Msg-id: 40396A16.9040301@earthlink.net
In response to: Re: General performance questions about postgres on Apple  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: General performance questions about postgres on Apple  (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: General performance questions about postgres on Apple  ("scott.marlowe" <scott.marlowe@ihs.com>)
List: pgsql-performance
Tom,

We have the following setting for random page cost:

random_page_cost = 1            # units are one sequential page fetch cost

Any suggestions on what to bump it up to?
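One thing we can try first is bumping it per-session and re-running the plan,
so nothing in postgresql.conf has to change until we see an improvement.  The
query below is the same one from the plan further down; the value 2 is only a
guess to experiment with (the stock default is 4):

    SET random_page_cost = 2;   -- session-only; 1 prices random and sequential fetches the same
    EXPLAIN ANALYZE
    SELECT t1.id, t2.md5, t2.url
    FROM referral_temp t2
    LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;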

We are waiting to hear back from Apple on the speed issues; so far we are
not impressed with how much the hardware helps in the I/O department.
Our DB is about 263GB with indexes now, so there is no way it is going
to fit into memory. :-(  I have taken the step of breaking the data out
into month-based groups just to keep the table sizes down.  Our current
month's table has around 72 million rows in it as of today.  The joys of
building a data warehouse and trying to make it as fast as possible.
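
For reference, the month-based split looks roughly like this (the table and
column names here are illustrative, not our actual schema): one fact table per
month, each with its own index on the join key, so queries only touch the
month they need instead of the full 263GB.

    CREATE TABLE f_referral_2004_02 (
        id            integer,
        referral_md5  text,
        url           text
    );
    CREATE INDEX f_referral_2004_02_md5_idx ON f_referral_2004_02 (referral_md5);

    -- A month-level query then scans only that month's ~72 million rows:
    SELECT count(*) FROM f_referral_2004_02;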

Thanks.

--sean



Tom Lane wrote:

>Sean Shanny <shannyconsulting@earthlink.net> writes:
>
>
>>New results with the above changes: (Rather a huge improvement!!!)
>>Thanks Scott.  I will next attempt to make the cpu_* changes to see if
>>it then picks the correct plan.
>>
>>
>
>
>
>>explain analyze  SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
>>OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;
>>                                                                  QUERY PLAN
>>----------------------------------------------------------------------------------------------------------------------------------------------
>> Hash Left Join  (cost=1669281.60..3204008.48 rows=480082 width=149)
>>(actual time=157221.125..-412311.378 rows=502347 loops=1)
>>   Hash Cond: ("outer".md5 = "inner".referral_md5)
>>   ->  Seq Scan on referral_temp t2  (cost=0.00..16034.81 rows=480081
>>width=145) (actual time=11.537..1852.336 rows=502347 loops=1)
>>   ->  Hash  (cost=1356358.48..1356358.48 rows=30344048 width=40)
>>(actual time=157187.530..157187.530 rows=0 loops=1)
>>         ->  Seq Scan on d_referral t1  (cost=0.00..1356358.48
>>rows=30344048 width=40) (actual time=14.134..115048.285 rows=27908024
>>loops=1)
>> Total runtime: 212595.909 ms
>>(6 rows)
>>
>>
>
>It seems like the planner is overestimating the cost of a seqscan
>relative to indexed access.  Note that the above large seqscan is priced
>at 1356358.48 cost units vs 115048.285 actual msec, which says that a
>sequential page fetch is taking about 0.1 msec on your hardware.
>(You should check the actual size of d_referral to verify this, though.)
>The other plan made it look like an indexed fetch was costing several
>milliseconds.  You may have a situation where you need to raise
>random_page_cost, rather than lowering it as people more often do.
>
>What are you using for random_page_cost anyway?  It doesn't look like
>you are at the default.
>
>This also suggests that the performance issue with your RAID array
>has to do with seek time rather than transfer bandwidth...
>
>            regards, tom lane
>
>
>
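
P.S. For anyone following along, the ~0.1 msec figure above is just the actual
seqscan time divided by its estimated cost.  Treating cost units as sequential
page fetches is only an approximation, since the estimate also includes
per-row CPU costs:

    SELECT 115048.285 / 1356358.48 AS msec_per_cost_unit;   -- roughly 0.08-0.09 msec per page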
