Re: Slow update statement

From: Patrick Hatcher
Subject: Re: Slow update statement
Date:
Msg-id: 42F6E118.20206@comcast.net
In response to: Re: Slow update statement  (Tom Lane)
List: pgsql-performance

At the time this was the only process running on the box, so I set
sort_mem = 228000. It's a 12G box.
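
For reference, a minimal sketch of what that setting means, assuming a
pre-8.0 server (sort_mem is the pre-8.0 name of the parameter): the value
is in kilobytes, so 228000 allows roughly 223 MB per individual sort or
hash operation, and it can be overridden for just the current session:

  -- sort_mem is measured in kilobytes: 228000 KB is about 223 MB,
  -- and that budget applies per sort/hash step, not per session.
  SHOW sort_mem;             -- current value
  SET sort_mem = 228000;     -- session-only override
  -- On 8.0 and later the equivalent parameter is work_mem:
  -- SET work_mem = 228000;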

Tom Lane wrote:

>Patrick Hatcher <> writes:
>
>
>> Hash Join  (cost=1246688.42..4127248.31 rows=12702676 width=200)
>>   Hash Cond: ("outer".cus_num = "inner".cus_nbr)
>>   ->  Seq Scan on bcp_ddw_ck_cus b  (cost=0.00..195690.76 rows=12702676 width=16)
>>   ->  Hash  (cost=874854.34..874854.34 rows=12880834 width=192)
>>         ->  Seq Scan on cdm_ddw_customer  (cost=0.00..874854.34 rows=12880834 width=192)
>
>Yipes, that's a bit of a large hash table, if the planner's estimates
>are on-target.  What do you have work_mem (sort_mem if pre 8.0) set to,
>and how does that compare to actual available RAM?  I'm thinking you
>might have set work_mem too large and the thing is now swap-thrashing.
>
>            regards, tom lane
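
One way to check Tom's theory is to drop the per-operation memory well below
physical RAM and time the join again; with less memory the hash join splits
the inner table into batches on disk instead of pushing the box into swap.
The statement below is only illustrative: it mirrors the join from the posted
plan, not the original UPDATE (whose full text isn't quoted here), and 65536
is an arbitrary test value.

  SET sort_mem = 65536;      -- ~64 MB, far below the 12 GB of RAM
  EXPLAIN ANALYZE
  SELECT count(*)
  FROM cdm_ddw_customer c
  JOIN bcp_ddw_ck_cus b ON b.cus_num = c.cus_nbr;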

