Re: Slow update statement

From: Tom Lane
Subject: Re: Slow update statement
Msg-id: 20254.1123472905@sss.pgh.pa.us
In response to: Re: Slow update statement  (Patrick Hatcher)
Responses: Re: Slow update statement  (Patrick Hatcher)
List: pgsql-performance


Patrick Hatcher <> writes:
>  Hash Join  (cost=1246688.42..4127248.31 rows=12702676 width=200)
>    Hash Cond: ("outer".cus_num = "inner".cus_nbr)
>    ->  Seq Scan on bcp_ddw_ck_cus b  (cost=0.00..195690.76 rows=12702676
> width=16)
>    ->  Hash  (cost=874854.34..874854.34 rows=12880834 width=192)
>          ->  Seq Scan on cdm_ddw_customer  (cost=0.00..874854.34
> rows=12880834 width=192)

Yipes, that's a bit of a large hash table, if the planner's estimates
are on-target.  What do you have work_mem (sort_mem if pre 8.0) set to,
and how does that compare to actual available RAM?  I'm thinking you
might have set work_mem too large and the thing is now swap-thrashing.
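[Editor's note: a minimal sketch of how one might check and adjust the setting Tom is asking about, assuming a psql session; the 64 MB value is purely illustrative, not a recommendation from the thread.]

```sql
-- Check the current setting (use sort_mem instead on pre-8.0 releases):
SHOW work_mem;

-- Try a smaller value for this session only, then re-run the EXPLAIN
-- ANALYZE.  On 8.0/8.1 the integer is in kB (64 MB shown here);
-- unit suffixes like '64MB' only arrived in 8.2.
SET work_mem = 65536;
```

Note that work_mem is a per-operation limit, not a global one: a single query with several sorts or hashes, times many concurrent backends, can together claim far more memory than the setting suggests, which is how an oversized value leads to the swap-thrashing described above.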

            regards, tom lane

