Re: Re: The fastest way to update thousands of rows in moderately sized table - Mailing list pgsql-general

From 林士博
Subject Re: Re: The fastest way to update thousands of rows in moderately sized table
Date
Msg-id CACudzGiMtb5xhNrD9H_c3dVMdrdNcEbjZdksoO9KtdF0XCav1g@mail.gmail.com
In response to Re: The fastest way to update thousands of rows in moderately sized table  (twoflower <standa.kurik@gmail.com>)
Responses Re: The fastest way to update thousands of rows in moderately sized table  (twoflower <standa.kurik@gmail.com>)
List pgsql-general

OK.
In your case, I cannot see any reason why
using a temp table to do a join-update would be faster than updating directly.

And from the execution plan, the index scan takes very little time.
Most of the time is spent on the insert/delete work of the update itself (in PostgreSQL, an UPDATE writes a new row version and marks the old one dead, and the indexes have to be maintained as well).
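
If you want to see where that insert/delete cost ends up, one thing you could check (a sketch on my side, not something the plan itself reports) is how many of the updates on "TRANSLATION" are HOT, i.e. rewritten on the same heap page without touching the indexes:

  -- cumulative counters since the last statistics reset
  SELECT relname,
         n_tup_upd,                                   -- rows updated in total
         n_tup_hot_upd,                               -- updates that were HOT
         round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
  FROM pg_stat_user_tables
  WHERE relname = 'TRANSLATION';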

As you mentioned, fk_assignment is updated frequently,
and disabling the indexes before updating is not an option you can choose,
so try setting the fillfactor of this table to a smaller value, which leaves free space on each page for the new row versions.
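For example (just a sketch; 70 is an arbitrary illustration, and existing pages only pick up the new setting once they are rewritten, e.g. by VACUUM FULL or by normal row turnover):

  -- leave ~30% of each heap page free so new row versions can stay on the
  -- same page (HOT updates, no extra index entries)
  ALTER TABLE "TRANSLATION" SET (fillfactor = 70);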

And maybe you can also check the time of inserting 8920 rows into this table.
If it is far less than 35 s,
you can consider doing the update this way (a sketch follows the steps below):
1. create a temp table with all the columns of "TRANSLATION", like:
   create unlogged table "temp_table" as
   select id, ..... , 1000 as fk_job
   from "TRANSLATION"
   where fk_job = 1000;
2. delete those rows from the original table.
3. insert the rows from the temp table back into the original table.
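
A rough sketch of those three steps in one transaction. Everything specific here is illustrative: the temp_translation name, the column names other than id, fk_assignment and fk_job, and the NULL that stands in for whatever new value the update actually sets; psql's \timing also gives you the insert-time check mentioned above:

  \timing on

  BEGIN;

  -- 1. copy the affected rows into an unlogged table with the changed
  --    column already holding its new value
  CREATE UNLOGGED TABLE temp_translation AS
  SELECT id,
         source_text,
         target_text,
         NULL::integer AS fk_assignment,   -- stand-in for the new value
         fk_job
  FROM "TRANSLATION"
  WHERE fk_job = 1000;

  -- 2. delete the original rows
  DELETE FROM "TRANSLATION"
  WHERE fk_job = 1000;

  -- 3. insert the modified rows back in one pass
  INSERT INTO "TRANSLATION" (id, source_text, target_text, fk_assignment, fk_job)
  SELECT id, source_text, target_text, fk_assignment, fk_job
  FROM temp_translation;

  DROP TABLE temp_translation;
  COMMIT;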

At the end, maybe you can check whether PostgreSQL can start inserting from the position of the HWM (High Water Mark).




2015-07-24 15:58 GMT+09:00 twoflower <standa.kurik@gmail.com>:
林士博 wrote:
Can you post the execution plan of the original update SQL?
EXPLAIN (ANALYZE ON, BUFFERS ON) update "TRANSLATION" set fk_assignment ... where fk_job = 1000;
Here it is:

Update on "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual time=35091.036..35091.036 rows=0 loops=1)
   Buffers: shared hit=74842343 read=7242 dirtied=7513
     -> Index Scan using "TRANSLATION_idx_composite_job_last_revision" on "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual time=0.042..24.147 rows=8920 loops=1)
        Index Cond: (fk_job = 59004)
        Buffers: shared hit=626

Planning time: 0.362 ms Execution time: 35091.192 ms





--
─repica group──────────────────
▼Points × e-money × email: all the functions needed for in-store promotions!

▼Supports follow-up emails and external system integration!
【mail solution】http://ms.repica.jp/

▼No. 1 market share for 9 years running: personal-data leak prevention software
【P-Pointer】http://ppointer.jp/

▼Available for single-month adoption! AR video playback app

▼Creating the future while creating IT business
───────────────────────────
