Re: Blocked updates and background writer performance - Mailing list pgsql-general

From Cory Tucker
Subject Re: Blocked updates and background writer performance
Msg-id CAG_=8kC5eEujjN8rXoO=wJ=b+0dbmYMNRbg6V8Dtv1gai2zkrQ@mail.gmail.com
In response to Re: Blocked updates and background writer performance  (Dorian Hoxha <dorian.hoxha@gmail.com>)
List pgsql-general
Thanks for the reply, Dorian.  For the sake of argument, let's just say I'm definitely not doing what you mentioned. My question was not so much about modeling JSON storage as about tuning the background writer performance.

On Tue, Jan 12, 2016 at 2:14 PM Dorian Hoxha <dorian.hoxha@gmail.com> wrote:
Maybe you're not doing this... but:
Using a "data" json(b)/hstore column for all/most/many fields is an antipattern. Use 'data' ONLY for fields that you know will be dynamic.
That way you'll write less data with static columns (no key-name overhead and better types) --> less data to disk, etc. (selects will also be faster).
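
A minimal sketch of that split, with hypothetical table and column names, assuming most of the fields are actually known up front:

-- Hypothetical illustration: promote known, static fields to real columns
-- with proper types, and keep jsonb only for genuinely dynamic attributes.
CREATE TABLE event (
    id          bigserial PRIMARY KEY,
    account_id  bigint      NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    email       text,   -- static field: real column, real type
    status      text,   -- static field: real column
    extra       jsonb   -- only the truly dynamic attributes
);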


On Tue, Jan 12, 2016 at 7:25 PM, Cory Tucker <cory.tucker@gmail.com> wrote:
PG 9.4.4 (RDS)

I'm experiencing an issue when trying to update many rows in a single table (one row at a time, but parallelized across ~12 connections).  The issue we see is that the writes periodically become blocked for several minutes and then pick back up.  After digging through our monitoring stack, I was able to uncover these stats, which seem to point to a background writer performance problem:

[attached image: bg_writer.png]
(apologies for the image)
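
One way to check what the stalled sessions are doing at the moment they block (a diagnostic sketch; on 9.4 pg_stat_activity still exposes the boolean waiting column rather than the later wait_event fields):

-- Show non-idle sessions, how long their current statement has run,
-- and whether they are waiting on a heavyweight lock.
SELECT pid, state, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- If waiting is false but runtime keeps climbing during a stall, the
-- sessions are stuck on I/O (e.g. buffer writes) rather than on locks.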

Our settings for the background writer are pretty much standard out of the box (I threw in some others that I thought might be helpful, too):

          name           | setting | unit
-------------------------+---------+------
 bgwriter_delay          | 200     | ms
 bgwriter_lru_maxpages   | 100     |
 bgwriter_lru_multiplier | 2       |
 maintenance_work_mem    | 65536   | kB
 max_worker_processes    | 8       |
 work_mem                | 32768   | kB
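
For reference, a query along these lines should reproduce that table of settings; on RDS the values themselves are normally changed through the instance's DB parameter group rather than ALTER SYSTEM:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('bgwriter_delay',
               'bgwriter_lru_maxpages',
               'bgwriter_lru_multiplier',
               'maintenance_work_mem',
               'max_worker_processes',
               'work_mem')
ORDER BY name;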


The table that is being written to contains a jsonb column with a GIN index:

            Table "public.ced"
    Column     |           Type           | Modifiers
---------------+--------------------------+-----------
 id            | bigint                   | not null
 created_at    | timestamp with time zone |
 modified_at   | timestamp with time zone |
 bean_version  | bigint                   | default 0
 account_id    | bigint                   | not null
 data          | jsonb                    | not null
Indexes:
    "ced_pkey" PRIMARY KEY, btree (id)
    "ced_data" gin (data jsonb_path_ops)
    "partition_key_idx" btree (account_id, id)


It seems to me that the background writer just can't keep up with the volume of writes I'm doing and ends up freezing all the updates. What are my options for improving the background writer performance here?
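
A diagnostic sketch that may help narrow this down: pg_stat_bgwriter breaks out how many buffers were written by checkpoints, by the background writer, and by the backends themselves, and how often the background writer stopped because it hit its per-round limit:

SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,   -- buffers written during checkpoints
       buffers_clean,        -- buffers written by the background writer
       maxwritten_clean,     -- rounds cut short by bgwriter_lru_maxpages
       buffers_backend,      -- buffers the backends had to write themselves
       stats_reset
FROM pg_stat_bgwriter;

-- A large buffers_backend relative to buffers_clean, or a steadily climbing
-- maxwritten_clean, would suggest raising bgwriter_lru_maxpages and/or
-- lowering bgwriter_delay so the background writer can keep up.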

thanks
--Cory



