Re: Configuration for a new server. - Mailing list pgsql-performance

From Benjamin Krajmalnik
Subject Re: Configuration for a new server.
Msg-id F4E6A2751A2823418A21D4A160B689887B0E39@fletch.stackdump.local
In response to Re: Configuration for a new server.  (Greg Smith <greg@2ndquadrant.com>)
Responses Re: Configuration for a new server.  (Greg Smith <greg@2ndquadrant.com>)
List pgsql-performance

>> There are approximately 50 tables in which almost 100% of the records are updated every 5 minutes.  What is a good number of autovacuum processes to have for these?  The current server I am replacing only has 3 of them, but I think I may gain a benefit from having more.


> Watch pg_stat_user_tables and you can figure this out for your workload.  There are no generic answers in this area.

What in particular should I be looking at to help me decide?
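
For reference, I am assuming the thing to watch is something along these lines (just a sketch against the standard pg_stat_user_tables columns, nothing specific to my schema):

    -- Which tables accumulate dead rows fastest, and when autovacuum
    -- last visited them.  Ordering and LIMIT are arbitrary; adjust per workload.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_autovacuum,
           last_autoanalyze
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  50;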


>> Currently I have what I believe to be an aggressive bgwriter setting:


>> bgwriter_delay = 200ms                  # 10-10000ms between rounds
>> bgwriter_lru_maxpages = 1000            # 0-1000 max buffers written/round
>> bgwriter_lru_multiplier = 10            # 0-10.0 multiplier on buffers scanned/round

>> Does this look right?


> You'd probably be better off decreasing the delay rather than pushing up the other two parameters.  It's easy to tell whether you got it right: just look at pg_stat_bgwriter.  If buffers_backend is high relative to the other counters, the multiplier or the delay is wrong, and if maxwritten_clean is climbing fast, bgwriter_lru_maxpages is too low.
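
If I go that route, I am assuming the change would look something like the following (illustrative values only, nothing I have benchmarked yet):

    bgwriter_delay = 50ms                   # shorter rounds instead of a bigger multiplier
    bgwriter_lru_maxpages = 500             # illustrative; below the 1000 I run now
    bgwriter_lru_multiplier = 4.0           # illustrative; below the 10 I run now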

Here is the current pg_stat_bgwriter output:

checkpoints_timed  = 261
checkpoints_req    = 0
buffers_checkpoint = 49058438
buffers_clean      = 3562421
maxwritten_clean   = 243
buffers_backend    = 11774254
buffers_alloc      = 42816578
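
To read those the way you describe, I assume the comparison is something like this (a sketch; the columns are the stock pg_stat_bgwriter ones):

    -- What share of buffer writes came from checkpoints, the background
    -- writer (buffers_clean), and the backends themselves.  A high
    -- backend share suggests the bgwriter is not keeping up.
    SELECT buffers_checkpoint,
           buffers_clean,
           buffers_backend,
           round(100.0 * buffers_backend /
                 (buffers_checkpoint + buffers_clean + buffers_backend), 1)
               AS pct_backend_writes
    FROM pg_stat_bgwriter;

With the numbers above, that works out to roughly 18% of buffer writes coming directly from backends.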
