Hi All,
I've seen the shared_buffers 8GB maximum recommendation repeated many
times. I have several questions in this regard.
- Is this recommendation still true for recent versions of postgres?
(Or did it apply only to really old versions, where locking on the
shared buffers area was much less efficient?)
- I'm not a huge Linux expert, but I've heard that reading from the
filesystem cache requires a system call (a switch into the kernel), and
I suspect such reads became slightly more expensive after the
Meltdown/Spectre mitigations in the kernel. Could that be an argument
for increasing the value of shared_buffers?
- Could shared_buffers=128GB or more on a 250 GB RAM server be a
reasonable setting? What downsides could there be?
PS. Some background. We had shared_buffers=8GB initially. In
pg_stat_bgwriter we saw that dirty buffers were being written out by
the background writer and by backends more often than during
checkpoints (buffers_clean > buffers_checkpoint, and buffers_backend >
buffers_checkpoint). According to the pg_buffercache extension, only a
small percentage of the pages in shared buffers were dirty, and the
percentage of pages with usagecount >= 3 was also low. Some of our more
frequently used tables and indexes are more than 10 GB in size. All of
this suggested that scans of the bigger tables and indexes were
constantly evicting pages from the shared buffers area.

After increasing shared_buffers to 32GB, the picture started looking
healthier: 1GB+ of dirty pages in shared buffers (vs 40-200MB before),
30-60% of pages with usagecount >= 3 (vs 10-40% before), and
buffers_checkpoint growing faster than buffers_clean or
buffers_backend. Still, not all of the frequently used data fits in
shared_buffers, so we're considering increasing the parameter further.
I wanted to get some idea of how big it could reasonably be.
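For reference, the numbers above came from queries along these lines (a
sketch, not our exact monitoring queries; the second one assumes the
pg_buffercache extension has been installed with CREATE EXTENSION):

```sql
-- Compare checkpoint writes vs. bgwriter and backend writes
SELECT buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

-- Share of dirty pages and of "hot" pages (usagecount >= 3)
-- across the whole shared buffers area
SELECT
    round(100.0 * count(*) FILTER (WHERE isdirty) / count(*), 1)
        AS pct_dirty,
    round(100.0 * count(*) FILTER (WHERE usagecount >= 3) / count(*), 1)
        AS pct_hot
FROM pg_buffercache;
```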
PPS. I know any possible answer can only be a recommendation, and
parameter values should be tested for each specific case, but I still
wanted to hear your opinions. Thanks.
Regards,
Vitaliy