Thanks to everyone who pitched in. I finally resolved the issue. It turns out that table bloat was the culprit. The bloat could not be dealt with by manual or auto vacuum, and it somehow reached a point where many queries had trouble finishing on time even though they used good query plans. The final solution was simply to run a full vacuum on some of the key tables, and that immediately quieted down the database.
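In case anyone hits the same wall, the fix was roughly the following (the table names here are placeholders, not our real schema). Note that VACUUM FULL rewrites the table and takes an ACCESS EXCLUSIVE lock, so we scheduled it in a maintenance window:

    -- Rewrites each table to reclaim the bloated space and refreshes
    -- planner statistics in the same pass; blocks all access while it runs.
    VACUUM (FULL, ANALYZE) orders;
    VACUUM (FULL, ANALYZE) order_items;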
On 2020-02-01 16:17:13 -0700, Justin Lu wrote:
> We are seeing very heavy LWLockTranche buffer_mapping waits in the db recently.
>
> The server has 24 cores, 128GB of RAM, and an SSD data filesystem, running Ubuntu 16.04.6.
> shared_buffers was at 32GB, 1/4 of overall RAM. No issue with checkpoints (avg 29 min apart).
>
> After seeing the heavy waits, we added 64GB more RAM and increased shared_buffers to 48GB and
> effective_cache_size to 90GB. But it seems there is no impact on the buffer_mapping waits at all.
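As a quick sanity check (not a substitute for profiling), something like this against pg_stat_activity shows how many backends are actually stuck on the mapping lock at any instant - the wait_event columns are available from 9.6 on:

    -- wait_event_type shows up as 'LWLockTranche' on 9.6 and 'LWLock' on 10+.
    SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE wait_event = 'buffer_mapping';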
I suggest doing a perf profile with --call-graph dwarf, to see where this is mostly coming from.
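Something along these lines against one of the busy backends (pid and duration are just placeholders), then look at the report:

    # sample one backend's stacks for 60s; DWARF unwinding gives usable
    # call graphs even when postgres was built without frame pointers
    perf record --call-graph dwarf -p <backend_pid> -- sleep 60
    perf report --no-children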
One thing I've seen causing symptoms like this before is a sudden increase in table truncations, drops, etc. - dropping or truncating a table / index needs to scan all of shared buffers...
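Illustratively, a job that churns through short-lived tables like the sketch below forces that full shared-buffers scan on every drop, which hurts a lot more with a 32-48GB buffer pool than with a small one (names made up):

    -- each DROP (and each TRUNCATE) scans all of shared_buffers to
    -- invalidate the relation's pages before the relfilenode goes away
    CREATE TABLE staging_batch (id bigint, payload text);
    -- ... load and process the batch ...
    DROP TABLE staging_batch;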