Looks like you can handle 256MB work_mem. Give it a shot and monitor temp_files. You do have log_temp_files = 0, right? Also, keep an eye out for "out of memory" errors in the log file. What PG version are you using?
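As a minimal sketch of trying this out in a psql session (256MB is the value suggested above; the per-session SET lets you experiment without touching the cluster-wide default):

```sql
-- Check temp file logging (0 = log every temp file, regardless of size)
SHOW log_temp_files;

-- Try the larger setting for the current session only
SET work_mem = '256MB';

-- Watch cumulative temp file activity for this database
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();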
Karthik Krishnakumar wrote on 3/23/2023 6:21 AM:
RAM - 256GB
shared_buffers - 64GB
maintenance_work_mem - 1GB
work_mem - 24MB
there are ~50 active connections at any given time.
from pg_badger - average temp file size for bulk insert/update is about 200MB, max size is multiple GB's depending on the table.
it is a write heavy workload - with inserts/updates happening around the clock.
Perhaps take the other approach: increase work_mem to make the bulk inserts fit into memory. You can easily undo work_mem changes. It only requires a sighup: reload, not restart.
What memory do you have now, and what is work_mem currently set to? Also, how many concurrent, active connections do you average at a time?
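For reference, a sketch of the reload-only change mentioned above (assumes sufficient privileges; ALTER SYSTEM writes the override to postgresql.auto.conf, so it survives restarts and is easy to revert with ALTER SYSTEM RESET):

```sql
-- Cluster-wide default, picked up on reload -- no restart needed
ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();

-- Confirm the new value from a fresh session
SHOW work_mem;
```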
thanks - checked with the devs and it does look like the application is doing some sort of a bulk insert, and at the moment it cannot be changed to use "COPY FROM".
will limiting bulk inserts to match the work_mem (assuming this is the GUC that applies in this case) reduce this disk activity?
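One way to check whether a particular statement is spilling past work_mem is EXPLAIN ANALYZE; this is a sketch with a hypothetical table and column, so substitute a representative statement from the actual workload:

```sql
-- A line like "Sort Method: external merge  Disk: NNNkB" in the
-- output means the operation did not fit in work_mem and spilled
-- to temp files; "quicksort  Memory: NNNkB" means it fit.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table ORDER BY some_column;  -- hypothetical names
```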