Thread: Memory settings when running postgres in a docker container
Assuming a machine with:
* 16 CPU cores
* 64GB RAM
Set to 500 max connections
A tool like this: https://pgtune.leopard.in.ua/
Will output recommended settings:
max_connections = 500
shared_buffers = 16GB
effective_cache_size = 48GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 8388kB
huge_pages = try
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
max_parallel_maintenance_workers = 4
And they basically use up all the memory of the machine.
16GB shared buffers, 48GB effective cache size, 8MB of work_mem for some reason...
This seems rather extreme. I feel some memory should be kept free for emergencies and for monitoring tools.
And then there's the fact that Postgres on this machine will run in a Docker container, which on Linux gets only 64MB of /dev/shm shared memory by default (though that can be increased).
I feel like I should lower the memory budget I give Postgres, regardless of what the machine actually has, so that some memory stays free and the container process itself isn't put at risk.
Is it as straightforward as setting my limit to, say, 20GB and giving the container more /dev/shm? Or is there more to consider?
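Concretely, something like the following is what I have in mind; the 20GB cap, the 2GB /dev/shm and the password/volume names are just placeholders I made up:

# hypothetical sketch: cap the container at 20GB of RAM and enlarge /dev/shm to 2GB
docker run -d --name postgres \
  --memory=20g \
  --shm-size=2g \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

Here --memory would be the hard cap for the whole container, and --shm-size would replace the 64MB /dev/shm default.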
Regards,
Koen De Groote
I don't get why you think all memory will be used.
When you say
shared_buffers = 16GB
effective_cache_size = 48GB
...then this is using only 16GB for shared buffers.
The effective_cache_size setting doesn't cause any memory to be allocated. It's just a hint to the optimizer.
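For example, you can change it per session and nothing extra gets allocated; it only changes how the planner costs plans (the orders table here is just made up for illustration):

-- effective_cache_size is only a planner estimate, not a memory allocation
SET effective_cache_size = '48GB';
SHOW effective_cache_size;
-- it only influences plan choice, e.g. how cheap an index scan looks
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;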
On Wed, 20 Nov 2024, 11:16 Koen De Groote, <kdg.dev@gmail.com> wrote:
Ah, see, I didn't know that.
On Wed, Nov 20, 2024 at 11:10 PM David Mullineux <dmullx@gmail.com> wrote: