Memory settings when running postgres in a docker container - Mailing list pgsql-general

From: Koen De Groote
Subject: Memory settings when running postgres in a docker container
Msg-id: CAGbX52Fm=k8hHJKEzo6-mnh7gn91s=Lz_t6B5uF1SotpXH3UeA@mail.gmail.com
List: pgsql-general
Assuming a machine with:

* 16 CPU cores
* 64GB RAM

and max_connections set to 500, the recommended settings I get are:

max_connections = 500
shared_buffers = 16GB
effective_cache_size = 48GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 8388kB
huge_pages = try
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
max_parallel_maintenance_workers = 4

And these settings basically use up all the memory of the machine.

16GB of shared_buffers, 48GB of effective_cache_size, and roughly 8MB of work_mem for some reason...
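
If I had to guess, that work_mem figure comes from spreading the RAM left after shared_buffers across connections and parallel workers, something like:

    (64GB - 16GB) / (500 connections * 3) / 4 parallel workers
    = 50331648kB / 1500 / 4
    ~= 8388kB

but that formula is my assumption, not something I've verified.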

This seems rather extreme. I feel there should be free memory for emergencies and monitoring solutions.

And then there's the fact that postgres on this machine will run in a Docker container, which on Linux gets only 64MB of /dev/shm by default, though that can be increased.
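
For example (assuming the stock postgres image; the 1g value is just a placeholder):

    # raise /dev/shm from Docker's 64MB default at container start
    docker run -d --shm-size=1g \
        -e POSTGRES_PASSWORD=example postgres:16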

I feel like I should lower the memory limit I tune for, regardless of what the machine actually has, so that some memory stays free and the container process itself isn't put at risk.

Is it as straightforward as setting my limit to, say, 20GB, and then giving the container more /dev/shm? Or is there more to consider?
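
To make that concrete, a rough sketch of what I have in mind (all numbers illustrative, not recommendations):

    # cap the container at 20GB and enlarge /dev/shm
    docker run -d --memory=20g --shm-size=4g \
        -e POSTGRES_PASSWORD=example postgres:16

and then tune postgres to the 20GB budget rather than the 64GB the machine has, e.g. shared_buffers = 5GB and effective_cache_size = 15GB.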

Regards,
Koen De Groote





