Our company is moving to Aurora. In the past I would take care not to let PostgreSQL over-commit memory beyond the actual memory on the server, which meant I would budget the buffer pool (shared_buffers) + (work_mem * max_connections) + (maintenance_work_mem * autovacuum workers).
However, as I look at the Aurora defaults, they are all off the charts. For example, based on the calculations in the config (Amazon doesn't make it easy: some settings are in pages, some are in kB, some are in who knows what), I see the following defaults in our Aurora parameter group:
The instance size is db.r4.xlarge, which is listed as having 30.5 GB of RAM.
According to my math (if I got it right), in a worst-case scenario, if we maxed out the max_connections, work_mem, and maintenance_work_mem limits, the database could request 247 GB of memory.
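To show the kind of arithmetic I'm doing, here is a rough sketch of the worst-case budget. The parameter values below are illustrative placeholders, not the actual defaults from our Aurora parameter group:

```python
# Rough worst-case memory budget for a PostgreSQL instance.
# All parameter values below are illustrative assumptions; substitute
# the real values from your parameter group.
GB = 1024 ** 3
MB = 1024 ** 2

instance_ram = 30.5 * GB            # db.r4.xlarge

shared_buffers = 22 * GB            # assumed: a large fraction of instance RAM
work_mem = 64 * MB                  # per sort/hash operation, per connection
max_connections = 3000              # assumed cap from the parameter formula
maintenance_work_mem = 1 * GB       # per autovacuum worker
autovacuum_max_workers = 3

worst_case = (shared_buffers
              + work_mem * max_connections
              + maintenance_work_mem * autovacuum_max_workers)

print(f"instance RAM:     {instance_ram / GB:6.1f} GB")
print(f"worst-case usage: {worst_case / GB:6.1f} GB")
```

Even with these made-up (and arguably conservative) numbers, the worst case lands far above the 30.5 GB the instance actually has, which is the over-commit I'm worried about.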
Additionally, Amazon has set effective_cache_size = {DBInstanceClassMemory/10922},
which (if the result is read as bytes) equates to about 2.9 MB (which, given the other outlandish settings, may be the only appropriately sized value in the system).
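Since I'm not sure what unit the result of that formula is interpreted in (this is exactly the pages-vs-kB-vs-bytes confusion I mentioned), here is the same division under two interpretations. The stock PostgreSQL unit for effective_cache_size is 8 kB blocks; whether Aurora follows that is my assumption:

```python
# effective_cache_size = DBInstanceClassMemory / 10922
# DBInstanceClassMemory is in bytes; the question is what unit the
# resulting number is interpreted in.
GB = 1024 ** 3
instance_ram = 30.5 * GB
raw = instance_ram / 10922          # ~3.0 million

# Interpretation 1: raw value is bytes -> absurdly small.
print(f"as bytes:      {raw / 1024**2:.1f} MB")   # ~2.9 MB

# Interpretation 2: raw value is 8 kB pages (stock PostgreSQL unit)
# -> roughly 75% of instance RAM, a conventional setting.
print(f"as 8 kB pages: {raw * 8192 / GB:.1f} GB") # ~22.9 GB
```

If the pages interpretation is the right one, the effective_cache_size formula is actually sane, and my "2.9 MB" reading above is the mistake.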
What the hell is Amazon doing here? Am I missing the boat on tuning PostgreSQL memory? Or is Amazon simply betting that users will never fully utilize an instance?
Nobody outside AWS knows what patches Aurora carries. A max_connections over 1000 is not a good idea for native PostgreSQL, but maybe there are patches for that, or the expectation is simply that most connections will be idle.