On Sat, Sep 03, 2022 at 11:40:03PM -0400, Reid Thompson wrote:
> > > + 0, 0, INT_MAX,
> > > + NULL, NULL, NULL
> > I think this needs a maximum like INT_MAX/1024/1024
>
> Is this noting that we'd set a ceiling of 2048MB?
The reason is that you're later multiplying it by 1024*1024, so you need
to limit it to avoid overflowing. Compare with
min_dynamic_shared_memory, Log_RotationSize, maintenance_work_mem,
autovacuum_work_mem.
typo: "Explicitely" should be "Explicitly"
+ errmsg("request will exceed postgresql.conf defined max_total_backend_memory limit (%lu > %lu)",
I wouldn't mention postgresql.conf - it could be in
postgresql.auto.conf, or an include file, or a -c parameter.
Suggest: allocation would exceed max_total_backend_memory limit...
+ ereport(LOG, errmsg("decrease reduces reported backend memory allocated below zero; setting reported to 0"));
Suggest: deallocation would decrease backend memory below zero;
+ {"max_total_backend_memory", PGC_SIGHUP, RESOURCES_MEM,
Should this be PGC_SU_BACKEND to allow a superuser to set a higher
limit (or no limit)?
There's compilation warning under mingw cross compile due to
sizeof(long). See d914eb347 and other recent commits which I guess is
the current way to handle this.
http://cfbot.cputube.org/reid-thompson.html
For performance test, you'd want to check what happens with a large
number of max_connections (and maybe a large number of clients). TPS
isn't the only thing that matters. For example, a utility command might
sometimes do a lot of allocations (or deallocations), or a
"parameterized nested loop" may loop over many outer tuples and
reset for each. There's also a lot of places that reset to a
"per-tuple" context. I started looking at its performance, but nothing
to show yet.
Would you keep people copied on your replies ("reply all")? Otherwise
I (at least) may miss them. I think that's what's typical on these
lists (and the list tool is smart enough not to send duplicates to
people who are direct recipients).
--
Justin