>>>> I was thinking that we would have a pool of ready servers
>>>> _per_database_. That is, we would be able to configure say 8
>>>> servers in a particular DB, and say 4 in another DB etc. These
>>>> servers could run most of the way through initialization (open
>>>> catalogs, read in syscache etc). Then they would wait until a
>>>> connection for the desired DB was handed to them by the postmaster.
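For illustration, the per-database pool proposed above might look roughly like this (a hypothetical sketch in Python for brevity; `ReadyPool` and everything in it is invented for this example, not anything in the Postgres source):

```python
# Hypothetical sketch of a per-database pool of pre-initialized servers.
# Each database name maps to a list of "warmed" workers that have already
# run most of startup (open catalogs, read syscache); the postmaster
# hands an incoming connection to one of them if any is idle.

class ReadyPool:
    def __init__(self, sizes):
        # sizes: configured pool sizes, e.g. {"accounting": 4, "sales": 8}
        self.idle = {db: [self._spawn_warmed(db) for _ in range(n)]
                     for db, n in sizes.items()}

    def _spawn_warmed(self, db):
        # Stand-in for fork + partial backend startup, stopping short
        # of waiting for a client connection.
        return {"db": db, "state": "warmed"}

    def dispatch(self, db, conn):
        """Hand a client connection to a warmed worker, or report a miss."""
        if self.idle.get(db):
            worker = self.idle[db].pop()
            worker["conn"] = conn
            # Refilling the pool slot in the background is elided here.
            return worker
        return None  # no warmed server for this DB: fall back to a cold start

pool = ReadyPool({"accounting": 2})
worker = pool.dispatch("accounting", "client-1")
```

The complexity being questioned below is visible even in the sketch: the pool must be sized, refilled, and consulted per database, all before any real work happens.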
What I'm wondering is just how much work will actually be saved by the
additional complexity.
We are already planning to get rid of the exec() of the backend, right,
and use only a fork() to spawn off the backend process?  How much of
the startup work consists only of recreating state that is lost by exec?
In particular, I'd imagine that the postmaster process already has open
(or could have open) all the necessary files, shared memory, etc.
This state will be inherited automatically across the fork.
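The point about inheritance across fork() can be demonstrated with a few lines (Python here for brevity; the same holds for open files and shared memory mappings in the C backend):

```python
# Demonstrates that state held in open file descriptors survives fork():
# the child inherits a pipe the parent opened *before* forking, just as
# a forked backend would inherit the postmaster's open files and shared
# memory -- none of which would survive an exec().
import os

def fork_inherits_fd():
    r, w = os.pipe()              # opened in the parent, pre-fork
    pid = os.fork()
    if pid == 0:                  # child: use the inherited write end
        os.close(r)
        os.write(w, b"inherited")
        os._exit(0)
    os.close(w)                   # parent: read what the child wrote
    data = os.read(r, 64)
    os.close(r)
    os.waitpid(pid, 0)
    return data

# fork_inherits_fd() returns b"inherited"
```

The child never re-opens anything; it simply finds the descriptor already valid, which is exactly the startup work that dropping exec() saves.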
Taking this a little further, one could imagine the postmaster
maintaining the same shared state as any backend (tracking SI cache,
for example). Then a forked copy should be Ready To Go with very
little work except processing the client option string.
However I can see a downside to this: bugs in the backend interaction
stuff would become likely to take down the postmaster along with the
backends. The only thing that makes the postmaster more robust than
the backends is that it *isn't* doing as much as they do.
So probably the Apache-style solution (pre-started backends listen for
client connection requests) is the way to go if there is enough bang
for the buck to justify restructuring the postmaster/backend division
of labor. Question is, how much will that buy that just getting rid
of exec() won't?
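A minimal sketch of the Apache-style arrangement mentioned above (Python, one-shot and single-child for brevity; real pre-forking servers keep many children looping in accept()): the parent opens the listening socket, and a pre-forked child blocks in accept() on that inherited socket, so the client is served without any post-connection fork.

```python
# Parent creates the listening socket; a pre-forked child accepts on it.
import os
import socket

def preforked_echo_once():
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    lsock.listen(1)
    port = lsock.getsockname()[1]

    pid = os.fork()
    if pid == 0:
        # "Pre-started backend": blocks in accept() on the inherited socket,
        # then answers the client directly.
        conn, _ = lsock.accept()
        conn.sendall(conn.recv(64).upper())
        conn.close()
        os._exit(0)

    # The parent plays the client here, purely to exercise the child.
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"ping")
    reply = c.recv(64)
    c.close()
    lsock.close()
    os.waitpid(pid, 0)
    return reply
```

Note the restructuring this implies for Postgres: the child, not the postmaster, would own the accept() path, which is precisely the division-of-labor change being weighed against a plain fork()-only postmaster.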
regards, tom lane