On 23.04.2018 21:56, Robert Haas wrote:
> On Fri, Jan 19, 2018 at 11:59 AM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>> Hmmm, that's unfortunate. I guess you'll have to process the startup
>> packet in the main process, before it gets forked. At least partially.
> I'm not keen on a design that would involve doing more stuff in the
> postmaster, because that would increase the chances of the postmaster
> accidentally dying, which is really bad. I've been thinking about the
> idea of having a separate "listener" process that receives
> connections, and that the postmaster can restart if it fails. Or
> there could even be multiple listeners if needed. When the listener
> gets a connection, it hands it off to another process that then "owns"
> that connection.
>
> One problem with this is that the process that's going to take over
> the connection needs to get started by the postmaster, not the
> listener. The listener could signal the postmaster to start it, just
> like we do for background workers, but that might add a bit of
> latency. So what I'm thinking is that the postmaster could maintain
> a small (and configurably-sized) pool of preforked workers. That
> might be worth doing independently, as a way to reduce connection
> startup latency, although somebody would have to test it to see
> whether it really works... a lot of the startup work can't be done
> until we know which database the user wants.
>
I agree that starting separate "listener" process(es) is the most
flexible and scalable solution.
I have not implemented this approach because of the problems with
forking a new backend that you mentioned.
But it can certainly be addressed, for example by handing the accepted
socket over to an already-forked worker, as in the sketch below.
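
For illustration only, here is a minimal sketch (not PostgreSQL code) of
how a listener could pass an accepted client socket to a preforked worker
over a Unix-domain channel using SCM_RIGHTS. The send_fd/recv_fd helpers
and the assumption of a socketpair() channel between the two processes
are mine, just to show the fd hand-off mechanism:

/*
 * Sketch: pass a file descriptor between processes over a Unix-domain
 * socket (e.g. one end of a socketpair() shared with the worker).
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Listener side: send the accepted socket "fd" over the channel "chan". */
static int
send_fd(int chan, int fd)
{
    struct msghdr   msg = {0};
    struct cmsghdr *cmsg;
    char            cbuf[CMSG_SPACE(sizeof(int))];
    char            dummy = 'F';
    struct iovec    iov;

    iov.iov_base = &dummy;          /* must send at least one byte */
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;   /* transfer the descriptor itself */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

/* Worker side: receive the descriptor sent by send_fd(), or -1 on error. */
static int
recv_fd(int chan)
{
    struct msghdr   msg = {0};
    struct cmsghdr *cmsg;
    char            cbuf[CMSG_SPACE(sizeof(int))];
    char            dummy;
    struct iovec    iov;
    int             fd = -1;

    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(chan, &msg, 0) <= 0)
        return -1;

    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}

The listener would call send_fd() right after accept(), and the worker
would call recv_fd() and then do the rest of the startup work once it
knows which database the client wants. The real complications Robert
mentioned (who forks the worker, and how much startup work can be done
before the database is known) are of course not solved by this.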
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company