Robbie Harwood <rharwood@redhat.com> writes:
> I need to flush this any time we might be doing encryption, because it
> needs to go to _secure_write() in a separate request from what follows
> it. We don't know whether we should be doing encryption until
> connection parameters are parsed; to put it another way,
> port->gss->encrypt will never be true here because it hasn't been parsed
> out of port->gss->gss_encrypt yet.
Wait a second. So the initial connection-request packet is necessarily
unencrypted under this scheme? That seems like a pretty substantial
step backwards from what happens with SSL. Even granting that stuff
like passwords won't be sent till later, the combination of user name
and database name might already be useful info to an eavesdropper.
I would think a design similar to the SSL one (special protocol version
to cause encryption negotiation before the actual connection request
is sent) would be better.
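
To spell out what I mean: on the SSL path the client sends a tiny
request packet whose "protocol version" field is a reserved magic code,
and the server answers with a single byte before any real traffic
flows. A GSSAPI analogue might look roughly like this on the client
side (a sketch only -- NEGOTIATE_GSS_CODE, the 'G' answer byte, and
negotiate_gss_encryption() are made-up names modeled on the existing
NEGOTIATE_SSL_CODE machinery in pqcomm.h):

    /*
     * Sketch of SSL-style encryption negotiation for GSSAPI, modeled
     * on how libpq sends NEGOTIATE_SSL_CODE ahead of the real startup
     * packet.  NEGOTIATE_GSS_CODE and the 'G' response are made up.
     */
    #include <stdint.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    #define NEGOTIATE_GSS_CODE  ((1234 << 16) | 5680)   /* hypothetical */

    static int
    negotiate_gss_encryption(int sock)
    {
        uint32_t    packet[2];
        char        response;

        packet[0] = htonl(8);                   /* self-inclusive length */
        packet[1] = htonl(NEGOTIATE_GSS_CODE);  /* magic "protocol version" */

        if (send(sock, packet, sizeof(packet), 0) != (ssize_t) sizeof(packet))
            return -1;

        /*
         * The server answers with one byte before anything else, so the
         * startup packet (user and database names included) need never
         * cross the wire in the clear unless the client settles for that.
         */
        if (recv(sock, &response, 1, 0) != 1)
            return -1;

        return (response == 'G') ? 1 : 0;   /* 1 = proceed with GSS wrap */
    }

The point is that the startup packet itself only goes out after this
exchange decides whether the channel is to be encrypted.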
(If I'm totally misunderstanding the context here, my apologies. I've
not been following this thread.)
>> I'm aware that enlargeStringInfo() does check and handle the case where
>> the length ends up >1G, but that feels a bit grotty to me -- are you sure
>> you want the generic enlargeStringInfo() to handle that case?
> This is a good point. We definitely have to draw the line somewhere; 1G
> is a high upper bound. Digging around the server code, I don't see a
> precedent for a good size to stop at. There's
> PG_MAX_AUTH_TOKEN_LENGTH, which is 65k, and the password length in
> auth.c is 1k. Beyond that, SocketBackend() calls pq_getmessage() with
> no maximum length, leaving the enlargeStringInfo() size restrictions
> (wrapped in a PG_TRY()) as the only control.
Note that SocketBackend() only runs *after* we've accepted the user as
authorized. We should be a lot charier of what we're willing to accept
before authorization, IMO. Note MAX_STARTUP_PACKET_LENGTH, which is
only 10K.
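
In other words, the pre-auth token read ought to enforce its own tight
cap rather than leaning on enlargeStringInfo()'s 1G ceiling. Something
along these lines, say (a sketch assuming the usual backend headers;
PG_MAX_GSS_TOKEN_LENGTH and read_gss_token() are hypothetical names,
following the PG_MAX_AUTH_TOKEN_LENGTH pattern in auth.c):

    /*
     * Sketch only (assumes backend headers: libpq/libpq.h,
     * lib/stringinfo.h, utils/elog.h).  PG_MAX_GSS_TOKEN_LENGTH and
     * read_gss_token() are made-up names.
     */
    #define PG_MAX_GSS_TOKEN_LENGTH   65535

    static int
    read_gss_token(StringInfo buf)
    {
        int32       len;

        if (pq_getbytes((char *) &len, 4))
            return STATUS_ERROR;
        len = ntohl(len);

        /*
         * Refuse oversized tokens up front, before enlargeStringInfo()
         * ever sees the request -- an unauthenticated client should not
         * get to make us allocate anywhere near 1G.
         */
        if (len < 0 || len > PG_MAX_GSS_TOKEN_LENGTH)
        {
            ereport(COMMERROR,
                    (errcode(ERRCODE_PROTOCOL_VIOLATION),
                     errmsg("GSSAPI token length %d is out of range", len)));
            return STATUS_ERROR;
        }

        resetStringInfo(buf);
        enlargeStringInfo(buf, len);
        if (pq_getbytes(buf->data, len))
            return STATUS_ERROR;
        buf->len = len;
        buf->data[len] = '\0';

        return STATUS_OK;
    }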
regards, tom lane