Re: libpq compression (part 3) - Mailing list pgsql-hackers

From Jacob Champion
Subject Re: libpq compression (part 3)
Date
Msg-id CAOYmi+=b_nyeUVn7BdzSSW85PTP1f405K4TBqRN6W881L=QSUQ@mail.gmail.com
In response to Re: libpq compression (part 3)  (Jacob Burroughs <jburroughs@instructure.com>)
List pgsql-hackers
On Tue, May 21, 2024 at 9:14 AM Jacob Burroughs
<jburroughs@instructure.com> wrote:
> On Tue, May 21, 2024 at 10:43 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
> > To help get everyone on the same page I wanted to list all the
> > security concerns in one place:
> >
> > 1. Triggering excessive CPU usage before authentication, by asking for
> > very high compression levels
> > 2. Triggering excessive memory/CPU usage before authentication, by a
> > client sending a zipbomb
> > 3. Triggering excessive CPU after authentication, by asking for a very
> > high compression level
> > 4. Triggering excessive memory/CPU after authentication due to
> > zipbombs (i.e. a small amount of data expanding into lots of data)
> > 5. CRIME style leakage of information about encrypted data
> >
> > 1 & 2 can easily be solved by not allowing any authentication packets
> > to be compressed. This also has benefits for 5.
>
> This is already addressed by only compressing certain message types.
> If we think it is important that the server reject compressed packets
> of other types I can add that, but it seemed reasonable to just make
> the client never send such packets compressed.

If the server doesn't reject compressed packets pre-authentication,
then case 2 isn't mitigated. (I haven't proven how risky that case is
yet, to be clear.) In other words: if the threat model is that a
client can attack us, we shouldn't assume that it will attack us
politely.
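
To make that concrete, here is a rough sketch of the kind of
server-side enforcement I have in mind: refuse to even attempt
decompression until the session has authenticated, rather than trusting
the client to hold up its end. (The names and the message-type byte
below are made up for illustration; they are not taken from the patch.)

#include <stdbool.h>
#include <stdio.h>

#define MSG_COMPRESSED_DATA 'z'    /* placeholder message-type byte */

typedef struct SessionState
{
    bool    authenticated;         /* set once authentication succeeds */
} SessionState;

/* Return true if the message may be processed, false to reject it. */
static bool
accept_message(const SessionState *session, char msg_type)
{
    if (msg_type == MSG_COMPRESSED_DATA && !session->authenticated)
    {
        /* Reject instead of decompressing: closes the pre-auth zipbomb hole. */
        fprintf(stderr, "compressed message received before authentication\n");
        return false;
    }
    return true;
}

int
main(void)
{
    SessionState s = {.authenticated = false};

    printf("%d\n", accept_message(&s, 'z'));   /* 0: rejected pre-auth */
    s.authenticated = true;
    printf("%d\n", accept_message(&s, 'z'));   /* 1: allowed after auth */
    return 0;
}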

> > 4 would require some safety limits on the amount of data that a
> > (small) compressed message can be decompressed to, and stop
> > decompression of that message once that limit is hit. What that limit
> > should be seems hard to choose though. A few ideas:
> > a. The size of the message reported by the uncompressed header. This
> > would mean that at most 4GB would be decompressed, since the maximum
> > message length is 4GB (limited by the 32-bit message length field)
> > b. Allow servers to specify a maximum decompressed message length
> > lower than this 4GB, e.g. messages of more than 100MB uncompressed
> > should not be allowed.
>
> Because we are using streaming decompression, this is much less of an
> issue than for things that decompress wholesale onto disk/into memory.

(I agree in general, but since you're designing a protocol extension,
IMO it's not enough that your implementation happens to mitigate
risks. We more or less have to bake those mitigations into the
specification of the extension, because things that aren't servers
have to decompress now. Similar to RFC security considerations.)
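
And for case 4 specifically, that spec language can be as simple as
requiring the decompression loop to enforce a hard cap on output bytes,
no matter what the compressed stream claims. A rough zlib-based sketch
(the limit, helper name, and callback are illustrative only, not the
patch's code):

#include <stdbool.h>
#include <string.h>
#include <zlib.h>

#define MAX_DECOMPRESSED_SIZE (100 * 1024 * 1024)  /* illustrative cap, per idea (b) */

/* Returns true on success, false on error or once the cap is exceeded. */
static bool
inflate_with_limit(const unsigned char *in, size_t in_len,
                   void (*consume)(const unsigned char *buf, size_t len))
{
    z_stream        zs;
    unsigned char   out[8192];
    size_t          total_out = 0;
    int             rc;

    memset(&zs, 0, sizeof(zs));
    if (inflateInit(&zs) != Z_OK)
        return false;

    zs.next_in = (unsigned char *) in;
    zs.avail_in = (uInt) in_len;

    do
    {
        size_t  produced;

        zs.next_out = out;
        zs.avail_out = sizeof(out);

        rc = inflate(&zs, Z_NO_FLUSH);
        if (rc != Z_OK && rc != Z_STREAM_END)
        {
            inflateEnd(&zs);
            return false;
        }

        produced = sizeof(out) - zs.avail_out;
        total_out += produced;
        if (total_out > MAX_DECOMPRESSED_SIZE)
        {
            /* Stop mid-stream: the message expanded past the server's limit. */
            inflateEnd(&zs);
            return false;
        }
        consume(out, produced);
    } while (rc != Z_STREAM_END);

    inflateEnd(&zs);
    return true;
}

The important property is that the check happens while decompressing,
so a tiny compressed message can never balloon past the limit before
anyone notices.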

--Jacob


