Re: libpq compression - Mailing list pgsql-hackers

From Konstantin Knizhnik
Subject Re: libpq compression
Date
Msg-id 1b2d418b-f3b1-620e-8456-145d6e943da6@postgrespro.ru
In response to RE: libpq compression  ("Iwata, Aya" <iwata.aya@jp.fujitsu.com>)
Responses RE: libpq compression
List pgsql-hackers

On 09.01.2019 13:25, Iwata, Aya wrote:
> Hi,
>
>>> I agree with the critiques from Robbie Harwood and Michael Paquier
>>> that the way in that compression is being hooked into the existing
>>> architecture looks like a kludge.  I'm not sure I know exactly how it
>>> should be done, but the current approach doesn't look natural; it
>>> looks like it was bolted on.
>> After some time spent reading this patch and investigating the points
>> mentioned in the discussion, I tend to agree with that. As far as I can see,
>> this is probably the biggest disagreement here, and it keeps things from
>> progressing. I'm interested in this feature, so if Konstantin doesn't mind,
>> I'll post in the near future (after I wrap up the current CF) an updated
>> patch I'm working on right now to propose another way of incorporating
>> compression. For now I'm moving the patch to the next CF.
> This thread seems to be stopped.
> In the last e-mail, Dmitry suggested posting an updated patch that incorporates compression in another way, and as far as I can see, he has not posted it yet. (It may be because the author has not responded.)
> I understand there is a big disagreement here; however, the status is "Needs review".
> There has been no review since the author updated the patch to v9, so I will do one.
>
> About the patch: please update it so that it applies to current master. I could not test it.
>
> About the documentation: there are some typos, please check them. I will leave the review of the wording to someone else, because I am not so good at English.
>
> When you add a new protocol message, it needs to carry the information "Length of message contents in bytes, including self.", like the other messages.
> The message provides the supported compression algorithm as a Byte1. I think it would be better to provide it as a list, as the NegotiateProtocolVersion protocol message does.
>
> I quickly saw code changes.
>
> +    nread = conn->zstream
> +        ? zpq_read(conn->zstream, conn->inBuffer + conn->inEnd,
> +                   conn->inBufSize - conn->inEnd, &processed)
> +        : pqsecure_read(conn, conn->inBuffer + conn->inEnd,
> +                        conn->inBufSize - conn->inEnd);
>
> How about combining them in a #define macro or a helper? The same logic appears in two places.
>
> Have you considered anything about memory control?
> A compression algorithm typically keeps its dictionary in memory, so I think some reset or release mechanism is needed.

Thank you for the review.
Attached please find a rebased version of the patch.
I have fixed all the issues you reported except using a list of supported 
compression algorithms.
That would require an extra round of communication between client and 
server to agree on the compression algorithm to use.
I am still not sure whether it is a good idea to let the user 
explicitly specify the compression algorithm.
Right now the streaming compression algorithm is hardcoded and depends 
on the --use-zstd or --use-zlib configuration options.
If client and server were built with the same options, then they are 
able to use compression.

Concerning memory control: zpq_free(ZpqStream *) is called from the 
socket_close() function, which should deallocate all memory used by the 
compressor:

void
zpq_free(ZpqStream *zs)
{
     if (zs != NULL)
     {
         ZSTD_freeCStream(zs->tx_stream);
         ZSTD_freeDStream(zs->rx_stream);
         free(zs);
     }
}

-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

