Re: Force streaming every change in logical decoding - Mailing list pgsql-hackers

From shveta malik
Subject Re: Force streaming every change in logical decoding
Date
Msg-id CAJpy0uD9sFy33QiJNz7G3UBfqb6w1BgWHkMkntTBOyoY5Rv1yA@mail.gmail.com
In response to RE: Force streaming every change in logical decoding  ("Hayato Kuroda (Fujitsu)" <kuroda.hayato@fujitsu.com>)
Responses RE: Force streaming every change in logical decoding
List pgsql-hackers

Going with 'logical_decoding_work_mem' seems a reasonable solution, but since we would be
mixing developer and production functionality in one GUC, there is a slight risk that
customers/DBAs may end up setting it to 0, forgetting about it, and thus hampering the
system's performance. I have seen many such cases at my previous org.

Adding a new developer parameter seems slightly safer, considering we already support one
such category in postgres. It can be along the same lines as 'force_parallel_mode'.
It will be a purely developer GUC, plus if we want to extend something in the future to add/automate
heavier test cases or any other streaming-related dev option, we can extend the same parameter without
disturbing the production one (see force_parallel_mode=regress for reference).
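
For illustration, such a GUC could be registered the same way force_parallel_mode is in
guc.c. This is only a rough sketch to show the shape; the name 'force_streaming_mode',
the enum, and the variable are hypothetical, not existing code:

    /* hypothetical enum and variable, modeled on ForceParallelMode */
    typedef enum
    {
        FORCE_STREAMING_OFF,
        FORCE_STREAMING_ON,
        FORCE_STREAMING_REGRESS
    } ForceStreamingMode;

    static int  force_streaming_mode = FORCE_STREAMING_OFF;

    static const struct config_enum_entry force_streaming_mode_options[] = {
        {"off", FORCE_STREAMING_OFF, false},
        {"on", FORCE_STREAMING_ON, false},
        {"regress", FORCE_STREAMING_REGRESS, false},
        {NULL, 0, false}
    };

    /* entry that would go into ConfigureNamesEnum[] in guc.c */
    {
        {"force_streaming_mode", PGC_USERSET, DEVELOPER_OPTIONS,
            gettext_noop("Forces streaming of each change in logical decoding."),
            NULL,
            GUC_NOT_IN_SAMPLE
        },
        &force_streaming_mode,
        FORCE_STREAMING_OFF, force_streaming_mode_options,
        NULL, NULL, NULL
    },

Putting it under DEVELOPER_OPTIONS with GUC_NOT_IN_SAMPLE keeps it out of
postgresql.conf.sample, so production users are unlikely to stumble on it.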

thanks
Shveta


On Wed, Dec 21, 2022 at 11:25 AM Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:
Dear Amit,

> The other possibility to achieve what you are saying is that we allow
> a minimum value of logical_decoding_work_mem as 0 which would mean
> stream or serialize each change depending on whether the streaming
> option is enabled.

I understood that logical_decoding_work_mem may double as both a normal option and a
developer option. I think yours is smarter because we can reduce the number of GUCs.

> I think we normally don't allow a minimum value
> below a certain threshold for other *_work_mem parameters (like
> maintenance_work_mem, work_mem), so we have followed the same here.
> And, I think it makes sense from the user's perspective because below
> a certain threshold it will just add overhead by either writing small
> changes to the disk or by sending those over the network. However, it
> can be quite useful for testing/debugging. So, not sure, if we should
> restrict setting logical_decoding_work_mem below a certain threshold.
> What do you think?

You mean to say that there is a possibility that users may set a small value without deep
consideration, right? If so, how about using an approach like that of autovacuum_work_mem?

autovacuum_work_mem has a range of [-1, MAX_KILOBYTES], and -1 means that it follows
maintenance_work_mem. If it is set to a small value like 5KB, its working memory is rounded
up to 1024KB. See check_autovacuum_work_mem().
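
For reference, check_autovacuum_work_mem() in src/backend/postmaster/autovacuum.c does
roughly this (paraphrased from memory, not copied verbatim):

    #include "utils/guc.h"      /* GucSource, check-hook signature */

    bool
    check_autovacuum_work_mem(int *newval, void **extra, GucSource source)
    {
        /* -1 means "fall back to maintenance_work_mem" */
        if (*newval == -1)
            return true;

        /* manually-set values are silently clamped to at least 1MB */
        if (*newval < 1024)
            *newval = 1024;

        return true;
    }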

Based on that, I suggest the following (a rough sketch of a possible check hook appears
after the list). Can these solve the problem you mentioned?

* If logical_decoding_work_mem is set to 0, all transactions are streamed or serialized
  on the publisher.
* If logical_decoding_work_mem is set within [1, 63KB], the value is rounded up or an
  ERROR is raised.
* If logical_decoding_work_mem is set greater than or equal to 64KB, the set value
  is used.
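
A check hook could implement those rules along these lines; the function is hypothetical
and assumes the GUC keeps its usual kB units, with rounding chosen over raising an ERROR:

    #include "utils/guc.h"      /* GucSource, check-hook signature */

    bool
    check_logical_decoding_work_mem(int *newval, void **extra, GucSource source)
    {
        /* 0: force every change to be streamed or serialized on the publisher */
        if (*newval == 0)
            return true;

        /* [1, 63] kB: round up to the existing 64kB minimum */
        if (*newval < 64)
            *newval = 64;

        /* >= 64kB: use the value as given */
        return true;
    }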

Best Regards,
Hayato Kuroda
FUJITSU LIMITED
