Thread: Assertion failure while streaming toasted data

Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:
Hi,

While working on an output plugin that uses streaming protocol, I hit an assertion failure. Further investigations revealed a possible bug in core Postgres. This must be new to PG14 since streaming support is new to this release. I extended the test_decoding regression test to demonstrate the failure. PFA

```
2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream STATEMENT:  SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '
1', 'stream-changes', '1');
TRAP: FailedAssertion("txn->size == 0", File: "reorderbuffer.c", Line: 3476, PID: 68321)
```

From my preliminary analysis, it looks like we fail to adjust the memory accounting after streaming toasted tuples. More concretely, after `ReorderBufferProcessPartialChange()` processes the in-progress transaction, `ReorderBufferTruncateTXN()` truncates the accumulated changes in the transaction, but fails to adjust the buffer size for toast chunks. Maybe we are missing a call to `ReorderBufferToastReset()` somewhere?

From what I see, the assertion only triggers when data is inserted via COPY (multi-insert). 
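For context, the shape of the reproducer is roughly the following (a sketch only; the attached test has the real details, and the table, slot, and payload sizes here are illustrative):

```sql
-- Illustrative sketch of the failing scenario (names and sizes are made up;
-- see the attached regression test for the actual reproducer).
CREATE TABLE toasted (id int PRIMARY KEY, other text, data text);
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot',
                                                      'test_decoding');

-- COPY goes through heap_multi_insert(), which emits xl_multi_insert WAL
-- records; a large enough value in "data" is stored out of line via TOAST.
COPY toasted (id, other, data) FROM stdin;
9001	bbb	<a few kB of text, large enough to be toasted>
9002	ddd	<a few kB of text, large enough to be toasted>
\.

-- With 'stream-changes' enabled (and logical_decoding_work_mem low enough
-- that the in-progress transaction gets streamed), an assert-enabled build
-- trips the TRAP shown above.
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
    'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');
```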

Let me know if anything else is needed to reproduce this.

Thanks,
Pavan

--
Pavan Deolasee 

Attachment

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 12:06 PM Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:
>
> Hi,
>
> While working on an output plugin that uses streaming protocol, I hit an assertion failure. Further investigations revealed a possible bug in core Postgres. This must be new to PG14 since streaming support is new to this release. I extended the test_decoding regression test to demonstrate the failure. PFA
>
> ```
> 2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream STATEMENT:  SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '
> 1', 'stream-changes', '1');
> TRAP: FailedAssertion("txn->size == 0", File: "reorderbuffer.c", Line: 3476, PID: 68321)
> ```
>
> From my preliminary analysis, it looks like we fail to adjust the memory accounting after streaming toasted tuples. More concretely, after `ReorderBufferProcessPartialChange()` processes the in-progress transaction, `ReorderBufferTruncateTXN()` truncates the accumulated changes in the transaction, but fails to adjust the buffer size for toast chunks. Maybe we are missing a call to `ReorderBufferToastReset()` somewhere?
>
> From what I see, the assertion only triggers when data is inserted via COPY (multi-insert).
>
> Let me know if anything else is needed to reproduce this.

Thanks, I will look into this and let you know if I need some help.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Michael Paquier
Date:
On Tue, May 25, 2021 at 12:06:38PM +0530, Pavan Deolasee wrote:
> While working on an output plugin that uses streaming protocol, I hit an
> assertion failure. Further investigations revealed a possible bug in core
> Postgres. This must be new to PG14 since streaming support is new to this
> release. I extended the test_decoding regression test to demonstrate the
> failure. PFA

Thanks, Pavan.  I have added an open item for this one.
--
Michael

Attachment

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 12:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 12:06 PM Pavan Deolasee
> <pavan.deolasee@gmail.com> wrote:
> >
> > Hi,
> >
> > While working on an output plugin that uses streaming protocol, I hit an assertion failure. Further investigations revealed a possible bug in core Postgres. This must be new to PG14 since streaming support is new to this release. I extended the test_decoding regression test to demonstrate the failure. PFA
> >
> > ```
> > 2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream STATEMENT:  SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '
> > 1', 'stream-changes', '1');
> > TRAP: FailedAssertion("txn->size == 0", File: "reorderbuffer.c", Line: 3476, PID: 68321)
> > ```
> >
> > From my preliminary analysis, it looks like we fail to adjust the memory accounting after streaming toasted tuples. More concretely, after `ReorderBufferProcessPartialChange()` processes the in-progress transaction, `ReorderBufferTruncateTXN()` truncates the accumulated changes in the transaction, but fails to adjust the buffer size for toast chunks. Maybe we are missing a call to `ReorderBufferToastReset()` somewhere?
> >
> > From what I see, the assertion only triggers when data is inserted via COPY (multi-insert).
> >
> > Let me know if anything else is needed to reproduce this.
>
> Thanks, I will look into this and let you know if I need some help.

I have identified the cause of the issue. Basically, the reason is that if
we are doing a multi-insert operation, we don't set the toast cleanup flag
until we get the last tuple of the xl_multi_insert record [1].  Now, with
streaming, we can process the transaction in the middle of the multi-insert,
but while doing that the "change->data.tp.clear_toast_afterwards" flag is
set to false in all the tuples in this stream. And due to this we will
not clean up the toast.

One simple fix could be that we just clean the toast memory if we are
processing in streaming mode (as shown in the attached patch).
But maybe that is not the best-optimized solution; ideally, we would
save the toast until we process the last tuple of the multi-insert in the
current stream, but I think that's not an easy thing to identify.

[1]
```
/*
 * Reset toast reassembly state only after the last row in the last
 * xl_multi_insert_tuple record emitted by one heap_multi_insert()
 * call.
 */
if (xlrec->flags & XLH_INSERT_LAST_IN_MULTI &&
    (i + 1) == xlrec->ntuples)
    change->data.tp.clear_toast_afterwards = true;
else
    change->data.tp.clear_toast_afterwards = false;
```

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Tue, May 25, 2021 at 1:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:


I have identified the cause of the issue. Basically, the reason is that if
we are doing a multi-insert operation, we don't set the toast cleanup flag
until we get the last tuple of the xl_multi_insert record [1].  Now, with
streaming, we can process the transaction in the middle of the multi-insert,
but while doing that the "change->data.tp.clear_toast_afterwards" flag is
set to false in all the tuples in this stream. And due to this we will
not clean up the toast.

Thanks. That matches my understanding too.
 

One simple fix could be that we can just clean the toast memory if we
are processing in the streaming mode (as shown in the attached patch).

I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using
my output plugin gets NULL values for the toast columns.  It's a bit hard to demonstrate that with the
test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream
node receiving and applying changes, it will be useful to test with that.

Thanks,
Pavan
 
--
 Pavan Deolasee                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
>
> I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using
> my output plugin gets NULL values for the toast columns.  It's a bit hard to demonstrate that with the
> test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream
> node receiving and applying changes, it will be useful to test with that.

Okay, I will test that.  Thanks.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Tue, May 25, 2021 at 1:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
>
> I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using
> my output plugin gets NULL values for the toast columns.  It's a bit hard to demonstrate that with the
> test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream
> node receiving and applying changes, it will be useful to test with that.

Okay, I will test that.  Thanks.


I modified test_decoding to print the tuples (like in the non-streamed case) and included your proposed fix. PFA

When the transaction is streamed, I see:
```
+ opening a streamed block for transaction
+ table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'
+ table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'
+ table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum
<snipped>
```

For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests as NULL data when the downstream handles it.

Thanks,
Pavan


--
Pavan Deolasee

Attachment

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 1:59 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

>
> On Tue, May 25, 2021 at 1:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
>> >
>> > I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using
>> > my output plugin gets NULL values for the toast columns.  It's a bit hard to demonstrate that with the
>> > test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream
>> > node receiving and applying changes, it will be useful to test with that.
>>
>> Okay, I will test that.  Thanks.
>>
>
> I modified test_decoding to print the tuples (like in the non-streamed case) and included your proposed fix. PFA
>
> When the transaction is streamed, I see:
> ```
> + opening a streamed block for transaction
> + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'
> + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'
> + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum
> <snipped>
> ```
>
> For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.
 

Yes, I am able to reproduce this. Basically, until we get the last
tuple of the multi-insert we cannot clear the toast data, otherwise we
can never form a complete tuple.  So the only possible fix I can think
of is to consider the multi-insert WAL without the final multi-insert
tuple as partial data; then we will avoid streaming until we get the
complete WAL of one multi-insert.  I am working on the patch to fix
this; I will share that in some time.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 2:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> > When the transaction is streamed, I see:
> > ```
> > + opening a streamed block for transaction
> > + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'
> > + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'
> > + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum
> > <snipped>
> > ```
> >
> > For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.
 
>
> Yes, I am able to reproduce this, basically, until we get the last
> tuple of the multi insert we can not clear the toast data otherwise we
> can never form a complete tuple.  So the only possible fix I can think
> of is to consider the multi-insert WAL without the final multi-insert
> tuple as partial data then we will avoid streaming until we get the
> complete WAL of one multi-insert.  I am working on the patch to fix
> this, I will share that in some time.

The attached patch should fix the issue; now the output is like below:

```
opening a streamed block for transaction
table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'
table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'
table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
<repeat>
```

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Tue, May 25, 2021 at 2:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

>
> Yes, I am able to reproduce this, basically, until we get the last
> tuple of the multi insert we can not clear the toast data otherwise we
> can never form a complete tuple.  So the only possible fix I can think
> of is to consider the multi-insert WAL without the final multi-insert
> tuple as partial data then we will avoid streaming until we get the
> complete WAL of one multi-insert.  I am working on the patch to fix
> this, I will share that in some time.

The attached patch should fix the issue, now the output is like below


Thanks. This looks fine to me. We should still be able to stream multi-insert transactions (COPY) as and when the copy buffer becomes full and is flushed. That seems to be a reasonable restriction to me.

We should incorporate the regression test in the final patch. I am not entirely sure if what I have done is acceptable (or even works in all scenarios). We could possibly have a long list of tuples instead of doing the exponential magic. Or we should consider lowering the min value for logical_decoding_work_mem and run these tests with a much lower value. In fact, that's how I caught the problem in the first place. I had deliberately lowered the value to 1kB so that streaming code kicks in very often and even for small transactions.
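For what it's worth, without patching the compile-time minimum, a similar effect can be approximated per session, since the GUC is user-settable down to its 64kB floor (a sketch; the slot name is illustrative):

```sql
-- logical_decoding_work_mem can be set per session; 64kB is its floor.
-- Decoding at the floor makes the streaming code kick in for fairly small
-- transactions, which is what exposes this class of bug.
SET logical_decoding_work_mem = '64kB';
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
    'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');
```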

Thanks,
Pavan 

--
Pavan Deolasee
EnterpriseDB: https://www.enterprisedb.com

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 3:33 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

>> The attached patch should fix the issue, now the output is like below
>>
>
> Thanks. This looks fine to me. We should still be able to stream multi-insert transactions (COPY) as and when the copy buffer becomes full and is flushed. That seems to be a reasonable restriction to me.
>
> We should incorporate the regression test in the final patch. I am not entirely sure if what I have done is acceptable (or even works in all scenarios). We could possibly have a long list of tuples instead of doing the exponential magic. Or we should consider lowering the min value for logical_decoding_work_mem and run these tests with a much lower value. In fact, that's how I caught the problem in the first place. I had deliberately lowered the value to 1kB so that streaming code kicks in very often and even for small transactions.

Thanks for confirming, I will come up with the test and add that to
the next version of the patch.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Amit Kapila
Date:
On Tue, May 25, 2021 at 2:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 2:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > > When the transaction is streamed, I see:
> > > ```
> > > + opening a streamed block for transaction
> > > + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'
> > > + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'
> > > + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum
> > > <snipped>
> > > ```
> > >
> > > For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.
 
> >
> > Yes, I am able to reproduce this, basically, until we get the last
> > tuple of the multi insert we can not clear the toast data otherwise we
> > can never form a complete tuple.  So the only possible fix I can think
> > of is to consider the multi-insert WAL without the final multi-insert
> > tuple as partial data then we will avoid streaming until we get the
> > complete WAL of one multi-insert.

Yeah, that sounds reasonable.

> >  I am working on the patch to fix
> > this, I will share that in some time.
>
> The attached patch should fix the issue, now the output is like below
>

Your patch will fix the reported scenario, but I don't like the way the
multi_insert flag is used to detect an incomplete tuple. One problem
could be that even when there are no toast inserts, it won't allow
streaming unless we get the last tuple of the multi-insert WAL. How about
changing the code such that when we are clearing the toast flag, we
additionally check the 'clear_toast_afterwards' flag?

-- 
With Regards,
Amit Kapila.



Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> Your patch will fix the reported scenario but I don't like the way
> multi_insert flag is used to detect incomplete tuple. One problem
> could be that even when there are no toast inserts, it won't allow to
> stream unless we get the last tuple of multi insert WAL. How about
> changing the code such that when we are clearing the toast flag, we
> additionally check 'clear_toast_afterwards' flag?

Yes, that can be done, I will fix this in the next version of the patch.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > Your patch will fix the reported scenario but I don't like the way
> > multi_insert flag is used to detect incomplete tuple. One problem
> > could be that even when there are no toast inserts, it won't allow to
> > stream unless we get the last tuple of multi insert WAL. How about
> > changing the code such that when we are clearing the toast flag, we
> > additionally check 'clear_toast_afterwards' flag?
>
> Yes, that can be done, I will fix this in the next version of the patch.

I have fixed it as per the suggestion, and as per the offlist discussion,
I have merged the TOAST and SPEC insert flags and created a single
PARTIAL_CHANGE flag.
I have also added a test case for this.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I have also added a test case for this.


Is that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and checking if the assertion hits. If so, we are good to go.

Thanks,
Pavan
 
--
Pavan Deolasee
EnterpriseDB: https://www.enterprisedb.com

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Tue, May 25, 2021 at 6:50 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

> Is that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and check if the assertion hits. If so, we are good to go.
>
Yes, it is reproducing without the fix; I already tested it.  Basically, I
am using the "stream_test" table in the "copy stream_test to stdout"
command, which already has 20 toasted tuples, each of size 6000 bytes,
so that is big enough to cross 64kB.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Tue, May 25, 2021 at 6:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
On Tue, May 25, 2021 at 6:50 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

> Is that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and check if the assertion hits. If so, we are good to go.
>
Yes, it is reproducing without the fix; I already tested it.  Basically, I
am using the "stream_test" table in the "copy stream_test to stdout"
command, which already has 20 toasted tuples, each of size 6000 bytes,
so that is big enough to cross 64kB.


Ok, great! Thanks for confirming.

Thanks,
Pavan 

--
Pavan Deolasee
EnterpriseDB: https://www.enterprisedb.com

Re: Assertion failure while streaming toasted data

From
Amit Kapila
Date:
On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > Your patch will fix the reported scenario but I don't like the way
> > > multi_insert flag is used to detect incomplete tuple. One problem
> > > could be that even when there are no toast inserts, it won't allow to
> > > stream unless we get the last tuple of multi insert WAL. How about
> > > changing the code such that when we are clearing the toast flag, we
> > > additionally check 'clear_toast_afterwards' flag?
> >
> > Yes, that can be done, I will fix this in the next version of the patch.
>
> I have fixed as per the suggestion, and as per the offlist discussion,
> I have merged the TOAST and SPEC insert flag and created a single
> PARTIAL_CHANGE flag.
> I have also added a test case for this.
>

When I am trying to execute the new test independently on Windows, I
am getting the below error:
'psql' is not recognized as an internal or external command,
operable program or batch file.
2021-05-26 09:09:24.399 IST [3188] ERROR:  program "psql -At -c "copy
stream_test to stdout" contrib_regression" failed
2021-05-26 09:09:24.399 IST [3188] DETAIL:  child process exited with
exit code 1
2021-05-26 09:09:24.399 IST [3188] STATEMENT:  COPY stream_test FROM
program 'psql -At -c "copy stream_test to stdout" contrib_regression';

I have followed the below steps:
1. Run the server.
2. From the command prompt, in the test_decoding folder, execute:
pg_regress.exe --bindir=d:/WorkSpace/PostgreSQL/master/installation/bin
--dbname=contrib_regression stream

I searched and didn't find any similar existing tests. Can we think of
any other way to test this code path? We already have one copy test in
toast.sql, isn't it possible to write a similar test here?

-- 
With Regards,
Amit Kapila.



Re: Assertion failure while streaming toasted data

From
Pavan Deolasee
Date:


On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:



I searched and didn't find any similar existing tests. Can we think of
any other way to test this code path? We already have one copy test in
toast.sql, isn't it possible to write a similar test here?


Yeah, I wasn't very confident about this either. I just wrote it to reduce the test footprint in the reproducer. I think we can simply include a lot more data and do the copy via stdin.
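A sketch of that approach (sizes are illustrative; the point is just that COPY, unlike plain INSERT, produces xl_multi_insert records, and the payload must exceed logical_decoding_work_mem so streaming kicks in):

```sql
-- Stage ~20 rows of ~6kB text, then reload them with COPY so that the
-- decoded transaction contains multi-insert WAL with toast chunks.
-- (The real test may need less-compressible data to force out-of-line
-- TOAST storage; repeat() is just a placeholder here.)
CREATE TABLE stream_test (data text);
INSERT INTO stream_test SELECT repeat('a', 6000) FROM generate_series(1, 20);

-- In a pg_regress SQL script this could be a \copy to/from pair (or inline
-- stdin data), avoiding any dependency on external programs:
\copy stream_test TO 'stream_test.data'
TRUNCATE stream_test;
\copy stream_test FROM 'stream_test.data'
```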

Alternatively, we can reduce the logical_decoding_work_mem minimum value and run the test with a smaller value. But that same GUC is used to decide spilling the txn to disk as well, so I am not sure if reducing the compile-time default is acceptable or not.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB: https://www.enterprisedb.com

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> > >
> > > On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > >
> > > > Your patch will fix the reported scenario but I don't like the way
> > > > multi_insert flag is used to detect incomplete tuple. One problem
> > > > could be that even when there are no toast inserts, it won't allow to
> > > > stream unless we get the last tuple of multi insert WAL. How about
> > > > changing the code such that when we are clearing the toast flag, we
> > > > additionally check 'clear_toast_afterwards' flag?
> > >
> > > Yes, that can be done, I will fix this in the next version of the patch.
> >
> > I have fixed as per the suggestion, and as per the offlist discussion,
> > I have merged the TOAST and SPEC insert flag and created a single
> > PARTIAL_CHANGE flag.
> > I have also added a test case for this.
> >
>
> When I am trying to execute the new test independently in windows, I
> am getting the below error:
> 'psql' is not recognized as an internal or external command,
> operable program or batch file.
> 2021-05-26 09:09:24.399 IST [3188] ERROR:  program "psql -At -c "copy
> stream_test to stdout" contrib_regression" failed
> 2021-05-26 09:09:24.399 IST [3188] DETAIL:  child process exited with
> exit code 1
> 2021-05-26 09:09:24.399 IST [3188] STATEMENT:  COPY stream_test FROM
> program 'psql -At -c "copy stream_test to stdout" contrib_regression';
>
> I have followed below steps:
> 1. Run the server
> 2. from command prompt, in test_decoding folder, execute,
> pg_regress.exe --bindir=d:/WorkSpace/PostgreSQL/master/installation/bin
> --dbname=contrib_regression stream

Ok

> I searched and didn't find any similar existing tests. Can we think of
> any other way to test this code path? We already have one copy test in
> toast.sql, isn't it possible to write a similar test here?

I will check that and let you know.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Wed, May 26, 2021 at 11:37 AM Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:
>

>
> Yeah, I wasn't very confident about this either. I just wrote it to reduce the test footprint in the reproducer. I think we can simply include a lot more data and do the copy via stdin.
 

That is one way and if we don't find any better way we can do that.

> Alternatively, we can reduce logical_decoding_work_mem minimum value and run the test with a smaller value. But that same GUC is used to decide spilling txn to disk as well. So I am not sure if reducing the compile time default is acceptable or not.
 

In the test_decoding config, it is already set to the minimum value, which is 64kB.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >

>
> > I searched and didn't find any similar existing tests. Can we think of
> > any other way to test this code path? We already have one copy test in
> > toast.sql, isn't it possible to write a similar test here?
>
> I will check that and let you know.

I have followed this approach to write the test.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: Assertion failure while streaming toasted data

From
Amit Kapila
Date:
On Wed, May 26, 2021 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
>
> >
> > > I searched and didn't find any similar existing tests. Can we think of
> > > any other way to test this code path? We already have one copy test in
> > > toast.sql, isn't it possible to write a similar test here?
> >
> > I will check that and let you know.
>
> I have followed this approach to write the test.
>

The changes look good to me. I have made minor modifications in the
comments; see attached. Let me know what you think.

-- 
With Regards,
Amit Kapila.

Attachment

Re: Assertion failure while streaming toasted data

From
Dilip Kumar
Date:
On Wed, May 26, 2021 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, May 26, 2021 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> > >
> > > On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > >
> >
> > >
> > > > I searched and didn't find any similar existing tests. Can we think of
> > > > any other way to test this code path? We already have one copy test in
> > > > toast.sql, isn't it possible to write a similar test here?
> > >
> > > I will check that and let you know.
> >
> > I have followed this approach to write the test.
> >
>
> The changes look good to me. I have made minor modifications in the
> comments, see attached. Let me know what do you think?

Your modification looks good to me.  Thanks!

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com



Re: Assertion failure while streaming toasted data

From
Amit Kapila
Date:
On Tue, May 25, 2021 at 12:51 PM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Tue, May 25, 2021 at 12:06:38PM +0530, Pavan Deolasee wrote:
> > While working on an output plugin that uses streaming protocol, I hit an
> > assertion failure. Further investigations revealed a possible bug in core
> > Postgres. This must be new to PG14 since streaming support is new to this
> > release. I extended the test_decoding regression test to demonstrate the
> > failure. PFA
>
> Thanks, Pavan.  I have added an open item for this one.
>

I have pushed this a few days ago [1] and closed the open item
corresponding to it.

[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6f4bdf81529fdaf6744875b0be99ecb9bfb3b7e0

-- 
With Regards,
Amit Kapila.