Thread: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Tomas Vondra
Date:
Hi,

It seems we have a pretty annoying problem with logical decoding when
performing VACUUM FULL / CLUSTER on a table with toast-ed data.

The trouble is that while the rewritten heap is WAL-logged using
XLOG/FPI records, the TOAST data is logged as regular INSERT records.
XLOG/FPI records are ignored in logical decoding, and so the
reorderbuffer never sees the rewritten main-heap rows. But we do decode
the TOAST data, and the reorderbuffer stashes it in the toast_hash hash
table, which gets reset only when handling a row from the main heap,
and that row never arrives. So we end up stashing all the TOAST data
in memory :-(
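
For illustration, the relevant part of the reorderbuffer apply loop
roughly looks like this (a heavily simplified sketch, not the actual
code, though the function names are the real ones):

    if (IsToastRelation(relation))
    {
        /* TOAST chunk: stash it in txn->toast_hash for later reassembly */
        ReorderBufferToastAppendChunk(rb, txn, relation, change);
    }
    else
    {
        /* main-heap row: reassemble toasted columns, apply, free the hash */
        ReorderBufferToastReplace(rb, txn, relation, change);
        rb->apply_change(rb, txn, relation, change);
        ReorderBufferToastReset(rb, txn);
    }

During a rewrite only the first branch is ever taken, so the hash just
keeps growing.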

So do VACUUM FULL (or CLUSTER) on a sufficiently large table, and you're
likely to break any logical replication connection. And it does not
matter if you replicate this particular table.

Luckily enough, this can leverage some of the pieces introduced by
commit e9edc1ba, which was meant to deal with rewrites of system tables
and which added this to raw_heap_insert:

    /*
     * The new relfilenode's relcache entrye doesn't have the necessary
     * information to determine whether a relation should emit data for
     * logical decoding.  Force it to off if necessary.
     */
    if (!RelationIsLogicallyLogged(state->rs_old_rel))
        options |= HEAP_INSERT_NO_LOGICAL;

As raw_heap_insert is used only for heap rewrites, we can simply remove
the if condition and use the HEAP_INSERT_NO_LOGICAL flag for all TOAST
data logged from here.
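
In other words, the snippet above would simply become something like
this (a sketch; the comment is my wording, not necessarily what ends up
committed):

    /*
     * The rewritten heap itself is logged via XLOG/FPI records and never
     * decoded, so the TOAST data written from here can't be needed by
     * logical decoding either.  Suppress it unconditionally.
     */
    options |= HEAP_INSERT_NO_LOGICAL;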

This does fix the issue, because we still decode the TOAST changes but
there is no tuple data, and so

    if (change->data.tp.newtuple != NULL)
    {
        dlist_delete(&change->node);
        ReorderBufferToastAppendChunk(rb, txn, relation,
                                      change);
    }

ends up not stashing the change in the hash table. It's imperfect,
because we still decode the changes (and stash them to disk), but ISTM
that can be fixed by tweaking DecodeInsert a bit to just ignore those
changes entirely.
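
The DecodeInsert tweak could be as simple as bailing out early when the
record carries no tuple data for logical decoding, i.e. something along
these lines (just a sketch):

    /*
     * Skip insert records without a new tuple, which is what
     * raw_heap_insert() now produces for TOAST data during a rewrite.
     */
    if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))
        return;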

Attached is a PoC patch with these two fixes.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Masahiko Sawada
Date:
On Mon, Nov 19, 2018 at 6:52 AM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
>
> Hi,
>
> It seems we have a pretty annoying problem with logical decoding when
> performing VACUUM FULL / CLUSTER on a table with toast-ed data.
>
> The trouble is that while the rewritten heap is WAL-logged using
> XLOG/FPI records, the TOAST data is logged as regular INSERT records.
> XLOG/FPI records are ignored in logical decoding, and so the
> reorderbuffer never sees the rewritten main-heap rows. But we do decode
> the TOAST data, and the reorderbuffer stashes it in the toast_hash hash
> table, which gets reset only when handling a row from the main heap,
> and that row never arrives. So we end up stashing all the TOAST data
> in memory :-(
>
> So do VACUUM FULL (or CLUSTER) on a sufficiently large table, and you're
> likely to break any logical replication connection. And it does not
> matter if you replicate this particular table.
>
> Luckily enough, this can leverage some of the pieces introduced by
> commit e9edc1ba, which was meant to deal with rewrites of system tables
> and which added this to raw_heap_insert:
>
>     /*
>      * The new relfilenode's relcache entrye doesn't have the necessary
>      * information to determine whether a relation should emit data for
>      * logical decoding.  Force it to off if necessary.
>      */
>     if (!RelationIsLogicallyLogged(state->rs_old_rel))
>         options |= HEAP_INSERT_NO_LOGICAL;
>
> As raw_heap_insert is used only for heap rewrites, we can simply remove
> the if condition and use the HEAP_INSERT_NO_LOGICAL flag for all TOAST
> data logged from here.
>

This fix seems fine to me.

> This does fix the issue, because we still decode the TOAST changes but
> there is no tuple data, and so
>
>     if (change->data.tp.newtuple != NULL)
>     {
>         dlist_delete(&change->node);
>         ReorderBufferToastAppendChunk(rb, txn, relation,
>                                       change);
>     }
>
> ends up not stashing the change in the hash table.

With the change you proposed below, can we remove the above condition,
because toast-insertion changes processed by the reorderbuffer always
have a new tuple? If a toast-insertion record doesn't have a new tuple,
it has already been ignored during decoding.
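
In other words, the check could probably be reduced to an assertion,
something like this (just a sketch):

    /*
     * After the DecodeInsert change, a toast-table INSERT reaching the
     * reorderbuffer always carries a new tuple.
     */
    Assert(change->data.tp.newtuple != NULL);

    dlist_delete(&change->node);
    ReorderBufferToastAppendChunk(rb, txn, relation, change);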

> It's imperfect,
> because we still decode the changes (and stash them to disk), but ISTM
> that can be fixed by tweaking DecodeInsert a bit to just ignore those
> changes entirely.
>
> Attached is a PoC patch with these two fixes.
>

I think this change is also fine.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Tomas Vondra
Date:
On 11/19/18 10:28 AM, Masahiko Sawada wrote:
> On Mon, Nov 19, 2018 at 6:52 AM Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>>
>> Hi,
>>
>> It seems we have a pretty annoying problem with logical decoding when
>> performing VACUUM FULL / CLUSTER on a table with toast-ed data.
>>
>> The trouble is that while the rewritten heap is WAL-logged using
>> XLOG/FPI records, the TOAST data is logged as regular INSERT records.
>> XLOG/FPI records are ignored in logical decoding, and so the
>> reorderbuffer never sees the rewritten main-heap rows. But we do decode
>> the TOAST data, and the reorderbuffer stashes it in the toast_hash hash
>> table, which gets reset only when handling a row from the main heap,
>> and that row never arrives. So we end up stashing all the TOAST data
>> in memory :-(
>>
>> So do VACUUM FULL (or CLUSTER) on a sufficiently large table, and you're
>> likely to break any logical replication connection. And it does not
>> matter if you replicate this particular table.
>>
>> Luckily enough, this can leverage some of the pieces introduced by
>> commit e9edc1ba, which was meant to deal with rewrites of system tables
>> and which added this to raw_heap_insert:
>>
>>      /*
>>       * The new relfilenode's relcache entrye doesn't have the necessary
>>       * information to determine whether a relation should emit data for
>>       * logical decoding.  Force it to off if necessary.
>>       */
>>      if (!RelationIsLogicallyLogged(state->rs_old_rel))
>>          options |= HEAP_INSERT_NO_LOGICAL;
>>
>> As raw_heap_insert is used only for heap rewrites, we can simply remove
>> the if condition and use the HEAP_INSERT_NO_LOGICAL flag for all TOAST
>> data logged from here.
>>
> 
> This fix seems fine to me.
> 
>> This does fix the issue, because we still decode the TOAST changes but
>> there is no tuple data, and so
>>
>>      if (change->data.tp.newtuple != NULL)
>>      {
>>          dlist_delete(&change->node);
>>          ReorderBufferToastAppendChunk(rb, txn, relation,
>>                                        change);
>>      }
>>
>> ends up not stashing the change in the hash table.
> 
> With the change you proposed below, can we remove the above condition,
> because toast-insertion changes processed by the reorderbuffer always
> have a new tuple? If a toast-insertion record doesn't have a new tuple,
> it has already been ignored during decoding.
> 

Good point. I think you're right that the reorderbuffer part may be
simplified as you propose.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Tomas Vondra
Date:
On 11/19/18 11:44 AM, Tomas Vondra wrote:
> On 11/19/18 10:28 AM, Masahiko Sawada wrote:
>> On Mon, Nov 19, 2018 at 6:52 AM Tomas Vondra
>> <tomas.vondra@2ndquadrant.com> wrote:
>>>
>>> ...
>>>
>>> This does fix the issue, because we still decode the TOAST changes but
>>> there is no tuple data, and so
>>>
>>>      if (change->data.tp.newtuple != NULL)
>>>      {
>>>          dlist_delete(&change->node);
>>>          ReorderBufferToastAppendChunk(rb, txn, relation,
>>>                                        change);
>>>      }
>>>
>>> ends up not stashing the change in the hash table.
>>
>> With the change you proposed below, can we remove the above condition,
>> because toast-insertion changes processed by the reorderbuffer always
>> have a new tuple? If a toast-insertion record doesn't have a new tuple,
>> it has already been ignored during decoding.
>>
> 
> Good point. I think you're right that the reorderbuffer part may be
> simplified as you propose.
> 

OK, here's an updated patch, tweaking the reorderbuffer part. I plan to
push this sometime mid next week.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Tomas Vondra
Date:
On 11/24/18 12:20 AM, Tomas Vondra wrote:
> ...
> 
> OK, here's an updated patch, tweaking the reorderbuffer part. I plan
> to push this sometime mid next week.
> 

Pushed and backpatched to 9.4- (same as e9edc1ba).

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Andres Freund
Date:
Hi,

On 2018-11-28 02:04:18 +0100, Tomas Vondra wrote:
> 
> On 11/24/18 12:20 AM, Tomas Vondra wrote:
> > ...
> > 
> > OK, here's an updated patch, tweaking the reorderbuffer part. I plan
> > to push this sometime mid next week.
> > 
> 
> Pushed and backpatched to 9.4- (same as e9edc1ba).

Backpatching seems on the more aggressive end of things for an
optimization. Could you at least announce that beforehand next time?

Greetings,

Andres Freund


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Petr Jelinek
Date:
Hi,

On 28/11/2018 02:14, Andres Freund wrote:
> Hi,
> 
> On 2018-11-28 02:04:18 +0100, Tomas Vondra wrote:
>>
>> On 11/24/18 12:20 AM, Tomas Vondra wrote:
>>> ...
>>>
>>> OK, here's an updated patch, tweaking the reorderbuffer part. I plan
>>> to push this sometime mid next week.
>>>
>>
>> Pushed and backpatched to 9.4- (same as e9edc1ba).
> 
> Backpatching seems on the more aggressive end of things for an
> optimization. Could you at least announce that beforehand next time?
> 

Well, it may be an optimization, but from what I've seen the problems
arising from this can easily prevent logical replication from working
altogether, as the reorderbuffer hits OOM on bigger tables. So ISTM that
it does warrant a backpatch.

-- 
  Petr Jelinek                  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Andres Freund
Date:
Hi,

On 2018-11-28 03:06:58 +0100, Petr Jelinek wrote:
> On 28/11/2018 02:14, Andres Freund wrote:
> > On 2018-11-28 02:04:18 +0100, Tomas Vondra wrote:
> >> Pushed and backpatched to 9.4- (same as e9edc1ba).
> > 
> > Backpatching seems on the more aggressive end of things for an
> > optimization. Could you at least announce that beforehand next time?
> > 
> 
> Well, it may be an optimization, but from what I've seen the problems
> arising from this can easily prevent logical replication from working
> altogether, as the reorderbuffer hits OOM on bigger tables. So ISTM that
> it does warrant a backpatch.

I think that's a fair argument to be made. But it should be made both
before the commit and in the commit message.

Greetings,

Andres Freund


Re: logical decoding vs. VACUUM FULL / CLUSTER on table with TOAST-ed data

From
Tomas Vondra
Date:
On 11/28/18 3:31 AM, Andres Freund wrote:
> Hi,
> 
> On 2018-11-28 03:06:58 +0100, Petr Jelinek wrote:
>> On 28/11/2018 02:14, Andres Freund wrote:
>>> On 2018-11-28 02:04:18 +0100, Tomas Vondra wrote:
>>>> Pushed and backpatched to 9.4- (same as e9edc1ba).
>>>
>>> Backpatching seems on the more aggressive end of things for an
>>> optimization. Could you at least announce that beforehand next time?
>>>
>>
>> Well, it may be an optimization, but from what I've seen the problems
>> arising from this can easily prevent logical replication from working
>> altogether, as the reorderbuffer hits OOM on bigger tables. So ISTM that
>> it does warrant a backpatch.
> 
> I think that's a fair argument to be made. But it should be made
> both before the commit and in the commit message.
> 

Understood. I thought I had stated the intent to backpatch when
announcing I'd push it this week, but clearly that did not happen. Oops :-(

That being said, I see this more as a bugfix than an optimization,
because (as Petr already stated) a rewrite of any sufficiently large
table can irreparably break replication. So it's not just slower, it dies.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services