Re: Minimal logical decoding on standbys - Mailing list pgsql-hackers

From: tushar
Subject: Re: Minimal logical decoding on standbys
Msg-id: cbda32d2-eeb6-31b9-a88a-566330a46fc4@enterprisedb.com
In response to: Re: Minimal logical decoding on standbys (Petr Jelinek <petr.jelinek@2ndquadrant.com>)
Responses: Re: Minimal logical decoding on standbys
List: pgsql-hackers
Hi,

While testing this feature, I found that if lots of inserts happen on
the master cluster, pg_recvlogical does not show the DATA information
for the logical replication slot that was created on the SLAVE.

Please refer to this scenario:

1)
Create a Master cluster with wal_level=logical and create a logical
replication slot:
  SELECT * FROM pg_create_logical_replication_slot('master_slot',
'test_decoding');
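
For completeness, a rough sketch of how this master setup can be created from scratch (the data directory name "master", the log file name, and the default port 5432 are assumptions on my side, not part of the report):

# initialize a master cluster with logical decoding enabled
./initdb -D master
echo "wal_level = logical" >> master/postgresql.conf
./pg_ctl -D master -l master.log start
# create the logical slot on the master
./psql -d postgres -p 5432 -c "SELECT * FROM pg_create_logical_replication_slot('master_slot', 'test_decoding');"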

2)
Create a Standby cluster using pg_basebackup (./pg_basebackup -D
slave/ -v -R) and create a logical replication slot:
SELECT * FROM pg_create_logical_replication_slot('standby_slot',
'test_decoding');
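
Similarly, a rough sketch of the standby setup (the directory name "slave", port 5555, and the explicit hot_standby setting are assumptions; -R generates the recovery configuration pointing at the master):

# take a base backup from the master and run it as a hot standby on port 5555
./pg_basebackup -D slave/ -v -R -p 5432
echo "port = 5555" >> slave/postgresql.conf
echo "hot_standby = on" >> slave/postgresql.conf
./pg_ctl -D slave -l slave.log start
# create the logical slot on the standby (requires the patch under discussion)
./psql -d postgres -p 5555 -c "SELECT * FROM pg_create_logical_replication_slot('standby_slot', 'test_decoding');"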

3)
X terminal - start pg_recvlogical against the slave cluster (port=5555)
with slot=standby_slot:
./pg_recvlogical -d postgres -p 5555 -s 1 -F 1 -v --slot=standby_slot
--start -f -

Y terminal - start pg_recvlogical against the master cluster (port=5432)
with slot=master_slot:
./pg_recvlogical -d postgres -p 5432 -s 1 -F 1 -v --slot=master_slot
--start -f -

Z terminal - run pgbench against the Master cluster (./pgbench -i -s 10
postgres)

I am able to see DATA information on the Y terminal but not on X.

However, the same changes are visible by running the below query on the SLAVE cluster:

SELECT * FROM pg_logical_slot_get_changes('standby_slot', NULL, NULL);

Is this expected?
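
As an additional data point, the slot's position on the standby can be compared with the standby's replay position; the diagnostic below is just my own idea for narrowing this down, using standard system views:

# on the slave cluster (port 5555): how far has replay progressed, and where does the slot stand?
./psql -d postgres -p 5555 -c "SELECT pg_last_wal_replay_lsn();"
./psql -d postgres -p 5555 -c "SELECT slot_name, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots WHERE slot_name = 'standby_slot';"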

regards,
tushar

On 12/17/2018 10:46 PM, Petr Jelinek wrote:
> Hi,
>
> On 12/12/2018 21:41, Andres Freund wrote:
>> I don't like the approach of managing the catalog horizon via those
>> periodically logged catalog xmin announcements.  I think we instead
>> should build ontop of the records we already have and use to compute
>> snapshot conflicts.  As of HEAD we don't know whether such tables are
>> catalog tables, but that's just a bool that we need to include in the
>> records, a basically immeasurable overhead given the size of those
>> records.
> IIRC I was originally advocating adding that xmin announcement to the
> standby snapshot message, but this seems better.
>
>> If we were to go with this approach, there'd be at least the following
>> tasks:
>> - adapt tests from [2]
>> - enforce hot-standby to be enabled on the standby when logical slots
>>    are created, and at startup if a logical slot exists
>> - fix issue around btree_xlog_delete_get_latestRemovedXid etc mentioned
>>    above.
>> - Have a nicer conflict handling than what I implemented here.  Craig's
>>    approach deleted the slots, but I'm not sure I like that.  Blocking
>>    seems more appropriate here; after all, it's likely that the
>>    replication topology would be broken afterwards.
>> - get_rel_logical_catalog() shouldn't be in lsyscache.[ch], and can be
>>    optimized (e.g. check wal_level before opening rel etc).
>>
>>
>> Once we have this logic, it can be used to implement something like
>> failover slots on top, by having a mechanism that occasionally
>> forwards slots on standbys using pg_replication_slot_advance().
>>
> Looking at this from the failover slots perspective. Wouldn't blocking
> on conflict mean that we stop physical replication on catalog xmin
> advance when there is lagging logical replication on primary? It might
> not be too big a deal as in that use-case it should only happen if
> hs_feedback was off at some point, but just wanted to point out this
> potential problem.
>
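
To illustrate the kind of mechanism described above for forwarding slots on standbys, something along these lines could be run periodically against the standby; this is only a sketch using the slot and port from my test, not anything implemented in the patch:

# advance the standby slot up to the standby's current replay position
./psql -d postgres -p 5555 -c "SELECT pg_replication_slot_advance('standby_slot', pg_last_wal_replay_lsn());"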

-- 
regards, tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company


