Re: [HACKERS] Logical replication in the same cluster - Mailing list pgsql-hackers

From: Greg Stark
Subject: Re: [HACKERS] Logical replication in the same cluster
Msg-id: CAM-w4HP5jRP9sr=XVk0Ckpdyz6nDe3x5s2iXus-6AAZ8Ke-7tA@mail.gmail.com
In response to: Re: [HACKERS] Logical replication in the same cluster (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On 1 May 2017 at 19:24, Andres Freund <andres@anarazel.de> wrote:
>> There is no inherent reason why the CREATE INDEX CONCURRENTLY style of
>> using multiple transactions makes it necessary to leave a mess behind
>> in the event of an error or hard crash. Is someone going to get around
>> to fixing the problem for CREATE INDEX CONCURRENTLY (e.g., having
>> extra steps to drop the useless index during recovery)? IIRC, this was
>> always the plan.
>
> Doing catalog changes in recovery is fraught with problems. Essentially
> requires starting one worker per database, before allowing access.

The "plan" was to add more layers PG_TRY and transactions so that if
there was an error during building the index all the remnants of the
failed index build got cleaned up. But when I went tried to actually
do it the problem seemed to metastatize and it was going to require
two or three layers of messy nested PG_TRY and extra transactions.
Perhaps there's a cleaner way to structure it and I should look again.
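
For anyone who wants to picture it, here is a minimal sketch of the pattern
under discussion, not the actual patch; build_index_phase() and
cleanup_partial_index() are hypothetical stand-ins for the real per-phase
build and cleanup code:

#include "postgres.h"

static void build_index_phase(Oid indexOid);      /* hypothetical */
static void cleanup_partial_index(Oid indexOid);  /* hypothetical */

/*
 * One multi-transaction phase of CREATE INDEX CONCURRENTLY, wrapped so that
 * an ERROR escaping the phase tries to remove the half-built index instead
 * of leaving an invalid entry behind in the catalogs.
 */
static void
create_index_concurrently_phase(Oid indexOid)
{
    PG_TRY();
    {
        build_index_phase(indexOid);
    }
    PG_CATCH();
    {
        /*
         * We arrive here with the transaction already in a failed state,
         * so the cleanup needs its own transaction handling before it can
         * touch the catalogs; this is where the extra layers of PG_TRY
         * and extra transactions start to pile up.
         */
        cleanup_partial_index(indexOid);
        PG_RE_THROW();
    }
    PG_END_TRY();
}

Repeat that for each phase and you quickly end up with the two or three
layers of nesting I mentioned above.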

I don't recall ever having a plan to do anything in recovery. I think
we did talk about why it was hard to mark hash indexes invalid during
recovery, which was probably the same problem.

-- 
greg


