Re: BUG #8673: Could not open file "pg_multixact/members/xxxx" on slave during hot_standby - Mailing list pgsql-bugs

From Serge Negodyuck
Subject Re: BUG #8673: Could not open file "pg_multixact/members/xxxx" on slave during hot_standby
Date
Msg-id CABKyZDFRtDzcY1=sEznC2ivmcM5p1yPQvC8qLL8XLviNF2kp7A@mail.gmail.com
In response to Re: BUG #8673: Could not open file "pg_multixact/members/xxxx" on slave during hot_standby  (Andres Freund <andres@2ndquadrant.com>)
Responses Re: BUG #8673: Could not open file "pg_multixact/members/xxxx" on slave during hot_standby  (Andres Freund <andres@2ndquadrant.com>)
List pgsql-bugs
2013/12/9 Andres Freund <andres@2ndquadrant.com>:
> On 2013-12-09 16:55:21 +0200, Serge Negodyuck wrote:
>> 2013/12/9 Andres Freund <andres@2ndquadrant.com>:
>> > On 2013-12-09 13:47:34 +0000, petr@petrovich.kiev.ua wrote:
>> >> PostgreSQL version: 9.3.2
>> >
>> >> I've installed a new slave database on the 6th of December. Since there were no
>> >> packages on apt.postgresql.org with postgresql 9.3.0, I've set up postgresql
>> >> 9.3.2.
>> >
>> >> 2013-12-09 10:10:24 EET 172.18.10.45 main ERROR:  could not access status of
>> >> transaction 24568845
>> >> 2013-12-09 10:10:24 EET 172.18.10.45 main DETAIL:  Could not open file
>> >> "pg_multixact/members/CD8F": No such file or directory.
>> >
>> >> My next step was to upgrade to postgresql 9.3.2 on the master and to do an initial
>> >> sync from scratch.
>> >> It did not help. I still have the same error.
>> >
>> > Could you post, as close as possible to the next occurance of that
>> > error:
>> > * pg_controldata output from the primary
>> > * pg_controldata output from the standby
>>
>> Sorry, I've just downgraded the whole cluster to 9.3.0, and this error
>> disappeared.
>> I can provide the output right now, if it makes any sense.
>
> Yes, that'd be helpful.
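
For reference, the output below was collected by running pg_controldata against each node's data directory. A minimal example, assuming a Debian-style path (the actual directory may differ on this setup):

pg_controldata /var/lib/postgresql/9.3/main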


master:
pg_control version number: 937
Catalog version number: 201306121
Database system identifier: 5928279687159054327
Database cluster state: in production
pg_control last modified: Mon 09 Dec 2013 05:29:53 PM EET
Latest checkpoint location: 3D4/76E97DA0
Prior checkpoint location: 3D4/6E768638
Latest checkpoint's REDO location: 3D4/76925C18
Latest checkpoint's REDO WAL file: 00000001000003D400000076
Latest checkpoint's TimeLineID: 1
Latest checkpoint's PrevTimeLineID: 1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0/90546484
Latest checkpoint's NextOID: 6079185
Latest checkpoint's NextMultiXactId: 42049949
Latest checkpoint's NextMultiOffset: 55384024
Latest checkpoint's oldestXID: 710
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 90546475
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Time of latest checkpoint: Mon 09 Dec 2013 05:29:44 PM EET
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline: 0
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
Current wal_level setting: hot_standby
Current max_connections setting: 1000
Current max_prepared_xacts setting: 0
Current max_locks_per_xact setting: 64
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 0



slave:
pg_control version number: 937
Catalog version number: 201306121
Database system identifier: 5928279687159054327
Database cluster state: in archive recovery
pg_control last modified: Mon 09 Dec 2013 05:25:22 PM EET
Latest checkpoint location: 3D4/6E768638
Prior checkpoint location: 3D4/66F14C60
Latest checkpoint's REDO location: 3D4/6E39F9E8
Latest checkpoint's REDO WAL file: 00000001000003D40000006E
Latest checkpoint's TimeLineID: 1
Latest checkpoint's PrevTimeLineID: 1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0/90546484
Latest checkpoint's NextOID: 6079185
Latest checkpoint's NextMultiXactId: 42046170
Latest checkpoint's NextMultiOffset: 55058098
Latest checkpoint's oldestXID: 710
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 90541410
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Time of latest checkpoint: Mon 09 Dec 2013 05:24:44 PM EET
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 3D4/7884BB68
Min recovery ending loc's timeline: 1
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
Current wal_level setting: hot_standby
Current max_connections setting: 1000
Current max_prepared_xacts setting: 0
Current max_locks_per_xact setting: 64
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 0




>
> Could you also provide ls -l pg_multixact/ on both primary and standby?

Did you mean pg_multixact/members/ ?
It's not possible on the slave right now, since I had to re-sync these
files from the master. Maybe that was not a good idea, but it helped.

On the master there are files from 0000 to 14078.

On the slave, files from A1xx to FFFF were absent.
They were the oldest ones (October, November).
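
In case it becomes possible to capture the listings again, a minimal sketch for comparing the two nodes (assuming the commands are run from each node's data directory; the output file names are only examples):

ls pg_multixact/members > members-master.txt    # on the master
ls pg_multixact/members > members-standby.txt   # on the standby
diff members-master.txt members-standby.txt     # shows segments present on one node but not the other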
