Can't bring the former Primary up as a Standby - Mailing list pgsql-general

From Sergey Levchenko
Subject Can't bring the former Primary up as a Standby
Date
Msg-id CAK-g=Hw+Qj_644S0S0M6901b0hy05Qn0bNpexMw2qyMgsjWo8w@mail.gmail.com
List pgsql-general
Hello!

When I try to start the former primary, I get:

root@reactor:~# invoke-rc.d postgresql start
Starting PostgreSQL 9.1 database server: main
The PostgreSQL server failed to start. Please check the log output:
2011-08-11 12:12:42 EEST LOG:  database system was interrupted; last known up at 2011-08-11 12:04:21 EEST
2011-08-11 12:12:42 EEST LOG:  could not open file "pg_xlog/00000001000000000000004A" (log file 0, segment 74): No such file or directory
2011-08-11 12:12:42 EEST LOG:  invalid checkpoint record
2011-08-11 12:12:42 EEST FATAL:  could not locate requir

This happens as long as I do not:
1. cp recovery.done recovery.conf
2. change the host in recovery.conf to point at the new primary

Is that expected? Do I have to do that to bring the former primary up as a standby?
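
Just to be concrete, the recovery.conf I end up with looks roughly like this (the host, port and user are the same ones I pass to repmgr below, so treat this as a sketch rather than my exact file):

standby_mode = 'on'
primary_conninfo = 'host=10.0.1.123 port=5432 user=eps'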

But even with that in place it doesn't help; I can't connect to PostgreSQL. The latest log:

2011-08-11 12:46:02 EEST LOG:  shutting down
2011-08-11 12:46:02 EEST LOG:  restartpoint starting: shutdown immediate
2011-08-11 12:46:02 EEST LOG:  restartpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.000 s, total=0.029 s; sync files=0, longest=0.000 s, average=0.000 s
2011-08-11 12:46:02 EEST LOG:  recovery restart point at 0/53000020
2011-08-11 12:46:02 EEST LOG:  database system is shut down
2011-08-11 12:46:18 EEST LOG:  database system was shut down in recovery at 2011-08-11 12:46:02 EEST
2011-08-11 12:46:18 EEST LOG:  entering standby mode
2011-08-11 12:46:18 EEST LOG:  consistent recovery state reached at 0/53000078
2011-08-11 12:46:18 EEST LOG:  record with zero length at 0/53000078
2011-08-11 12:46:18 EEST LOG:  streaming replication successfully connected to primary
2011-08-11 12:46:18 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:18 EEST LOG:  incomplete startup packet
2011-08-11 12:46:19 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:19 EEST FATAL:  the database system is starting up
2011-08-11 12:46:19 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:19 EEST FATAL:  the database system is starting up
2011-08-11 12:46:20 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:20 EEST FATAL:  the database system is starting up
2011-08-11 12:46:20 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:20 EEST FATAL:  the database system is starting up
2011-08-11 12:46:21 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:21 EEST FATAL:  the database system is starting up
2011-08-11 12:46:21 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:21 EEST FATAL:  the database system is starting up
2011-08-11 12:46:22 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:22 EEST FATAL:  the database system is starting up
2011-08-11 12:46:22 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:22 EEST FATAL:  the database system is starting up
2011-08-11 12:46:23 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:23 EEST FATAL:  the database system is starting up
2011-08-11 12:46:23 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:23 EEST FATAL:  the database system is starting up
2011-08-11 12:46:24 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:24 EEST FATAL:  the database system is starting up
2011-08-11 12:46:24 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:24 EEST LOG:  incomplete startup packet
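
Could the repeated "FATAL: the database system is starting up" simply mean that read-only connections during recovery are not enabled on this node? In other words, do I also need something like this in the standby's postgresql.conf (guessing here):

hot_standby = on    # allow read-only connections while the server is in recovery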

processes:

postgres 18696  1.2  1.0 926428 40688 ?  S   12:54  0:00 /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c config_file=/etc/postgresql/9.1/main/postgresql.conf
postgres 18697  0.0  0.0 926896  1832 ?  Ss  12:54  0:00 postgres: startup process   waiting for 000000010000000000000053
postgres 18698  0.0  0.0 926832  1812 ?  Ss  12:54  0:00 postgres: writer process
postgres 18699  0.0  0.0 937440  2848 ?  Ss  12:54  0:00 postgres: wal receiver process   streaming 0/53000078
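
So the wal receiver claims to be streaming at 0/53000078. I assume I could double-check that connection from the primary side with a query like this (a sketch, run as a superuser on 10.0.1.123):

SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;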


All of this happens after:

postgres@reactor:~$ repmgr -D /var/lib/postgresql/9.1/main -d pgbench -p 5432 -U eps -R postgres --verbose --force standby clone 10.0.1.123
Opening configuration file: ./repmgr.conf
repmgr: directory "/var/lib/postgresql/9.1/main" exists but is not empty
repmgr connecting to master database
repmgr connected to master, checking its state
Succesfully connected to primary. Current installation size is 182 MB
Starting backup...
standby clone: master control file '/media/postgresql/9.1/data/global/pg_control'
rsync command line:  'rsync --archive --checksum --compress --progress --rsh=ssh --delete postg...@10.0.1.123:/media/postgresql/9.1/data/global/pg_control /var/lib/postgresql/9.1/main/global/.'
receiving incremental file list
pg_control
        8192 100%    7.81MB/s    0:00:00 (xfer#1, to-check=0/1)
sent 102 bytes  received 234 bytes  672.00 bytes/sec
total size is 8192  speedup is 24.38
standby clone: master data directory '/media/postgresql/9.1/data'
rsync command line:  'rsync --archive --checksum --compress --progress --rsh=ssh --delete --exclude=pg_xlog* --exclude=pg_control --exclude=*.pid postg...@10.0.1.123:/media/postgresql/9.1/data/* /var/lib/postgresql/9.1/main'
receiving incremental file list
rsync: chgrp "/var/lib/postgresql/9.1/main/server.crt" failed: Operation not permitted (1)
rsync: chgrp "/var/lib/postgresql/9.1/main/server.key" failed: Operation not permitted (1)
deleting base/16397/16481_fsm
deleting base/16397/16481
backup_label
         171 100%  166.99kB/s    0:00:00 (xfer#1, to-check=1195/1197)
postmaster.opts
         131 100%  127.93kB/s    0:00:00 (xfer#2, to-check=1190/1197)
base/11973/
base/11973/pg_internal.init
      106804 100%    7.84MB/s    0:00:00 (xfer#3, to-check=534/1244)
base/16387/
base/16387/pg_internal.init
      106804 100%    5.09MB/s    0:00:00 (xfer#4, to-check=298/1244)
base/16397/
base/16397/11655
      163840 100%    4.22MB/s    0:00:00 (xfer#5, to-check=297/1244)
base/16397/11655_fsm
       24576 100%  631.58kB/s    0:00:00 (xfer#6, to-check=296/1244)
base/16397/11666
       40960 100%    1.03MB/s    0:00:00 (xfer#7, to-check=290/1244)
base/16397/11690
       65536 100%    1.64MB/s    0:00:00 (xfer#8, to-check=269/1244)
base/16397/16398
       73728 100%    1.35MB/s    0:00:00 (xfer#9, to-check=71/1244)
base/16397/16398_fsm
       24576 100%  452.83kB/s    0:00:00 (xfer#10, to-check=70/1244)
base/16397/16398_vm
        8192 100%  150.94kB/s    0:00:00 (xfer#11, to-check=69/1244)
base/16397/16401
       24576 100%  436.36kB/s    0:00:00 (xfer#12, to-check=68/1244)
base/16397/16401_fsm
       24576 100%  436.36kB/s    0:00:00 (xfer#13, to-check=67/1244)
base/16397/16401_vm
        8192 100%  145.45kB/s    0:00:00 (xfer#14, to-check=66/1244)
base/16397/16410
   136716288 100%   11.33MB/s    0:00:11 (xfer#15, to-check=65/1244)
base/16397/16410_fsm
       57344 100%  123.35kB/s    0:00:00 (xfer#16, to-check=64/1244)
base/16397/16410_vm
        8192 100%   17.58kB/s    0:00:00 (xfer#17, to-check=63/1244)
base/16397/16411
       16384 100%   35.16kB/s    0:00:00 (xfer#18, to-check=62/1244)
base/16397/16413
       16384 100%   35.16kB/s    0:00:00 (xfer#19, to-check=61/1244)
base/16397/16415
    22487040 100%   12.16MB/s    0:00:01 (xfer#20, to-check=60/1244)
base/16397/24678
      933888 100%  997.81kB/s    0:00:00 (xfer#21, to-check=51/1244)
base/16397/24678_fsm
       24576 100%   26.26kB/s    0:00:00 (xfer#22, to-check=50/1244)
base/16397/pg_internal.init
      106804 100%  113.74kB/s    0:00:00 (xfer#23, to-check=47/1244)
global/
global/pg_internal.init
       12456 100%   13.27kB/s    0:00:00 (xfer#24, to-check=8/1244)
pg_clog/0000
       24576 100%   26.17kB/s    0:00:00 (xfer#25, to-check=7/1244)
pg_notify/
pg_stat_tmp/
pg_stat_tmp/pgstat.stat
       22100 100%   23.51kB/s    0:00:00 (xfer#26, to-check=1/1244)
pg_subtrans/0001
      106496 100%  112.80kB/s    0:00:00 (xfer#27, to-check=0/1244)
sent 116370 bytes  received 7274858 bytes  509739.86 bytes/sec
total size is 191388777  speedup is 25.89
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [generator=3.0.8]
Can't rsync from remote file or directory (postg...@10.0.1.123:/media/postgresql/9.1/data)
standby clone: failed copying master data directory '/media/postgresql/9.1/data'
repmgr connecting to master database to stop backup
Finishing backup...
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
repmgr requires primary to keep WAL files 00000001000000000000004A until at least 00000001000000000000004A
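
Given that the startup log complains about pg_xlog/00000001000000000000004A being missing, do I need the primary to retain more WAL, e.g. something like this in the primary's postgresql.conf (the value is just a guess on my part)?

wal_keep_segments = 128    # keep extra WAL segments around for standbys that fall behind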
