Re: How to prevent master server crash if hot standby stops - Mailing list pgsql-general

From: Andrus
Subject: Re: How to prevent master server crash if hot standby stops
Msg-id: 0E181F52FD6449568F620E9C84427D10@dell2
In response to: Re: How to prevent master server crash if hot standby stops (Laurenz Albe <laurenz.albe@cybertec.at>)
Responses: Re: How to prevent master server crash if hot standby stops
List: pgsql-general
Hi!

>If you prefer replication to fail silently, don't use replication
>slots.  Use "wal_keep_segments" instead.

I decided to give 1 GB to WAL, so I added

wal_keep_segments=60

After some time Postgres had created 80 files with a total size of 1.3 GB.
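If I calculate with the default 16 MB segment size, 60 segments should take about 60 x 16 MB = 960 MB, but 80 segments are 80 x 16 MB = 1280 MB, which roughly matches the 1.3 GB I see. I counted the files and their size with this query (only a sketch; it assumes pg_ls_waldir() is available in version 12 and that my user is allowed to call it):

SELECT count(*) AS wal_files,
       pg_size_pretty(sum(size)) AS total_size
FROM pg_ls_waldir()
WHERE name ~ '^[0-9A-F]{24}$';   -- match only WAL segment file names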

How can I fix this so that no more than 1 GB of disk space is used?
How can I find out how many WAL files have not yet been processed by the slave (see the query I tried below)?
How can I delete processed WAL files so that the 1 GB of disk space can be used for other purposes?
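So far the only way I found to see how far the standby lags behind is this query on the master (a sketch; it assumes the standby is connected and shows up in pg_stat_replication):

SELECT application_name,
       pg_current_wal_lsn() AS master_lsn,
       replay_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag
FROM pg_stat_replication;

If I divide that lag by the 16 MB segment size, I should get roughly the number of WAL files the slave has not yet processed. Is this the right approach, or is there a better way?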

/var/lib/postgresql/12/main/pg_wal# ls
00000001000002A200000072  00000001000002A200000083  00000001000002A200000094  00000001000002A2000000A5  00000001000002A2000000B6
00000001000002A200000073  00000001000002A200000084  00000001000002A200000095  00000001000002A2000000A6  00000001000002A2000000B7
00000001000002A200000074  00000001000002A200000085  00000001000002A200000096  00000001000002A2000000A7  00000001000002A2000000B8
00000001000002A200000075  00000001000002A200000086  00000001000002A200000097  00000001000002A2000000A8  00000001000002A2000000B9
00000001000002A200000076  00000001000002A200000087  00000001000002A200000098  00000001000002A2000000A9  00000001000002A2000000BA
00000001000002A200000077  00000001000002A200000088  00000001000002A200000099  00000001000002A2000000AA  00000001000002A2000000BB
00000001000002A200000078  00000001000002A200000089  00000001000002A20000009A  00000001000002A2000000AB  00000001000002A2000000BC
00000001000002A200000079  00000001000002A20000008A  00000001000002A20000009B  00000001000002A2000000AC  00000001000002A2000000BD
00000001000002A20000007A  00000001000002A20000008B  00000001000002A20000009C  00000001000002A2000000AD  00000001000002A2000000BE
00000001000002A20000007B  00000001000002A20000008C  00000001000002A20000009D  00000001000002A2000000AE  00000001000002A2000000BF
00000001000002A20000007C  00000001000002A20000008D  00000001000002A20000009E  00000001000002A2000000AF  00000001000002A2000000C0
00000001000002A20000007D  00000001000002A20000008E  00000001000002A20000009F  00000001000002A2000000B0  00000001000002A2000000C1
00000001000002A20000007E  00000001000002A20000008F  00000001000002A2000000A0  00000001000002A2000000B1  archive_status
00000001000002A20000007F  00000001000002A200000090  00000001000002A2000000A1  00000001000002A2000000B2
00000001000002A200000080  00000001000002A200000091  00000001000002A2000000A2  00000001000002A2000000B3
00000001000002A200000081  00000001000002A200000092  00000001000002A2000000A3  00000001000002A2000000B4
00000001000002A200000082  00000001000002A200000093  00000001000002A2000000A4  00000001000002A2000000B5


Andrus.



