Re: pg_multixact issues - Mailing list pgsql-general
From: Achilleas Mantzios
Subject: Re: pg_multixact issues
Msg-id: 56BC441B.1070305@matrix.gatewaynet.com
In response to: pg_multixact issues (Kiriakos Georgiou <kg.postgresql@olympiakos.com>)
List: pgsql-general
Good morning Kiriakos,
We have been running 9.3.4 as our test system for quite some years, and 9.3.10 in production for a month or so (a little less than 1 TB in size, 2-5 GB worth of WAL changes per day, about 250K transactions/day), and we have never experienced any problems with data/pg_multixact.
In our 9.3.4:
% pg_controldata data | grep -i multi
Latest checkpoint's NextMultiXactId: 69
Latest checkpoint's NextMultiOffset: 135
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 0
% du -h data/pg_multixact/
10K data/pg_multixact/members
10K data/pg_multixact/offsets
22K data/pg_multixact/
In our prod 9.3.10:
~> pg_controldata data | grep -i multi
Latest checkpoint's NextMultiXactId: 12404
Latest checkpoint's NextMultiOffset: 356
Latest checkpoint's oldestMultiXid: 12232
Latest checkpoint's oldestMulti's DB: 16426
~> du -h data/pg_multixact/
12K data/pg_multixact/members
24K data/pg_multixact/offsets
40K data/pg_multixact/
Our system comprises a JEE installation with ~500 users, plus at least two other applications hitting the same tables at the same time; we see about one case of deadlocks per week.
What could be different on yours?
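If you want to check which database is pinning your oldest multixact, here is a minimal sketch (assuming 9.3 or later, where pg_database.datminmxid is the oldest multixact ID each database may still need; pg_multixact can only be truncated up to the smallest one cluster-wide):

-- list databases by their oldest needed multixact; the xid type has
-- no ordering operator, hence the cast through text to bigint
SELECT datname, datminmxid
FROM pg_database
ORDER BY datminmxid::text::bigint;

The database at the top of the list is the one holding truncation back.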
On 10/02/2016 20:52, Kiriakos Georgiou wrote:
Hello,

Our pg_multixact directory keeps growing. I did a "vacuum freeze" which didn't help. I also did a "vacuum full" which didn't help either. We had this condition with 9.3.4 as well. When I upgraded our cluster to 9.4.5 (via plain sql dump and load), as expected, the issue was resolved, but now it's happening again. Luckily it has no ill effect other than consuming 4G of space for an otherwise 1G database.

Can you offer any hints as to how I can cure this?

thanks,
Kiriakos Georgiou

pg_controldata output:

pg_control version number: 942
Catalog version number: 201409291
Database system identifier: 6211781659140720513
Database cluster state: in production
pg_control last modified: Wed Feb 10 13:45:02 2016
Latest checkpoint location: D/FB5FE630
Prior checkpoint location: D/FB5FE558
Latest checkpoint's REDO location: D/FB5FE5F8
Latest checkpoint's REDO WAL file: 000000010000000D000000FB
Latest checkpoint's TimeLineID: 1
Latest checkpoint's PrevTimeLineID: 1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0/3556219
Latest checkpoint's NextOID: 2227252
Latest checkpoint's NextMultiXactId: 2316566
Latest checkpoint's NextMultiOffset: 823062151
Latest checkpoint's oldestXID: 668
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 3556219
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Time of latest checkpoint: Wed Feb 10 13:45:02 2016
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline: 0
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
Current wal_level setting: hot_standby
Current wal_log_hints setting: off
Current max_connections setting: 100
Current max_worker_processes setting: 8
Current max_prepared_xacts setting: 0
Current max_locks_per_xact setting: 1024
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 0

the offsets directory:

-rw------- 1 postgres dba 262144 Nov 3 15:22 0000
-rw------- 1 postgres dba 262144 Nov 5 12:45 0001
-rw------- 1 postgres dba 262144 Nov 9 14:25 0002
-rw------- 1 postgres dba 262144 Nov 13 10:10 0003
-rw------- 1 postgres dba 262144 Nov 16 15:40 0004
-rw------- 1 postgres dba 262144 Nov 20 09:55 0005
-rw------- 1 postgres dba 262144 Dec 1 08:00 0006
-rw------- 1 postgres dba 262144 Dec 9 11:50 0007
-rw------- 1 postgres dba 262144 Dec 16 08:14 0008
-rw------- 1 postgres dba 262144 Dec 21 09:40 0009
-rw------- 1 postgres dba 262144 Dec 31 09:55 000A
-rw------- 1 postgres dba 262144 Jan 4 21:17 000B
-rw------- 1 postgres dba 262144 Jan 6 10:50 000C
-rw------- 1 postgres dba 262144 Jan 7 18:20 000D
-rw------- 1 postgres dba 262144 Jan 13 13:55 000E
-rw------- 1 postgres dba 262144 Jan 15 11:55 000F
-rw------- 1 postgres dba 262144 Jan 22 07:50 0010
-rw------- 1 postgres dba 262144 Jan 26 16:35 0011
-rw------- 1 postgres dba 262144 Jan 29 10:16 0012
-rw------- 1 postgres dba 262144 Feb 3 13:17 0013
-rw------- 1 postgres dba 262144 Feb 3 16:13 0014
-rw------- 1 postgres dba 262144 Feb 4 08:24 0015
-rw------- 1 postgres dba 262144 Feb 5 13:20 0016
-rw------- 1 postgres dba 262144 Feb 8 11:26 0017
-rw------- 1 postgres dba 262144 Feb 8 11:46 0018
-rw------- 1 postgres dba 262144 Feb 8 12:25 0019
-rw------- 1 postgres dba 262144 Feb 8 13:19 001A
-rw------- 1 postgres dba 262144 Feb 8 14:23 001B
-rw------- 1 postgres dba 262144 Feb 8 15:32 001C
-rw------- 1 postgres dba 262144 Feb 8 17:01 001D
-rw------- 1 postgres dba 262144 Feb 8 19:19 001E
-rw------- 1 postgres dba 262144 Feb 8 22:11 001F
-rw------- 1 postgres dba 262144 Feb 9 01:44 0020
-rw------- 1 postgres dba 262144 Feb 9 05:57 0021
-rw------- 1 postgres dba 262144 Feb 9 10:45 0022
-rw------- 1 postgres dba 98304 Feb 10 13:35 0023

the members directory has 15723 files:

ls -l | wc -l
15723
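For what it's worth, your pg_controldata shows oldestMultiXid 1 in database OID 1, which is template1, so freezing only your application database cannot advance the cluster-wide minimum that pg_multixact truncation depends on. Here is a minimal sketch of a full freeze pass, assuming 9.3 or later and that every database is connectable (roughly what vacuumdb --all --freeze applies):

-- run in EACH database of the cluster, template1 included
SET vacuum_multixact_freeze_min_age = 0;
SET vacuum_multixact_freeze_table_age = 0;
VACUUM;
-- the old pg_multixact segments are only removed later,
-- at the next checkpoint
CHECKPOINT;

This is only a sketch, not a guaranteed fix; anything that prevents vacuum from freezing, such as a long-running transaction, would also keep datminmxid from advancing.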
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt