I noticed one minor issue after I had already sent my
previous email.
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -1034,7 +1034,7 @@ GetNewMultiXactId(int nmembers, MultiXactOffset *offset)
if (nextOffset + nmembers < nextOffset)
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
- "MultiXact members would wrap around"));
+ errmsg("MultiXact members would wrap around")));
*offset = nextOffset;
$ $PGBINOLD/pg_controldata -D pgdata
pg_control version number: 1800
Catalog version number: 202510221
...
Latest checkpoint's NextMultiXactId: 10000000
Latest checkpoint's NextMultiOffset: 999995050
Latest checkpoint's oldestXID: 748
...
I tried to measure how long it would take to convert a large number of
segments. Unfortunately, I only have access to a very old machine right
now: it took 7 hours to generate this much data on my old
Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz with 16 GB of RAM.
Here are my rough measurements:
HDD
$ sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
$ time pg_upgrade
...
real 4m59.459s
user 0m19.974s
sys 0m13.640s
SSD
$ sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
$ time pg_upgrade
...
real 4m52.958s
user 0m19.826s
sys 0m13.624s
I plan to get access to more modern hardware and rerun the measurements there.
--
Best regards,
Maxim Orlov.