Thread: Upgrading to v12
-----------------------------
Checking cluster versions ok
SQL command failed
WITH regular_heap (reloid, indtable, toastheap) AS ( SELECT c.oid, 0::oid, 0::oid FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid WHERE relkind IN ('r', 'm') AND ((n.nspname !~ '^pg_temp_' AND n.nspname !~ '^pg_toast_temp_' AND n.nspname NOT IN ('pg_catalog', 'information_schema', 'binary_upgrade', 'pg_toast') AND c.oid >= 16384::pg_catalog.oid) OR (n.nspname = 'pg_catalog' AND relname IN ('pg_largeobject') ))), toast_heap (reloid, indtable, toastheap) AS ( SELECT c.reltoastrelid, 0::oid, c.oid FROM regular_heap JOIN pg_catalog.pg_class c ON regular_heap.reloid = c.oid WHERE c.reltoastrelid != 0), all_index (reloid, indtable, toastheap) AS ( SELECT indexrelid, indrelid, 0::oid FROM pg_catalog.pg_index WHERE indisvalid AND indisready AND indrelid IN (SELECT reloid FROM regular_heap UNION ALL SELECT reloid FROM toast_heap)) SELECT all_rels.*, n.nspname, c.relname, c.relfilenode, c.reltablespace, pg_catalog.pg_tablespace_location(t.oid) AS spclocation FROM (SELECT * FROM regular_heap UNION ALL SELECT * FROM toast_heap UNION ALL SELECT * FROM all_index) all_rels JOIN pg_catalog.pg_class c ON all_rels.reloid = c.oid JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid LEFT OUTER JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid ORDER BY 1;
ERROR: could not access status of transaction 22316920
DETAIL: Could not read from file "pg_clog/0015" at offset 73728: No error.
On 11/11/22 12:43, Brad White wrote:
> I'm upgrading from v9.4 to v12.10 as a half step to 15.
>
> Q1: How do I tell it which database to upgrade?
> I only need the primary.
> Not the half dozen restored copies.
> Or do I need to detach everything I don't want copied?

1) If you are using pg_upgrade then it only works on the entire cluster, not individual databases.

2) This is not SQLite; there is no detaching of databases.

3) If you want to move a single database then you are looking at pg_dump/pg_restore or logical replication.

> Q2: I get this error, and then at the end, it says "No error."

What was the complete pg_upgrade command you used?

> Performing Consistency Checks
> -----------------------------
> Checking cluster versions ok
> SQL command failed
> WITH regular_heap (reloid, indtable, toastheap) AS ( SELECT c.oid,
> 0::oid, 0::oid FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace
> n ON c.relnamespace = n.oid WHERE relkind IN ('r', 'm') AND
> ((n.nspname !~ '^pg_temp_' AND n.nspname !~ '^pg_toast_temp_'
> AND n.nspname NOT IN ('pg_catalog', 'information_schema',
> 'binary_upgrade', 'pg_toast') AND c.oid >=
> 16384::pg_catalog.oid) OR (n.nspname = 'pg_catalog' AND
> relname IN ('pg_largeobject') ))), toast_heap (reloid, indtable,
> toastheap) AS ( SELECT c.reltoastrelid, 0::oid, c.oid FROM
> regular_heap JOIN pg_catalog.pg_class c ON regular_heap.reloid =
> c.oid WHERE c.reltoastrelid != 0), all_index (reloid, indtable,
> toastheap) AS ( SELECT indexrelid, indrelid, 0::oid FROM
> pg_catalog.pg_index WHERE indisvalid AND indisready AND indrelid
> IN (SELECT reloid FROM regular_heap UNION ALL
> SELECT reloid FROM toast_heap)) SELECT all_rels.*, n.nspname,
> c.relname, c.relfilenode, c.reltablespace,
> pg_catalog.pg_tablespace_location(t.oid) AS spclocation FROM (SELECT *
> FROM regular_heap UNION ALL SELECT * FROM toast_heap
> UNION ALL SELECT * FROM all_index) all_rels JOIN
> pg_catalog.pg_class c ON all_rels.reloid = c.oid JOIN
> pg_catalog.pg_namespace n ON c.relnamespace = n.oid LEFT OUTER
> JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid ORDER BY 1;
> ERROR: could not access status of transaction 22316920
> DETAIL: Could not read from file "pg_clog/0015" at offset 73728: No error.

--
Adrian Klaver
adrian.klaver@aklaver.com
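For reference, a pg_upgrade invocation on Windows typically looks roughly like the line below. The paths, ports, and directory layout are illustrative placeholders rather than the command actually used in this thread; --check runs only the consistency checks without migrating anything.

"C:\Program Files\PostgreSQL\12\bin\pg_upgrade.exe" --old-bindir "C:\Program Files\PostgreSQL\9.4\bin" --new-bindir "C:\Program Files\PostgreSQL\12\bin" --old-datadir "C:\Program Files\PostgreSQL\9.4\data" --new-datadir "C:\Program Files\PostgreSQL\12\data" --old-port 5432 --new-port 5433 --check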
Brad White <b55white@gmail.com> writes:
> I deleted all the other DBs and left only the primary.
> Still getting the same error message, ending with
> ERROR: could not access status of transaction 22316920
> DETAIL: Could not read from file "pg_clog/0015" at offset 73728: No error.

That's a pretty clear indication of data corruption. pg_upgrade isn't
going to be able to work with a corrupted source database, so you'll
have to do something to clear that before you can get anywhere.
There's some advice about dealing with that here:

https://wiki.postgresql.org/wiki/Corruption

but in general the news is not going to be good.

regards, tom lane
On 11/11/22 13:11, Brad White wrote:
> I deleted all the other DBs and left only the primary.
> Still getting the same error message, ending with
>
> ERROR: could not access status of transaction 22316920
> DETAIL: Could not read from file "pg_clog/0015" at offset 73728: No error.
Can you do a pg_dump of that database?
--
Adrian Klaver
adrian.klaver@aklaver.com
Brad White <b55white@gmail.com> writes:
>> Can you do a pg_dump of that database?
> Yes. No visible problems. No errors reported.

Well, that's quite interesting, because pg_dump ought to read all the
same catalogs that pg_upgrade is failing to read. So I'm not quite sure
what's going on there. Nonetheless, your path forward is clear: use
pg_dump (or better pg_dumpall) and then load the output into a freshly
initdb'd v12 installation. It'll be a bit slower than the pg_upgrade
way, but it'll work.

regards, tom lane
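In concrete terms that path is roughly the two commands below; the directories, ports, and file name are placeholders, not values from this thread. Note that pg_dumpall produces plain SQL, so it is reloaded with psql rather than pg_restore.

"C:\Program Files\PostgreSQL\12\bin\pg_dumpall.exe" --host localhost --port 5432 --username postgres --file "C:\Temp\cluster94.sql"
"C:\Program Files\PostgreSQL\12\bin\psql.exe" --host localhost --port 5433 --username postgres --dbname postgres --file "C:\Temp\cluster94.sql"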
On 11/11/22 14:06, Brad White wrote:
> > Can you do a pg_dump of that database?
> Yes. No visible problems. No errors reported.

From your original post, what did "Not the half dozen restored copies" mean?

In other words define the restore process.

--
Adrian Klaver
adrian.klaver@aklaver.com
At C:\Temp\Restore12.ps1:36 char:2
+ &$prestore -h $phost -p $pport -U postgres --no-password --clean --if-exists -- ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (pg_restore: while INITIALIZING::String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
pg_restore: error:
could not execute query: ERROR: unrecognized configuration parameter "idle_in_transaction_session_timeout"
Command was: SET idle_in_transaction_session_timeout = 0;
pg_restore: error:
could not execute query: ERROR: unrecognized configuration parameter "row_security"
Command was: SET row_security = off;
pg_restore: WARNING: column "Button2" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: WARNING: column "Button3" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: WARNING: column "Button4" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: WARNING: column "Button5" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: WARNING: column "Button6" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: WARNING: column "Button7" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
pg_restore: warning: errors ignored on restore: 2
I'm practicing on our Dev server, so I can blow this away and reload at any time.
Are there any utilities to check for corruption on my Prod server in v9.4.1?
All my backups are done with pg_dump.exe, so that's where this database came from in the first place.
So we know that pg_dump.exe works on Prod at least.

On Fri, Nov 11, 2022 at 3:17 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 11/11/22 13:11, Brad White wrote:
> I deleted all the other DBs and left only the primary.
> Still getting the same error message, ending with
>
> ERROR: could not access status of transaction 22316920
> DETAIL: Could not read from file "pg_clog/0015" at offset 73728: No error.
Can you do a pg_dump of that database?
--
Adrian Klaver
adrian.klaver@aklaver.com
On 11/11/22 18:41, Brad White wrote:
> > From your original post, what did "Not the half dozen restored copies"
> mean?
> Over time, we've restored multiple copies for testing and reproducing
> various issues.
>
> I'm only trying to set up replication on one of those copies.
>
> > In other words define the restore process.
>
> Command to back up the database:
> "C:\Program Files\PostgreSQL\9.4\bin\pg_dump.exe" --host localhost
> --port 5432 --username "postgres" --no-password --format custom --blobs
> --verbose --file "DB_"%datestr%"\DB_"%datestr%.backup "DB"
>
> Restore:
> $pgdir\pg_restore.exe -h localhost -p 5433 -U postgres --no-password
> --clean --if-exists --format=custom --dbname="DB_test"
> "C:\Temp\DB_20220922_2300\DB_20220922_2300.backup"
Alright I am confused. You said you had multiple copies of the database
on one cluster.
The above, though, shows you restoring to a different
cluster (5433) than the cluster (5432) you dumped from.
Also, why
"C:\Program Files\PostgreSQL\9.4\bin\pg_dump.exe"
vs
$pgdir\pg_restore.exe
?
Is $pgdir different from "C:\Program Files\PostgreSQL\9.4\bin\"?
--
Adrian Klaver
adrian.klaver@aklaver.com
On 11/11/22 20:59, Brad White wrote:
> On Fri, Nov 11, 2022, 9:57 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
> Yes. The backup is from production.
> V9.4 is running on 5432 on all servers.
> That particular restore happens to be on the dev server. 5433 is v12.

1) This does not address from your OP:

"I only need the primary.
Not the half dozen restored copies."

And then from follow up post:

"I deleted all the other DBs and left only the primary.
Still getting the same error message, ending with"

How were the restored copies made on the original cluster?

2) For your explanation above, pg_dump from 9.4(5432) to pg_restore 12(5433), the issue would be ...\9.4\bin\pg_dump.exe of 9.4 and pg_restore of said dump file to version 12. When moving up in version you need to use the newer version of pg_dump (...\12\bin\pg_dump.exe) to dump the 9.4 instance and then the version 12 pg_restore to the 12 instance. Both programs are backwards compatible, not forwards compatible.

--
Adrian Klaver
adrian.klaver@aklaver.com
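Spelled out, that means pointing the version-12 client binaries at the 9.4 server when dumping. Something like the following sketch, with placeholder paths, ports, and database names:

"C:\Program Files\PostgreSQL\12\bin\pg_dump.exe" --host localhost --port 5432 --username postgres --format custom --blobs --file "C:\Temp\DB.backup" "DB"
"C:\Program Files\PostgreSQL\12\bin\pg_restore.exe" --host localhost --port 5433 --username postgres --dbname "DB_test" "C:\Temp\DB.backup"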
(If the client lets you, of course. I had servers stuck on 8.4.17 and 9.2.7 that were only upgraded because PCI auditors were going to tell my client's client, and that scared my client. Now they're on 9.6.24...)
Angular momentum makes the world go 'round.
> How were the restored copies made on the original cluster?

I guess I'm not understanding the confusion here. They were restored with the same script but to a different DB name and with the 9.4 executables.
In fact, that was why the script was originally written, so we could restore and test the backups.
I've since hijacked it and used it to restore to other versions.
On 11/11/22 20:59, Brad White wrote:
On Fri, Nov 11, 2022, 9:57 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
Yes. The backup is from production.
V9.4 is running on 5432 on all servers.
That particular restore happens to be on the dev server. 5433 is v12.
1) This does not address from your OP:
"I only need the primary.
Not the half dozen restored copies."
And then from follow up post:
"I deleted all the other DBs and left only the primary.
Still getting the same error message, ending with"
How were the restored copies made on the original cluster?
2) For your explanation above, pg_dump from 9.4(5432) to pg_restore 12(5433) the issue would be ...\9.4\bin\pg_dump.exe of 9.4 and pg_restore of said dump file to version 12. When moving up in version you need to use the newer version of pg_dump(...\12\bin\pg_dump.exe) to dump the 9.4 instance and then the version 12 pg_restore to the 12 instance. Both programs are backwards compatible, not forwards compatible.
Unless there's some bug (you're running a really old version of 9.4), you might be able to get away with using the 9.4 binary. (I successfully migrated 8.4.17 to 9.6.6 and 9.2.7 databases to 9.6.24, and 9.6.24 databases to 13.8 using the old pg_dump binaries when it was impractical to install the newer Postgresql in parallel with the old binaries.)
Angular momentum makes the world go 'round.
Ron <ronljohnsonjr@gmail.com> writes:
> On 11/11/22 23:09, Adrian Klaver wrote:
>> 2) For your explanation above, pg_dump from 9.4(5432) to pg_restore
>> 12(5433) the issue would be ...\9.4\bin\pg_dump.exe of 9.4 and pg_restore
>> of said dump file to version 12. When moving up in version you need to use
>> the newer version of pg_dump(...\12\bin\pg_dump.exe) to dump the 9.4
>> instance and then the version 12 pg_restore to the 12 instance. Both
>> programs are backwards compatible, not forwards compatible.

> Unless there's some bug (you're running a /really/ old version of 9.4), you
> might be able to get away with using the 9.4 binary.

Yeah. The recommendation to use the later version's pg_dump for a
migration is in the nature of "this is best practice", not "this is
the only way that will work". The argument for it is that the older
pg_dump might have bugs that are fixed in the newer version. But
such bugs are rare, so usually it'll work fine to use the older one.
We do endeavor to make sure that older dump output will load into
newer versions, because in disaster-recovery scenarios an older
dump might be all you have.

regards, tom lane
On 11/12/22 18:18, Brad White wrote:
> > How were the restored copies made on the original cluster?
> I guess I'm not understanding the confusion here. They were restored
> with the same script but to a different DB name and with the 9.4
> executables.
> In fact, that was why the script was originally written, so we could
> restore and test the backups.

The confusion came from this:

"I only need the primary.
Not the half dozen restored copies."

I initially assumed, correctly as it turns out, that they all existed on the production cluster and were duplicates.

Then you posted:

"Over time, we've restored multiple copies for testing and reproducing various issues.
I'm only trying to set up replication on one of those copies."

and showed a process where they were being restored to other clusters. At that point I was lost as to what copies meant and where they came from.

Not sure that this is actually pertinent to the problem at hand, I was just trying to nail down the moving pieces.

> I've since hijacked it and used it to restore to other versions.

--
Adrian Klaver
adrian.klaver@aklaver.com
On 11/12/22 22:07, Tom Lane wrote:
> Ron <ronljohnsonjr@gmail.com> writes:
>> On 11/11/22 23:09, Adrian Klaver wrote:
>>> 2) For your explanation above, pg_dump from 9.4(5432) to pg_restore
>>> 12(5433) the issue would be ...\9.4\bin\pg_dump.exe of 9.4 and pg_restore
>>> of said dump file to version 12. When moving up in version you need to use
>>> the newer version of pg_dump(...\12\bin\pg_dump.exe) to dump the 9.4
>>> instance and then the version 12 pg_restore to the 12 instance. Both
>>> programs are backwards compatible, not forwards compatible.
>
>> Unless there's some bug (you're running a /really/ old version of 9.4), you
>> might be able to get away with using the 9.4 binary.
>
> Yeah. The recommendation to use the later version's pg_dump for a
> migration is in the nature of "this is best practice", not "this is
> the only way that will work". The argument for it is that the older
> pg_dump might have bugs that are fixed in the newer version. But
> such bugs are rare, so usually it'll work fine to use the older one.
> We do endeavor to make sure that older dump output will load into
> newer versions, because in disaster-recovery scenarios an older
> dump might be all you have.

I stand corrected. I should have read the Notes here:
https://www.postgresql.org/docs/current/app-pgdump.html

> regards, tom lane

--
Adrian Klaver
adrian.klaver@aklaver.com
On 11/18/22 16:05, Brad White wrote:
> tl;dr How do I turn up the logging so I can see what is failing?
>
> In our quest to get replication working, we are upgrading from v9.4 to
> v12.10.
>
> Access365 via ODBC
> Driver = "PostgreSQL Unicode" v13.02, Date 9/22/2021
>
> In testing the app against v12, I find this issue:
>
> On updating a record, I set values on several fields and call
> recordSet.Update after each one.
> After updating one particular field, calling Update gives
>
> --> The Microsoft Access database engine stopped the process because you
> and another user are attempting to change the same data at the same time.
>
> Code in question:
> rst!Update <-- success
> rst!QtyDeliverable = rst!Quantity
> rst.Update <-- fails here
> The wisdom of the internet says that this is most likely with a BIT
> field that has null that Access can't handle. But that isn't the case
> here. Both are int4 fields and both have values before the update.

That wisdom: https://odbc.postgresql.org/faq.html#6.4 also says

"PostgreSQL 7.2 and above can cause similar problems but for different reasons:
Contributed by Sam Hokin (sam@ims.net)

The new PostgreSQL timestamp data type defaults to microsecond precision. This means that timestamp values are stored like 2002-05-22 09:00:00.123456-05. However, Access does not support the extra precision, so the value that Access uses is 2002-05-22 09:00:00-05. When one tries to update a record, one gets the error message above because the value that Access uses in its UPDATE query does not match the value in the PostgreSQL table, similar to the NULL vs. empty string conflict that is already reported in this FAQ entry."

The above is the problem I usually ran into with Access and Postgres and updating.

Is there a timestamp field in the record you are updating?

> For context, this is after we've copied the order record. Then we copy
> this Order Item record. Then we copy all the child records. Lastly, we
> are updating a few fields in this Order Item record and the update fails.
>
> This only fails against PostgreSQL 12.10, compiled by Visual C++ build
> 1914, 64-bit
> Succeeds against PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit
>
> I don't see anything in the data\logs folder that looks relevant. Where
> else should I look?

--
Adrian Klaver
adrian.klaver@aklaver.com
On 11/18/2022 6:34 PM, Adrian Klaver wrote:
> On 11/18/22 16:05, Brad White wrote:
>>
>> --> The Microsoft Access database engine stopped the process because
>> you and another user are attempting to change the same data at the
>> same time.
>>
>> Code in question:
>> rst!Update <-- success
>> rst!QtyDeliverable = rst!Quantity
>> rst.Update <-- fails here
>> The wisdom of the internet says that this is most likely with a BIT
>> field that has null that Access can't handle. But that isn't the case
>> here. Both are int4 fields and both have values before the update.
>
> The new PostgreSQL timestamp data type defaults to microsecond
> precision. This means that timestamp values are stored like 2002-05-22
> 09:00:00.123456-05. However, Access does not support the extra
> precision, so the value that Access uses is 2002-05-22 09:00:00-05.
> When one tries to update a record, one gets the error message above
> because the value that Access uses in its UPDATE query does not match
> the value in the PostgreSQL table, similar to the NULL vs. empty
> string conflict that is already reported in this FAQ entry.
>
> The above is the problem I usually ran into with Access and Postgres
> and updating.
>
> Is there a timestamp field in the record you are updating?

UPDATE:

Yes, there are 5 timestamp fields.

It seems unlikely to be the culprit for 3 reasons.

1) It worked fine in v9.4
2) It worked the previous 4 times I saved that record in v12.
3) As the data came from Access, there is no data in any of the fields in the last three decimal places.
ex. 45.234000

But as it is the best lead I have, and it could still be the culprit until proven otherwise, I'm working to convert those 5 fields from timestamp to timestamp(3).

Of course, PG doesn't allow editing a table with dependent views.

Which means that I'm attempting to modify a script that will let me save, drop, and restore the views.

Of course, PG coerces all table and field names to lowercase unless quoted.

So I have to figure out how to recognize all table names and add quotes.

This table is core to the app, so a LOT of the views reference it.

I may not be done anytime soon.

Have a good vacation!
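For what it's worth, the per-column conversion itself is a one-liner; the work is in the dependent views, since PostgreSQL refuses to alter the type of a column that a view references. A sketch with made-up table and column names:

-- reduce one column to millisecond precision (names are hypothetical; repeat per column)
ALTER TABLE "Order Items"
    ALTER COLUMN "AddDate" TYPE timestamp(3)
    USING date_trunc('milliseconds', "AddDate");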
On 11/22/22 12:53, Brad White wrote:
> On 11/18/2022 6:34 PM, Adrian Klaver wrote:
>> On 11/18/22 16:05, Brad White wrote:
>>>
>>> --> The Microsoft Access database engine stopped the process because
>>> you and another user are attempting to change the same data at the
>>> same time.
>>>
>>> Code in question:
>>> rst!Update <-- success
>>> rst!QtyDeliverable = rst!Quantity
>>> rst.Update <-- fails here
>>> The wisdom of the internet says that this is most likely with a BIT
>>> field that has null that Access can't handle. But that isn't the case
>>> here. Both are int4 fields and both have values before the update.
>>
>> The new PostgreSQL timestamp data type defaults to microsecond
>> precision. This means that timestamp values are stored like 2002-05-22
>> 09:00:00.123456-05. However, Access does not support the extra
>> precision, so the value that Access uses is 2002-05-22 09:00:00-05.
>> When one tries to update a record, one gets the error message above
>> because the value that Access uses in its UPDATE query does not match
>> the value in the PostgreSQL table, similar to the NULL vs. empty
>> string conflict that is already reported in this FAQ entry.
>>
>> The above is the problem I usually ran into with Access and Postgres
>> and updating.
>>
>> Is there a timestamp field in the record you are updating?
>
> UPDATE:
>
> Yes, there are 5 timestamp fields.
>
> It seems unlikely to be the culprit for 3 reasons.
>
> 1) It worked fine in v9.4
> 2) It worked the previous 4 times I saved that record in v12.
> 3) As the data came from Access, there is no data in any of the fields
> in the last three decimal places.
> ex. 45.234000
>
> But as it is the best lead I have, and it could still be the culprit
> until proven otherwise, I'm working to convert those 5 fields from
> timestamp to timestamp(3).

It is worse than that:

https://learn.microsoft.com/en-us/office/troubleshoot/access/store-calculate-compare-datetime-data

Valid time values range from .0 (00:00:00) to .99999 (23:59:59)

So no fractional seconds.

Before you do any of the below I would set up a test table with timestamps and verify they are the issue.

> Of course, PG doesn't allow editing a table with dependent views.
>
> Which means that I'm attempting to modify a script that will let me
> save, drop, and restore the views.
>
> Of course, PG coerces all table and field names to lowercase unless quoted.
>
> So I have to figure out how to recognize all table names and add quotes.
>
> This table is core to the app, so a LOT of the views reference it.
>
> I may not be done anytime soon.
>
> Have a good vacation!

--
Adrian Klaver
adrian.klaver@aklaver.com
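A minimal version of that test, using throwaway names, might look like this:

-- throwaway table: one integer field plus a timestamp with sub-second precision
CREATE TABLE access_ts_test (
    "ID" serial PRIMARY KEY,
    "Quantity" int4,
    "AddDate" timestamp DEFAULT clock_timestamp()::timestamp
);
INSERT INTO access_ts_test ("Quantity") VALUES (1);
-- link access_ts_test into Access via the ODBC driver, change Quantity there,
-- and see whether the "attempting to change the same data" error appears;
-- then repeat with "AddDate" declared as timestamp(0) for comparison.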
Brad White <b55white@gmail.com> writes:
> I tried to run initdb after re-installing pg 12
> using postgresql-12.10-2-windows-x64.exe.
> But the runas I'm using to execute it as pguser seems to be swallowing all
> the output, so I can't see any errors.
> I was able to run pg_checksums and get those enabled.
> Is there anything else I want from initdb?

If you can connect to the new installation, then you're done with
that part, and can get on with the dump-and-restore part.

regards, tom lane
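For reference, an initdb run that enables checksums from the start looks roughly like the line below; the data directory, account, and encoding are placeholders. With --data-checksums at initdb time there is no need for a separate pg_checksums --enable pass afterwards.

"C:\Program Files\PostgreSQL\12\bin\initdb.exe" --pgdata "C:\Program Files\PostgreSQL\12\data" --username postgres --encoding UTF8 --data-checksums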
On 11/28/22 17:02, Tom Lane wrote:
> Brad White <b55white@gmail.com> writes:
>> I tried to run initdb after re-installing pg 12
>> using postgresql-12.10-2-windows-x64.exe.
>> But the runas I'm using to execute it as pguser seems to be swallowing all
>> the output, so I can't see any errors.
>> I was able to run pg_checksums and get those enabled.
>> Is there anything else I want from initdb?
>
> If you can connect to the new installation, then you're done with
> that part, and can get on with the dump-and-restore part.

Yeah this is coming from this SO question:

https://stackoverflow.com/questions/74607304/what-results-should-i-see-from-running-initdb#comment131694419_74607304

Brad was double clutching on the initdb.

> regards, tom lane

--
Adrian Klaver
adrian.klaver@aklaver.com
I needed to be able to compare the contents of a table across several databases and clusters.
Since it took me several hours to write this, I thought I'd share it with you, in case anyone needs a starting point for anything similar.
BACKGROUND DETAILS:
These databases are all on sandbox servers, restored backups from production.
We are querying the most recent entry in the log table. You'll see that some have a date of 11/1 (from the backup) and some have a more recent date. This is a problem as I've run the client against all the databases. We conclude that the log entries are not all going to the "current" database. I needed an easy way to see where they *were* going.
IMPLEMENTATION DETAILS:
In this case, all the databases are on the same server and the same DB name but different ports. But you can obviously modify the connect string to hit any combination you need.
This assumes a view exists with the same name on each database. 'LatestLogEntry' in our case.
As you'll see in the results, we are running
V9.4 on port 5432
V10 on 5433
V11 on 5434
V12 on 5435
It raises a NOTICE at the end to print out the query just for debugging purposes.
Here is the text for LatestLogEntry
----
-- retrieve the most recent log entry
SELECT current_setting('port'::text) AS "Port",
current_setting('server_version'::text) AS "Version",
"System Log"."AddDate"
FROM "System Log"
ORDER BY "System Log"."AddDate" DESC
LIMIT 1
----
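If you need to create that view yourself, it is just the query above wrapped in CREATE VIEW; the quoted table and column names are the same ones assumed throughout this example:
----
CREATE OR REPLACE VIEW "LatestLogEntry" AS
  SELECT current_setting('port'::text) AS "Port",
         current_setting('server_version'::text) AS "Version",
         "System Log"."AddDate"
  FROM "System Log"
  ORDER BY "System Log"."AddDate" DESC
  LIMIT 1;
----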
And the text for our routine to retrieve results from across clusters:
----
CREATE EXTENSION IF NOT EXISTS dblink;
BEGIN;
DO
$$
DECLARE
    conn_template TEXT;
    conn_string9 TEXT;
    conn_string10 TEXT;
    conn_string11 TEXT;
    conn_string12 TEXT;
    _query TEXT;
    _cursor CONSTANT refcursor := '_cursor';
BEGIN
    conn_template = 'user={user} password={password} dbname={DB} port=';
    conn_string9 = conn_template || 5432;
    conn_string10 = conn_template || 5433;
    conn_string11 = conn_template || 5434;
    conn_string12 = conn_template || 5435;
    _query := 'select "Port", "Version", "AddDate" from dblink(''' || conn_string9 || ''', ''(select * from "LatestLogEntry")'') as t1("Port" integer, "Version" text, "AddDate" timestamp)' ||
        ' UNION select "Port", "Version", "AddDate" from dblink(''' || conn_string10 || ''', ''(select * from "LatestLogEntry")'') as t1("Port" integer, "Version" text, "AddDate" timestamp)' ||
        ' UNION select "Port", "Version", "AddDate" from dblink(''' || conn_string11 || ''', ''(select * from "LatestLogEntry")'') as t1("Port" integer, "Version" text, "AddDate" timestamp)' ||
        ' UNION select "Port", "Version", "AddDate" from dblink(''' || conn_string12 || ''', ''(select * from "LatestLogEntry")'') as t1("Port" integer, "Version" text, "AddDate" timestamp) ORDER BY "Port";';
    OPEN _cursor FOR EXECUTE _query;
    RAISE NOTICE '%', _query;
END
$$;
FETCH ALL FROM _cursor ;
COMMIT;
----
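One way to run the whole thing, after filling in the {user}/{password}/{DB} placeholders in conn_template, is a single psql session (the script file name here is arbitrary); the DO block and the FETCH must run in the same session because the cursor lives inside the transaction opened by the script's BEGIN:

"C:\Program Files\PostgreSQL\12\bin\psql.exe" -h localhost -p 5435 -U postgres -d DB -f compare_latest_log.sql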
Results:
----
Port Version AddDate
5432 9.4.1 2022-12-09 16:44:08.091
5433 10.20 2022-11-01 17:01:33.322
5434 11.15 2022-12-16 12:43:31.679973
5435 12.10 2022-11-01 17:01:33.322
----