Re: Upgrade from PG12 to PG - Mailing list pgsql-admin

From Scott Ribe
Subject Re: Upgrade from PG12 to PG
Date
Msg-id F6998B6F-11FA-4A9F-954B-E4388B835483@elevated-dev.com
In response to Re: Upgrade from PG12 to PG  (Jef Mortelle <jefmortelle@gmail.com>)
Responses Re: Upgrade from PG12 to PG
List pgsql-admin
> On Jul 20, 2023, at 11:05 AM, Jef Mortelle <jefmortelle@gmail.com> wrote:
>
> so, yes pg_upgrade starts a pg_dump session,

Only for the schema, which you can see in the output you posted.
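
For reference, that schema pass is roughly what you'd get by hand with something like this (a sketch, not pg_upgrade's literal invocation; "mydb" is a placeholder):

  pg_dumpall --globals-only -f globals.sql
  pg_dump --schema-only --binary-upgrade --format=custom -f mydb_schema.dump mydb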

> Server is a VM server; my VM has 64GB, SUSE SLES, attached to a SAN with SSD disks (HP 3PAR)

VM + SAN can perform well, or it can introduce all sorts of issues: a busy neighbor, poor VM drivers, a SAN that is only
fast for large sequential writes, etc.
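
If you want a quick sequential-write sanity check on the SAN volume, something like this works on Linux (the path is a
placeholder; oflag=direct bypasses the page cache so you measure the device rather than RAM):

  dd if=/dev/zero of=/san/mount/ddtest bs=1M count=10240 oflag=direct
  rm /san/mount/ddtest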

> On Jul 20, 2023, at 11:22 AM, Ron <ronljohnsonjr@gmail.com> wrote:
>
> Note also that there's a known issue with pg_upgrade and millions of Large Objects (not bytea or text, but lo_* columns).


Good to know, but it would be weird to have millions of large objects in a 1TB database. (Then again, I found an old
post about 3M large objects taking 5.5GB...)
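
You can count them directly in psql (pg_largeobject_metadata has existed since 9.0, so it's fine on PG12):

  SELECT count(*) FROM pg_largeobject_metadata;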

Try (rough sketches of each below):
  time a run of that pg_dump command, then time a run of pg_restore of the schema-only dump
  time a file copy of the db to a location on the SAN--the purpose is not to produce a usable backup, but rather to
check IO throughput
  use the --link option on pg_upgrade
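
Something like the following, where bin/data paths, mount points, and database names are placeholders for your layout
(and 15 stands in for whatever target version you are moving to):

  # 1. time the schema-only dump and a restore of it
  time pg_dump --schema-only --format=custom -f schema.dump mydb
  time pg_restore --dbname=mydb_scratch schema.dump

  # 2. raw file copy onto the SAN, purely to gauge IO throughput
  time cp -a /var/lib/pgsql/12/data /san/mount/iotest

  # 3. pg_upgrade with hard links instead of copying data files
  pg_upgrade --link \
    -b /usr/lib/postgresql/12/bin -B /usr/lib/postgresql/15/bin \
    -d /var/lib/pgsql/12/data -D /var/lib/pgsql/15/data

Note that --link requires the old and new data directories to be on the same filesystem.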

Searching on this subject turns up some posts about slow restore of large objects under much older versions of PG--not
sure if any of it still applies.

Finally, given the earlier confusion between text and large objects--your apparent belief that text columns correspond
to large objects, and that text can hold more data than varchar--it's worth asking: do you actually need large objects
at all? (Is this even under your control?)
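
If it is under your control, a rough way to see whether the schema even references large objects is to look for oid
(or contrib lo) columns in user tables--only a hint, since an oid column doesn't have to point at a large object. In
psql:

  SELECT n.nspname, c.relname, a.attname, t.typname
  FROM pg_attribute a
  JOIN pg_class c     ON c.oid = a.attrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_type t      ON t.oid = a.atttypid
  WHERE t.typname IN ('oid', 'lo')
    AND c.relkind = 'r'
    AND a.attnum > 0
    AND NOT a.attisdropped
    AND n.nspname NOT IN ('pg_catalog', 'information_schema');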

