Thread: pg_upgrade and schema complexity...

From: Ron
Date:
Ran into this when upgrading from 13.11 to 15.3...

The pg_restore phase failed with "ERROR: out of shared memory", and 
recommended that I increase max_locks_per_transaction.  Doing so let the 
process run to completion.
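For anyone hitting the same error, a sketch of that workaround (the value 256 and the paths are illustrative, not necessarily what I used; the default for max_locks_per_transaction is 64):

```shell
# Placeholder paths for the old/new binaries and data directories
BIN13=/usr/pgsql-13/bin
BIN15=/usr/pgsql-15/bin
DATA13=/var/lib/pgsql/13/data
DATA15=/var/lib/pgsql/15/data

# Option 1: raise the limit in the new cluster's postgresql.conf
echo "max_locks_per_transaction = 256" >> "$DATA15/postgresql.conf"

# Option 2: pass it to the new server only for the duration of the upgrade,
# via pg_upgrade's -O (options for the new postgres command)
pg_upgrade -b "$BIN13" -B "$BIN15" -d "$DATA13" -D "$DATA15" \
           -O "-c max_locks_per_transaction=256"
```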

It took 12.5 minutes to upgrade a 13GB instance.  Soon after, I upgraded a 
78GB cluster, and it only took 3.1 minutes.

(Both are VMs (same number of CPUs and RAM) connected to the same SAN.)

A "pg_dump --schema-only" of the two systems shows that the small-but-slow 
schema is 5.9M lines.

Is this to be expected of such a huge schema?

-- 
Born in Arizona, moved to Babylonia.



Re: pg_upgrade and schema complexity...

From: Adrian Klaver
Date:
On 6/2/23 17:44, Ron wrote:
> Ran into this when upgrading from 13.11 to 15.3...
> 
> The pg_restore phase failed with "ERROR: out of shared memory", and 
> recommended that I increase max_locks_per_transaction.  Doing so let the 
> process run to completion.
> 
> It took 12.5 minutes to upgrade a 13GB instance.  Soon after, I upgraded 
> a 78GB cluster, and it only took 3.1 minutes.

Where/how did you measure those sizes?

> 
> (Both are VMs (same number of CPUs and RAM) connected to the same SAN.)
> 
> A "pg_dump --schema-only" of the two systems shows that the 
> small-but-slow schema is 5.9M lines.

Anything special you are doing in this cluster to create all those lines?

What is the line count for the other instance?

> 
> Is this to be expected of such a huge schema?
> 

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: pg_upgrade and schema complexity...

From: Ron
Date:
On 6/2/23 19:58, Adrian Klaver wrote:
> On 6/2/23 17:44, Ron wrote:
>> Ran into this when upgrading from 13.11 to 15.3...
>>
>> The pg_restore phase failed with "ERROR: out of shared memory", and 
>> recommended that I increase max_locks_per_transaction. Doing so let the 
>> process run to completion.
>>
>> It took 12.5 minutes to upgrade a 13GB instance.  Soon after, I upgraded 
>> a 78GB cluster, and it only took 3.1 minutes.
>
> Where/how did you measure those sizes?

Does it really matter?

START_SECS=$(date +"%s")
pg_upgrade ...
FINISH_SECS=$(date +"%s")
ET=`echo "scale=2;(${FINISH_SECS} - ${START_SECS})/60" | bc`
date +"%F %T pg_upgrade finished.  Elapsed time: ${ET} minutes."

(Text copied between air-gapped computers, so there might be errors.)

>>
>> (Both are VMs (same number of CPUs and RAM) connected to the same SAN.)
>>
>> A "pg_dump --schema-only" of the two systems shows that the 
>> small-but-slow schema is 5.9M lines.
>
> Anything special you are doing in this cluster to create all those lines?

I do nothing; the schema is provided by the vendor.

>
> What is the line count for the other instance?

227K rows.

>>
>> Is this to be expected of such a huge schema?
>>

-- 
Born in Arizona, moved to Babylonia.



Re: pg_upgrade and schema complexity...

From: Christophe Pettus
Date:

> On Jun 2, 2023, at 17:44, Ron <ronljohnsonjr@gmail.com> wrote:
> Is this to be expected of such a huge schema?

pg_upgrade time is pretty much proportional to the number of database objects 
in the schema, so a much larger schema taking much longer is to be expected.
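A rough way to compare the two clusters on that axis (a sketch; pg_class counts indexes, sequences, TOAST tables, etc. as well as tables, so the number is only approximate, and it has to be run in each database of each cluster):

```shell
# Approximate per-database object count; connection options omitted
psql -d mydb -Atc "SELECT count(*) FROM pg_class;"
```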


Re: pg_upgrade and schema complexity...

From: Adrian Klaver
Date:
On 6/2/23 18:06, Ron wrote:
> On 6/2/23 19:58, Adrian Klaver wrote:
>> On 6/2/23 17:44, Ron wrote:
>>> Ran into this when upgrading from 13.11 to 15.3...
>>>
>>> The pg_restore phase failed with "ERROR: out of shared memory", and 
>>> recommended that I increase max_locks_per_transaction. Doing so let 
>>> the process run to completion.
>>>
>>> It took 12.5 minutes to upgrade a 13GB instance.  Soon after, I 
>>> upgraded a 78GB cluster, and it only took 3.1 minutes.
>>
>> Where/how did you measure those sizes?
> 
> Does it really matter?
> 
> START_SECS=$(date +"%s")
> pg_upgrade ...
> FINISH_SECS=$(date +"%s")
> ET=`echo "scale=2;(${FINISH_SECS} - ${START_SECS})/60" | bc`
> date +"%F %T pg_upgrade finished.  Elapsed time: ${ET} minutes."

Unless I'm mistaken, the above is how the elapsed time was measured. 
I was looking for the procedure for determining the size.

> 
> (Text copied between air-gapped computers, so there might be errors.)
> 
>>>
>>> (Both are VMs (same number of CPUs and RAM) connected to the same SAN.)
>>>
>>> A "pg_dump --schema-only" of the two systems shows that the 
>>> small-but-slow schema is 5.9M lines.
>>
>> Anything special you are doing in this cluster to create all those lines?
> 
> I do nothing; the schema is provided by the vendor.

Alright so it is not your design, but you do have an idea of what is in 
the database correct?


> 
>>
>> What is the line count for the other instance?
> 
> 227K rows.
> 
>>>
>>> Is this to be expected of such a huge schema?
>>>
> 

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: pg_upgrade and schema complexity...

From: Ron
Date:
On 6/2/23 21:22, Adrian Klaver wrote:
> On 6/2/23 18:06, Ron wrote:
>> On 6/2/23 19:58, Adrian Klaver wrote:
>>> On 6/2/23 17:44, Ron wrote:
>>>> Ran into this when upgrading from 13.11 to 15.3...
>>>>
>>>> The pg_restore phase failed with "ERROR: out of shared memory", and 
>>>> recommended that I increase max_locks_per_transaction. Doing so let the 
>>>> process run to completion.
>>>>
>>>> It took 12.5 minutes to upgrade a 13GB instance.  Soon after, I 
>>>> upgraded a 78GB cluster, and it only took 3.1 minutes.
>>>
>>> Where/how did you measure those sizes?
>>
>> Does it really matter?
>>
>> START_SECS=$(date +"%s")
>> pg_upgrade ...
>> FINISH_SECS=$(date +"%s")
>> ET=`echo "scale=2;(${FINISH_SECS} - ${START_SECS})/60" | bc`
>> date +"%F %T pg_upgrade finished.  Elapsed time: ${ET} minutes."
>
> Unless I'm mistaken, the above is how the elapsed time was measured. I 
> was looking for the procedure for determining the size.

du -cm -d1 $DATA13
du -cm -d1 $DATA15
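du reports the on-disk footprint of the data directory; the same question can also be asked from inside the server, per database (a sketch, connection options omitted):

```shell
# Size of the current database as Postgres accounts for it
psql -Atc "SELECT pg_size_pretty(pg_database_size(current_database()));"
```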

>
>>
>> (Text copied between air-gapped computers, so there might be errors.)
>>
>>>>
>>>> (Both are VMs (same number of CPUs and RAM) connected to the same SAN.)
>>>>
>>>> A "pg_dump --schema-only" of the two systems shows that the 
>>>> small-but-slow schema is 5.9M lines.
>>>
>>> Anything special you are doing in this cluster to create all those lines?
>>
>> I do nothing; the schema is provided by the vendor.
>
> Alright so it is not your design, but you do have an idea of what is in 
> the database correct?

"In" like the /purpose/ of the database?  Sure.
"In" like the /design/ of the database?  No; that's all on the vendor.

>
>
>>
>>>
>>> What is the line count for the other instance?
>>
>> 227K rows.
>

-- 
Born in Arizona, moved to Babylonia.