Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Justin Pryzby
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id ZqQIxIsaPWO6coil@pryzbyj2023
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Alexander Korotkov <aekorotkov@gmail.com>)
Responses Re: pg_upgrade failing for 200+ million Large Objects
Re: pg_upgrade failing for 200+ million Large Objects
List pgsql-hackers
On Wed, Jul 24, 2024 at 09:17:51AM -0500, Justin Pryzby wrote:
> With partitioning, we have a lot of tables, some of them wide (126
> partitioned tables, 8942 childs, total 1019315 columns).

On Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:
> It would be nice to identify such cases and check which memory contexts are
> growing and why.

I reproduced the problem with this schema:

SELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', '))
FROM (SELECT array_agg(format('i%s int', i)) a FROM generate_series(1,999) i);

SELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s) TO (%s)', i, i, i+1)
FROM generate_series(1,999) i;

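For readers following along, here is a minimal Python sketch (function names hypothetical) of the DDL strings those two queries emit; in psql the generated statements can then be executed with \gexec:

```python
# Sketch of the DDL produced by the format() queries above:
# a 1000-column parent partitioned by range, plus 999 partitions.

def parent_ddl(ncols=999):
    # Mirrors array_agg(format('i%s int', i)) joined with ', '
    cols = ", ".join(f"i{i} int" for i in range(1, ncols + 1))
    return f"CREATE TABLE p(i int, {cols}) PARTITION BY RANGE(i)"

def partition_ddl(i):
    # Mirrors format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s) TO (%s)', i, i, i+1)
    return f"CREATE TABLE t{i} PARTITION OF p FOR VALUES FROM ({i}) TO ({i+1})"

print(parent_ddl(2))
# CREATE TABLE p(i int, i1 int, i2 int) PARTITION BY RANGE(i)
print(partition_ddl(1))
# CREATE TABLE t1 PARTITION OF p FOR VALUES FROM (1) TO (2)
```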
This used over 4 GB of RAM, per top:

3114201 pryzbyj   20   0 5924520   4.2g  32476 T   0.0  53.8   0:27.35 postgres: pryzbyj postgres [local] UPDATE

The large context is:
2024-07-26 15:22:19.280 CDT [3114201] LOG:  level: 1; CacheMemoryContext: 5211209088 total in 50067 blocks; 420688 free (14 chunks); 5210788400 used

Note that there seemed to be no issue when I created 999 equally wide tables without partitioning:

SELECT format('CREATE TABLE t%s (LIKE p)', i) FROM generate_series(1,999) i;

-- 
Justin


