Re: pg_dump sort priority mismatch for large objects - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: pg_dump sort priority mismatch for large objects
Msg-id: 1605840.1752169120@sss.pgh.pa.us
In response to: pg_dump sort priority mismatch for large objects (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: pg_dump sort priority mismatch for large objects
List: pgsql-hackers
Nathan Bossart <nathandbossart@gmail.com> writes:
> On Thu, Jul 10, 2025 at 06:05:26PM +0530, Nitin Motiani wrote:
>> I looked through the history of this to see how this happened and if it
>> could be an existing issue. Prior to a45c78e3284b, dumpLO used to put large
>> objects in SECTION_PRE_DATA. That commit changed dumpLO and also changed
>> addBoundaryDependencies to move DO_LARGE_OBJECT from pre-data to data
>> section. Seems like since then this has been inconsistent with
>> pg_dump_sort.c. I think the change in pg_dump_sort.c should be backported
>> to PG17 & 18 independent of the state of the larger patch.

> +1, if for no other reason than we'll need it to be below PRIO_TABLE_DATA
> for the speed-up-pg_upgrade-with-many-LOs patch [0].  Does anyone see any
> problems with applying something like the following down to v17?

That's clearly an oversight in a45c78e3284b.  I agree that fixing
pg_dump_sort.c to match shouldn't create any functional difficulties.
It might make the topological sort step marginally faster by
reducing the number of ordering violations that have to be fixed.
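[Editor's note: to illustrate why priorities that match the dependency order leave fewer violations for the topological sort to repair, here is a minimal Python sketch. It is not pg_dump's actual code; the object names, priority values, and the single TABLE_DATA → LARGE_OBJECT dependency are simplified stand-ins for the boundary dependencies added in a45c78e3284b.]

```python
def violations(order, deps):
    """Count dependency edges (a, b) -- meaning a must be dumped
    before b -- that the given order gets backwards."""
    pos = {obj: i for i, obj in enumerate(order)}
    return sum(1 for a, b in deps if pos[a] > pos[b])

# Toy catalog: large-object data now belongs after table data.
objects = ["TABLE", "TABLE_DATA", "LARGE_OBJECT", "INDEX"]
deps = [("TABLE", "TABLE_DATA"), ("TABLE_DATA", "LARGE_OBJECT")]

# Old priorities: large objects still sorted as if pre-data.
old_prio = {"TABLE": 1, "LARGE_OBJECT": 2, "TABLE_DATA": 3, "INDEX": 4}
# Fixed priorities: large objects below table data, matching the sections.
new_prio = {"TABLE": 1, "TABLE_DATA": 2, "LARGE_OBJECT": 3, "INDEX": 4}

for name, prio in [("old", old_prio), ("fixed", new_prio)]:
    order = sorted(objects, key=lambda o: prio[o])
    print(name, violations(order, deps))
# With the old priorities the pre-sorted order violates one edge,
# which the topological sort must then fix; with the fixed
# priorities it violates none.
```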

            regards, tom lane
