Re: Statistics Import and Export - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Statistics Import and Export
Date
Msg-id 5ezgltxblbsaui74s6apjt2vqszjxjesufem7ecsgybio4xm43@gmcoxov6evzx
In response to Re: Statistics Import and Export  (Jeff Davis <pgsql@j-davis.com>)
List pgsql-hackers
Hi,

On 2025-03-06 10:07:43 -0800, Jeff Davis wrote:
> On Thu, 2025-03-06 at 12:16 -0500, Andres Freund wrote:
> > I don't follow. We already have the tablenames, schemanames and oids
> > of the
> > to-be-dumped tables/indexes collected in pg_dump, all that's needed
> > is to send
> > a list of those to the server to filter there?
> 
> Would it be appropriate to create a temp table? I wouldn't normally
> expect pg_dump to create temp tables, but I can't think of a major
> reason not to.

It doesn't work on a standby, since temp tables can't be created in a read-only transaction.


> If not, did you have in mind a CTE with a large VALUES expression, or
> just a giant IN() list?

An array, with a server-side unnest(), like we do in a bunch of other
places. E.g.


    /* need left join to pg_type to not fail on dropped columns ... */
    appendPQExpBuffer(q,
                      "FROM unnest('%s'::pg_catalog.oid[]) AS src(tbloid)\n"
                      "JOIN pg_catalog.pg_attribute a ON (src.tbloid = a.attrelid) "
                      "LEFT JOIN pg_catalog.pg_type t "
                      "ON (a.atttypid = t.oid)\n",
                      tbloids->data);

Greetings,

Andres Freund


