Re: Dumping an Extension's Script - Mailing list pgsql-hackers

From: Dimitri Fontaine
Subject: Re: Dumping an Extension's Script
Msg-id: m2sj85r0a1.fsf@2ndQuadrant.fr
In response to: Re: Dumping an Extension's Script (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> That approach seems likely to break things for the hoped-for parallel
> pg_dump feature, though I'm not sure exactly in what way.

Will the parallel dump also resolve dependencies and extension membership
in parallel?

> Beyond that, I think much of the appeal of the extension feature is
> that it dumps as "CREATE EXTENSION hstore;" and nothing more.  That
> allows you to migrate a dump between systems with different but
> compatible versions of the hstore and have things work as intended.

Yes. That's the only use case supported so far: the contrib/ use case.
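
For reference, that is roughly all pg_dump emits for the extension in
that case, with the member objects skipped; taking hstore as the example:

    -- the whole extension, as seen in a dump of a database using hstore
    CREATE EXTENSION IF NOT EXISTS hstore WITH SCHEMA public;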

> I'm not opposed to the idea of being able to make extensions without
> files on disk work ... but I consider it a niche use case; the
> behavior we have right now works well for me and hopefully for others
> most of the time.

I hear you. I'm not doing this in my free time, it's not a hobby: I have
customers who want it badly enough to be willing to sponsor my work here.
I hope that helps in judging whether the use case is a niche or not.

The current extension support has been targeted at a single use case,
because that's how you bootstrap this kind of feature. We have requests
for extensions that will not include any part written in C.

We went around this topic last year; we spent much energy trying to come
up with something easy enough to accept as a first step in that
direction, and the conclusion at the time was that we want to be able to
dump an extension's script. That's what my current patch is all about.


More about use cases. Consider PL/Proxy. (By the way, that should really
get into core and be called a FOREIGN FUNCTION, but let's get back on
topic.) I have customers with between 8 and 256 plproxy partitions, which
means each database upgrade has to reach that many databases.

Now, I've built an automated system that fetches the PL function code
from the staging database area, puts the functions into files depending
on the schema they live in, packages those files into a single script
that can be used by the CREATE EXTENSION command, automatically creates
an upgrade file so that ALTER EXTENSION … UPDATE TO … works, and creates
a bunch of Debian packages out of that (a single Debian source package
that builds as many binary packages as we have extensions).
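
The output follows the usual extension file layout; a sketch with a
hypothetical extension name, myext:

    -- files generated by the packaging system:
    --   myext.control        (default_version = '1.1')
    --   myext--1.0.sql       (the per-schema function files, concatenated)
    --   myext--1.0--1.1.sql  (the generated upgrade script)
    -- which is enough to support both of:
    CREATE EXTENSION myext;
    ALTER EXTENSION myext UPDATE TO '1.1';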

Then the system pushes those packages to an internal repository, runs
apt-get update on all the database hosts, then connects to each partition
and runs the upgrade command.

All of that could be simplified to getting the PL code into a single
self-contained SQL command, then running it on all the members of the
cluster with a plproxy RUN ON ALL function, roughly as sketched below.
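
A minimal sketch of that last step, with hypothetical names (myext,
mycluster): the plpgsql helper is installed on every partition, and the
same-named plproxy function on the proxy database fans the call out:

    -- on each partition: apply the upgrade locally
    CREATE FUNCTION upgrade_myext(i_version text) RETURNS text AS $$
    BEGIN
        EXECUTE 'ALTER EXTENSION myext UPDATE TO '
                || quote_literal(i_version);
        RETURN current_database() || ' -> ' || i_version;
    END;
    $$ LANGUAGE plpgsql;

    -- on the proxy database: one call runs it on all partitions
    CREATE FUNCTION upgrade_myext(i_version text) RETURNS SETOF text AS $$
        CLUSTER 'mycluster';
        RUN ON ALL;
    $$ LANGUAGE plproxy;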

Of course that's only one use case, but this email is already too long
for what it does: rehashing a story we already went through last year.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


