Thread: Option to dump foreign data in pg_dump

Option to dump foreign data in pg_dump

From
Luis Carril
Date:
Hello,
  pg_dump deliberately skips dumping the data of foreign tables.
  This patch makes that optional, as the user may want to manage
  the data in the foreign servers directly from Postgres. Opinions?

Cheers
Luis M Carril
Attachment

Re: Option to dump foreign data in pg_dump

From
Pavel Stehule
Date:
Hi

pá 28. 6. 2019 v 16:50 odesílatel Luis Carril <luis.carril@swarm64.com> napsal:
Hello,
  pg_dump deliberately skips dumping the data of foreign tables.
  This patch makes that optional, as the user may want to manage
  the data in the foreign servers directly from Postgres. Opinions?

It makes sense to me

Pavel

Cheers
Luis M Carril

Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:

>   pg_dump deliberately skips dumping the data of foreign tables.
>   This patch makes that optional, as the user may want to manage
>   the data in the foreign servers directly from Postgres. Opinions?

Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
writeable?  Basically, how can the risk of foot-gunning be minimized to avoid
users ending up with dumps that are hard to restore?

cheers ./daniel


Re: Option to dump foreign data in pg_dump

From
Pavel Stehule
Date:


pá 28. 6. 2019 v 17:17 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:
> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:

>   pg_dump deliberately skips dumping the data of foreign tables.
>   This patch makes that optional, as the user may want to manage
>   the data in the foreign servers directly from Postgres. Opinions?

Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
writeable?  Basically, how can the risk of foot-gunning be minimized to avoid
users ending up with dumps that are hard to restore?

It can be used for migrations, porting, testing (where FDW sources are not accessible).

pg_dump does not have any safeguards against bad usage. But this feature only makes sense if foreign tables are dumped like classic tables - so some special option is necessary

Pavel

cheers ./daniel

Re: Option to dump foreign data in pg_dump

From
Tom Lane
Date:
Daniel Gustafsson <daniel@yesql.se> writes:
>> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:
>> pg_dump deliberately skips dumping the data of foreign tables.
>> This patch makes that optional, as the user may want to manage
>> the data in the foreign servers directly from Postgres. Opinions?

> Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
> writeable?

Yeah, I think the feature as-proposed is a shotgun that's much more likely
to cause problems than solve them.  Almost certainly, what people would
really need is the ability to dump individual foreign tables' data not
everything.  (I also note that the main reason for "dump everything",
namely to get a guaranteed-consistent snapshot, isn't really valid for
foreign tables anyhow.)

I'm tempted to suggest that the way to approach this is to say that if you
explicitly select some foreign table(s) with "-t", then we'll dump their
data, unless you suppress that with "-s".  No new switch needed.

Another way of looking at it, which responds more directly to Daniel's
point about non-writable FDWs, could be to have a switch that says "dump
foreign tables' data if their FDW is one of these".

            regards, tom lane



Re: Option to dump foreign data in pg_dump

From
Pavel Stehule
Date:


pá 28. 6. 2019 v 17:30 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:
Daniel Gustafsson <daniel@yesql.se> writes:
>> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:
>> pg_dump deliberately skips dumping the data of foreign tables.
>> This patch makes that optional, as the user may want to manage
>> the data in the foreign servers directly from Postgres. Opinions?

> Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
> writeable?

Yeah, I think the feature as-proposed is a shotgun that's much more likely
to cause problems than solve them.  Almost certainly, what people would
really need is the ability to dump individual foreign tables' data not
everything.  (I also note that the main reason for "dump everything",
namely to get a guaranteed-consistent snapshot, isn't really valid for
foreign tables anyhow.)

I agree that the major usage is dumping data. But some transformation from a foreign table to a classic table could also be interesting (when the schema was created by IMPORT FOREIGN SCHEMA).


I'm tempted to suggest that the way to approach this is to say that if you
explicitly select some foreign table(s) with "-t", then we'll dump their
data, unless you suppress that with "-s".  No new switch needed.

Another way of looking at it, which responds more directly to Daniel's
point about non-writable FDWs, could be to have a switch that says "dump
foreign tables' data if their FDW is one of these".

Restoring the content of an FDW table via pg_restore or psql can be dangerous - that is where I see a risk, and it would be nice to allow it only with some form of safeguard.

I think the important question is the motivation for dumping FDW data - a) cloning (makes sense to me and is safe), b) real backup (requires a writeable FDW) - makes sense too, but I see a possibility of unwanted problems.

Regards

Pavel

                        regards, tom lane


Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:
> Restoring the content of an FDW table via pg_restore or psql can be dangerous - that is where I see a risk, and it would be nice to allow it only with some form of safeguard.
>
> I think the important question is the motivation for dumping FDW data - a) cloning (makes sense to me and is safe), b) real backup (requires a writeable FDW) - makes sense too, but I see a possibility of unwanted problems.

What about providing a list of FDW servers instead of an all-or-nothing option? In that way the user really has to make a conscious decision to dump the content of the foreign tables for a specific server, and this would allow distinguishing them if multiple FDWs are being used in the same DB. Also, I think it is the responsibility of the user to know whether the FDWs being used are read-only or not.

Cheers
Luis M Carril


Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 28 Jun 2019, at 17:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:


> > Yeah, I think the feature as-proposed is a shotgun that's much more likely
> > to cause problems than solve them.  Almost certainly, what people would
> > really need is the ability to dump individual foreign tables' data not
> > everything.  (I also note that the main reason for "dump everything",
> > namely to get a guaranteed-consistent snapshot, isn't really valid for
> > foreign tables anyhow.)


I think this is sort of key here; the consistency guarantees are wildly
different.  A note about this should perhaps be added to the docs for the
option discussed here?

> On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:


> What about providing a list of FDW servers instead of an all-or-nothing option? In that way the user really has to make
> a conscious decision to dump the content of the foreign tables for a specific server, and this would allow distinguishing
> them if multiple FDWs are being used in the same DB.

I think this is a good option, the normal exclusion rules can then still apply
in case not everything from a specific server is of interest.

cheers ./daniel


Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

> > On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:
> > What about providing a list of FDW servers instead of an all-or-nothing option? In that way the user really has to make a conscious decision to dump the content of the foreign tables for a specific server, and this would allow distinguishing them if multiple FDWs are being used in the same DB.

> I think this is a good option, the normal exclusion rules can then still apply
> in case not everything from a specific server is of interest.

Hi, here is a new patch to dump the data of foreign tables using pg_dump.
This time the user specifies for which foreign servers the data will be dumped, which helps in the case of having a mix of writeable and non-writeable FDWs in the database.
It would be nice to emit an error if the FDW is read-only, but that information is not available in the catalog.

Cheers
Luis M Carril
Attachment

Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 12 Jul 2019, at 16:08, Luis Carril <luis.carril@swarm64.com> wrote:
>
> > > On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:
> > > What about providing a list of FDW servers instead of an all-or-nothing option? In that way the user really has
> > > to make a conscious decision to dump the content of the foreign tables for a specific server, and this would allow
> > > distinguishing them if multiple FDWs are being used in the same DB.
>
> > I think this is a good option, the normal exclusion rules can then still apply
> > in case not everything from a specific server is of interest.
>
> Hi, here  is a new patch to dump the data of foreign tables using pg_dump.

Cool!  Please register this patch in the next commitfest to make sure it
doesn’t get lost on the way.  Feel free to mark me as reviewer when adding it.

> This time the user specifies for which foreign servers the data will be dumped, which helps in the case of having a mix
> of writeable and non-writeable FDWs in the database.

Looks good, and works as expected.

A few comments on the patch:

Documentation is missing, but you've waited with docs until the functionality
of the patch was fleshed out?

This allows for adding a blanket wildcard with "--include-foreign-data=" which
includes every foreign server.  This seems to go against the gist of the patch,
to require an explicit opt-in per server.  Testing for an empty string should
do the trick.

+    case 11:                /* include foreign data */
+        simple_string_list_append(&foreign_servers_include_patterns, optarg);
+        break;
+

I don’t think expand_foreign_server_name_patterns should use strict_names, but
rather always consider failures to map as errors.

+    expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+                        &foreign_servers_include_oids,
+                        strict_names);

This seems like a bit too ambiguous a name; it would be good to indicate in the
name that it refers to a foreign server.

+    Oid            serveroid;      /* foreign server oid */

As coded there is no warning when asking for foreign data on a schema-only
dump; maybe something like this could make usage clearer, as this option is similar
in concept to data-only:

+     if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)
+     {
+         pg_log_error("options -s/--schema-only and --include-foreign-data cannot be used together");
+         exit_nicely(1);
+     }
+

cheers ./daniel




Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:
On 15.07.19 12:06, Daniel Gustafsson wrote:
On 12 Jul 2019, at 16:08, Luis Carril <luis.carril@swarm64.com> wrote:

On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:
What about providing a list of FDW servers instead of an all-or-nothing option? In that way the user really has to make a conscious decision to dump the content of the foreign tables for a specific server, and this would allow distinguishing them if multiple FDWs are being used in the same DB.
I think this is a good option, the normal exclusion rules can then still apply
in case not everything from a specific server is of interest.
Hi, here  is a new patch to dump the data of foreign tables using pg_dump. 
Cool!  Please register this patch in the next commitfest to make sure it
doesn’t get lost on the way.  Feel free to mark me as reviewer when adding it.
Thanks, will do!
This time the user specifies for which foreign servers the data will be dumped, which helps in case of having a mix of writeable and non-writeable fdw in the database.
Looks good, and works as expected.

A few comments on the patch:

Documentation is missing, but you've waited with docs until the functionality
of the patch was fleshed out?
I've added documentation about the option to the pg_dump page.
This allows for adding a blanket wildcard with "--include-foreign-data=" which
includes every foreign server.  This seems to go against the gist of the patch,
to require an explicit opt-in per server.  Testing for an empty string should
do the trick.

+	case 11:				/* include foreign data */
+		simple_string_list_append(&foreign_servers_include_patterns, optarg);
+		break;
+
Now it errors out if any of the patterns is an empty string.

I don’t think expand_foreign_server_name_patterns should use strict_names, but
rather always consider failures to map as errors.

+	expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+					    &foreign_servers_include_oids,
+					    strict_names);
Removed, i.e. if nothing matches it throws an error.

This seems like a bit too ambiguous a name; it would be good to indicate in the
name that it refers to a foreign server.

+	Oid			serveroid;      /* foreign server oid */
Changed to foreign_server_oid.

As coded there is no warning when asking for foreign data on a schema-only
dump; maybe something like this could make usage clearer, as this option is similar
in concept to data-only:

+     if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)
+     {
+         pg_log_error("options -s/--schema-only and --include-foreign-data cannot be used together");
+         exit_nicely(1);
+     }
+
Added too

cheers ./daniel

Thanks for the comments!

Cheers
Luis M Carril

Attachment

Re: Option to dump foreign data in pg_dump

From
Surafel Temesgen
Date:
Hi Luis,
Here are a few comments from me:

* I suggest the option be just --foreign-data, because if we make it --include-foreign-data it's expected to have an --exclude-foreign-data option too.

* Please add test cases.

* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)

The filter condition is not implemented completely yet, so the logic only takes effect for foreign tables; I think it's better to handle it separately.

* I don't understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular tables and foreign tables.


regards

Surafel

Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Mon, Jul 15, 2019 at 6:09 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> On 15.07.19 12:06, Daniel Gustafsson wrote:
>
Few comments:

As you have specified required_argument in above:
+ {"include-foreign-data", required_argument, NULL, 11},

The below check may not be required:
+ if (strcmp(optarg, "") == 0)
+ {
+ pg_log_error("empty string is not a valid pattern in --include-foreign-data");
+ exit_nicely(1);
+ }

+ if (foreign_servers_include_patterns.head != NULL)
+ {
+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+ &foreign_servers_include_oids);
+ if (foreign_servers_include_oids.head == NULL)
+ fatal("no matching foreign servers were found");
+ }
+

The above check if (foreign_servers_include_oids.head == NULL) may not
be required, as there is a check present inside
expand_foreign_server_name_patterns to handle this error:
+
+ res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
+ if (PQntuples(res) == 0)
+ fatal("no matching foreign servers were found for pattern \"%s\"", cell->val);
+

+static void
+expand_foreign_server_name_patterns(Archive *fout,
+ SimpleStringList *patterns,
+ SimpleOidList *oids)
+{
+ PQExpBuffer query;
+ PGresult   *res;
+ SimpleStringListCell *cell;
+ int i;
+
+ if (patterns->head == NULL)
+ return; /* nothing to do */
+

The above check for patterns->head may not be required as similar
check exists before this function is called:
+ if (foreign_servers_include_patterns.head != NULL)
+ {
+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+ &foreign_servers_include_oids);
+ if (foreign_servers_include_oids.head == NULL)
+ fatal("no matching foreign servers were found");
+ }
+

+ /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */
+ if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
+ (foreign_servers_include_oids.head == NULL ||
+ !simple_oid_list_member(&foreign_servers_include_oids,
tbinfo->foreign_server_oid)))
simple_oid_list_member can be split into two lines

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Thu, Sep 19, 2019 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:
>
> On Mon, Jul 15, 2019 at 6:09 PM Luis Carril <luis.carril@swarm64.com> wrote:
> >
> > On 15.07.19 12:06, Daniel Gustafsson wrote:
> >
> Few comments:
>
> As you have specified required_argument in above:
> + {"include-foreign-data", required_argument, NULL, 11},
>
> The below check may not be required:
> + if (strcmp(optarg, "") == 0)
> + {
> + pg_log_error("empty string is not a valid pattern in --include-foreign-data");
> + exit_nicely(1);
> + }
>
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +
>
> The above check if (foreign_servers_include_oids.head == NULL) may not
> be required, as there is a check present inside
> expand_foreign_server_name_patterns to handle this error:
> +
> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
> + if (PQntuples(res) == 0)
> + fatal("no matching foreign servers were found for pattern \"%s\"", cell->val);
> +
>
> +static void
> +expand_foreign_server_name_patterns(Archive *fout,
> + SimpleStringList *patterns,
> + SimpleOidList *oids)
> +{
> + PQExpBuffer query;
> + PGresult   *res;
> + SimpleStringListCell *cell;
> + int i;
> +
> + if (patterns->head == NULL)
> + return; /* nothing to do */
> +
>
> The above check for patterns->head may not be required as similar
> check exists before this function is called:
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +
>
> + /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */
> + if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
> + (foreign_servers_include_oids.head == NULL ||
> + !simple_oid_list_member(&foreign_servers_include_oids,
> tbinfo->foreign_server_oid)))
> simple_oid_list_member can be split into two lines
>
Also, can we include a few tests for this feature?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:
Hello,
   thanks for the comments!

* I suggest the option be just --foreign-data, because if we make it --include-foreign-data it's expected to have an --exclude-foreign-data option too.

Several pg_dump options have no counterpart, e.g. --enable-row-security does not have a disable option (which is the default). Also, calling it --foreign-data would make it sound similar to --table: by default all tables are dumped, but with --table only the selected tables are dumped, whereas without --include-foreign-data all foreign data is excluded, and only with the option is some foreign data included.

* Please add test cases.

I added test cases for the invalid inputs. I'll try to make a test case for the actual dump of foreign data, but that requires more setup, because a functional FDW is needed there.

* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)

The filter condition is not implemented completely yet, so the logic only takes effect for foreign tables; I think it's better to handle it separately.

Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY (SELECT ...) instead of a plain COPY ... TO.

* I don't understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular tables and foreign tables.

To which query do you refer? In the patch there are three queries: one retrieves the foreign servers, another is the SELECT in the COPY, which now applies in the case of a filter condition or of a foreign table, and a third retrieves the oid of a given foreign server.


> As you have specified required_argument in above:
> + {"include-foreign-data", required_argument, NULL, 11},
>
> The below check may not be required:
> + if (strcmp(optarg, "") == 0)
> + {
> + pg_log_error("empty string is not a valid pattern in --include-foreign-data");
> + exit_nicely(1);
> + }

We need to keep this check to prevent the use of '--include-foreign-data=', which would match all foreign servers; in previous messages it was established that that behavior is too coarse.

>
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +
>
> The above check if (foreign_servers_include_oids.head == NULL) may not
> be required, as there is a check present inside
> expand_foreign_server_name_patterns to handle this error:
> +
> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
> + if (PQntuples(res) == 0)
> + fatal("no matching foreign servers were found for pattern \"%s\"", cell->val);
> +

Removed

>
> +static void
> +expand_foreign_server_name_patterns(Archive *fout,
> + SimpleStringList *patterns,
> + SimpleOidList *oids)
> +{
> + PQExpBuffer query;
> + PGresult   *res;
> + SimpleStringListCell *cell;
> + int i;
> +
> + if (patterns->head == NULL)
> + return; /* nothing to do */
> +
>
> The above check for patterns->head may not be required as similar
> check exists before this function is called:
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +

I think it is better that expand_foreign_server_name_patterns does not rely on a non-NULL head, so it checks that by itself and stays closer to the other expand_* functions.
Instead I've removed the check before the function is called.

>
> + /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */
> + if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
> + (foreign_servers_include_oids.head == NULL ||
> + !simple_oid_list_member(&foreign_servers_include_oids,
> tbinfo->foreign_server_oid)))
> simple_oid_list_member can be split into two lines

Done

Cheers
Luis M Carril
Attachment

Re: Option to dump foreign data in pg_dump

From
Surafel Temesgen
Date:


On Fri, Sep 20, 2019 at 6:20 PM Luis Carril <luis.carril@swarm64.com> wrote:
Hello,
   thanks for the comments!

* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)

The filter condition is not implemented completely yet, so the logic only takes effect for foreign tables; I think it's better to handle it separately.

Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY (SELECT ...) instead of a plain COPY ... TO.

But we can't supply a WHERE clause in pg_dump yet, so filtercond is always NULL and the logic becomes true only for foreign tables.

* I don't understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular tables and foreign tables.

To which query do you refer? In the patch there are three queries: one retrieves the foreign servers, another is the SELECT in the COPY, which now applies in the case of a filter condition or of a foreign table, and a third retrieves the oid of a given foreign server.


The SELECT in the COPY.

regards
Surafel

Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

On Fri, Sep 20, 2019 at 6:20 PM Luis Carril <luis.carril@swarm64.com> wrote:
Hello,
   thanks for the comments!

* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)

The filter condition is not implemented completely yet, so the logic only takes effect for foreign tables; I think it's better to handle it separately.

Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY (SELECT ...) instead of a plain COPY ... TO.

But we can't supply a WHERE clause in pg_dump yet, so filtercond is always NULL and the logic becomes true only for foreign tables.

* I don't understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular tables and foreign tables.

To which query do you refer? In the patch there are three queries: one retrieves the foreign servers, another is the SELECT in the COPY, which now applies in the case of a filter condition or of a foreign table, and a third retrieves the oid of a given foreign server.


The SELECT in the COPY.

regards
Surafel
If we have a non-foreign table and filtercond is NULL, then we can do a `COPY table columns TO stdout`.
But if the table is foreign, `COPY foreign-table columns TO stdout` is not supported by Postgres, so we have to do a `COPY (SELECT columns FROM foreign-table) TO stdout`.

Now if in either case the filtercond is non-NULL, i.e. we have a WHERE clause, then for non-foreign and foreign tables alike we have to do a:
`COPY (SELECT columns FROM table) TO stdout`

So the COPY of a foreign table has to be done using the sub-SELECT, just like a non-foreign table with a filtercond, not like a non-foreign table without one.
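
For illustration, here is a rough sketch of what that branch can look like in pg_dump's dumpTableData_copy() (simplified: the if condition is the one quoted earlier in the thread, while the column-list handling and helper names such as fmtQualifiedDumpable and column_list are assumed to match pg_dump's internals):

    if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
    {
        /* COPY table TO stdout is rejected for foreign tables, so use the
         * COPY (SELECT ...) form, the same as when a filter condition is
         * present. */
        appendPQExpBuffer(q, "COPY (SELECT * FROM %s %s) TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          tdinfo->filtercond ? tdinfo->filtercond : "");
    }
    else
        appendPQExpBuffer(q, "COPY %s %s TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          column_list);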

Cheers

Luis M Carril


Re: Option to dump foreign data in pg_dump

From
Laurenz Albe
Date:
On Sat, 2019-11-09 at 21:38 +0100, Daniel Gustafsson wrote:
> I took a look at this patch again today for a review of the latest version.
> While I still think it's a potential footgun due to read-only FDWs, I can see
> use cases for having it, so I'm mildly +1 on adding it.

I don't feel good about this feature.
pg_dump should not dump any data that are not part of the database
being dumped.

If you restore such a dump, the data will be inserted into the foreign table,
right?  Unless someone emptied the remote table first, this will add
duplicated data to that table.
I think that is an unpleasant surprise.  I'd expect that if I drop a database
and restore it from a dump, it should be as it was before.  This change would
break that assumption.

What are the use cases of a dump with foreign table data?

Unless I misunderstood something there, -1.

Yours,
Laurenz Albe




Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

Hello

   a new version of the patch with the tests from Daniel (thanks!) and the nitpicks addressed.

I don't feel good about this feature.
pg_dump should not dump any data that are not part of the database
being dumped.

If you restore such a dump, the data will be inserted into the foreign table,
right?  Unless someone emptied the remote table first, this will add
duplicated data to that table.
I think that is an unpleasant surprise.  I'd expect that if I drop a database
and restore it from a dump, it should be as it was before.  This change would
break that assumption.

What are the use cases of a dump with foreign table data?

Unless I misunderstood something there, -1.

This feature is opt-in, so if the user dumps a remote server explicitly by other means, then the user does not need to use this option.

But, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.
In FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could expect that a dump of the database includes data from these FDWs.


Cheers


Luis M Carril


Attachment

Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 12 Nov 2019, at 12:12, Luis Carril <luis.carril@swarm64.com> wrote:

>    a new version of the patch with the tests from Daniel (thanks!) and the nitpicks addressed.

The nitpicks have been addressed.  However, it seems that the new file
containing the test FDW is missing from the new version of the patch.  Did
you forget to git add the file?

>> I don't feel good about this feature.
>> pg_dump should not dump any data that are not part of the database
>> being dumped.
>>
>> If you restore such a dump, the data will be inserted into the foreign table,
>> right?  Unless someone emptied the remote table first, this will add
>> duplicated data to that table.
>> I think that is an unpleasant surprise.  I'd expect that if I drop a database
>> and restore it from a dump, it should be as it was before.  This change would
>> break that assumption.
>>
>> What are the use cases of a dump with foreign table data?
>>
>> Unless I misunderstood something there, -1.
>
> This feature is opt-in, so if the user dumps a remote server explicitly by other means, then the user does
> not need to use this option.
> But, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.
> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could
> expect that a dump of the database includes data from these FDWs.

Right, given the deliberate opt-in which is required I don't see much risk of
unpleasant user surprises.  There are no doubt foot-guns available with this
feature, as has been discussed upthread, but the current proposal is IMHO
minimizing them.

cheers ./daniel


Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

The nitpicks have been addressed.  However, it seems that the new file
containing the test FDW is missing from the new version of the patch.  Did
you forget to git add the file?

Yes, I forgot, thanks for noticing. New patch attached again.


Cheers


Luis M Carril

Attachment

Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2019-Nov-12, Luis Carril wrote:

> But, not all foreign tables are necessarily in a remote server like
> the ones referenced by the postgres_fdw.
> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign
> tables are part of your database, and one could expect that a dump of
> the database includes data from these FDWs.

BTW these are not FDWs in the "foreign" sense at all; they're just
abusing the FDW system in order to be able to store data in some
different way.  The right thing to do IMO is to port these systems to be
users of the new storage abstraction (table AM).  If we do that, what
value is there to the feature being proposed here?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 12 Nov 2019, at 15:21, Luis Carril <luis.carril@swarm64.com> wrote:
>
>> The nitpicks have been addressed.  However, it seems that the new file
>> containing the test FDW is missing from the new version of the patch.  Did
>> you forget to git add the file?
> Yes, I forgot, thanks for noticing. New patch attached again.

The patch applies, compiles and tests clean.  The debate over whether we want to
allow dumping of foreign data at all will continue, but I am marking the patch
as ready for committer as I believe it is ready for input on that level.

cheers ./daniel


Re: Option to dump foreign data in pg_dump

From
Tom Lane
Date:
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> On 2019-Nov-12, Luis Carril wrote:
>> But, not all foreign tables are necessarily in a remote server like
>> the ones referenced by the postgres_fdw.
>> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign
>> tables are part of your database, and one could expect that a dump of
>> the database includes data from these FDWs.

> BTW these are not FDWs in the "foreign" sense at all; they're just
> abusing the FDW system in order to be able to store data in some
> different way.  The right thing to do IMO is to port these systems to be
> users of the new storage abstraction (table AM).  If we do that, what
> value is there to the feature being proposed here?

That is a pretty valid point.  I'm not sure however that there would
be *no* use-cases for the proposed option if all of those FDWs were
converted to table AMs.  Also, even if the authors of those systems
are all hard at work on such a conversion, it'd probably be years
before the FDW implementations disappear from the wild.

Having said that, I'm ending up -0.5 or so on the patch as it stands,
mainly because it seems like it is bringing way more maintenance
burden than it's realistically worth.  I'm particularly unhappy about
the proposed regression test additions --- the cycles added to
check-world, and the maintenance effort that's inevitably going to be
needed for all that code, seem unwarranted for something that's at
best a very niche use-case.  And, despite the bulk of the test
additions, they're in no sense offering an end-to-end test, because
that would require successfully reloading the data as well.

That objection could be addressed, perhaps, by scaling down the tests
to just have a goal of exercising the new pg_dump option-handling
code, and not to attempt to do meaningful data extraction from a
foreign table.  You could do that with an entirely dummy foreign data
wrapper and server (cf. sql/foreign_data.sql).  I'm imagining perhaps
create two dummy servers, of which only one has a table, and we ask to
dump data from the other one.  This would cover parsing and validation
of the --include-foreign-data option, and make sure that we don't dump
from servers we're not supposed to.  It doesn't actually dump any
data, but that part is a completely trivial aspect of the patch,
really, and almost all of the code relevant to that does get tested
already.

In the department of minor nitpicks ... why bother with joining to
pg_foreign_server in the query that retrieves a foreign table's
server OID?  ft.ftserver is already the answer you seek.  Also,
I think it'd be wise from a performance standpoint to skip doing
that query altogether in the normal case where --include-foreign-data
hasn't been requested.

            regards, tom lane



Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2019-Nov-12, Luis Carril wrote:

> The nitpicks have been addressed.  However, it seems that the new file
> containing the test FDW is missing from the new version of the patch.  Did
> you forget to git add the file?
> 
> Yes, I forgot, thanks for noticing. New patch attached again.

Luis,

It seems you've got enough support for this concept, so let's move
forward with this patch.  There are some comments from Tom about the
patch; would you like to send an updated version perhaps?

Thanks

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

Luis,

It seems you've got enough support for this concept, so let's move
forward with this patch.  There are some comments from Tom about the
patch; would you like to send an updated version perhaps?

Thanks
Hi,

 I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.
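
For the record, a rough sketch of what that per-table lookup might look like with both of Tom's points addressed (the query text and the surrounding code are assumptions; only the foreign_server_oid field name comes from the earlier review round):

    /* Only look up the owning server when --include-foreign-data was given,
     * and read ftserver directly instead of joining pg_foreign_server. */
    if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
        foreign_servers_include_oids.head != NULL)
    {
        PQExpBuffer query = createPQExpBuffer();
        PGresult   *res;

        appendPQExpBuffer(query,
                          "SELECT ftserver "
                          "FROM pg_catalog.pg_foreign_table "
                          "WHERE ftrelid = '%u'::pg_catalog.oid",
                          tbinfo->dobj.catId.oid);
        res = ExecuteSqlQueryForSingleRow(fout, query->data);
        tbinfo->foreign_server_oid = atooid(PQgetvalue(res, 0, 0));
        PQclear(res);
        destroyPQExpBuffer(query);
    }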

About the testing issues that Tom mentioned:
I do not see how we can have a pure SQL dummy FDW that tests the functionality, because the only way to identify whether the data of a foreign table for the chosen server is dumped is whether the COPY statement appears in the output, but if the C callbacks of the FDW are not implemented, then the SELECT that reads the data to generate the COPY cannot be executed.
Also, to test that the include option chooses only the data of the specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find this kind of test in the test suite, even for other selective options like --table or --exclude-schema.


Cheers
Luis M Carril


Attachment

Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Fri, Nov 29, 2019 at 2:10 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> Luis,
>
> It seems you've got enough support for this concept, so let's move
> forward with this patch.  There are some comments from Tom about the
> patch; would you like to send an updated version perhaps?
>
> Thanks
>
> Hi,
>
>  I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids
> (avoiding the query) if the option is not used at all.
>
> About the testing issues that Tom mentioned:
> I do not see how we can have a pure SQL dummy FDW that tests the functionality, because the only way to identify whether
> the data of a foreign table for the chosen server is dumped is whether the COPY statement appears in the output, but if the C
> callbacks of the FDW are not implemented, then the SELECT that reads the data to generate the COPY cannot be executed.
> Also, to test that the include option chooses only the data of the specified foreign servers we would need some
> negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find this kind
> of test in the test suite, even for other selective options like --table or --exclude-schema.
>

Can you have a look at dump with the parallel option? The parallel option will
take a lock on the table while invoking lockTableForWorker. Maybe this is
not required for foreign tables.
Thoughts?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:
Can you have a look at dump with the parallel option? The parallel option will
take a lock on the table while invoking lockTableForWorker. Maybe this is
not required for foreign tables.
Thoughts?
I tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone from modifying it while it is being dumped.

Cheers,

Luis M Carril


Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com> wrote:
Can you have a look at dump with the parallel option? The parallel option will
take a lock on the table while invoking lockTableForWorker. Maybe this is
not required for foreign tables.
Thoughts?
I tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone from modifying it while it is being dumped.


I'm able to get the problem with the following steps:
Bring up a postgres setup with servers running in 5432 & 5433 port.

Execute the following commands in Server1 configured on 5432 port:
  • CREATE EXTENSION postgres_fdw;
  • CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');
  • create user user1 password '123';
  • alter user user1 with superuser;
  • CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');

Execute the following commands in Server2 configured on 5433 port:
  • create user user1 password '123';
  • alter user user1 with superuser;
Execute the following commands in Server2 configured on 5433 port as user1 user:
  • create schema test;
  • create table test.test1(id int);
  • insert into test.test1 values(10);

Execute the following commands in Server1 configured on 5432 port as user1 user:
  • CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');

Without parallel option, the operation is successful:
  • ./pg_dump -d postgres -f dumpdir -U user1 -F d  --include-foreign-data foreign_server

With parallel option it fails:
  • ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server
pg_dump: error: could not obtain lock on relation "public.foreign_table1"
This usually means that someone requested an ACCESS EXCLUSIVE lock on the table after the pg_dump parent process had gotten the initial ACCESS SHARE lock on the table.
pg_dump: error: a worker process died unexpectedly

There may be simpler steps than this to reproduce the issue; I have not tried to optimize them.

Regards,
Vignesh

Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com> wrote:
Can you have a look at dump with the parallel option? The parallel option will
take a lock on the table while invoking lockTableForWorker. Maybe this is
not required for foreign tables.
Thoughts?
I tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone from modifying it while it is being dumped.


I'm able to get the problem with the following steps:
Bring up a postgres setup with servers running in 5432 & 5433 port.

Execute the following commands in Server1 configured on 5432 port:
  • CREATE EXTENSION postgres_fdw;
  • CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');
  • create user user1 password '123';
  • alter user user1 with superuser;
  • CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');

Execute the following commands in Server2 configured on 5433 port:
  • create user user1 password '123';
  • alter user user1 with superuser;
Execute the following commands in Server2 configured on 5433 port as user1 user:
  • create schema test;
  • create table test.test1(id int);
  • insert into test.test1 values(10);

Execute the following commands in Server1 configured on 5432 port as user1 user:
  • CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');

Without parallel option, the operation is successful:
  • ./pg_dump -d postgres -f dumpdir -U user1 -F d  --include-foreign-data foreign_server

With parallel option it fails:
  • ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server
pg_dump: error: could not obtain lock on relation "public.foreign_table1"
This usually means that someone requested an ACCESS EXCLUSIVE lock on the table after the pg_dump parent process had gotten the initial ACCESS SHARE lock on the table.
pg_dump: error: a worker process died unexpectedly

There may be simpler steps than this to reproduce the issue; I have not tried to optimize them.

Regards,
Vignesh

Hi Vignesh,

   Yes, you are right, I could reproduce it also with 'file_fdw'. The issue is that LOCK is not supported on foreign tables, so I guess that the safest solution is to make --include-foreign-data incompatible with --jobs, because skipping the locking for foreign tables could lead to a deadlock anyway. Suggestions?

Cheers
Luis M Carril



Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Mon, Jan 20, 2020 at 8:34 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
>
> Hi Vignesh,
>
>    Yes, you are right, I could reproduce it also with 'file_fdw'. The issue is that LOCK is not supported on foreign
> tables, so I guess that the safest solution is to make --include-foreign-data incompatible with --jobs, because
> skipping the locking for foreign tables could lead to a deadlock anyway. Suggestions?
>

Yes we can support --include-foreign-data without parallel option and
later add support for parallel option as a different patch.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

Yes we can support --include-foreign-data without parallel option and
later add support for parallel option as a different patch.

Hi,

    I've attached a new version of the patch in which an error is emitted if the parallel backup is used with the --include-foreign-data option.
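
The guard itself is just a small check in pg_dump's option handling, along these lines (exact message text and placement are assumptions):

    /* Parallel dumps lock each table, which foreign tables do not support,
     * so reject the combination up front. */
    if (numWorkers > 1 && foreign_servers_include_patterns.head != NULL)
    {
        pg_log_error("option --include-foreign-data is not supported with parallel backup");
        exit_nicely(1);
    }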


Cheers


Luis M. Carril


Attachment

Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Tue, Jan 21, 2020 at 3:06 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> Yes we can support --include-foreign-data without parallel option and
> later add support for parallel option as a different patch.
>
> Hi,
>
>     I've attached a new version of the patch in which an error is emitted if the parallel backup is used with the
> --include-foreign-data option.
>

Thanks for working on the comments. I noticed one behavior is
different when the --table option is specified. When --table is specified
the following is not getting dumped:
CREATE SERVER foreign_server

I feel the above should also be included as part of the dump when the
include-foreign-data option is specified.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:

Thanks for working on the comments. I noticed one behavior is
different when the --table option is specified. When --table is specified
the following is not getting dumped:
CREATE SERVER foreign_server

I feel the above should also be included as part of the dump when the
include-foreign-data option is specified.

Yes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, includes neither the extension nor the server.
I guess that the idea behind --table is that the table prerequisites should already exist in the database.

A similar behavior can be reproduced for a non-foreign table: if a table is created in a specific schema, dumping only the table with --table does not dump the schema definition.

So I think we do not need to dump the server with the table.

Cheers

Luis M Carril

Re: Option to dump foreign data in pg_dump

From
Peter Eisentraut
Date:
On 2020-01-21 10:36, Luis Carril wrote:
>> Yes we can support --include-foreign-data without parallel option and
>> later add support for parallel option as a different patch.
> 
> Hi,
> 
>      I've attached a new version of the patch in which an error is 
> emitted if the parallel backup is used with the --include-foreign-data 
> option.

This seems like an overreaction.  The whole point of 
lockTableForWorker() is to avoid deadlocks, but foreign tables don't 
have locks, so it's not a problem.  I think you can just skip foreign 
tables in lockTableForWorker() using the same logic that getTables() uses.

I think parallel data dump would be an especially interesting option 
when using foreign tables, so it's worth figuring this out.
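
A rough sketch of that idea, using a hypothetical helper that checks relkind the way getTables() does (the extra catalog query per TocEntry is the trade-off that comes up again later in the thread):

    /* Hypothetical helper: is this TOC entry a foreign table? */
    static bool
    tocEntryIsForeignTable(ArchiveHandle *AH, TocEntry *te)
    {
        char        relkind = '\0';
        PQExpBuffer query = createPQExpBuffer();
        PGresult   *res;

        appendPQExpBuffer(query,
                          "SELECT relkind FROM pg_catalog.pg_class "
                          "WHERE oid = '%u'::pg_catalog.oid",
                          te->catalogId.oid);
        res = PQexec(AH->connection, query->data);
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            relkind = *PQgetvalue(res, 0, 0);
        PQclear(res);
        destroyPQExpBuffer(query);

        return relkind == RELKIND_FOREIGN_TABLE;
    }

    /* ... then, near the top of lockTableForWorker(), before LOCK TABLE: */
    if (tocEntryIsForeignTable(AH, te))
        return;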

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
vignesh C
Date:
On Wed, Jan 29, 2020 at 2:00 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> Thanks for working on the comments. I noticed one behavior is
> different when --table option is specified. When --table is specified
> the following are not getting dumped:
> CREATE SERVER foreign_server
>
> I felt the above also should be included as part of the dump when
> include-foreign-data option is specified.
>
> Yes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, includes
> neither the extension nor the server.
> I guess that the idea behind --table is that the table prerequisites should already exist in the database.
>
> A similar behavior can be reproduced for a non-foreign table: if a table is created in a specific schema, dumping
> only the table with --table does not dump the schema definition.
>
> So I think we do not need to dump the server with the table.
>

Thanks for the clarification; the behavior sounds reasonable to me
unless others have a different opinion on this.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com



Re: Option to dump foreign data in pg_dump

From
David Steele
Date:
Hi Luis,

On 1/29/20 11:05 AM, Peter Eisentraut wrote:
> On 2020-01-21 10:36, Luis Carril wrote:
>>> Yes we can support --include-foreign-data without parallel option and
>>> later add support for parallel option as a different patch.
>>
>> Hi,
>>
>>      I've attached a new version of the patch in which an error is 
>> emitted if the parallel backup is used with the --include-foreign-data 
>> option.
> 
> This seems like an overreaction.  The whole point of 
> lockTableForWorker() is to avoid deadlocks, but foreign tables don't 
> have locks, so it's not a problem.  I think you can just skip foreign 
> tables in lockTableForWorker() using the same logic that getTables() uses.
> 
> I think parallel data dump would be an especially interesting option 
> when using foreign tables, so it's worth figuring this out.

What do you think of Peter's comment?

Regards,
-- 
-David
david@pgmasters.net



Re: Option to dump foreign data in pg_dump

From
Ashutosh Bapat
Date:
I am just responding on the latest mail on this thread. But the question is about functionality. The proposal is to add a single flag --include-foreign-data which controls whether or not data is dumped for all the foreign tables in a database. That may not serve the purpose. A foreign table may point to a view, materialized view or inheritance tree, and so on. A database can have foreign tables pointing to all of those kinds. Restoring data to a view won't be possible and restoring it into an inheritance tree would insert it into the parent only and not the children. Further, a user may not want the data to be dumped for all the foreign tables since their usages are different esp. considering restore. I think a better option is to extract data in a foreign table using --table if that's the only usage. Otherwise, we need a foreign table level flag indicating whether pg_dump should dump the data for that foreign table or not.

On Wed, Mar 4, 2020 at 12:41 AM David Steele <david@pgmasters.net> wrote:
Hi Luis,

On 1/29/20 11:05 AM, Peter Eisentraut wrote:
> On 2020-01-21 10:36, Luis Carril wrote:
>>> Yes we can support --include-foreign-data without parallel option and
>>> later add support for parallel option as a different patch.
>>
>> Hi,
>>
>>      I've attached a new version of the patch in which an error is
>> emitted if the parallel backup is used with the --include-foreign-data
>> option.
>
> This seems like an overreaction.  The whole point of
> lockTableForWorker() is to avoid deadlocks, but foreign tables don't
> have locks, so it's not a problem.  I think you can just skip foreign
> tables in lockTableForWorker() using the same logic that getTables() uses.
>
> I think parallel data dump would be an especially interesting option
> when using foreign tables, so it's worth figuring this out.

What do you think of Peter's comment?

Regards,
--
-David
david@pgmasters.net




--
Best Wishes,
Ashutosh Bapat

Re: Option to dump foreign data in pg_dump

From
Luis Carril
Date:
Hi everyone,

I am just responding on the latest mail on this thread. But the question is about functionality. The proposal is to add a single flag --include-foreign-data which controls whether or not data is dumped for all the foreign tables in a database. That may not serve the purpose. A foreign table may point to a view, materialized view or inheritance tree, and so on. A database can have foreign tables pointing to all of those kinds. Restoring data to a view won't be possible and restoring it into an inheritance tree would insert it into the parent only and not the children. Further, a user may not want the data to be dumped for all the foreign tables since their usages are different esp. considering restore. I think a better option is to extract data in a foreign table using --table if that's the only usage. Otherwise, we need a foreign table level flag indicating whether pg_dump should dump the data for that foreign table or not.

The option enables the user to dump the data of tables backed by a specific foreign server. It is up to the user to guarantee that the foreign server is also writable; that is the reason for making the option opt-in. The option can be combined with --table to dump specific tables if needed. If the database has different foreign servers, the user has to make the conscious decision to dump each one of them. Without this option the user cannot do it at all.
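
For example (the server, table, and database names below are just placeholders):

# dump the whole database, including the data of every foreign table
# served by the foreign server "remote_srv"
pg_dump --include-foreign-data=remote_srv mydb > dump.sql
# combine with --table to dump just one foreign table and its data
pg_dump --include-foreign-data=remote_srv --table=public.orders_ft \
    mydb > orders_ft.sql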


> On 2020-01-21 10:36, Luis Carril wrote:
>>> Yes we can support --include-foreign-data without parallel option and
>>> later add support for parallel option as a different patch.
>>
>> Hi,
>>
>>      I've attached a new version of the patch in which an error is
>> emitted if the parallel backup is used with the --include-foreign-data
>> option.
>
> This seems like an overreaction.  The whole point of
> lockTableForWorker() is to avoid deadlocks, but foreign tables don't
> have locks, so it's not a problem.  I think you can just skip foreign
> tables in lockTableForWorker() using the same logic that getTables() uses.
>
> I think parallel data dump would be an especially interesting option
> when using foreign tables, so it's worth figuring this out.

What do you think of Peter's comment?
I took a look at it. We could skip foreign tables by checking the catalog in lockTableForWorker, but that would imply an extra query per call to the function (as in getTables), which would be wasted work in most cases. Or we could record in the TocEntry that it is a foreign table (although that seems highly specific).
Also, would it not be possible to offer support for LOCK TABLE on foreign tables?

At this point I would like to leave the patch as is, and discuss further improvement in a future patch.

Luis M.



Re: Option to dump foreign data in pg_dump

From
David Steele
Date:
Hi Luis,

Please don't top post. Also be careful to quote prior text when 
replying.  Your message was pretty hard to work through -- i.e. figuring 
out what you said vs. what you were replying to.

On 3/5/20 8:51 AM, Luis Carril wrote:
> 
> At this point I would like to leave the patch as is, and discuss further 
> improvement in a future patch.

I have marked this as Need Review since the author wants the patch 
considered as-is.

I think Ashutosh, at least, has concerns about the patch as it stands, 
but does anyone else want to chime in?

Regards,
-- 
-David
david@pgmasters.net



Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2020-Jan-29, Peter Eisentraut wrote:

> On 2020-01-21 10:36, Luis Carril wrote:
> > > Yes we can support --include-foreign-data without parallel option and
> > > later add support for parallel option as a different patch.
> > 
> >      I've attached a new version of the patch in which an error is
> > emitted if the parallel backup is used with the --include-foreign-data
> > option.
> 
> This seems like an overreaction.  The whole point of lockTableForWorker() is
> to avoid deadlocks, but foreign tables don't have locks, so it's not a
> problem.  I think you can just skip foreign tables in lockTableForWorker()
> using the same logic that getTables() uses.
> 
> I think parallel data dump would be an especially interesting option when
> using foreign tables, so it's worth figuring this out.

I agree it would be nice to have this, so I tried to implement it.

I found it's not currently workable, because parallel.c only has a tocEntry
to work with, not a DumpableObject, so it doesn't know that the table is
foreign; to find that out, parallel.c could use findObjectByDumpId, but
parallel.c is used by both pg_dump and pg_restore, and findObjectByDumpId is
in common.c which cannot be linked in pg_restore because of numerous
incompatibilities.

One way to make this work would be to put lockTableForWorker somewhere other
than parallel.c.  For example, maybe have CreateArchive() set up a new "lock
table" ArchiveHandle function ptr that parallel.c can call;
lockTableForWorker() becomes the pg_dump implementation of that, while
pg_restore uses NULL.

Anyway, I think Luis has it right that this should not be a blocker for
this feature.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2020-Mar-23, Alvaro Herrera wrote:

> > This seems like an overreaction.  The whole point of lockTableForWorker() is
> > to avoid deadlocks, but foreign tables don't have locks, so it's not a
> > problem.  I think you can just skip foreign tables in lockTableForWorker()
> > using the same logic that getTables() uses.
> > 
> > I think parallel data dump would be an especially interesting option when
> > using foreign tables, so it's worth figuring this out.
> 
> I agree it would be nice to implement this, so I tried to implement it.

(Here's a patch for this, which of course doesn't compile)

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index c25e3f7a88..b3000da409 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -1316,17 +1316,33 @@ IsEveryWorkerIdle(ParallelState *pstate)
  * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and
  * so we have a deadlock.  We must fail the backup in that case.
  */
+#include "pg_dump.h"
+#include "catalog/pg_class_d.h"
 static void
 lockTableForWorker(ArchiveHandle *AH, TocEntry *te)
 {
     const char *qualId;
     PQExpBuffer query;
     PGresult   *res;
+    DumpableObject *obj;
 
     /* Nothing to do for BLOBS */
     if (strcmp(te->desc, "BLOBS") == 0)
         return;
 
+    /*
+     * Nothing to do for foreign tables either, since they don't support LOCK
+     * TABLE.
+     */
+    obj = findObjectByDumpId(te->dumpId);
+    if (obj->objType == DO_TABLE_DATA)
+    {
+        TableInfo *tabinfo = (TableInfo *) obj;
+
+        if (tabinfo->relkind == RELKIND_FOREIGN_TABLE)
+            return;
+    }
+
     query = createPQExpBuffer();
 
     qualId = fmtQualifiedId(te->namespace, te->tag);

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
v8 attached.

I modified Luis' v7 a little bit by putting the ftserver acquisition in
the main pg_class query instead of adding one separate query for each
foreign table.  That seems better overall.

I don't understand why this code specifically disallows the empty string
as an option to --include-foreign-data.  The other pattern-matching options
don't do that.  This seems to have been added in response to Daniel's
review[1], but I don't quite understand the rationale.  No other option
behaves that way.  I'm inclined to remove that, and I have done so in
this version.

I removed DumpOptions' new bool flag.  Seems pointless; we can just check
that the list is not null, as we do for other such lists.

I split out the proposed test into a different commit; there's no
consensus that this test is acceptable as-is.  Tom proposed a different
strategy[2]: if you try to dump a foreign table whose FDW has no handler,
you'll get this:

COPY public.ft1 (c1, c2, c3) FROM stdin;
pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;

Maybe what we should do is just verify that you do get that error (and
no other errors).
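
Something along these lines should reproduce it (an untested sketch; the names
are chosen to match the output above, and any FDW without a handler will do):

# an FDW without a handler, and a foreign table served by it
psql -d testdb <<'SQL'
CREATE FOREIGN DATA WRAPPER dummy;
CREATE SERVER dummyserver FOREIGN DATA WRAPPER dummy;
CREATE FOREIGN TABLE ft1 (c1 int, c2 int, c3 text) SERVER dummyserver;
SQL
# dumping its data should then fail with only the 'has no handler' error
pg_dump --include-foreign-data=dummyserver testdb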

[1] https://postgr.es/m/E9C5B25C-52E4-49EC-9958-69CD5BD14EDA@yesql.se
[2] https://postgr.es/m/8001.1573759651@sss.pgh.pa.us

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment

Re: Option to dump foreign data in pg_dump

From
Daniel Gustafsson
Date:
> On 23 Mar 2020, at 21:40, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> I don't understand why this code specifically disallows the empty string
> as an option to --include-foreign-data.  The other pattern-matching options
> don't do that.  This seems to have been added in response to Daniel's
> review[1], but I don't quite understand the rationale.  No other option
> behaves that way.  I'm inclined to remove that, and I have done so in
> this version.

It was a response to the discussion upthread about not allowing a blanket
dump-everything behavior for foreign data, but rather requiring some form of
opt-in.  The empty string made the option act as a wildcard matching all
foreign data, which was thought of as a footgun for creating problematic dumps.

cheers ./daniel


Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2020-Mar-23, Alvaro Herrera wrote:

> COPY public.ft1 (c1, c2, c3) FROM stdin;
> pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
> pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;
> 
> Maybe what we should do is just verify that you do get that error (and
> no other errors).

Done that way.  Will be pushing this shortly.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment

Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2020-Mar-24, Alvaro Herrera wrote:

> On 2020-Mar-23, Alvaro Herrera wrote:
> 
> > COPY public.ft1 (c1, c2, c3) FROM stdin;
> > pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
> > pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;
> > 
> > Maybe what we should do is just verify that you do get that error (and
> > no other errors).
> 
> Done that way.  Will be pushing this shortly.

Hmm, but travis is failing on the cfbot, and I can't see why ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Option to dump foreign data in pg_dump

From
Alvaro Herrera
Date:
On 2020-Mar-24, Alvaro Herrera wrote:

> Hmm, but travis is failing on the cfbot, and I can't see why ...

My only guess, without further output, is that getopt_long is not liking
the [ "--include-foreign-data", "xxx" ] style of arguments in the Perl
array of the command to run (which we don't use anywhere else in the files
I looked at), so I changed it to [ "--include-foreign-data=xxx" ].
If this was not the problem, we'll need more info, which the buildfarm
will give us.

And pushed.  Thanks, Luis, and thanks to all reviewers.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services