Thread: Option to dump foreign data in pg_dump
Attachment
Hello,
pg_dump ignores the dumping of data in foreign tables on purpose; this patch makes it optional, as the user may want to manage the data in the foreign servers directly from Postgres. Opinions?
Cheers
Luis M Carril
> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:
> pg_dump ignores the dumping of data in foreign tables
> on purpose, this patch makes it optional as the user maybe
> wants to manage the data in the foreign servers directly from
> Postgres. Opinions?
Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
writeable? Basically, how can the risk of foot-gunning be minimized to avoid
users ending up with dumps that are hard to restore?
cheers ./daniel
Daniel Gustafsson <daniel@yesql.se> writes:
>> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:
>> pg_dump ignores the dumping of data in foreign tables
>> on purpose, this patch makes it optional as the user maybe
>> wants to manage the data in the foreign servers directly from
>> Postgres. Opinions?
> Wouldn’t that have the potential to make restores awkward for FDWs that aren’t
> writeable?
Yeah, I think the feature as-proposed is a shotgun that's much more likely
to cause problems than solve them. Almost certainly, what people would
really need is the ability to dump individual foreign tables' data not
everything. (I also note that the main reason for "dump everything",
namely to get a guaranteed-consistent snapshot, isn't really valid for
foreign tables anyhow.)
I'm tempted to suggest that the way to approach this is to say that if you
explicitly select some foreign table(s) with "-t", then we'll dump their
data, unless you suppress that with "-s". No new switch needed.
Another way of looking at it, which responds more directly to Daniel's
point about non-writable FDWs, could be to have a switch that says "dump
foreign tables' data if their FDW is one of these".
regards, tom lane
> On 28 Jun 2019, at 17:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Yeah, I think the feature as-proposed is a shotgun that's much more likely
> > to cause problems than solve them. Almost certainly, what people would
> > really need is the ability to dump individual foreign tables' data not
> > everything. (I also note that the main reason for "dump everything",
> > namely to get a guaranteed-consistent snapshot, isn't really valid for
> > foreign tables anyhow.)

I think this is sort of key here, the consistency guarantees are wildly different. A note about this should perhaps be added to the docs for the option discussed here?

> On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:
> What about providing a list of FDW servers instead of an all or nothing option? In that way the user really has to do a conscious decision to dump the content of the foreign tables for a specific server, this would allow distinction if multiple FDW are being used in the same DB.

I think this is a good option, the normal exclusion rules can then still apply in case not everything from a specific server is of interest.

cheers ./daniel
> > What about providing a list of FDW servers instead of an all or nothing option? In that way the user really has to do a conscious decision to dump the content of the foreign tables for a specific server, this would allow distinction if multiple FDW are being used in the same DB.
> I think this is a good option, the normal exclusion rules can then still apply
> in case not everything from a specific server is of interest.
Hi, here is a new patch to dump the data of foreign tables using pg_dump.
Luis M Carril
Attachment
> On 12 Jul 2019, at 16:08, Luis Carril <luis.carril@swarm64.com> wrote:
>
> Hi, here is a new patch to dump the data of foreign tables using pg_dump.

Cool! Please register this patch in the next commitfest to make sure it doesn’t get lost on the way. Feel free to mark me as reviewer when adding it.

> This time the user specifies for which foreign servers the data will be dumped, which helps in case of having a mix of writeable and non-writeable fdw in the database.

Looks good, and works as expected. A few comments on the patch:

Documentation is missing, but you've waited with docs until the functionality of the patch was fleshed out?

This allows for adding a blanket wildcard with "--include-foreign-data=" which includes every foreign server. This seems to go against the gist of the patch, to require an explicit opt-in per server. Testing for an empty string should do the trick.

+ case 11: /* include foreign data */
+ simple_string_list_append(&foreign_servers_include_patterns, optarg);
+ break;
+

I don’t think expand_foreign_server_name_patterns should use strict_names, but rather always consider failures to map as errors.

+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+ &foreign_servers_include_oids,
+ strict_names);

This seems like a bit too ambiguous a name, it would be good to indicate in the name that it refers to a foreign server.

+ Oid serveroid; /* foreign server oid */

As coded there is no warning when asking for foreign data on a schema-only dump; maybe something like this could make usage clearer, as this option is similar in concept to data-only:

+ if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)
+ {
+ pg_log_error("options -s/--schema-only and --include-foreign-data cannot be used together");
+ exit_nicely(1);
+ }
+

cheers ./daniel
> Cool! Please register this patch in the next commitfest to make sure it doesn’t get lost on the way. Feel free to mark me as reviewer when adding it.

Thanks, I'll do!
> Looks good, and works as expected. A few comments on the patch:
>
> Documentation is missing, but you've waited with docs until the functionality of the patch was fleshed out?

I've added the documentation about the option in the pg_dump page.
> This allows for adding a blanket wildcard with "--include-foreign-data=" which includes every foreign server. This seems to go against the gist of the patch, to require an explicit opt-in per server. Testing for an empty string should do the trick.
>
> + case 11: /* include foreign data */
> + simple_string_list_append(&foreign_servers_include_patterns, optarg);
> + break;

Now it errors if any pattern is an empty string.
> I don’t think expand_foreign_server_name_patterns should use strict_names, but rather always consider failures to map as errors.
>
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids,
> + strict_names);

Removed, i.e. if nothing matches it throws an error.
> This seems like a bit too ambiguous a name, it would be good to indicate in the name that it refers to a foreign server.
>
> + Oid serveroid; /* foreign server oid */

Changed to foreign_server_oid.
> As coded there is no warning when asking for foreign data on a schema-only dump; maybe something like this could make usage clearer, as this option is similar in concept to data-only:
>
> + if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)
> + {
> + pg_log_error("options -s/--schema-only and --include-foreign-data cannot be used together");
> + exit_nicely(1);
> + }

Added too.
> cheers ./daniel
Thanks for the comments!
Cheers
Luis M Carril
Attachment
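To make the proposed semantics concrete, usage might look roughly like the following. This is only a sketch: the server names `server1` and `server2` and the database `mydb` are made-up placeholders, and the option name was still under discussion at this point in the thread. Since the patch appends each occurrence of the option to a pattern list, the option can be repeated to select several servers:

```shell
# Dump mydb, additionally including the data of foreign tables that
# belong to the foreign server "server1":
pg_dump --include-foreign-data=server1 mydb > mydb.sql

# Repeat the option to select data from several foreign servers:
pg_dump --include-foreign-data=server1 --include-foreign-data=server2 mydb
```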
* I suggest the option be just --foreign-data, because if we make it --include-foreign-data it's expected to have an --exclude-foreign-data option too.
* Please add test cases.
* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
The filter condition is not implemented completely yet, so the logic only works on foreign tables; I think it's better to handle it separately.
* I don’t understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular tables and foreign tables.
regards
Surafel
On Mon, Jul 15, 2019 at 6:09 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> On 15.07.19 12:06, Daniel Gustafsson wrote:

Few comments:

As you have specified required_argument above:
+ {"include-foreign-data", required_argument, NULL, 11},

The below check may not be required:
+ if (strcmp(optarg, "") == 0)
+ {
+ pg_log_error("empty string is not a valid pattern in --include-foreign-data");
+ exit_nicely(1);
+ }

+ if (foreign_servers_include_patterns.head != NULL)
+ {
+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+ &foreign_servers_include_oids);
+ if (foreign_servers_include_oids.head == NULL)
+ fatal("no matching foreign servers were found");
+ }
+

The above check if (foreign_servers_include_oids.head == NULL) may not be required, as there is a check present inside expand_foreign_server_name_patterns to handle this error:
+
+ res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
+ if (PQntuples(res) == 0)
+ fatal("no matching foreign servers were found for pattern \"%s\"", cell->val);
+

+static void
+expand_foreign_server_name_patterns(Archive *fout,
+ SimpleStringList *patterns,
+ SimpleOidList *oids)
+{
+ PQExpBuffer query;
+ PGresult *res;
+ SimpleStringListCell *cell;
+ int i;
+
+ if (patterns->head == NULL)
+ return; /* nothing to do */
+

The above check for patterns->head may not be required, as a similar check exists before this function is called:
+ if (foreign_servers_include_patterns.head != NULL)
+ {
+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
+ &foreign_servers_include_oids);
+ if (foreign_servers_include_oids.head == NULL)
+ fatal("no matching foreign servers were found");
+ }
+

+ /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */
+ if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
+ (foreign_servers_include_oids.head == NULL ||
+ !simple_oid_list_member(&foreign_servers_include_oids, tbinfo->foreign_server_oid)))
simple_oid_list_member can be split into two lines.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
On Thu, Sep 19, 2019 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:
> [same review comments as above]

Also, can we include a few tests for this feature?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
> * I suggest the option to be just --foreign-data because if we make it --include-foreign-data it's expected to have an --exclude-foreign-data option too.

Several pg_dump options have no counterpart, e.g. --enable-row-security does not have a disable option (which is the default). Also, calling it --foreign-data would sound similar to --table: by default all tables are dumped, but with --table only the selected tables are dumped, whereas without --include-foreign-data all foreign data is excluded, and only with the option is some foreign data included.
> * please add test case
I added test cases for the invalid inputs. I'll try to make a test case for the actual dump of foreign data, but that requires more setup, because a functional FDW is needed there.
> * + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)

Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY of a SELECT instead of a COPY TO.
> filter condition is not implemented completely yet so the logic only work on foreign table so I think its better to handle it separately
> * I don’t understand the need for changing SELECT query. We can use the same SELECT query syntax for both regular table and foreign table
> As you have specified required_argument in above:
> + {"include-foreign-data", required_argument, NULL, 11},
>
> The below check may not be required:
> + if (strcmp(optarg, "") == 0)
> + {
> + pg_log_error("empty string is not a valid pattern in --include-foreign-data");
> + exit_nicely(1);
> + }
We need to keep this check to avoid the use of '--include-foreign-data=', which would match all foreign servers; in previous messages it was established that that behavior is too coarse.
Removed.
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +
>
> The above check if (foreign_servers_include_oids.head == NULL) may not
> be required, as there is a check present inside
> expand_foreign_server_name_patterns to handle this error:
> +
> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
> + if (PQntuples(res) == 0)
> + fatal("no matching foreign servers were found for pattern \"%s\"", cell->val);
> +
I think it is better that expand_foreign_server_name_patterns does not rely on a non-NULL head, so it checks it by itself, which is closer to the other expand_* functions.
> +static void
> +expand_foreign_server_name_patterns(Archive *fout,
> + SimpleStringList *patterns,
> + SimpleOidList *oids)
> +{
> + PQExpBuffer query;
> + PGresult *res;
> + SimpleStringListCell *cell;
> + int i;
> +
> + if (patterns->head == NULL)
> + return; /* nothing to do */
> +
>
> The above check for patterns->head may not be required as similar
> check exists before this function is called:
> + if (foreign_servers_include_patterns.head != NULL)
> + {
> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,
> + &foreign_servers_include_oids);
> + if (foreign_servers_include_oids.head == NULL)
> + fatal("no matching foreign servers were found");
> + }
> +
Instead I've removed the check before the function is called.
Done.
> + /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */
> + if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&
> + (foreign_servers_include_oids.head == NULL ||
> + !simple_oid_list_member(&foreign_servers_include_oids,
> tbinfo->foreign_server_oid)))
> simple_oid_list_member can be split into two lines
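The skip logic quoted above hinges on a simple linked-list membership test. As a self-contained illustrative sketch (not pg_dump's actual code; the names below are invented here to mirror pg_dump's SimpleOidList and simple_oid_list_member):

```c
#include <stdbool.h>
#include <stdlib.h>

typedef unsigned int Oid;

/* A minimal singly-linked OID list, mirroring pg_dump's SimpleOidList. */
typedef struct OidListCell
{
    struct OidListCell *next;
    Oid         val;
} OidListCell;

typedef struct OidList
{
    OidListCell *head;
    OidListCell *tail;
} OidList;

void
oid_list_append(OidList *list, Oid val)
{
    OidListCell *cell = malloc(sizeof(OidListCell));

    cell->next = NULL;
    cell->val = val;
    if (list->tail)
        list->tail->next = cell;
    else
        list->head = cell;
    list->tail = cell;
}

bool
oid_list_member(const OidList *list, Oid val)
{
    for (const OidListCell *cell = list->head; cell; cell = cell->next)
    {
        if (cell->val == val)
            return true;
    }
    return false;
}

/*
 * Decide whether to dump a foreign table's data: skip unless its server's
 * OID was collected from a --include-foreign-data pattern (an empty list
 * means the option was not given, so all foreign data is skipped).
 */
bool
should_dump_foreign_data(const OidList *include_oids, Oid server_oid)
{
    return include_oids->head != NULL &&
        oid_list_member(include_oids, server_oid);
}
```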
Attachment
Hello,
thanks for the comments!

Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY of a SELECT instead of a COPY TO.

> * + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
> filter condition is not implemented completely yet so the logic only work on foreign table so I think its better to handle it separately
> * I don’t understand the need for changing SELECT query. We can use the same SELECT query syntax for both regular table and foreign table
To which query do you refer? In the patch there are three queries: one retrieves the foreign servers, another is the SELECT in the COPY that now applies in case of a filter condition or a foreign table, and a third retrieves the oid of a given foreign server.
On Fri, Sep 20, 2019 at 6:20 PM Luis Carril <luis.carril@swarm64.com> wrote:
> Hello, thanks for the comments!
>
> Note that there is another if condition that actually applies the filter condition if provided; also, for a foreign table we need to do a COPY of a SELECT instead of a COPY TO.
> > * + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
> > filter condition is not implemented completely yet so the logic only work on foreign table so I think its better to handle it separately

If we have a non-foreign table and filtercond is NULL, then we can do a `COPY table columns TO stdout`; but we can't supply a WHERE clause in pg_dump yet, so filtercond is always NULL and the condition becomes true only for foreign tables.

> > * I don’t understand the need for changing SELECT query. We can use the same SELECT query syntax for both regular table and foreign table
> To which query do you refer? In the patch there are three queries: one retrieves the foreign servers, another is the SELECT in the COPY that now applies in case of a filter condition or a foreign table, and a third retrieves the oid of a given foreign server.

The SELECT on COPY.

regards
Surafel
But if the table is foreign, `COPY foreign-table columns TO stdout` is not supported by Postgres, so we have to do a `COPY (SELECT columns FROM foreign-table) TO stdout`.
Now, if in any case filtercond is non-NULL, i.e. we have a WHERE clause, then for non-foreign and foreign tables alike we have to do a:

`COPY (SELECT columns FROM table) TO stdout`
So the COPY of a foreign table has to be done using the sub-SELECT, just as for a non-foreign table with a filtercond, not like a non-foreign table without one.
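As a concrete illustration of the distinction (a sketch; the table names `mytable` and `ft` are placeholders, and `ft` is assumed to be a foreign table backed by some functional FDW):

```sql
-- Plain tables support the table form of COPY, which pg_dump
-- normally uses when there is no filter condition:
COPY mytable (a, b) TO STDOUT;

-- COPY <table> TO is not supported for foreign tables, so pg_dump
-- must fall back to the query form -- the same form it uses when a
-- filter condition (WHERE clause) is present:
COPY (SELECT a, b FROM ft) TO STDOUT;
```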
Cheers
Luis M Carril
On Sat, 2019-11-09 at 21:38 +0100, Daniel Gustafsson wrote:
> I took a look at this patch again today for a review of the latest version.
> While I still think it's a potential footgun due to read-only FDW's, I can see
> usecases for having it so I'm mildly +1 on adding it.

I don't feel good about this feature. pg_dump should not dump any data that are not part of the database being dumped.

If you restore such a dump, the data will be inserted into the foreign table, right? Unless someone emptied the remote table first, this will add duplicated data to that table. I think that is an unpleasant surprise. I'd expect that if I drop a database and restore it from a dump, it should be as it was before. This change would break that assumption.

What are the use cases of a dump with foreign table data?

Unless I misunderstood something there, -1.

Yours,
Laurenz Albe
Hello
here is a new version of the patch with the tests from Daniel (thanks!) and the nitpicks addressed.
> I don't feel good about this feature.
> pg_dump should not dump any data that are not part of the database
> being dumped.
>
> If you restore such a dump, the data will be inserted into the foreign table,
> right? Unless someone emptied the remote table first, this will add
> duplicated data to that table.
> I think that is an unpleasant surprise. I'd expect that if I drop a database
> and restore it from a dump, it should be as it was before. This change would
> break that assumption.
>
> What are the use cases of a dump with foreign table data?
>
> Unless I misunderstood something there, -1.
This feature is opt-in, so if the user makes dumps of a remote server explicitly by other means, then the user does not need to use this option.
But, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.
In FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could expect that a dump of the database includes data from these FDWs.
Cheers
Luis M Carril
Attachment
> On 12 Nov 2019, at 12:12, Luis Carril <luis.carril@swarm64.com> wrote:
> a new version of the patch with the tests from Daniel (thanks!) and the nitpicks.

The nitpicks have been addressed. However, it seems that the new file containing the test FDW is missing from the new version of the patch. Did you forget to git add the file?

>> I don't feel good about this feature.
>> pg_dump should not dump any data that are not part of the database
>> being dumped.
>>
>> If you restore such a dump, the data will be inserted into the foreign table,
>> right? Unless someone emptied the remote table first, this will add
>> duplicated data to that table.
>> I think that is an unpleasant surprise. I'd expect that if I drop a database
>> and restore it from a dump, it should be as it was before. This change would
>> break that assumption.
>>
>> What are the use cases of a dump with foreign table data?
>>
>> Unless I misunderstood something there, -1.
>
> This feature is opt-in so if the user makes dumps of a remote server explicitly by other means, then the user would not need to use this option.
> But, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.
> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could expect that a dump of the database includes data from these FDWs.

Right, given the deliberate opt-in which is required I don't see much risk of unpleasant user surprises. There are no doubt foot-guns available with this feature, as has been discussed upthread, but the current proposal is IMHO minimizing them.

cheers ./daniel
> The nitpicks have been addressed. However, it seems that the new file
> containing the test FDW seems missing from the new version of the patch. Did
> you forget to git add the file?
Yes, I forgot, thanks for noticing. New patch attached again.
Cheers
Luis M Carril
Attachment
On 2019-Nov-12, Luis Carril wrote:
> But, not all foreign tables are necessarily in a remote server like
> the ones referenced by the postgres_fdw.
> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign
> tables are part of your database, and one could expect that a dump of
> the database includes data from these FDWs.

BTW these are not FDWs in the "foreign" sense at all; they're just abusing the FDW system in order to be able to store data in some different way. The right thing to do IMO is to port these systems to be users of the new storage abstraction (table AM). If we do that, what value is there to the feature being proposed here?

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> On 12 Nov 2019, at 15:21, Luis Carril <luis.carril@swarm64.com> wrote:
>
>> The nitpicks have been addressed. However, it seems that the new file
>> containing the test FDW seems missing from the new version of the patch. Did
>> you forget to git add the file?
>
> Yes, I forgot, thanks for noticing. New patch attached again.

The patch applies, compiles and tests clean. The debate whether we want to allow dumping of foreign data at all will continue, but I am marking the patch as ready for committer as I believe it is ready for input on that level.

cheers ./daniel
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> On 2019-Nov-12, Luis Carril wrote:
>> But, not all foreign tables are necessarily in a remote server like
>> the ones referenced by the postgres_fdw.
>> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign
>> tables are part of your database, and one could expect that a dump of
>> the database includes data from these FDWs.

> BTW these are not FDWs in the "foreign" sense at all; they're just
> abusing the FDW system in order to be able to store data in some
> different way. The right thing to do IMO is to port these systems to be
> users of the new storage abstraction (table AM). If we do that, what
> value is there to the feature being proposed here?

That is a pretty valid point. I'm not sure however that there would be *no* use-cases for the proposed option if all of those FDWs were converted to table AMs. Also, even if the authors of those systems are all hard at work on such a conversion, it'd probably be years before the FDW implementations disappear from the wild.

Having said that, I'm ending up -0.5 or so on the patch as it stands, mainly because it seems like it is bringing way more maintenance burden than it's realistically worth. I'm particularly unhappy about the proposed regression test additions --- the cycles added to check-world, and the maintenance effort that's inevitably going to be needed for all that code, seem unwarranted for something that's at best a very niche use-case. And, despite the bulk of the test additions, they're in no sense offering an end-to-end test, because that would require successfully reloading the data as well.

That objection could be addressed, perhaps, by scaling down the tests to just have a goal of exercising the new pg_dump option-handling code, and not to attempt to do meaningful data extraction from a foreign table. You could do that with an entirely dummy foreign data wrapper and server (cf. sql/foreign_data.sql). I'm imagining perhaps create two dummy servers, of which only one has a table, and we ask to dump data from the other one. This would cover parsing and validation of the --include-foreign-data option, and make sure that we don't dump from servers we're not supposed to. It doesn't actually dump any data, but that part is a completely trivial aspect of the patch, really, and almost all of the code relevant to that does get tested already.

In the department of minor nitpicks ... why bother with joining to pg_foreign_server in the query that retrieves a foreign table's server OID? ft.ftserver is already the answer you seek. Also, I think it'd be wise from a performance standpoint to skip doing that query altogether in the normal case where --include-foreign-data hasn't been requested.

regards, tom lane
On 2019-Nov-12, Luis Carril wrote:
>> The nitpicks have been addressed. However, it seems that the new file
>> containing the test FDW seems missing from the new version of the patch. Did
>> you forget to git add the file?
>
> Yes, I forgot, thanks for noticing. New patch attached again.

Luis,

It seems you've got enough support for this concept, so let's move forward with this patch. There are some comments from Tom about the patch; would you like to send an updated version perhaps?

Thanks

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,

> Luis,
>
> It seems you've got enough support for this concept, so let's move
> forward with this patch. There are some comments from Tom about the
> patch; would you like to send an updated version perhaps?
>
> Thanks
I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.
About the testing issues that Tom mentioned:
I do not see how we can have a pure SQL dummy FDW that tests the functionality, because the only way to identify whether the data of a foreign table for the chosen server is dumped is whether the COPY statement appears in the output; but if the C callbacks of the FDW are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.
Also, to test that the include option chooses only the data of the specified foreign servers, we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find this kind of test in the test suite, even for other selective options like --table or --exclude-schema.
Cheers
Luis M Carril
Attachment
On Fri, Nov 29, 2019 at 2:10 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> > Luis,
> >
> > It seems you've got enough support for this concept, so let's move
> > forward with this patch.  There are some comments from Tom about the
> > patch; would you like to send an updated version perhaps?
> >
> > Thanks
>
> Hi,
>
> I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.
>
> About the testing issues that Tom mentioned:
> I do not see how can we have a pure SQL dummy FDW that tests the functionality. Because the only way to identify if the data of a foreign table for the chosen server is dumped is if the COPY statement appears in the output, but if the C callbacks of the FDW are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.
> Also, to test that the include option chooses only the data of the specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find these kind of tests in the test suite, even for other selective options like --table or --exclude-schema.

Can you have a look at dump with parallel option. Parallel option will
take a lock on table while invoking lockTableForWorker. May be this is
not required for foreign tables.
Thoughts?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
> Can you have a look at dump with parallel option. Parallel option will
> take a lock on table while invoking lockTableForWorker. May be this is
> not required for foreign tables.
> Thoughts?

I tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone from modifying it while it is being dumped.
On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com> wrote:
> > Can you have a look at dump with parallel option. Parallel option will
> > take a lock on table while invoking lockTableForWorker. May be this is
> > not required for foreign tables.
> > Thoughts?
>
> I tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone from modifying it while it is being dumped.

I'm able to get the problem with the following steps:

Bring up a postgres setup with servers running on ports 5432 and 5433.

Execute the following commands in Server1 configured on 5432 port:
- CREATE EXTENSION postgres_fdw;
- CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');
- create user user1 password '123';
- alter user user1 with superuser;
- CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');
Execute the following commands in Server2 configured on 5433 port:
- create user user1 password '123';
- alter user user1 with superuser;
Execute the following commands in Server2 configured on 5433 port as user1 user:
- create schema test;
- create table test.test1(id int);
- insert into test.test1 values(10);
Execute the following commands in Server1 configured on 5432 port as user1 user:
- CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');
Without parallel option, the operation is successful:
- ./pg_dump -d postgres -f dumpdir -U user1 -F d --include-foreign-data foreign_server
With parallel option it fails:
- ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server

pg_dump: error: could not obtain lock on relation "public.foreign_table1"
This usually means that someone requested an ACCESS EXCLUSIVE lock on the table after the pg_dump parent process had gotten the initial ACCESS SHARE lock on the table.
pg_dump: error: a worker process died unexpectedly

There may be simpler steps than this to reproduce the issue, I have not tried to optimize it.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
On Mon, Jan 20, 2020 at 8:34 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> Hi Vignesh,
>
> yes you are right I could reproduce it also with 'file_fdw'. The issue is that LOCK is not supported on foreign tables, so I guess that the safest solution is to make the --include-foreign-data incompatible with --jobs, because skipping the locking for foreign tables maybe can lead to a deadlock anyway. Suggestions?

Yes we can support --include-foreign-data without parallel option and
later add support for parallel option as a different patch.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
> Yes we can support --include-foreign-data without parallel option and
> later add support for parallel option as a different patch.
Hi,
I've attached a new version of the patch in which an error is emitted if a parallel backup is used with the --include-foreign-data option.
Cheers
Luis M. Carril
Attachment
On Tue, Jan 21, 2020 at 3:06 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> > Yes we can support --include-foreign-data without parallel option and
> > later add support for parallel option as a different patch.
>
> Hi,
>
> I've attached a new version of the patch in which an error is emitted if the parallel backup is used with the --include-foreign-data option.

Thanks for working on the comments. I noticed one behavior is
different when --table option is specified. When --table is specified
the following are not getting dumped:
CREATE SERVER foreign_server
I felt the above also should be included as part of the dump when
include-foreign-data option is specified.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
> Thanks for working on the comments. I noticed one behavior is
> different when --table option is specified. When --table is specified
> the following are not getting dumped:
> CREATE SERVER foreign_server
> I felt the above also should be included as part of the dump when
> include-foreign-data option is specified.
Yes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, does not include the extension nor the server.
I guess that the idea behind --table is that the table prerequisites should already exist on the database.
A similar behavior can be reproduced for a non foreign table. If a table is created in a specific schema, dumping only the table with --table does not dump the schema definition.
So I think we do not need to dump the server with the table.
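The schema analogy above can be sketched like this (hypothetical names, not taken from the patch or its tests):

```sql
-- Dumping only a table inside a schema does not dump the schema itself:
CREATE SCHEMA app;
CREATE TABLE app.events (id int);
-- "pg_dump --table app.events mydb" emits CREATE TABLE app.events ...
-- but no CREATE SCHEMA app; the schema is a prerequisite that must
-- already exist at restore time, just like a foreign server.
```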
Cheers
Luis M Carril
On 2020-01-21 10:36, Luis Carril wrote:
> > Yes we can support --include-foreign-data without parallel option and
> > later add support for parallel option as a different patch.
>
> Hi,
>
> I've attached a new version of the patch in which an error is
> emitted if the parallel backup is used with the --include-foreign-data
> option.

This seems like an overreaction.  The whole point of
lockTableForWorker() is to avoid deadlocks, but foreign tables don't
have locks, so it's not a problem.  I think you can just skip foreign
tables in lockTableForWorker() using the same logic that getTables() uses.

I think parallel data dump would be an especially interesting option
when using foreign tables, so it's worth figuring this out.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jan 29, 2020 at 2:00 PM Luis Carril <luis.carril@swarm64.com> wrote:
>
> > Thanks for working on the comments. I noticed one behavior is
> > different when --table option is specified. When --table is specified
> > the following are not getting dumped:
> > CREATE SERVER foreign_server
> >
> > I felt the above also should be included as part of the dump when
> > include-foreign-data option is specified.
>
> Yes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, does not include the extension nor the server.
> I guess that the idea behind --table is that the table prerequisites should already exist on the database.
>
> A similar behavior can be reproduced for a non foreign table. If a table is created in a specific schema, dumping only the table with --table does not dump the schema definition.
>
> So I think we do not need to dump the server with the table.

Thanks for the clarification, the behavior sounds reasonable to me
unless others have a different opinion on this.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
Hi Luis,
On 1/29/20 11:05 AM, Peter Eisentraut wrote:
> On 2020-01-21 10:36, Luis Carril wrote:
>>> Yes we can support --include-foreign-data without parallel option and
>>> later add support for parallel option as a different patch.
>>
>> Hi,
>>
>> I've attached a new version of the patch in which an error is
>> emitted if the parallel backup is used with the --include-foreign-data
>> option.
>
> This seems like an overreaction. The whole point of
> lockTableForWorker() is to avoid deadlocks, but foreign tables don't
> have locks, so it's not a problem. I think you can just skip foreign
> tables in lockTableForWorker() using the same logic that getTables() uses.
>
> I think parallel data dump would be an especially interesting option
> when using foreign tables, so it's worth figuring this out.
What do you think of Peter's comment?
Regards,
--
-David
david@pgmasters.net
I am just responding on the latest mail on this thread. But the question is about functionality.

The proposal is to add a single flag --include-foreign-data which controls whether or not data is dumped for all the foreign tables in a database. That may not serve the purpose. A foreign table may point to a view, materialized view or inheritance tree, and so on. A database can have foreign tables pointing to all of those kinds. Restoring data to a view won't be possible, and restoring it into an inheritance tree would insert it into the parent only and not the children. Further, a user may not want the data to be dumped for all the foreign tables since their usages are different, esp. considering restore.

I think a better option is to extract data in a foreign table using --table if that's the only usage. Otherwise, we need a foreign table level flag indicating whether pg_dump should dump the data for that foreign table or not.
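The restore hazard described above can be sketched as follows (all names are hypothetical, chosen only for illustration):

```sql
-- On the remote server: a view over a join, which is not updatable.
CREATE VIEW remote_view AS
    SELECT a.id, b.val FROM a JOIN b USING (id);

-- On the local server: a foreign table pointing at that view.
CREATE FOREIGN TABLE ft_view (id int, val text)
    SERVER some_server OPTIONS (table_name 'remote_view');

-- Dumping ft_view's data yields a COPY ... FROM stdin block, but
-- restoring it means inserting into the remote join view, which the
-- remote server will reject; similarly, COPY into a foreign table over
-- an inheritance parent would land all rows in the parent only.
```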
Hi Luis,

Please don't top post.  Also be careful to quote prior text when
replying.  Your message was pretty hard to work through -- i.e. figuring
out what you said vs. what you were replying to.

On 3/5/20 8:51 AM, Luis Carril wrote:
>
> At this point I would like to leave the patch as is, and discuss further
> improvement in a future patch.

I have marked this as Need Review since the author wants the patch
considered as-is.  I think Ashutosh, at least, has concerns about the
patch as it stands, but does anyone else want to chime in?

Regards,
--
-David
david@pgmasters.net
On 2020-Jan-29, Peter Eisentraut wrote:

> On 2020-01-21 10:36, Luis Carril wrote:
> > > Yes we can support --include-foreign-data without parallel option and
> > > later add support for parallel option as a different patch.
> >
> > I've attached a new version of the patch in which an error is
> > emitted if the parallel backup is used with the --include-foreign-data
> > option.
>
> This seems like an overreaction.  The whole point of lockTableForWorker() is
> to avoid deadlocks, but foreign tables don't have locks, so it's not a
> problem.  I think you can just skip foreign tables in lockTableForWorker()
> using the same logic that getTables() uses.
>
> I think parallel data dump would be an especially interesting option when
> using foreign tables, so it's worth figuring this out.

I agree it would be nice to implement this, so I tried to implement it.
I found it's not currently workable, because parallel.c only has a
tocEntry to work with, not a DumpableObject, so it doesn't know that the
table is foreign; to find that out, parallel.c could use
findObjectByDumpId, but parallel.c is used by both pg_dump and
pg_restore, and findObjectByDumpId is in common.c which cannot be linked
in pg_restore because of numerous incompatibilities.

One way to make this work would be to put lockTableForWorker somewhere
other than parallel.c.  For example maybe have CreateArchive() set up a
new "lock table" ArchiveHandle function ptr that parallel.c can call;
lockTableForWorker() becomes the pg_dump implementation of that, while
pg_restore uses NULL.

Anyway, I think Luis has it right that this should not be a blocker for
this feature.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2020-Mar-23, Alvaro Herrera wrote:

> > This seems like an overreaction.  The whole point of lockTableForWorker() is
> > to avoid deadlocks, but foreign tables don't have locks, so it's not a
> > problem.  I think you can just skip foreign tables in lockTableForWorker()
> > using the same logic that getTables() uses.
> >
> > I think parallel data dump would be an especially interesting option when
> > using foreign tables, so it's worth figuring this out.
>
> I agree it would be nice to implement this, so I tried to implement it.

(Here's patch for this, which of course doesn't compile)

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index c25e3f7a88..b3000da409 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -1316,17 +1316,33 @@ IsEveryWorkerIdle(ParallelState *pstate)
  * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and
  * so we have a deadlock.  We must fail the backup in that case.
  */
+#include "pg_dump.h"
+#include "catalog/pg_class_d.h"
 static void
 lockTableForWorker(ArchiveHandle *AH, TocEntry *te)
 {
 	const char *qualId;
 	PQExpBuffer query;
 	PGresult   *res;
+	DumpableObject *obj;

 	/* Nothing to do for BLOBS */
 	if (strcmp(te->desc, "BLOBS") == 0)
 		return;

+	/*
+	 * Nothing to do for foreign tables either, since they don't support LOCK
+	 * TABLE.
+	 */
+	obj = findObjectByDumpId(te->dumpId);
+	if (obj->objType == DO_TABLE_DATA)
+	{
+		TableInfo  *tabinfo = (TableInfo *) obj;
+
+		if (tabinfo->relkind == RELKIND_FOREIGN_TABLE)
+			return;
+	}
+
 	query = createPQExpBuffer();
 	qualId = fmtQualifiedId(te->namespace, te->tag);

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
v8 attached.  I modified Luis' v7 a little bit by putting the ftserver
acquisition in the main pg_class query instead of adding one separate
query for each foreign table.  That seems better overall.

I don't understand why this code specifically disallows the empty string
as an option to --dump-foreign-data.  The other pattern-matching options
don't do that.  This seems to have been added in response to Daniel's
review[1], but I don't quite understand the rationale.  No other option
behaves that way.  I'm inclined to remove that, and I have done so in
this version.

I removed DumpOptions new bool flag.  Seems pointless; we can just check
that the list is not null, as we do for other such lists.

I split out the proposed test in a different commit; there's no
consensus that this test is acceptable as-is.  Tom proposed a different
strategy[2]; if you try to dump a table with a dummy handler, you'll get
this:

COPY public.ft1 (c1, c2, c3) FROM stdin;
pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;

Maybe what we should do is just verify that you do get that error (and no
other errors).

[1] https://postgr.es/m/E9C5B25C-52E4-49EC-9958-69CD5BD14EDA@yesql.se
[2] https://postgr.es/m/8001.1573759651@sss.pgh.pa.us

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
> On 23 Mar 2020, at 21:40, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
>
> I don't understand why this code specifically disallows the empty string
> as an option to --dump-foreign-data.  The other pattern-matching options
> don't do that.  This seems to have been added in response to Daniel's
> review[1], but I don't quite understand the rationale.  No other option
> behaves that way.  I'm inclined to remove that, and I have done so in
> this version.

It was a response to the discussion upthread about not allowing a blanket
dump-everything statement for foreign data, but rather require some form
of opt-in.  The empty string made the code wildcard to all foreign data,
which was thought of as being a footgun for creating problematic dumps.

cheers ./daniel
On 2020-Mar-23, Alvaro Herrera wrote:

> COPY public.ft1 (c1, c2, c3) FROM stdin;
> pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
> pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;
>
> Maybe what we should do just verify that you do get that error (and no
> other errors).

Done that way.  Will be pushing this shortly.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
On 2020-Mar-24, Alvaro Herrera wrote:

> On 2020-Mar-23, Alvaro Herrera wrote:
>
> > COPY public.ft1 (c1, c2, c3) FROM stdin;
> > pg_dump: error: query failed: ERROR:  foreign-data wrapper "dummy" has no handler
> > pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;
> >
> > Maybe what we should do just verify that you do get that error (and no
> > other errors).
>
> Done that way.  Will be pushing this shortly.

Hmm, but travis is failing on the cfbot, and I can't see why ...

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2020-Mar-24, Alvaro Herrera wrote:

> Hmm, but travis is failing on the cfbot, and I can't see why ...

My only guess, without further output, is that getopt_long is not liking
the [ "--include-foreign-data", "xxx" ] style of arguments in the Perl
array of the command to run (which we don't use anywhere else in the
files I looked), so I changed it to [ "--include-foreign-data=xxx" ].
If this was not the problem, we'll need more info, which the buildfarm
will give us.

And pushed.  Thanks, Luis, and thanks to all reviewers.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services