Thread: [9.1] 2 bugs with extensions

[9.1] 2 bugs with extensions

From
Marko Kreen
Date:
I converted Skytools modules to extensions and found 2 problems:

1) Dumpable sequences are not supported - if a sequence is tagged
   with pg_catalog.pg_extension_config_dump(), pg_dump tries to run
   COPY on it.

2) If there is a schema with functions but nothing else, then when
   later converting it to an extension by adding the functions and the
   schema to the extension, a later DROP of that extension fails to
   remove the schema.  A plain CREATE EXTENSION + DROP EXTENSION works
   fine.

To reproduce, check out the 'master' branch and go directly
to the module directory:

$ git clone --recursive git://github.com/markokr/skytools.git
$ cd skytools

1) $ cd sql/pgq && make install installcheck
..
pg_dump regression > test.dump
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  cannot copy from sequence
"batch_id_seq"
pg_dump: The command was: COPY pgq.batch_id_seq  TO stdout;
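
For context, a minimal sketch of how a sequence ends up marked
dumpable (illustrative names, not pgq's actual install script):

  -- inside the extension's install script:
  CREATE SEQUENCE batch_id_seq;
  SELECT pg_catalog.pg_extension_config_dump('batch_id_seq', '');

  -- pg_dump then treats the sequence like a config table and runs
  -- COPY on it, but a sequence is a one-row relation that can only
  -- be read with SELECT:
  SELECT last_value, is_called FROM batch_id_seq;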


2) $ cd sql/pgq_coop && make install installcheck
..
create extension pgq_coop from 'unpackaged';
drop extension pgq_coop;
create extension pgq_coop;
ERROR:  schema "pgq_coop" already exists

-- 
marko


Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Hi,

After many distractions I've at last been able to do something about
that bug report.

Marko Kreen <markokr@gmail.com> writes:
> 1) Dumpable sequences are not supported - if a sequence is tagged
>    with pg_catalog.pg_extension_config_dump(), pg_dump tries to run
>    COPY on it.

I can only reproduce that in 9.1. I first tried in HEAD, where pg_dump
just entirely skips the sequence, which is not the right answer either,
but at least does not spit out that message.

> pg_dump: Error message from server: ERROR:  cannot copy from sequence
> "batch_id_seq"
> pg_dump: The command was: COPY pgq.batch_id_seq  TO stdout;

Please find attached a patch that fixes it in 9.1, in all classic
pg_dump, --data-only and --schema-only.

 git diff --stat
  src/bin/pg_dump/pg_dump.c |   68 +++++++++++++++++++++++++++++++++++---------
  1 files changed, 54 insertions(+), 14 deletions(-)

Once that's ok for 9.1, I'll get to work on a fix for master, oh and
look at what the situation is in 9.2, which I guess is the same as in
master actually.

> 2) If there is a schema with functions, but nothing else,
> create extension pgq_coop from 'unpackaged';
> drop extension pgq_coop;
> create extension pgq_coop;
> ERROR:  schema "pgq_coop" already exists

From reading the scripts it's not clear to me, but it appears that the
schema existed before the CREATE FROM 'unpackaged' and then got added
to the extension by way of

  ALTER EXTENSION pgq_coop ADD SCHEMA pgq_coop;

Is that true? Can we work out a minimal example to reproduce the bug?
(The Makefile in skytools/sql/pgq_coop fails on my OS)

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


Attachment

Re: [9.1] 2 bugs with extensions

From
Marko Kreen
Date:
On Wed, Sep 26, 2012 at 5:36 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
> After many distractions I've at last been able to do something about
> that bug report.

Thanks.

>> 2) If there is a schema with functions, but nothing else,
>> create extension pgq_coop from 'unpackaged';
>> drop extension pgq_coop;
>> create extension pgq_coop;
>> ERROR:  schema "pgq_coop" already exists
>
> From reading the scripts it's not clear to me, but it appears that the
> schema existed before the CREATE FROM 'unpackaged' and then got added
> to the extension by way of
>
>   ALTER EXTENSION pgq_coop ADD SCHEMA pgq_coop;
>
> Is that true?

Yes.

> Can we work out a minimal example to reproduce the bug?

Yes, the above text or sql/pgq_coop/sql/pgq_coop_init_ext.sql

I guess you could replace pgq_coop with any extension consisting
of just functions.

> (The Makefile in skytools/sql/pgq_coop fails on my OS)

How does it fail?  Are you using GNU make?  What version?

-- 
marko



Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Marko Kreen <markokr@gmail.com> writes:
>> Can we work out a minimal example to reproduce the bug?
>
> Yes, the above text or sql/pgq_coop/sql/pgq_coop_init_ext.sql
>
> I guess you could replace pgq_coop with any extension consisting
> of just functions.

I did just that, with a single function, and couldn't reproduce the
problem in either 9.1 or master, with relocatable = true and then with
relocatable = false and schema = 'pg_catalog' as in your repository.

The extension has these file contents:

 Makefile
   EXTENSION = extschema
   DATA = extschema--unpackaged--1.0.sql extschema.sql extschema--1.0.sql
   PG_CONFIG = pg_config
   PGXS := $(shell $(PG_CONFIG) --pgxs)
   include $(PGXS)

 extschema.control
   # extschema extension
   comment = 'debug extension schema handling'
   default_version = '1.0'
   relocatable = false
   schema = 'pg_catalog'

 extschema.sql
   create schema ext;
   create or replace function ext.extschema() returns int language sql
   as $$ select 1; $$;

 extschema--unpackaged--1.0.sql
   alter extension extschema add schema ext;
   alter extension extschema add function ext.extschema();

 extschema--1.0.sql
   create schema ext;
   create or replace function ext.extschema() returns int language sql
   as $$ select 1; $$;
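
The test sequence mirrors the original report (a sketch; it assumes
the unpackaged objects are first created by running extschema.sql
by hand):

  \i extschema.sql
  create extension extschema from 'unpackaged';
  drop extension extschema;
  create extension extschema;  -- the reported bug would show up here
                               -- as: ERROR: schema "ext" already exists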
 

So I guess that needs some more work to reproduce the bug.

>> (The Makefile in skytools/sql/pgq_coop fails on my OS)
>
> How does it fail?  Are you using gnu make?  What version?

I guess sed is the problem here; it's a BSD variant:

  dim ~/skytools/sql/pgq_coop make pgq_coop--unpackaged--3.1.sql
  sed: 1: "/^\\/{s/\\i //;p}\n": extra characters at the end of p command
  sed: 1: "/^\\/{s/\\i //;p}\n": extra characters at the end of p command
  sed: 1: "/^\\/{s/\\i //;p}\n": extra characters at the end of p command
  cat pgq_coop.upgrade.sql > pgq_coop--unpackaged--3.1.sql

  sed --version
  sed: illegal option -- -
  usage: sed script [-Ealn] [-i extension] [file ...]
         sed [-Ealn] [-i extension] [-e script] ... [-f script_file] ... [file ...]

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support



Re: [9.1] 2 bugs with extensions

From
Marko Kreen
Date:
On Wed, Sep 26, 2012 at 7:15 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
> Marko Kreen <markokr@gmail.com> writes:
>>> Can we work out a minimal example to reproduce the bug?
>>
>> Yes, the above text or sql/pgq_coop/sql/pgq_coop_init_ext.sql
>>
>> I guess you could replace pgq_coop with any extension consisting
>> of just functions.
>
> I did just that, with a single function, and couldn't reproduce the
> problem in either 9.1 or master, with relocatable = true and then with
> relocatable = false and schema = 'pg_catalog' as in your repository.

Indeed, after another makefile reorg I could not reproduce it on
skytools master either; some digging showed that, due to a makefile
bug ($< instead of $^, i.e. only the first prerequisite instead of
all of them), the ADD SCHEMA was missing from the generated .sql
file.  So no bug in Postgres.

>>> (The Makefile in skytools/sql/pgq_coop fails on my OS)
>>
>> How does it fail?  Are you using gnu make?  What version?
>
> I guess sed is the problem here, it's a BSD variant:

Could you test if Skytools git now works for you?

I replaced the sed usage with awk there; perhaps that will be
more portable.

-- 
marko



Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Marko Kreen <markokr@gmail.com> writes:
> Indeed, after another makefile reorg I could not reproduce it on
> skytools master either; some digging showed that, due to a makefile
> bug ($< instead of $^, i.e. only the first prerequisite instead of
> all of them), the ADD SCHEMA was missing from the generated .sql
> file.  So no bug in Postgres.

That would explain it, indeed :)

> Could you test if Skytools git now works for you?

It does:

  dim ~/dev/skytools/sql/pgq_coop make pgq_coop--unpackaged--3.1.1.sql
  ../../scripts/catsql.py structure/upgrade.sql > pgq_coop.upgrade.sql
  ../../scripts/catsql.py pgq_coop.upgrade.sql structure/ext_unpackaged.sql structure/ext_postproc.sql > pgq_coop--unpackaged--3.1.1.sql

  dim ~/dev/skytools/sql/pgq_coop make pgq_coop--3.1.1.sql
  ../../scripts/catsql.py structure/install.sql > pgq_coop.sql
  ../../scripts/catsql.py pgq_coop.sql structure/ext_postproc.sql > pgq_coop--3.1.1.sql

> I replaced sed usage with awk there, perhaps that will be
> more portable.

I seem to recall needing to explicitly use `gawk` for some scripts,
depending on the features you want. On some systems the default awk is
`mawk` or an even older version that doesn't have many features…

That said, it seems to work here, now. Thanks!

Regards,
--
Dimitri Fontaine                                        06 63 07 10 78
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support



Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
> Please find attached a patch that fixes it in 9.1, in all classic
> pg_dump, --data-only and --schema-only.

Same for 9.2, attached. master needs yet another patch because of the
recent headers reorg, it seems, that's for another day though.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


Attachment

Re: [9.1] 2 bugs with extensions

From
Alvaro Herrera
Date:
Excerpts from Dimitri Fontaine's message of Fri Sep 28 17:36:41 -0300 2012:
> Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
> > Please find attached a patch that fixes it in 9.1, in all classic
> > pg_dump, --data-only and --schema-only.
>
> Same for 9.2, attached. master needs yet another patch because of the
> recent headers reorg, it seems, that's for another day though.

No, just remove the RELKIND_UNCATALOGUED case in that switch.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Hi,

Alvaro Herrera <alvherre@2ndquadrant.com> writes:
>> Same for 9.2, attached. master needs yet another patch because of the
>> recent headers reorg, it seems, that's for another day though.
>
> No, just remove the RELKIND_UNCATALOGUED case in that switch.

Oh. As in the attached? :)

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


Attachment

Re: [9.1] 2 bugs with extensions

From
Alvaro Herrera
Date:
Excerpts from Dimitri Fontaine's message of Mon Oct 01 04:44:28 -0300 2012:
> Hi,
>
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> >> Same for 9.2, attached. master needs yet another patch because of the
> >> recent headers reorg, it seems, that's for another day though.
> >
> > No, just remove the RELKIND_UNCATALOGUED case in that switch.
>
> Oh. As in the attached? :)

That seems to work, yes.

While testing this out I noticed that this silently does nothing:

SELECT pg_catalog.pg_extension_config_dump('my_config', NULL);

i.e. the table is not marked as configuration but no error or warning
appears, either.  This seems rather surprising.
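
For comparison, the documented form of the call, sketched here with a
hypothetical config table: the second argument is a WHERE condition
selecting which rows to dump, and the empty string means all rows:

  -- dump every row of my_config
  SELECT pg_catalog.pg_extension_config_dump('my_config', '');
  -- dump only user-added rows (assumes a boolean column
  -- "standard_entry" marking rows created by the install script)
  SELECT pg_catalog.pg_extension_config_dump('my_config',
         'WHERE NOT standard_entry');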

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [9.1] 2 bugs with extensions

From
Alvaro Herrera
Date:
Excerpts from Dimitri Fontaine's message of Wed Sep 26 11:36:38 -0300 2012:

> Marko Kreen <markokr@gmail.com> writes:
> > 1) Dumpable sequences are not supported - if a sequence is tagged
> >    with pg_catalog.pg_extension_config_dump(), pg_dump tries to run
> >    COPY on it.
>
> I can only reproduce that in 9.1. I first tried in HEAD where pg_dump
> will just entirely skip the sequence, which is not the right answer
> either, but at least does not spit out that message.
>
> > pg_dump: Error message from server: ERROR:  cannot copy from sequence
> > "batch_id_seq"
> > pg_dump: The command was: COPY pgq.batch_id_seq  TO stdout;
>
> Please find attached a patch that fixes it in 9.1, in all classic
> pg_dump, --data-only and --schema-only.
>
>  git diff --stat
>   src/bin/pg_dump/pg_dump.c |   68 +++++++++++++++++++++++++++++++++++---------
>   1 files changed, 54 insertions(+), 14 deletions(-)
>
> Once that's ok for 9.1, I'll get to work on a fix for master, oh and
> look at what the situation is in 9.2, which I guess is the same as in
> master actually.

I had a look at the patches submitted by Dimitri and in my tests they do
exactly what's intended, i.e. produce a data dump of the extension's
config sequences when necessary.  However, after a couple of hours
trying to understand what the patches are *doing* I failed to figure it
out completely, and I'm afraid that there might be secondary side
effects that I'm failing to notice.  So I'm punting this to Tom, who
seems to be the author of this code.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [9.1] 2 bugs with extensions

From
Tom Lane
Date:
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
>> No, just remove the RELKIND_UNCATALOGUED case in that switch.

> Oh. As in the attached? :)

I don't think you tested this patch in 9.2 or HEAD, because it bleats
like mad.  I installed an extension containing

create sequence extseq;
select pg_catalog.pg_extension_config_dump('extseq', '');

into the regression database, and then did:

$ pg_dump -Fc regression >r.dump
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order
pg_dump: [archiver] WARNING: archive items not in correct section order

The reason is that it calls dumpSequence() to emit the SEQUENCE SET
archive item during table-data dumping, but the archive item gets marked
SECTION_PRE_DATA.  As of 9.2 we have to be rigid about keeping those
section markings correct and in-sequence.  This is not really right in
9.1 either (wouldn't be surprised if it breaks parallel restore).

The fact that SEQUENCE SET is considered pre-data has bitten us several
times already, eg
http://archives.postgresql.org/pgsql-bugs/2012-05/msg00084.php

I think it may be time to bite the bullet and change that (including
breaking dumpSequence() into two separate functions).  I'm a little bit
worried about the compatibility implications of back-patching such a
change, though.  Is it likely that anybody out there is depending on the
fact that, eg, pg_dump --section=pre-data currently includes SEQUENCE SET
items?  Personally I think it's more likely that that'd be seen as a
bug, but ...
        regards, tom lane



Re: [9.1] 2 bugs with extensions

From
Tom Lane
Date:
I wrote:
> The fact that SEQUENCE SET is considered pre-data has bitten us several
> times already, eg
> http://archives.postgresql.org/pgsql-bugs/2012-05/msg00084.php

> I think it may be time to bite the bullet and change that (including
> breaking dumpSequence() into two separate functions).  I'm a little bit
> worried about the compatibility implications of back-patching such a
> change, though.  Is it likely that anybody out there is depending on the
> fact that, eg, pg_dump --section=pre-data currently includes SEQUENCE SET
> items?  Personally I think it's more likely that that'd be seen as a
> bug, but ...

Specifically, I'm thinking this, which looks rather bulky but most of
the diff is from reindenting the guts of dumpSequence().

            regards, tom lane

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4223b415362f4673097b6950c1c1f8b8349ca7d7..82330cbd915d7d23f7976253f5135beeec1abcf9 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*************** static void dumpTable(Archive *fout, Tab
*** 192,197 ****
--- 192,198 ----
  static void dumpTableSchema(Archive *fout, TableInfo *tbinfo);
  static void dumpAttrDef(Archive *fout, AttrDefInfo *adinfo);
  static void dumpSequence(Archive *fout, TableInfo *tbinfo);
+ static void dumpSequenceData(Archive *fout, TableDataInfo *tdinfo);
  static void dumpIndex(Archive *fout, IndxInfo *indxinfo);
  static void dumpConstraint(Archive *fout, ConstraintInfo *coninfo);
  static void dumpTableConstraintComment(Archive *fout, ConstraintInfo *coninfo);
*************** makeTableDataInfo(TableInfo *tbinfo, boo
*** 1640,1648 ****
      /* Skip VIEWs (no data to dump) */
      if (tbinfo->relkind == RELKIND_VIEW)
          return;
-     /* Skip SEQUENCEs (handled elsewhere) */
-     if (tbinfo->relkind == RELKIND_SEQUENCE)
-         return;
      /* Skip FOREIGN TABLEs (no data to dump) */
      if (tbinfo->relkind == RELKIND_FOREIGN_TABLE)
          return;
--- 1641,1646 ----
*************** dumpDumpableObject(Archive *fout, Dumpab
*** 7318,7324 ****
              dumpCast(fout, (CastInfo *) dobj);
              break;
          case DO_TABLE_DATA:
!             dumpTableData(fout, (TableDataInfo *) dobj);
              break;
          case DO_DUMMY_TYPE:
              /* table rowtypes and array types are never dumped separately */
--- 7316,7325 ----
              dumpCast(fout, (CastInfo *) dobj);
              break;
          case DO_TABLE_DATA:
!             if (((TableDataInfo *) dobj)->tdtable->relkind == RELKIND_SEQUENCE)
!                 dumpSequenceData(fout, (TableDataInfo *) dobj);
!             else
!                 dumpTableData(fout, (TableDataInfo *) dobj);
              break;
          case DO_DUMMY_TYPE:
              /* table rowtypes and array types are never dumped separately */
*************** collectSecLabels(Archive *fout, SecLabel
*** 12226,12238 ****
  static void
  dumpTable(Archive *fout, TableInfo *tbinfo)
  {
!     if (tbinfo->dobj.dump)
      {
          char       *namecopy;

          if (tbinfo->relkind == RELKIND_SEQUENCE)
              dumpSequence(fout, tbinfo);
!         else if (!dataOnly)
              dumpTableSchema(fout, tbinfo);

          /* Handle the ACL here */
--- 12227,12239 ----
  static void
  dumpTable(Archive *fout, TableInfo *tbinfo)
  {
!     if (tbinfo->dobj.dump && !dataOnly)
      {
          char       *namecopy;

          if (tbinfo->relkind == RELKIND_SEQUENCE)
              dumpSequence(fout, tbinfo);
!         else
              dumpTableSchema(fout, tbinfo);

          /* Handle the ACL here */
*************** findLastBuiltinOid_V70(Archive *fout)
*** 13347,13366 ****
      return last_oid;
  }

  static void
  dumpSequence(Archive *fout, TableInfo *tbinfo)
  {
      PGresult   *res;
      char       *startv,
-                *last,
                 *incby,
                 *maxv = NULL,
                 *minv = NULL,
                 *cache;
      char        bufm[100],
                  bufx[100];
!     bool        cycled,
!                 called;
      PQExpBuffer query = createPQExpBuffer();
      PQExpBuffer delqry = createPQExpBuffer();
      PQExpBuffer labelq = createPQExpBuffer();
--- 13348,13369 ----
      return last_oid;
  }

+ /*
+  * dumpSequence
+  *      write the declaration (not data) of one user-defined sequence
+  */
  static void
  dumpSequence(Archive *fout, TableInfo *tbinfo)
  {
      PGresult   *res;
      char       *startv,
                 *incby,
                 *maxv = NULL,
                 *minv = NULL,
                 *cache;
      char        bufm[100],
                  bufx[100];
!     bool        cycled;
      PQExpBuffer query = createPQExpBuffer();
      PQExpBuffer delqry = createPQExpBuffer();
      PQExpBuffer labelq = createPQExpBuffer();
*************** dumpSequence(Archive *fout, TableInfo *t
*** 13375,13381 ****
      {
          appendPQExpBuffer(query,
                            "SELECT sequence_name, "
!                           "start_value, last_value, increment_by, "
                     "CASE WHEN increment_by > 0 AND max_value = %s THEN NULL "
                     "     WHEN increment_by < 0 AND max_value = -1 THEN NULL "
                            "     ELSE max_value "
--- 13378,13384 ----
      {
          appendPQExpBuffer(query,
                            "SELECT sequence_name, "
!                           "start_value, increment_by, "
                     "CASE WHEN increment_by > 0 AND max_value = %s THEN NULL "
                     "     WHEN increment_by < 0 AND max_value = -1 THEN NULL "
                            "     ELSE max_value "
*************** dumpSequence(Archive *fout, TableInfo *t
*** 13384,13390 ****
                     "     WHEN increment_by < 0 AND min_value = %s THEN NULL "
                            "     ELSE min_value "
                            "END AS min_value, "
!                           "cache_value, is_cycled, is_called from %s",
                            bufx, bufm,
                            fmtId(tbinfo->dobj.name));
      }
--- 13387,13393 ----
                     "     WHEN increment_by < 0 AND min_value = %s THEN NULL "
                            "     ELSE min_value "
                            "END AS min_value, "
!                           "cache_value, is_cycled FROM %s",
                            bufx, bufm,
                            fmtId(tbinfo->dobj.name));
      }
*************** dumpSequence(Archive *fout, TableInfo *t
*** 13392,13398 ****
      {
          appendPQExpBuffer(query,
                            "SELECT sequence_name, "
!                           "0 AS start_value, last_value, increment_by, "
                     "CASE WHEN increment_by > 0 AND max_value = %s THEN NULL "
                     "     WHEN increment_by < 0 AND max_value = -1 THEN NULL "
                            "     ELSE max_value "
--- 13395,13401 ----
      {
          appendPQExpBuffer(query,
                            "SELECT sequence_name, "
!                           "0 AS start_value, increment_by, "
                     "CASE WHEN increment_by > 0 AND max_value = %s THEN NULL "
                     "     WHEN increment_by < 0 AND max_value = -1 THEN NULL "
                            "     ELSE max_value "
*************** dumpSequence(Archive *fout, TableInfo *t
*** 13401,13407 ****
                     "     WHEN increment_by < 0 AND min_value = %s THEN NULL "
                            "     ELSE min_value "
                            "END AS min_value, "
!                           "cache_value, is_cycled, is_called from %s",
                            bufx, bufm,
                            fmtId(tbinfo->dobj.name));
      }
--- 13404,13410 ----
                     "     WHEN increment_by < 0 AND min_value = %s THEN NULL "
                            "     ELSE min_value "
                            "END AS min_value, "
!                           "cache_value, is_cycled FROM %s",
                            bufx, bufm,
                            fmtId(tbinfo->dobj.name));
      }
*************** dumpSequence(Archive *fout, TableInfo *t
*** 13428,13592 ****
  #endif

      startv = PQgetvalue(res, 0, 1);
!     last = PQgetvalue(res, 0, 2);
!     incby = PQgetvalue(res, 0, 3);
      if (!PQgetisnull(res, 0, 4))
!         maxv = PQgetvalue(res, 0, 4);
!     if (!PQgetisnull(res, 0, 5))
!         minv = PQgetvalue(res, 0, 5);
!     cache = PQgetvalue(res, 0, 6);
!     cycled = (strcmp(PQgetvalue(res, 0, 7), "t") == 0);
!     called = (strcmp(PQgetvalue(res, 0, 8), "t") == 0);

      /*
!      * The logic we use for restoring sequences is as follows:
!      *
!      * Add a CREATE SEQUENCE statement as part of a "schema" dump (use
!      * last_val for start if called is false, else use min_val for start_val).
!      * Also, if the sequence is owned by a column, add an ALTER SEQUENCE OWNED
!      * BY command for it.
!      *
!      * Add a 'SETVAL(seq, last_val, iscalled)' as part of a "data" dump.
       */
!     if (!dataOnly)
!     {
!         /*
!          * DROP must be fully qualified in case same name appears in
!          * pg_catalog
!          */
!         appendPQExpBuffer(delqry, "DROP SEQUENCE %s.",
!                           fmtId(tbinfo->dobj.namespace->dobj.name));
!         appendPQExpBuffer(delqry, "%s;\n",
!                           fmtId(tbinfo->dobj.name));

!         resetPQExpBuffer(query);

!         if (binary_upgrade)
!         {
!             binary_upgrade_set_pg_class_oids(fout, query,
!                                              tbinfo->dobj.catId.oid, false);
!             binary_upgrade_set_type_oids_by_rel_oid(fout, query,
!                                                     tbinfo->dobj.catId.oid);
!         }

!         appendPQExpBuffer(query,
!                           "CREATE SEQUENCE %s\n",
!                           fmtId(tbinfo->dobj.name));

!         if (fout->remoteVersion >= 80400)
!             appendPQExpBuffer(query, "    START WITH %s\n", startv);
!         else
!         {
!             /*
!              * Versions before 8.4 did not remember the true start value.  If
!              * is_called is false then the sequence has never been incremented
!              * so we can use last_val.    Otherwise punt and let it default.
!              */
!             if (!called)
!                 appendPQExpBuffer(query, "    START WITH %s\n", last);
!         }

!         appendPQExpBuffer(query, "    INCREMENT BY %s\n", incby);

!         if (minv)
!             appendPQExpBuffer(query, "    MINVALUE %s\n", minv);
!         else
!             appendPQExpBuffer(query, "    NO MINVALUE\n");

!         if (maxv)
!             appendPQExpBuffer(query, "    MAXVALUE %s\n", maxv);
!         else
!             appendPQExpBuffer(query, "    NO MAXVALUE\n");

!         appendPQExpBuffer(query,
!                           "    CACHE %s%s",
!                           cache, (cycled ? "\n    CYCLE" : ""));

!         appendPQExpBuffer(query, ";\n");

!         appendPQExpBuffer(labelq, "SEQUENCE %s", fmtId(tbinfo->dobj.name));

!         /* binary_upgrade:    no need to clear TOAST table oid */

!         if (binary_upgrade)
!             binary_upgrade_extension_member(query, &tbinfo->dobj,
!                                             labelq->data);

!         ArchiveEntry(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId,
!                      tbinfo->dobj.name,
!                      tbinfo->dobj.namespace->dobj.name,
!                      NULL,
!                      tbinfo->rolname,
!                      false, "SEQUENCE", SECTION_PRE_DATA,
!                      query->data, delqry->data, NULL,
!                      NULL, 0,
!                      NULL, NULL);

!         /*
!          * If the sequence is owned by a table column, emit the ALTER for it
!          * as a separate TOC entry immediately following the sequence's own
!          * entry.  It's OK to do this rather than using full sorting logic,
!          * because the dependency that tells us it's owned will have forced
!          * the table to be created first.  We can't just include the ALTER in
!          * the TOC entry because it will fail if we haven't reassigned the
!          * sequence owner to match the table's owner.
!          *
!          * We need not schema-qualify the table reference because both
!          * sequence and table must be in the same schema.
!          */
!         if (OidIsValid(tbinfo->owning_tab))
!         {
!             TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);

!             if (owning_tab && owning_tab->dobj.dump)
!             {
!                 resetPQExpBuffer(query);
!                 appendPQExpBuffer(query, "ALTER SEQUENCE %s",
!                                   fmtId(tbinfo->dobj.name));
!                 appendPQExpBuffer(query, " OWNED BY %s",
!                                   fmtId(owning_tab->dobj.name));
!                 appendPQExpBuffer(query, ".%s;\n",
                          fmtId(owning_tab->attnames[tbinfo->owning_col - 1]));

!                 ArchiveEntry(fout, nilCatalogId, createDumpId(),
!                              tbinfo->dobj.name,
!                              tbinfo->dobj.namespace->dobj.name,
!                              NULL,
!                              tbinfo->rolname,
!                              false, "SEQUENCE OWNED BY", SECTION_PRE_DATA,
!                              query->data, "", NULL,
!                              &(tbinfo->dobj.dumpId), 1,
!                              NULL, NULL);
!             }
          }
-
-         /* Dump Sequence Comments and Security Labels */
-         dumpComment(fout, labelq->data,
-                     tbinfo->dobj.namespace->dobj.name, tbinfo->rolname,
-                     tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
-         dumpSecLabel(fout, labelq->data,
-                      tbinfo->dobj.namespace->dobj.name, tbinfo->rolname,
-                      tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
      }

!     if (!schemaOnly)
!     {
!         resetPQExpBuffer(query);
!         appendPQExpBuffer(query, "SELECT pg_catalog.setval(");
!         appendStringLiteralAH(query, fmtId(tbinfo->dobj.name), fout);
!         appendPQExpBuffer(query, ", %s, %s);\n",
!                           last, (called ? "true" : "false"));
!
!         ArchiveEntry(fout, nilCatalogId, createDumpId(),
!                      tbinfo->dobj.name,
!                      tbinfo->dobj.namespace->dobj.name,
!                      NULL,
!                      tbinfo->rolname,
!                      false, "SEQUENCE SET", SECTION_PRE_DATA,
!                      query->data, "", NULL,
!                      &(tbinfo->dobj.dumpId), 1,
!                      NULL, NULL);
!     }

      PQclear(res);

--- 13431,13550 ----
  #endif

      startv = PQgetvalue(res, 0, 1);
!     incby = PQgetvalue(res, 0, 2);
!     if (!PQgetisnull(res, 0, 3))
!         maxv = PQgetvalue(res, 0, 3);
      if (!PQgetisnull(res, 0, 4))
!         minv = PQgetvalue(res, 0, 4);
!     cache = PQgetvalue(res, 0, 5);
!     cycled = (strcmp(PQgetvalue(res, 0, 6), "t") == 0);

      /*
!      * DROP must be fully qualified in case same name appears in pg_catalog
       */
!     appendPQExpBuffer(delqry, "DROP SEQUENCE %s.",
!                       fmtId(tbinfo->dobj.namespace->dobj.name));
!     appendPQExpBuffer(delqry, "%s;\n",
!                       fmtId(tbinfo->dobj.name));

!     resetPQExpBuffer(query);

!     if (binary_upgrade)
!     {
!         binary_upgrade_set_pg_class_oids(fout, query,
!                                          tbinfo->dobj.catId.oid, false);
!         binary_upgrade_set_type_oids_by_rel_oid(fout, query,
!                                                 tbinfo->dobj.catId.oid);
!     }

!     appendPQExpBuffer(query,
!                       "CREATE SEQUENCE %s\n",
!                       fmtId(tbinfo->dobj.name));

!     if (fout->remoteVersion >= 80400)
!         appendPQExpBuffer(query, "    START WITH %s\n", startv);

!     appendPQExpBuffer(query, "    INCREMENT BY %s\n", incby);

!     if (minv)
!         appendPQExpBuffer(query, "    MINVALUE %s\n", minv);
!     else
!         appendPQExpBuffer(query, "    NO MINVALUE\n");

!     if (maxv)
!         appendPQExpBuffer(query, "    MAXVALUE %s\n", maxv);
!     else
!         appendPQExpBuffer(query, "    NO MAXVALUE\n");

!     appendPQExpBuffer(query,
!                       "    CACHE %s%s",
!                       cache, (cycled ? "\n    CYCLE" : ""));

!     appendPQExpBuffer(query, ";\n");

!     appendPQExpBuffer(labelq, "SEQUENCE %s", fmtId(tbinfo->dobj.name));

!     /* binary_upgrade:    no need to clear TOAST table oid */

!     if (binary_upgrade)
!         binary_upgrade_extension_member(query, &tbinfo->dobj,
!                                         labelq->data);

!     ArchiveEntry(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId,
!                  tbinfo->dobj.name,
!                  tbinfo->dobj.namespace->dobj.name,
!                  NULL,
!                  tbinfo->rolname,
!                  false, "SEQUENCE", SECTION_PRE_DATA,
!                  query->data, delqry->data, NULL,
!                  NULL, 0,
!                  NULL, NULL);

!     /*
!      * If the sequence is owned by a table column, emit the ALTER for it as a
!      * separate TOC entry immediately following the sequence's own entry.
!      * It's OK to do this rather than using full sorting logic, because the
!      * dependency that tells us it's owned will have forced the table to be
!      * created first.  We can't just include the ALTER in the TOC entry
!      * because it will fail if we haven't reassigned the sequence owner to
!      * match the table's owner.
!      *
!      * We need not schema-qualify the table reference because both sequence
!      * and table must be in the same schema.
!      */
!     if (OidIsValid(tbinfo->owning_tab))
!     {
!         TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);

!         if (owning_tab && owning_tab->dobj.dump)
!         {
!             resetPQExpBuffer(query);
!             appendPQExpBuffer(query, "ALTER SEQUENCE %s",
!                               fmtId(tbinfo->dobj.name));
!             appendPQExpBuffer(query, " OWNED BY %s",
!                               fmtId(owning_tab->dobj.name));
!             appendPQExpBuffer(query, ".%s;\n",
                          fmtId(owning_tab->attnames[tbinfo->owning_col - 1]));

!             ArchiveEntry(fout, nilCatalogId, createDumpId(),
!                          tbinfo->dobj.name,
!                          tbinfo->dobj.namespace->dobj.name,
!                          NULL,
!                          tbinfo->rolname,
!                          false, "SEQUENCE OWNED BY", SECTION_PRE_DATA,
!                          query->data, "", NULL,
!                          &(tbinfo->dobj.dumpId), 1,
!                          NULL, NULL);
          }
      }

!     /* Dump Sequence Comments and Security Labels */
!     dumpComment(fout, labelq->data,
!                 tbinfo->dobj.namespace->dobj.name, tbinfo->rolname,
!                 tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
!     dumpSecLabel(fout, labelq->data,
!                  tbinfo->dobj.namespace->dobj.name, tbinfo->rolname,
!                  tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);

      PQclear(res);

*************** dumpSequence(Archive *fout, TableInfo *t
*** 13595,13600 ****
--- 13553,13613 ----
      destroyPQExpBuffer(labelq);
  }

+ /*
+  * dumpSequenceData
+  *      write the data of one user-defined sequence
+  */
+ static void
+ dumpSequenceData(Archive *fout, TableDataInfo *tdinfo)
+ {
+     TableInfo  *tbinfo = tdinfo->tdtable;
+     PGresult   *res;
+     char       *last;
+     bool        called;
+     PQExpBuffer query = createPQExpBuffer();
+
+     /* Make sure we are in proper schema */
+     selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name);
+
+     appendPQExpBuffer(query,
+                       "SELECT last_value, is_called FROM %s",
+                       fmtId(tbinfo->dobj.name));
+
+     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
+
+     if (PQntuples(res) != 1)
+     {
+         write_msg(NULL, ngettext("query to get data of sequence \"%s\" returned %d row (expected 1)\n",
+                                  "query to get data of sequence \"%s\" returned %d rows (expected 1)\n",
+                                  PQntuples(res)),
+                   tbinfo->dobj.name, PQntuples(res));
+         exit_nicely(1);
+     }
+
+     last = PQgetvalue(res, 0, 0);
+     called = (strcmp(PQgetvalue(res, 0, 1), "t") == 0);
+
+     resetPQExpBuffer(query);
+     appendPQExpBuffer(query, "SELECT pg_catalog.setval(");
+     appendStringLiteralAH(query, fmtId(tbinfo->dobj.name), fout);
+     appendPQExpBuffer(query, ", %s, %s);\n",
+                       last, (called ? "true" : "false"));
+
+     ArchiveEntry(fout, nilCatalogId, createDumpId(),
+                  tbinfo->dobj.name,
+                  tbinfo->dobj.namespace->dobj.name,
+                  NULL,
+                  tbinfo->rolname,
+                  false, "SEQUENCE SET", SECTION_DATA,
+                  query->data, "", NULL,
+                  &(tbinfo->dobj.dumpId), 1,
+                  NULL, NULL);
+
+     PQclear(res);
+
+     destroyPQExpBuffer(query);
+ }
+
  static void
  dumpTrigger(Archive *fout, TriggerInfo *tginfo)
  {

Re: [9.1] 2 bugs with extensions

From
Dimitri Fontaine
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:
>> I think it may be time to bite the bullet and change that (including
>> breaking dumpSequence() into two separate functions).  I'm a little bit
>> worried about the compatibility implications of back-patching such a
>> change, though.  Is it likely that anybody out there is depending on the
>> fact that, eg, pg_dump --section=pre-data currently includes SEQUENCE SET
>> items?  Personally I think it's more likely that that'd be seen as a
>> bug, but ...

FWIW, +1

> Specifically, I'm thinking this, which looks rather bulky but most of
> the diff is from reindenting the guts of dumpSequence().

I see that you commited that patch, thanks a lot Tom!

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support