Thread: pg-dump problem

pg-dump problem

From: Herouth Maoz
Preface:
=======

Well, finally, we have decided to start backing up the Postgres databases
we have. And the decision even came before anything got destroyed...

So, our sysadmin basically made a modified version of the dumpall script
that creates an individual file for each database, with the day of the week
in its name. The effect is to keep seven days' worth of backups on disk.
Since the disk is backed up to tape anyway, that more or less covers
everything.

Since this is based on dumpall, the files are, of course, merely pg_dump
outputs, no bells and whistles.

The Postgres version is 6.2.1.

The problem:
===========

Well, as we all know, when one creates a backup scheme, one must try to
restore from backup, to make sure that when shit happens, the backup will
not turn out to be useless.

So I tried to restore from one of those pg_dump outputs into a newly
created database.

It failed. The output includes several occurrences of the explanation of
backslash commands, courtesy of psql. Some - but not all - of the tables
were restored correctly.

I think I traced the problem to the fact that I have default values
which are SQL functions. Pg_dump dumps the SQL functions AFTER it dumps the
tables. So, when a table is defined and the function on which it depends
does not yet exist, the creation fails. Then the COPY into that table fails,
and then psql tries to interpret the input for that COPY as commands...

What can I do about it? I guess it's a case of "damned if you do, damned
if you don't", because some functions require the existence of tables, and
some tables require the existence of functions... But what am I to do about
my backups?
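
To illustrate the ordering problem (table and function names here are
invented for the example, not taken from the actual dump), a 6.2 dump can
come out looking like this:

```sql
-- The dump defines the table first ...
CREATE TABLE orders (
    id        int4,
    placed_at datetime DEFAULT round_time()  -- fails: round_time() not yet defined
);

-- ... and only later the function the default depends on:
CREATE FUNCTION round_time() RETURNS datetime
    AS 'SELECT ''now''::datetime' LANGUAGE 'sql';
```

So the CREATE TABLE fails, the COPY into the missing table fails, and psql
then chokes on the COPY data lines.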

Rant:
====

It was quite difficult to trace the origin of the problem, because when one
uses psql with redirected input, one does not get visual feedback in the
same way as one does when using -f. On the other hand, one cannot use -f
with pg_dump files, because psql then waits for the COPY data on the
standard input - not in the file of origin.

Before you suggest using pg_dump with proper INSERTs rather than COPY -
there is a bug in psql which causes some text fields to be truncated when
using proper INSERTs, at least when I'm using 8-bit text. So that's not an
option.

If any of the above has been fixed in 6.3, let me know. The problem is that
if I don't have a reliable way of restoring the data when we want to
upgrade, our sysadmin won't upgrade - and rightly so.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma



Re: [GENERAL] pg-dump problem

From: Bruce Momjian
> Preface:
> =======
>
> Well, finally, we have decided to start backing up the Postgres databases
> we have. And the decision even came before anything got destroyed...
>
> So, our sysadmin basically made a modified version of the dumpall script,
> that creates individual files for each database, which include the
> day-of-week in their name. The effect is to have seven days worth of backup
> on disk. Since the disk is backed up on tape anyway, it more-or-less covers
> all.
>
> Since this is based on dumpall, the files are, of course, merely pg_dump
> outputs, no bells and whistles.
>
> The postgres version is 6.2.1
>
> The problem:
> ===========
>
> Well, as we all know, when one creates a backup scheme, one must try to
> restore from backup, to make sure that when shit happens, the backup will
> not turn out to be useless.
>
> So I tried to restore from one of those pg_dump outputs into a newly
> created database.
>
> It failed. Output includes several occurrences of the explanation of
> backslash commands, courtesy of psql. Some - but not all - of the tables
> were restored correctly.
>
> I think I traced down the problem to the fact that I have default values
> which are SQL functions. Pg_dump dumps the SQL functions AFTER it dumps the
> tables. So, when a table is defined, and the function on which it is based
> does not exist, the creation fails. Then, the copy into that table fails,
> and then, PSQL tries to interpret the input for that COPY as commands...
>
> What can I do about it? I guess it is a "damn if you do, damn if you
> don't", because some functions require the existence of tables, and some
> tables require the existence of functions... But what am I to do about my
> backups?

I believe this was fixed in the coming 6.4 beta, September 1.

I also believe we now show the load as it is being loaded by default.
You can use a psql option to get that in earlier releases.
--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)

Re: [GENERAL] pg-dump problem

From: Herouth Maoz
At 17:51 +0300 on 19/8/98, Bruce Momjian wrote:


> I believe this was fixed in the coming 6.4 beta, September 1.

Ah, I see... now, where did I put that time machine... :)

Well, I thought I'd do a pg_dump -S and append the normal pg_dump to it.
Thus, the schema will be created twice, and whatever failed in the first
pass will work in the second.

The problem with that is that pg_dump -S also dumps the indices, so they
will be created before the data is copied in - which will slow restoration
down considerably.
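
For the record, the two-pass idea sketched above would look something like
this (database and file names invented; flags as in 6.2-era pg_dump):

```shell
# Schema-only dump first, then the full dump appended after it:
pg_dump -S mydb  > mydb.backup    # schema: tables, functions, indices
pg_dump    mydb >> mydb.backup    # full dump; CREATEs that failed in
                                  # the first pass succeed the second time

# Restore into a freshly created database, echoing queries:
psql -e newdb < mydb.backup
```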

> I also believe we now show the load as it is being loaded by default.
> You can use a psql option to get that in earlier releases.

Which option? I used -e, but it doesn't echo the actual data, only the
queries themselves, so I had to guess that the line causing the display of
all slash commands was the "\." at the end of a copy operation.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma



Re: [GENERAL] pg-dump problem

From: Bruce Momjian
> At 17:51 +0300 on 19/8/98, Bruce Momjian wrote:
>
>
> > I believe this was fixed in the coming 6.4 beta, September 1.
>
> Ah, I see... now, where did I put that time machine... :)
>
> Well, I thought I'd do a pg_dump -S and append the normal pg_dump to it.
> Thus, the schema will be created twice, and whatever failed in the first
> pass will work in the second.
>
> The problem with that is that pg_dump -S dumps the indices, so the indices
> will be created before the data copy - which will cause a great slow-down
> upon restoration.

Yep.  All the index creation is at the end, so you can just delete them.
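
Deleting them can be done mechanically. A minimal sketch, using an invented
stand-in for a pg_dump -S schema file, and assuming each CREATE INDEX
statement fits on one line:

```shell
# Hypothetical fragment standing in for a pg_dump -S schema file
# (table and index names invented):
cat > /tmp/schema_demo.sql <<'EOF'
CREATE TABLE distributors (did int4, name varchar(40));
CREATE INDEX distributors_did_idx ON distributors (did);
EOF

# Drop the CREATE INDEX lines so the schema-only pass creates no
# indices; they will be created at the end of the full dump instead.
grep -v '^CREATE INDEX' /tmp/schema_demo.sql > /tmp/schema_noidx.sql
cat /tmp/schema_noidx.sql   # only the CREATE TABLE remains
```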

>
> > I also believe we now show the load as it is being loaded by default.
> > You can use a psql option to get that in earlier releases.
>
> Which option? I used -e, but it doesn't echo the actual data, only the
> queries themselves, so I had to guess that the line causing the display of
> all slash commands was the "\." at the end of a copy operation.

Oh, you want to see the data?  I think the new version does this.  Not sure.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)

simple auto increment question.

From: Jeremy Hansen
I want to create a simple auto increment field
starting at 0000 for example.

How do I do this in postgres?

Thanks
-jeremy


Re: [GENERAL] simple auto increment question.

From: Sferacarta Software
Hello Jeremy,

Thursday, 20 August 98, you wrote:


JH> I want to create a simple auto increment field
JH> starting at 0000 for example.
                ^^^^  I suppose you mean 1

JH> How do I do this in postgres?

JH> Thanks
JH> -jeremy

Examples:

   -- How to create a sequence starting with 1:
   --
   CREATE SEQUENCE serial START 1;


   -- How to assign the sequence as default to column <did>:

       CREATE TABLE distributors (
              did      INTEGER DEFAULT NEXTVAL('serial'),
              name     VARCHAR(40)
              );
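
   For example (assuming the sequence was just created and nothing else
   has drawn from it), rows that omit <did> pick up consecutive values:

   ```sql
   INSERT INTO distributors (name) VALUES ('Acme');
   INSERT INTO distributors (name) VALUES ('Globex');
   -- The two rows get did = 1 and did = 2 from the sequence.
   ```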



Best regards,
   Jose'                            mailto:sferac@bo.nettuno.it