Thread: pg_dumpall / pg_dump

pg_dumpall / pg_dump

From: jonesbl@WellsFargo.COM

Postgres ver. 7.3.1, on HPUX.

I'm having a problem dumping my database using pg_dumpall. The
statement_timeout value is set to 15 minutes, and I don't want to set it
higher (protecting the db against long and/or ill-formed queries).

My problem is, the dump takes more than 15 minutes and there doesn't seem to
be any way to temporarily increase the timeout value a la:

& psql -c "set statement_timeout = 0;" <dbname>
& pg_dumpall ...
& psql -c "set statement_timeout = 900000;" <dbname>

I did hack the vacuumdb script to alter the timeout value so VACUUM would
complete (takes 1 hour 30 minutes).

Is there any way to hack pg_dump and pg_dumpall to temporarily increase
statement_timeout limits so backups can complete? Or, better yet, is there
an undocumented command to alter the statement_timeout value during the
dump?


Bill Jones
Systems Architect
Middleware Services
Wells Fargo Services Company
Office --415.243.1364
PCS -- 415.254.3831 (4152543831@mobile.att.net)

Views expressed are mine. Only in unusual circumstances are they shared by
my employer.


Re: pg_dumpall / pg_dump

From: "Daniel Seichter"

Hello,

What do you think about dumping each table with the -t option and appending the output of
each dump to one file?
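
For illustration, that approach might look like this (table and database names are hypothetical):

    pg_dump -t table1 mydb >  tables.sql
    pg_dump -t table2 mydb >> tables.sql
    pg_dump -t table3 mydb >> tables.sql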

Daniel

> Postgres ver. 7.3.1, on HPUX.
>
> I'm having a problem dumping my database using pg_dumpall. The
> statement_timeout value is set to 15 minutes, and I don't want to set
> it higher (protecting the db against long and/or ill-formed queries).
>
> My problem is, the dump takes more than 15 minutes and there doesn't
> seem to be any way to temporarily increase the timeout value a la:
>
> & psql -c "set statement_timeout = 0;" <dbname>
> & pg_dumpall ...
> & psql -c "set statement_timeout = 900000;" <dbname>
>
> I did hack the vacuumdb script to alter the timeout value so VACUUM
> would complete (takes 1 hour 30 minutes).
>
> Is there any way to hack pg_dump and pg_dumpall to temporarily
> increase statement_timeout limits so backups can complete? Or, better
> yet, is there an undocumented command to alter the statement_timeout
> value during the dump?
>
>
> Bill Jones
> Systems Architect
> Middleware Services
> Wells Fargo Services Company
> Office --415.243.1364
> PCS -- 415.254.3831 (4152543831@mobile.att.net)
>
> Views expressed are mine. Only in unusual circumstances are they
> shared by my employer.


--
My own software and projects, AI, links, and more!
Just drop by at http://www.dseichter.de


Re: pg_dumpall / pg_dump

From: Tom Lane

jonesbl@WellsFargo.COM writes:
> My problem is, the dump takes more than 15 minutes and there doesn't seem to
> be any way to temporarily increase the timeout value a la:

> & psql -c "set statement_timeout = 0;" <dbname>
> & pg_dumpall ...
> & psql -c "set statement_timeout = 900000;" <dbname>

The grotty way is

    export PGOPTIONS="--statement_timeout=0"
    pg_dumpall
    unset PGOPTIONS

A possibly more convenient answer is to set statement_timeout=0 as a
per-user setting for the superuser (see ALTER USER).  Or, if you want
the limit to apply to superuser too, create a second superuser account
that you use only for pg_dump, vacuumdb, and similar long jobs.
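
For illustration, the per-user setting might look like this (the role names are made up; CREATEUSER is the 7.3-era way to make a superuser):

    ALTER USER postgres SET statement_timeout = 0;

    -- or keep the limit on your usual accounts and use a dedicated dump user
    CREATE USER dumpuser CREATEUSER;
    ALTER USER dumpuser SET statement_timeout = 0;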

            regards, tom lane


Re: pg_dumpall / pg_dump

From: Andrew Sullivan

On Fri, May 02, 2003 at 09:23:28PM +0200, Daniel Seichter wrote:
> Hello,
>
> What do you think about dumping each table with the -t option and
> appending the output of each dump to one file?

I think that's a good way to get an inconsistent dump.

A

----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: pg_dumpall / pg_dump

From: "Daniel Seichter"

Hello,

> > What do you think about dumping each table with the -t option and
> > appending the output of each dump to one file?
>
> I think that's a good way to get an inconsistent dump.
And what is a better way if you only need three tables from your database?
Daniel

--
postgreSQL on Netware - the red elephant
http://postgresql.dseichter.org


Re: pg_dumpall / pg_dump

From: "Daniel Seichter"

Hello Andrew,

> The most certain way is to do pg_dump in binary mode, and then use the
> catalogue after the fact to import only the target tables.  But on a
> live database, three pg_dump -t statements is certain _not_ to get you
> a consistent snapshot.  (Of course, if the consistency doesn't matter
> in your case, then it makes no difference.)
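
For illustration, the approach Andrew describes might look roughly like this (a custom-format dump followed by selective restores; table and database names are hypothetical):

    pg_dump -Fc mydb > mydb.dump
    pg_restore -t table1 -d otherdb mydb.dump
    pg_restore -t table2 -d otherdb mydb.dump
    pg_restore -t table3 -d otherdb mydb.dump
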
I dump the tables at a time when I am sure that no user is on the database. The dump
is loaded into another system to create a database with only these three tables.

Daniel
--
postgreSQL on Netware - the red elephant
http://postgresql.dseichter.org


Re: pg_dumpall / pg_dump

From: Andrew Sullivan

On Tue, May 06, 2003 at 07:55:30PM +0200, Daniel Seichter wrote:
> I dump the tables at a time when I am sure that no user is on the
> database.

Then that will work for you.  But it's a bad general prescription
(which was my only point).

A
--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110