Thread: Does pg_dumpall do BLOBs too?

Does pg_dumpall do BLOBs too?

From: Frank Joerdens
Date: Fri, 11 Jan 2002 19:38:01 +0100
The man page for pg_dumpall does not mention anything about BLOBs,
whereas the man page for pg_dump does. Does that mean you can't dump out
everything at once if you have databases with BLOBs on your server? (I
need to dump and reload everything because my new app wants a
NAMEDATALEN of 64.)
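
(For what it's worth, my understanding of the rebuild, assuming a
source build; the header location is a guess on my part and may vary by
version:

--------------------------- begin ---------------------------
# edit src/include/postgres_ext.h and raise the identifier limit:
#define NAMEDATALEN 64
# then rebuild and reinstall, initdb a fresh cluster, reload the dump
--------------------------- end ---------------------------
)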

Regards, Frank

Re: Does pg_dumpall do BLOBs too?

From: Daniel Lundin
Date: Sat, 12 Jan 2002 11:37:36 +0100
On Fri, Jan 11, 2002 at 07:38:01PM +0100, Frank Joerdens wrote:
> The man page for pg_dumpall does not mention anything about BLOBs,
> whereas the man page for pg_dump does. Does that mean you can't dump out
> everything at once if you have databases with BLOBs on your server? (I
> need to dump and reload everything because my new app wants a
> NAMEDATALEN of 64.)
>
As I understand it, pg_dumpall calls pg_dump, so if pg_dump backs up
BLOBs, so would pg_dumpall.

The safest way to find out whether your data is backed up safely is to
try a restore after a backup. Never trust any backup scheme without
testing it.
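
A minimal check might look like this (database and file names are only
examples):

--------------------------- begin ---------------------------
# dump one database in the custom format, including BLOBs
pg_dump -Fc -b mydb > mydb.dump
# restore into a scratch database and compare the results
createdb mydb_verify
pg_restore -d mydb_verify mydb.dump
--------------------------- end ---------------------------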

Re: Does pg_dumpall do BLOBs too?

From: Frank Joerdens
On Sat, Jan 12, 2002 at 11:37:36AM +0100, Daniel Lundin wrote:
> On Fri, Jan 11, 2002 at 07:38:01PM +0100, Frank Joerdens wrote:
> > The man page for pg_dumpall does not mention anything about BLOBs,
> > whereas the man page for pg_dump does. Does that mean you can't dump out
> > everything at once if you have databases with BLOBs on your server? (I
> > need to dump and reload everything because my new app wants a
> > NAMEDATALEN of 64.)
> >
> As I understand it, pg_dumpall calls pg_dump, so if pg_dump backs up
> BLOBs, so would pg_dumpall.

You're right: pg_dumpall is just a shell script that calls pg_dump.
It looks like it wouldn't be too hard to hack, actually, which it seems
you'd have to do to get what I want:

--------------------------- begin ---------------------------
frank@limedes:~ > pg_dumpall -Ft -b > everything_13_jan_02.out.tar
pg_dump: BLOB output is not supported for plain text dump files. Use a
different output format.
pg_dump failed on template1, exiting
--------------------------- end ---------------------------

The wrapper script doesn't seem to pass the -Ft option through to
pg_dump, so it doesn't support output formats other than plain text,
which means you can't do BLOBs.
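
In the meantime, a rough per-database workaround (assuming the custom
format is acceptable; file names are just examples):

--------------------------- begin ---------------------------
# dump each non-template database separately, BLOBs included
for db in $(psql -At -c "SELECT datname FROM pg_database \
                         WHERE NOT datistemplate" template1); do
    pg_dump -Fc -b "$db" > "$db.dump"
done
--------------------------- end ---------------------------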

I might try to fix this . . .

Regards, Frank

Re: Does pg_dumpall do BLOBs too?

From: Tom Lane
Date: Sat, 12 Jan 2002 11:28:28 -0500
Frank Joerdens <frank@joerdens.de> writes:
> The wrapper script doesn't seem to pass the -Ft option through to
> pg_dump, so it doesn't support output formats other than plain text,
> which means you can't do BLOBs.

The trouble is that pg_dumpall wants to concatenate the output from
several pg_dumps, intermixed with commands issued by itself.  Easy to do
with text outputs, not so easy with the non-text formats.

I could see making pg_dumpall emit a script that has the global setup
commands plus pg_restore calls referencing separately-created data
files, one per database.  Trouble with that is that the data files
couldn't be sent to pg_dumpall's stdout, which means that pg_dumpall
would have to include options for deciding where to put them.  And what
about the case where you have more than 4GB of data and a system that
doesn't do large files?  Presently it's easy to pipe pg_dumpall to
"split", and cat the segments together to feed to psql when reloading.
But that method won't work under this scenario.
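
For reference, that pipeline is roughly (segment size and file names
are just examples):

--------------------------- begin ---------------------------
# dump everything, splitting the output into 1GB segments
pg_dumpall | split -b 1000m - dumpall_
# reload by concatenating the segments back into psql
cat dumpall_* | psql template1
--------------------------- end ---------------------------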

            regards, tom lane

Re: Does pg_dumpall do BLOBs too?

From: Frank Joerdens
On Sat, Jan 12, 2002 at 11:28:28AM -0500, Tom Lane wrote:
> Frank Joerdens <frank@joerdens.de> writes:
> > The wrapper script doesn't seem to pass the -Ft option through to
> > pg_dump, so it doesn't support output formats other than plain text,
> > which means you can't do BLOBs.
>
> The trouble is that pg_dumpall wants to concatenate the output from
> several pg_dumps, intermixed with commands issued by itself.  Easy to do
> with text outputs, not so easy with the non-text formats.

True.

> I could see making pg_dumpall emit a script that has the global setup
> commands plus pg_restore calls referencing separately-created data
> files, one per database.  Trouble with that is that the data files
> couldn't be sent to pg_dumpall's stdout, which means that pg_dumpall
> would have to include options for deciding where to put them.

As a default, create a subdirectory (maybe called
pg_dumpall_files_[current system date]) in the current directory, where
you'd also put the script itself?
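
Something like this, say (the naming scheme is just a suggestion):

--------------------------- begin ---------------------------
# default output location: a dated subdirectory of the current directory
DUMPDIR="pg_dumpall_files_$(date +%Y-%m-%d)"
mkdir -p "$DUMPDIR"
--------------------------- end ---------------------------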

> And what
> about the case where you have more than 4GB of data and a system that
> doesn't do large files?  Presently it's easy to pipe pg_dumpall to
> "split", and cat the segments together to feed to psql when reloading.
> But that method won't work under this scenario.

On starting the script, give a message saying something along the lines
of 'If you have a lot of data, you might end up with files larger than
2 GB, which may not work on some systems. In that case you should use
the plain-text option and dump and restore any databases with BLOBs
separately.' Then offer to continue or to quit?

What I just did to dump everything out and reload it: I ran pg_dumpall,
plus a separate pg_dump for the one database (out of 26) that contained
BLOBs, then reloaded everything from the dumpall script, dropped the
BLOB database, recreated it, and restored it from its separate dump
(roughly the sequence sketched below). I suppose that didn't kill me,
but an integrated solution would be nicer and cleaner. I've got a
sysadmin intern here next month whom I might put to the task.
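
In commands, roughly (the BLOB database name is made up):

--------------------------- begin ---------------------------
pg_dumpall > everything.sql          # plain-text dump; BLOBs not included
pg_dump -Fc -b blobdb > blobdb.dump  # custom-format dump of the BLOB db
# ... rebuild with the new NAMEDATALEN, initdb, then:
psql -f everything.sql template1     # reload all databases
dropdb blobdb                        # drop the BLOB-less copy
createdb blobdb
pg_restore -d blobdb blobdb.dump     # put the BLOBs back
--------------------------- end ---------------------------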

Regards, Frank

OT: anon CVS hassles

From: Colm McCartan
Hello all,

When I try to do an anon cvs co I get the following:

cvs server: Updating pgsql/contrib/pgcrypto/expected
cvs server: failed to create lock directory for
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
(/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):
Permission denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
cvs [server aborted]: read lock failed - giving up

I can see this in the archives a lot and it seems to reappear now and
then - what gives? Is this a permission thing on new directories?

Also, I really only want to check out the JDBC driver, but I find that
this is impossible; the entire codebase has to be checked out. Is there
a good reason for this? Having the JDBC source in a separate tarball
would allow people to debug into the source from Java. Perhaps a CVS
module entry for the sql tree?
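
(For illustration, a guess at what such an entry might look like; the
module name and path are assumptions on my part:

--------------------------- begin ---------------------------
# hypothetical CVSROOT/modules entry, enabling "cvs co pgsql-jdbc"
pgsql-jdbc  pgsql/src/interfaces/jdbc
--------------------------- end ---------------------------
)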

Anyway, I believe the above error needs to be fixed on the server.

Cheers,
colm