Thread: pg_dump 2GB limit?

pg_dump 2GB limit?

From
Laurette Cisneros
Date:
The archives search is not working on postgresql.org so I need to ask this
question...

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump.  Is this true?  Why would there be this limit in
pg_dump?  Is it scheduled to be fixed?

Thanks,

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?




Re: pg_dump 2GB limit?

From
Doug McNaught
Date:
Laurette Cisneros <laurette@nextbus.com> writes:

> The archives search is not working on postgresql.org so I need to ask this
> question...
> 
> We are using postgresql 7.2 and when dumping one of our larger databases,
> we get the following error:
> 
> File size limit exceeded (core dumped)
> 
> We suspect pg_dump.  Is this true?  Why would there be this limit in
> pg_dump?  Is it scheduled to be fixed?

This means one of two things:

1) Your ulimits are set too low, or
2) Your pg_dump wasn't compiled against a C library with large file
   support (greater than 2GB).

Is this on Linux?
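
A quick way to check case 1, assuming a bash-like shell (output format
varies by shell and distribution):

    ulimit -f    # max size of files the shell lets a process write
    ulimit -a    # full list of resource limits

If ulimit -f reports something other than "unlimited", that limit (in
blocks) is worth raising or removing before blaming pg_dump.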

-Doug
-- 
Doug McNaught       Wireboard Industries      http://www.wireboard.com/
     Custom software development, systems and network consulting.
     Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...


Re: pg_dump 2GB limit?

From
Peter Eisentraut
Date:
Laurette Cisneros writes:

> We are using postgresql 7.2 and when dumping one of our larger databases,
> we get the following error:
>
> File size limit exceeded (core dumped)
>
> We suspect pg_dump.  Is this true?

No, it's your operating system.

http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE
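
(In case that link moves: it's the "Large Databases" section of the backup
chapter.  If I remember right, it boils down to compressing the dump,
splitting it, or using the custom format -- e.g., with "mydb" as a
placeholder database name:

    pg_dump mydb | gzip > mydb.dump.gz    # compressed plain-text dump
    gunzip -c mydb.dump.gz | psql mydb    # restore it
    pg_dump -Fc mydb > mydb.dump          # custom format, compressed by default

or piping through split(1) if compression alone isn't enough.)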

-- 
Peter Eisentraut   peter_e@gmx.net



Re: pg_dump 2GB limit?

From
dru-sql@redwoodsoft.com
Date:
Are you on Linux (most likely)?  If so, then your pgsql was compiled
without large file support.

Dru Nelson
San Carlos, California


> The archives search is not working on postgresql.org so I need to ask this
> question...
>
> We are using postgresql 7.2 and when dumping one of our larger databases,
> we get the following error:
>
> File size limit exceeded (core dumped)
>
> We suspect pg_dump.  Is this true?  Why would there be this limit in
> pg_dump?  Is it scheduled to be fixed?
>
> Thanks,
>
> --
> Laurette Cisneros
> Database Roadie
> (510) 420-3137
> NextBus Information Systems, Inc.
> www.nextbus.com
> Where's my bus?



Re: pg_dump 2GB limit?

From
Laurette Cisneros
Date:
Hi,

I'm on Red Hat.  Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

What do I need to do to "turn on large file support" in the compile?

Thanks,

L.
On 28 Mar 2002, Doug McNaught wrote:

> Laurette Cisneros <laurette@nextbus.com> writes:
> 
> > The archives search is not working on postgresql.org so I need to ask this
> > question...
> > 
> > We are using postgresql 7.2 and when dumping one of our larger databases,
> > we get the following error:
> > 
> > File size limit exceeded (core dumped)
> > 
> > We suspect pg_dump.  Is this true?  Why would there be this limit in
> > pg_dump?  Is it scheduled to be fixed?
> 
> This means one of two things:
> 
> 1) Your ulimits are set too low, or
> 2) Your pg_dump wasn't compiled against a C library with large file
>    support (greater than 2GB).
> 
> Is this on Linux?
> 
> -Doug
> 

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?



Re: pg_dump 2GB limit?

From
Doug McNaught
Date:
Laurette Cisneros <laurette@nextbus.com> writes:

> Hi,
> 
> I'm on Red Hat.  Here's the uname info:
> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

That's an old and buggy kernel, BTW--you should install the errata
upgrades.

> What do I need to do to "turn on large file support" in the compile?

Make sure you are running the latest kernel and libs, and AFAIK
'configure' should set it up for you automatically.
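
If configure doesn't pick it up on its own, the usual glibc knobs are the
large-file macros -- a sketch, not a tested recipe for 7.2:

    CFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
    make
    make install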

-Doug
-- 
Doug McNaught       Wireboard Industries      http://www.wireboard.com/
     Custom software development, systems and network consulting.
     Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...


Re: pg_dump 2GB limit?

From
mmc@maruska.dyndns.org (Michal Maruška)
Date:
Laurette Cisneros <laurette@nextbus.com> writes:

> Hi,
> 
> I'm on Red Hat.  Here's the uname info:
> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
> 
> What do I need to do to "turn on large file support" in the compile?
> 

IIRC the old version (format) of ReiserFS (3.5?) has this limit, too. The solution is
to reformat with the newer version (kernel & reiserfsprogs).  (You can test with _dd_.)
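
Something along these lines, writing a file just over 2GB onto the
filesystem in question (the path is a placeholder):

    dd if=/dev/zero of=/mnt/data/bigfile bs=1M count=2200
    ls -l /mnt/data/bigfile
    rm /mnt/data/bigfile

If dd dies with "File size limit exceeded", the filesystem (or a ulimit)
is the culprit rather than pg_dump.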



Re: pg_dump 2GB limit?

From
Laurette Cisneros
Date:
Oops, I sent the wrong uname; here's the one from the machine we compiled on:
Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown

and has: libc-2.2.2.so 

We use ./configure 

Still a problem?

We do compress (-Fc) right now, but are working on a backup scheme that
requires an uncompressed dump.

Thanks for the help!

L.

On 28 Mar 2002, Doug McNaught wrote:

> Laurette Cisneros <laurette@nextbus.com> writes:
> 
> > Hi,
> > 
> > I'm on Red Hat.  Here's the uname info:
> > Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
> 
> That's an old and buggy kernel, BTW--you should install the errata
> upgrades.
> 
> > What do I need to do to "turn on large file support" in the compile?
> 
> Make sure you are running the latest kernel and libs, and AFAIK
> 'configure' should set it up for you automatically.
> 
> -Doug
> 

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?



Re: pg_dump 2GB limit?

From
Doug McNaught
Date:
Laurette Cisneros <laurette@nextbus.com> writes:

> Oops sent the wrong uname, here's the one from the machine we compiled on:
> Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown
> 
> and has: libc-2.2.2.so 
> 
> We use ./configure 
> 
> Still a problem?

Might be.  Make sure you have an up-to-date kernel and libs on the
compile machine and the one you're running on.  Make sure your
filesystem supports files greater than 2GB.

Also, if you are using shell redirection to create the output file,
it's possible the shell isn't using the right open() flags.
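
One way to sidestep the shell entirely is to let pg_dump open the output
file itself ("mydb" and the path are placeholders):

    pg_dump -f /backups/mydb.dump mydb

instead of

    pg_dump mydb > /backups/mydb.dump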

-Doug
-- 
Doug McNaught       Wireboard Industries      http://www.wireboard.com/
     Custom software development, systems and network consulting.
     Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...


Re: pg_dump 2GB limit?

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Laurette Cisneros <laurette@nextbus.com> writes:

> Hi,
> 
> I'm on Red Hat.  Here's the uname info:
> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

You should really upgrade (kernel and the rest), but this kernel
supports large files.

-- 
Trond Eivind Glomsrød
Red Hat, Inc.


Re: pg_dump 2GB limit?

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Peter Eisentraut <peter_e@gmx.net> writes:

> Laurette Cisneros writes:
> 
> > We are using postgresql 7.2 and when dumping one of our larger databases,
> > we get the following error:
> >
> > File size limit exceeded (core dumped)
> >
> > We suspect pg_dump.  Is this true?
> 
> No, it's your operating system.

Red Hat Linux 7.x, which she seems to be using, supports this.
-- 
Trond Eivind Glomsrød
Red Hat, Inc.


Re: pg_dump 2GB limit?

From
Christopher Kings-Lynne
Date:
> > File size limit exceeded (core dumped)
> >
> > We suspect pg_dump.  Is this true?  Why would there be this limit in
> > pg_dump?  Is it scheduled to be fixed?

Try piping the output of pg_dump through bzip2 before writing it to disk.
Or else, I think pg_dump has a -Z or similar parameter for turning
on compression.
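
Something like this (with "mydb" as a placeholder; the option is -Z with
the custom format, if memory serves):

    pg_dump mydb | bzip2 > mydb.dump.bz2     # plain dump, compressed in the pipe
    pg_dump -Fc -Z 9 mydb > mydb.dump        # custom format, max compression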

Chris




Re: pg_dump 2GB limit?

From
Jan Wieck
Date:
Christopher Kings-Lynne wrote:
> > > File size limit exceeded (core dumped)
> > >
> > > We suspect pg_dump.  Is this true?  Why would there be this limit in
> > > pg_dump?  Is it scheduled to be fixed?
>
> Try piping the output of pg_dump through bzip2 before writing it to disk.
> Or else, I think that pg_dump has -z or something parameters for turning
> on compression.
   And if that isn't enough, you can pipe the output (compressed or not)
   into split(1).
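
   Something along these lines (chunk size and names are arbitrary):

       pg_dump mydb | split -b 1000m - mydb.dump.
       cat mydb.dump.* | psql newdb    # to restore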


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


