Re: restore of large databases failing--any ideas? - Mailing list pgsql-hackers

From Tom Lane
Subject Re: restore of large databases failing--any ideas?
Msg-id 20719.1081483789@sss.pgh.pa.us
In response to restore of large databases failing--any ideas?  (s_hawkins@mindspring.com (S. Hawkins))
List pgsql-hackers

s_hawkins@mindspring.com (S. Hawkins) writes:
>   * We're running Postgres 7.2.3 on a more-or-less stock Red Hat 7.3
> platform.

Both the database and the platform are seriously obsolete :-(

> The particular file I'm wrestling with at the moment is ~2.2 Gig
> unzipped.  If you try to restore using pg_restore, the process
> immediately fails with the following:
>     pg_restore: [archiver] could not open input file: File too large

It appears that you're working with a pg_restore binary that doesn't
support access to files larger than 2GB.  This is mostly an issue of
what the platform's libc can handle, and on many platforms it depends
on build or link options.  I no longer recall whether RH 7.3 supported
largefile access at all, let alone what build-time pushups were needed
to make it happen.
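
To illustrate (a hypothetical test program, not anything from the PG
sources): on glibc platforms this comes down to the _FILE_OFFSET_BITS
feature macro, which must be set to 64 at compile time to get a
64-bit off_t.

    /*
     * lfs_check.c -- hypothetical demo of the largefile issue.
     * Compiled without largefile support on a 32-bit platform, off_t
     * is 32 bits and opening a file past 2GB fails (EOVERFLOW or
     * EFBIG, depending on the libc); with -D_FILE_OFFSET_BITS=64 the
     * open succeeds.
     *
     *   cc lfs_check.c -o lfs_check                         # 32-bit off_t
     *   cc -D_FILE_OFFSET_BITS=64 lfs_check.c -o lfs_check  # 64-bit off_t
     */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        int         fd;

        if (argc != 2)
        {
            fprintf(stderr, "usage: %s filename\n", argv[0]);
            return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
        {
            fprintf(stderr, "could not open %s: %s\n",
                    argv[1], strerror(errno));
            return 1;
        }

        printf("open succeeded; sizeof(off_t) = %d\n",
               (int) sizeof(off_t));
        close(fd);
        return 0;
    }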

My recommendation would be to get hold of a current PG version, dump
using the current version's pg_dump, then install the current version
and reload the dump into it.
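
In outline (a sketch only; the hostnames and database name are
placeholders, and this assumes the new binaries can reach the old
server over the network):

    # run the NEW version's pg_dump against the old 7.2.3 server
    pg_dump -h oldhost -Fc mydb > mydb.dump

    # create the database on the new server and restore into it
    createdb -h newhost mydb
    pg_restore -h newhost -d mydb mydb.dump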
        regards, tom lane

