Tom Lane wrote:
> Peter Wilson <petew@yellowhawk.co.uk> writes:
>> Tom Lane wrote:
>>> Could you show us the unmodified output from pg_restore -l, as well as
>>> the edits you made to produce the -L script?
>
>> Raw output from pg_restore -l is as follows:
>
> Hm, this shows only data entries. Was the original pg_dump made with -a,
> or did you use -a in the pg_restore -l command? If the latter, could we
> see the full -l output? I didn't have any luck trying to reproduce this
> behavior, so I'm supposing it depends on something you haven't shown us...
I may end up duplicating myself here - I seem to be having lots of problems with
the new Postgres server, so apologies.
The data file is from a live server and has been growing steadily; it's roughly
1.2 Gbytes in size and was built with --compress=9. Does Postgres uncompress
this to something bigger than 2 Gbytes before processing, overflowing what can be
referenced in a 32-bit seek value? What happens (on Linux) when you try to open
a file that is bigger than 2 Gbytes - do you lose the ability to seek?
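For what it's worth: on 32-bit Linux a binary built without large-file support
has a 32-bit off_t, so fseek()/ftell() can't address offsets past 2 Gbytes, and
open() on a bigger file fails outright with EOVERFLOW; building with
-D_FILE_OFFSET_BITS=64 widens off_t to 64 bits and the limit disappears. A
quick way to check what the platform wants (the dump file name here is just a
placeholder):

    # flags glibc needs for 64-bit file offsets; usually prints
    # -D_FILE_OFFSET_BITS=64
    getconf LFS_CFLAGS

    # is the compressed archive itself anywhere near the 2 Gbyte mark?
    ls -l live_dump.pgc

Whether the pg_restore binary in question was actually compiled with those
flags is a separate question, of course.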
I've just taken apart the schema definition from pg_dump --schema-only and
inserted the data restore after the tables are created but before
indices/constraints/rules are applied; that way I don't have to re-order the
tables. The restore doesn't seem to have any problems in this case, although it
will take a good while to complete. This is with the same dump file that
failed before.
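Concretely, the sequence looks something like this (database and file names are
placeholders; the hand-editing in step 2 is the split described above):

    # 1. dump the schema alone
    pg_dump --schema-only live_db > schema.sql

    # 2. split schema.sql by hand: table definitions into pre.sql,
    #    indices/constraints/rules into post.sql

    # 3. restore in three passes against the same compressed dump
    psql -f pre.sql new_db
    pg_restore --data-only -d new_db live_dump.pgc
    psql -f post.sql new_db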
Pete
>
> regards, tom lane