Thread: Unable to dump database/table

Unable to dump database/table

From
Krok
Date:
Hello.

pg_dump can't dump some databases (at least one table), failing with the
following error:

$ pg_dump -b -F c -v -f pgdump_void_2004-10-11T03:04:00 void
pg_dump: saving encoding
pg_dump: saving database definition
pg_dump: reading schemas
pg_dump: reading user-defined types
pg_dump: reading user-defined functions
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined tables
pg_dump: reading table inheritance information
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table ...
...
pg_dump: flagging inherited columns in subtables
pg_dump: dumping out database comment
pg_dump: dumping out user-defined schemas
pg_dump: dumping out user-defined types
pg_dump: dumping out tables
pg_dump: dumping out indexes
pg_dump: dumping out user-defined procedural languages
pg_dump: dumping out user-defined functions
pg_dump: dumping out user-defined aggregate functions
pg_dump: dumping out user-defined operators
pg_dump: dumping out user-defined operator classes
pg_dump: dumping out user-defined casts
pg_dump: preparing to dump the contents of table ...
...
pg_dump: dumping foreign key constraints for table ...
...
pg_dump: dumping triggers for table ...
...
pg_dump: dumping out rules
pg_dump: dumping contents of table ...
...
pg_dump: dumping contents of table users
pg_dump: ERROR:  out of memory
DETAIL:  Failed on request of size 536870920.
pg_dump: SQL command to dump the contents of table "users" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 536870920.
pg_dump: The command was: COPY public.users (<fields>) TO stdout;
pg_dump: *** aborted because of error
$



Configuration of server is :
1GB RAM

$ ipc_check

shared memory enabled: 262144 kB available
==========================
semaphores enabled
$ uname -r
4.10-RELEASE-p2
$ pg_config --version
PostgreSQL 7.4.5
$ pg_config --configure
'--with-libraries=/usr/local/lib' '--with-includes=/usr/local/include'
'--enable-nls' '--with-openssl=/usr/local' '--prefix=/usr/local'
'i386-portbld-freebsd4.10' 'LDFLAGS= -rpath=/usr/local/lib
-L/usr/local/lib -lgnugetopt' 'CFLAGS=-O2 -pipe -march=pentiumpro -O3
-funroll-loops' 'CPPFLAGS=-O2 -pipe -fmemoize-lookups -fsave-memoized
-I/usr/local/include' 'host_alias=i386-portbld-freebsd4.10'
'build_alias=i386-portbld-freebsd4.10'
'target_alias=i386-portbld-freebsd4.10' 'CC=cc'
$

I've run REINDEX and VACUUM on the table, but it doesn't help.
Can anybody help with this?

And a question: is the size that pg_dump reported (Failed on request of
size 536870920) the required amount of available shared memory? If so,
then what?! Do I need 10GB of shared memory to dump a table with
100,000,000 records?!

--
With best regards,
Krok

Re: Unable to dump database/table

From
Tom Lane
Date:
Krok <krok@void.ru> writes:
> pg_dump: dumping contents of table users
> pg_dump: ERROR:  out of memory
> DETAIL:  Failed on request of size 536870920.
> pg_dump: SQL command to dump the contents of table "users" failed:
> PQendcopy() failed.
> pg_dump: Error message from server: ERROR:  out of memory
> DETAIL:  Failed on request of size 536870920.

Looks like corrupt data to me (specifically, a variable-width field
with a ridiculous length word).

You may care to consult the archives regarding methods for dealing with
this sort of problem.  If you're lucky, there's only one bad row ...

            regards, tom lane
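One technique for this (a sketch of the general idea, not necessarily the exact method from the list archives) is to binary-search for the bad row: dump the table over shrinking id or ctid ranges until the failing range narrows to a single row. In practice you would run range-restricted COPY or SELECT statements in psql; the minimal Python illustration below stands in for that, with a simulated table where reading one hypothetical corrupt row raises an error:

```python
# Sketch of binary-searching for a single corrupt row. A list-like
# read_row function stands in for the table; reading the bad row raises,
# the way COPY fails on the row with the corrupt length word.

def make_table(n, bad_index):
    """Simulated table of n rows: reading row `bad_index` fails."""
    def read_row(i):
        if i == bad_index:
            raise MemoryError("out of memory (corrupt length word)")
        return ("row", i)
    return read_row

def range_ok(read_row, lo, hi):
    """True if every row in [lo, hi) can be read (a range-restricted dump)."""
    try:
        for i in range(lo, hi):
            read_row(i)
        return True
    except MemoryError:
        return False

def find_bad_row(read_row, n):
    """Binary search: O(log n) range dumps instead of n single-row reads."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if range_ok(read_row, lo, mid):
            lo = mid          # lower half dumps cleanly; bad row is above
        else:
            hi = mid          # lower half fails; bad row is below mid
    return lo

table = make_table(100_000, bad_index=25_019)
print(find_bad_row(table, 100_000))   # -> 25019
```

If you are lucky (one bad row, as Tom says), about log2(n) range dumps isolate it, after which the row can be deleted by id or ctid.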

Re: Unable to dump database/table

From
Krok
Date:
Yep!
You are right!

void=> \d users
                              Table "public.users"
      Column     |           Type           |             Modifiers
----------------+--------------------------+-----------------------------------
  id             | integer                  | default
nextval('u_serial'::text)
...
  passwd         | character varying(255)   |
...
void=> select max(length(passwd)) from users;
     max
-----------
  536870919
(1 row)

void=>  select id from users where length(passwd) > 255;
   id
-------
  25019
(1 row)

void=>

I've deleted this row, and now the backend process crashes with a core
dump - I'll keep trying to find out why.

But how can it happen that a varchar(255) field became broken?

Tom Lane wrote:
> Krok <krok@void.ru> writes:
>
>>pg_dump: dumping contents of table users
>>pg_dump: ERROR:  out of memory
>>DETAIL:  Failed on request of size 536870920.
>>pg_dump: SQL command to dump the contents of table "users" failed:
>>PQendcopy() failed.
>>pg_dump: Error message from server: ERROR:  out of memory
>>DETAIL:  Failed on request of size 536870920.
>
>
> Looks like corrupt data to me (specifically, a variable-width field
> with a ridiculous length word).
>
> You may care to consult the archives regarding methods for dealing with
> this sort of problem.  If you're lucky, there's only one bad row ...
>
>             regards, tom lane

--
With best regards,
Krok

Re: Unable to dump database/table

From
Tom Lane
Date:
Krok <krok@void.ru> writes:
> But how can it happen that a varchar(255) field became broken?

I'd wonder about flaky memory or flaky disk drive, for starters ...

            regards, tom lane

Re: Unable to dump database/table

From
Andre Maasikas
Date:
On Mon, 11 Oct 2004, Krok wrote:

> void=> select max(length(passwd)) from users;
>      max
> -----------
>   536870919
> (1 row)

....
>
> But how can this happen, that varchar(255) field became broken ?
>
According to my calculator, this is 0x20000007 in hex, so there's likely
a bit flip, if the length is actually stored that way (I don't know the
internals yet).
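The arithmetic can be checked directly. (Which bit flipped, and whether the stored length word counts the 4-byte varlena header, are assumptions here, not a claim about 7.4's internals.)

```python
# Verify the observation: 536870919 in hex, and which single bit would
# have to flip to turn a small length word into this value.
bad_len = 536_870_919
print(hex(bad_len))                 # 0x20000007

# One set high bit plus a small remainder suggests a single flipped bit:
flipped_bit = 1 << 29               # 0x20000000 == 536870912
original_len = bad_len ^ flipped_bit
print(original_len)                 # 7 -- a plausibly small length word

# The server's failed allocation was one byte more than the length word:
print(bad_len + 1)                  # 536870920, the "request of size" above
```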

I'd be worried about hardware and, more importantly, about data
consistency at this point. I'd suggest running the manufacturer's
test suites or something similar on this machine.

(memtest on x86 would be good; copying some .isos around and
checksumming them in between was once a good way for me to justify
asking for a replacement machine)

Andre