I'm trying to restore a local copy of a 30 GB database (containing only one
table) that was dumped using the Heroku wrappers (described here), via
pg_restore. The Heroku database runs PostgreSQL 9.1.7, and I get the same
result locally with both 9.1.5 and 9.1.7 under OS X 10.8.
Using the command:
pg_restore --clean --no-acl --no-owner -d db_name -U username -h localhost -v db.dump
I get the following output on the command line ("cleaning" commands
omitted) after about 20 minutes, during which the database grows to
roughly 30 GB:
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating EXTENSION plpgsql
pg_restore: creating COMMENT EXTENSION plpgsql
pg_restore: creating TABLE video_data
pg_restore: creating SEQUENCE video_data_id_seq
pg_restore: creating SEQUENCE OWNED BY video_data_id_seq
pg_restore: executing SEQUENCE SET video_data_id_seq
pg_restore: creating DEFAULT id
pg_restore: restoring data for table "video_data"
pg_restore(11938) malloc: *** mmap(size=18446744071605936128) failed
(error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
pg_restore: [custom archiver] out of memory
pg_restore: *** aborted because of error
Turning the server log level up to DEBUG1 reveals the error:
...(many repetitions of this checkpoint hint)...
LOG: checkpoints are occurring too frequently (15 seconds apart)
HINT: Consider increasing the configuration parameter "checkpoint_segments".
ERROR: extra data after last expected column
CONTEXT: COPY video_data, line 463110542: "3599794500:59:01497 797942
2012 23422-08-96 2400892 08.794219 08 0617 2342118769-08 72
115 2046225 1..."
STATEMENT: COPY video_data (id, video_id, views, favorites, comments,
created_at, likes, dislikes) FROM stdin;
I have tried increasing SHMMAX and SHMALL to 2 GB, raising
checkpoint_segments to 30, and upgrading PostgreSQL so the local
patchlevel matches Heroku's. Searching turns up a small handful of people
who hit this problem several years ago, with no resolution.
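For reference, these are the exact settings I changed (the sysctl names below are the OS X ones for the SysV shared-memory limits; the 2 GB and 30 values are simply what I tried, not recommendations):

```shell
# OS X SysV shared-memory limits (set via sysctl; not persistent across reboots)
sudo sysctl -w kern.sysv.shmmax=2147483648   # max single segment: 2 GB
sudo sysctl -w kern.sysv.shmall=524288       # total pages (4 KB each) = 2 GB

# postgresql.conf
# checkpoint_segments = 30
```

None of these changed the outcome: the restore still aborts at the same COPY line with the same mmap failure.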
Any recommendations?