Re: [HACKERS] Problems with >2GB tables on Linux 2.0 - Mailing list pgsql-hackers

From "D'Arcy J.M. Cain"
Subject Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Date
Msg-id m10NGfF-0000cKC@druid.net
Whole thread Raw
In response to Re: [HACKERS] Problems with >2GB tables on Linux 2.0  ("Tim Perdue" <perdue@raccoon.com>)
List pgsql-hackers
Thus spake Tim Perdue
> I am still not able to pg_dump my data after recovering from this disaster.
> My files are now segmented at 1GB, vacuuming is fine, but pg_dump has a
> problem "locating the template1 database".

I recall once creating a shell script that dumped a table.  I can't
remember why I didn't use pg_dump, but the shell script was pretty simple.
If you can read the data, you should be able to make this work.  Worst
case you may have to handcraft each table dump, but beyond that it should
work pretty well.

Here's how I would do it in Python.  Requires the PostgreSQL module
found at http://www.druid.net/pygresql/.


#! /usr/bin/env python
from pg import DB
db = DB()  # opens default database on local machine - adjust as needed

for t in db.query("SELECT * FROM mytable").dictresult():
    print """INSERT INTO mytable (id, desc, val)
        VALUES (%(id)s, '%(desc)s', %(val)s);""" % t

Then feed that back into a new database.  I assume you have the schema
backed up somewhere.  You may have to get fancy with escaping special
characters and such.  If you can still read the schema you may be able
to automate this further.  That will depend on the number of tables in
your database and their complexity.  See the get_tables() and
get_attnames() methods in PyGreSQL.
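As a rough sketch of that automation, assuming the classic pg module's
DB.get_tables() and DB.get_attnames() API, something like the following
could walk every table.  The quoting here is deliberately naive (it only
doubles single quotes), and the function and file names are hypothetical:

```python
def make_insert(table, attnames, row):
    """Build one INSERT statement from a row dictionary.

    attnames is a mapping of column name -> type, as returned by
    PyGreSQL's DB.get_attnames(); row is one entry of dictresult().
    Quoting is naive: only single quotes are escaped, so binary or
    unusual data would need more care.
    """
    cols = [a for a in attnames if a in row]
    vals = []
    for a in cols:
        v = row[a]
        if v is None:
            vals.append("NULL")
        elif isinstance(v, str):
            vals.append("'%s'" % v.replace("'", "''"))
        else:
            vals.append(str(v))
    return "INSERT INTO %s (%s) VALUES (%s);" % (
        table, ", ".join(cols), ", ".join(vals))

# Usage against a live database (untested sketch):
# from pg import DB
# db = DB()
# for table in db.get_tables():
#     attnames = db.get_attnames(table)
#     for row in db.query("SELECT * FROM %s" % table).dictresult():
#         print(make_insert(table, attnames, row))
```

Redirect the output to a file and replay it with psql against the new
database.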

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.

