Re: cursor interface to libpq - Mailing list pgsql-interfaces

From: Thomas Lockhart
Subject: Re: cursor interface to libpq
Msg-id: 39C8E45C.AE72C336@alumni.caltech.edu
In response to: Re: cursor interface to libpq ("Kirby Bohling (TRSi)" <kbohling@oasis.novia.net>)
List: pgsql-interfaces
>         Right now I have a database that has right around 4 million
> rows in the primary table.  I am porting it away from MySQL, basically
> because I want transactions.  When the table is written out as INSERT
> statements it is around 3.2 GB; gzipped, that comes out to 1.3 GB.
> The table will probably continue to grow at a rate of 200,000 rows a
> week.  I don't know the exact size of each row, but it is under the
> 8K limit.  Right now I am working on a FreeBSD box with 1.5 GB of swap
> and 256 MB of RAM.  I believe I could get that upgraded to 1 GB of RAM
> and add as much swap space as I wanted, but to be honest I really
> don't want to store every row of every SELECT statement in memory.  I
> believe the database is around 1.5 GB on disk, almost all of it in the
> one table.

AFAIK this should all work. You can run pg_dump and pipe the output to
a tape drive or to gzip. You *know* that a real backup will take
something like the size of the database (maybe a factor of two or so
less), since the data has to go somewhere.

Postgres *should* be able to store intermediate results etc. on disk,
so the "out of memory" might be due to a per-process memory limit on
your FreeBSD machine. Others with experience on that platform might
have some specific suggestions.
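
Fwiw, since this thread started from the cursor interface: below is a
minimal, untested sketch of walking a big table through libpq with
DECLARE/FETCH, so the client never holds the whole result set at once.
The connection string, table name, cursor name, and batch size are all
placeholders.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn;
    PGresult *res;
    int       i;

    conn = PQconnectdb("dbname=mydb");      /* placeholder connection info */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* cursors only live inside a transaction block */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE bigcur CURSOR FOR SELECT * FROM bigtable"));

    /* fetch a bounded batch at a time instead of the whole table */
    for (;;)
    {
        res = PQexec(conn, "FETCH 1000 FROM bigcur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        if (PQntuples(res) == 0)            /* no rows left */
        {
            PQclear(res);
            break;
        }
        for (i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE bigcur"));
    PQclear(PQexec(conn, "END"));
    PQfinish(conn);
    return 0;
}

Each FETCH materializes only that batch on the client side, and the
backend keeps the cursor state, so client memory stays bounded no
matter how big the table gets.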

Scrappy, a FreeBSD partisan, probably has tables on his systems much
bigger than the one under discussion. Perhaps he will speak up here??

Good luck!
                     - Thomas

