Re: Large Tables(>1 Gb) - Mailing list pgsql-general

From: Jeffery Collins
Subject: Re: Large Tables(>1 Gb)
Date:
Msg-id: 395C96C6.60A1C0C8@onyx-technologies.com
In response to: Large Tables(>1 Gb)  (Fred_Zellinger@seagate.com)
List: pgsql-general

Fred_Zellinger@seagate.com wrote:

> (I have a 500 MHz Pentium III with 256 MB RAM and UW SCSI, running Linux
> kernel 2.2.9 with libc-2.1.2.
> I am running Postgres 7.0, which I compiled myself.)
>
> So, I created a database, a table, and started dumping data into it.  Then
> I added an index on the table.  Life was good.
>
> After a few weeks, my table eclipsed approximately 1 GB, and when I looked
> at it in my PG_DATA/database directory, I noticed that there were two
> files:  MYTABLE and MYTABLE.1.  I was curious why this happened, but I
> figured that Postgres must break up tables over 1 GB into multiple
> files.  (Right?)
>
> Then, while running psql, I did a "select * from MYTABLE;"  Well, psql just
> sits there while the hard drive light blinks like crazy, pulling the table
> up into memory.  I have 256 MB of RAM, so this takes a while.  When I start
> up "top" and watch my process table, the postgres backend is sucking up the
> CPU time pulling the data and the psql frontend is sucking up the memory
> accepting the results.
>
> Fred

Okay, I didn't laugh the entire time...

I suggest you take a look at cursors.  I have the same situation: there are
times when I need to select from my entire >2 GB table, but instead of doing:

    SELECT * FROM table ;

I do

    BEGIN ;

    DECLARE tmp CURSOR FOR SELECT * FROM table ;

    do {
        FETCH FORWARD 100 FROM tmp ;
    } while (the FETCH returned rows) ;

    CLOSE tmp ;
    COMMIT ;

This pulls only 100 rows (or however many you ask for) into memory at a time.
Note that the cursor has to live inside a transaction block, which is why the
BEGIN and COMMIT are there.
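
If you drive the loop from a C program, it looks roughly like the sketch
below.  This is a minimal sketch using libpq, not code from this thread: the
database name (mydb), table name (mytable), and batch size of 100 are
placeholder assumptions, and error handling is kept to a bare minimum.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Connection parameters are an assumption; adjust for your setup. */
        PGconn *conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Cursors only exist inside a transaction block. */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE tmp CURSOR FOR SELECT * FROM mytable"));

        for (;;) {
            PGresult *res = PQexec(conn, "FETCH FORWARD 100 FROM tmp");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            int nrows = PQntuples(res);
            if (nrows == 0) {            /* no rows left */
                PQclear(res);
                break;
            }
            /* Process this batch; here we just print the first column. */
            for (int i = 0; i < nrows; i++)
                printf("%s\n", PQgetvalue(res, i, 0));
            PQclear(res);                /* free the batch before fetching more */
        }

        PQclear(PQexec(conn, "CLOSE tmp"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

Because each PGresult is cleared before the next FETCH, the client never holds
more than one batch of rows in memory, no matter how big the table gets.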

Jeff


