Re: Large Tables(>1 Gb) - Mailing list pgsql-general

From mikeo
Subject Re: Large Tables(>1 Gb)
Date
Msg-id 3.0.1.32.20000630104745.009374e0@pop.spectrumtelecorp.com
In response to Re: Large Tables(>1 Gb)  ("Mitch Vincent" <mitch@venux.net>)
List pgsql-general
We use cursors and they perform well for us for
selects.  Our largest table is just over 7.5GB,
containing 38 million+ rows, but we have a lot
of tables over 1GB.

mikeo
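
The cursor approach mentioned above can be sketched as a psql session
(table and cursor names are illustrative, not from the thread; note that
a cursor must be declared inside a transaction):

```sql
-- Open a transaction; DECLARE ... CURSOR is only valid inside one.
BEGIN;

-- Declare a cursor over the large table instead of selecting it all at once.
DECLARE big_cur CURSOR FOR SELECT * FROM mytable;

-- Pull rows in batches; repeat until FETCH returns zero rows.
FETCH FORWARD 100 FROM big_cur;
FETCH FORWARD 100 FROM big_cur;

-- Release the cursor and end the transaction.
CLOSE big_cur;
COMMIT;
```

Only the fetched batch is held by the client at any time, which keeps
psql's memory use flat no matter how large the table is.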


At 10:19 AM 6/30/00 -0400, Mitch Vincent wrote:
>You could also use LIMIT and OFFSET.. That's what I do (though my database
>isn't up to a gigabyte yet)..
>
>Maybe using a CURSOR  is better, I'm not sure...
>
>-Mitch
>
>----- Original Message -----
>From: Jeffery Collins <collins@onyx-technologies.com>
>To: <Fred_Zellinger@seagate.com>
>Cc: <pgsql-general@hub.org>
>Sent: Friday, June 30, 2000 8:47 AM
>Subject: Re: [GENERAL] Large Tables(>1 Gb)
>
>
>> Fred_Zellinger@seagate.com wrote:
>>
>> > (I have a 500MHz PentIII, with 256 Mb RAM, UW SCSI, running Linux Kernel
>> > 2.2.9, with libc-2.1.2
>> > I am running Postgres 7.0 which I compiled myself.)
>> >
>> > So, I created a database, a table, and started dumping data into it.
>> > Then I added an index on the table.  Life was good.
>> >
>> > After a few weeks, my table eclipsed approximately 1Gb, and when I
>> > looked at it in my PG_DATA/database directory, I noticed that there
>> > were two files:  MYTABLE and MYTABLE.1.  I was curious why this
>> > happened, but I figured that Postgres must break up tables over 1Gb
>> > into multiple files. (Right?)
>> >
>> > Then, while running psql, I did a "select * from MYTABLE;"  Well, psql
>> > just sits there while the hard drive light blinks like crazy, pulling
>> > the table up into memory.  I have 256Mb of RAM, so this takes awhile.
>> > When I start up "top" and watch my process table, the postgres backend
>> > is sucking up the CPU time pulling the data and the psql frontend is
>> > sucking up the memory accepting the results.
>> >
>> > Fred
>>
>> Okay, I didn't laugh the entire time...
>>
>> I suggest you take a look at cursors.  I have the same thing.  There are
>> times I will need to select my entire >2Gig table but instead of doing:
>>
>>     SELECT * FROM table ;
>>
>> I do
>>
>>     DECLARE tmp CURSOR FOR SELECT * FROM table ;
>>
>>     do {
>>         FETCH FORWARD 100 FROM tmp ;
>>      } while there are rows left.
>>
>> This only pulls 100 (or whatever number you specify) into memory at a
>> time.
>>
>> Jeff
>>
>
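
The LIMIT/OFFSET alternative Mitch mentions would look something like
this (table and column names are illustrative; an ORDER BY is needed for
the pages to be stable):

```sql
-- Page 1: first 100 rows.
SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 0;

-- Page 2: next 100 rows.
SELECT * FROM mytable ORDER BY id LIMIT 100 OFFSET 100;

-- ...and so on, advancing OFFSET by the page size each time.
```

One caveat: the server still has to scan and discard all the skipped
rows for each query, so deep pages get progressively slower on very
large tables, which is why a cursor tends to scale better for walking an
entire table.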
