Re: Large Tables(>1 Gb) - Mailing list pgsql-general

From Denis Perchine
Subject Re: Large Tables(>1 Gb)
Msg-id 00063012525904.00833@dyp
In response to Large Tables(>1 Gb)  (Fred_Zellinger@seagate.com)
Responses Re: Large Tables(>1 Gb)  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Hello,

> After a few weeks, my table eclipsed approximately 1Gb, and when I looked
> at it in my PG_DATA/database directory, I noticed that there were two
> files:  MYTABLE and MYTABLE.1.  I was curious why this happened, but I
> figured that Postgres must break up tables over 1Gb into multiple
> files. (Right?)

Yep. Postgres breaks every table that grows past 1Gb into 1Gb segment files
(MYTABLE, MYTABLE.1, MYTABLE.2, and so on), mainly to stay clear of the 2Gb
file-size limit some filesystems impose.

> Then, while running psql, I did a "select * from MYTABLE;"  Well, psql just
> sits there while the hard drive light blinks like crazy, pulling the table
> up into memory.  I have 256Mb of RAM, so this takes a while.  When I start
> up "top" and watch my process table, the postgres backend is sucking up the
> CPU time pulling the data and the psql frontend is sucking up the memory
> accepting the results.

That's expected behaviour. The problem is that psql tries to collect the whole answer
to your query in memory before it displays anything, and with a 1Gb table you simply
don't have enough memory for such games...

If you want to scan a huge table anyway, there are two ways to do it:

1. Use cursors. This is the most natural way: declare a cursor for the query, then
fetch the rows a bounded number of tuples at a time. Something like:
begin;  -- a cursor only lives inside a transaction block
declare my_cursor cursor for select * from big_table;
fetch 1000 from my_cursor;
fetch 1000 from my_cursor;  -- repeat until a fetch comes back empty
close my_cursor;
commit;
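
Each fetch hands back only the next 1000 tuples, so psql never has to hold more than
one batch in memory at a time. Just remember that the cursor disappears when the
transaction ends, so keep the whole fetch loop inside the begin/commit.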

2. Use the limit and offset capability of postgres:

select * from big_table limit 1000 offset 0;
select * from big_table limit 1000 offset 1000;
...
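
A caveat on this one: without an order by, the backend is free to return rows in any
order it pleases, so consecutive offset queries can skip or repeat tuples. If the
table has a unique column to sort on (an id column here, purely as an illustration),
pin the order down:

select * from big_table order by id limit 1000 offset 0;
select * from big_table order by id limit 1000 offset 1000;
...

Also note that offset is not free: the backend still reads and throws away all the
skipped rows, so each page costs a little more than the last. For a full scan of a
1Gb table the cursor approach is the cheaper of the two.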

> So, I figured that psql must be piling everything up in a "less"-like
> pager.  So, I kill the current request, do a "\pset pager" and toggle the
> pager off.  I re-run the select *, and the same thing happens.
>
> This time however, I let everything run until my memory taken up by the
> psql process goes over 256Mb, which means that my system RAM is all used
> up.  Then, my whole machine kinda locks up.  My load average hits 5(!) and
> psql starts taking up well over 300Mb.  I am also running X.  As best I can
> figure, my poor machine is getting hammered by physical memory being
> disk-swapped while simultaneously trying to pull up a 1Gb database.  I
> barely have enough CPU power left over for me to telnet in from another box
> and kill psql!
>
> (1)  I don't know what psql thinks it is doing, or why my kernel is letting
> it do it, but...
> (2)  I figure I can fix things....so:
>
> I look around at some backend configuration parameters to see if I can get
> Postgres to do some neat memory stuff (but later realize that it was the
> front-end and not the backend that was eating up memory...I tried pg_dump
> on the database/table, and stuff started spooling right away)
>
> Rather than trying to fix the problem, I decided to subvert it by breaking
> my table into a bunch of little tables, each one less than my RAM size, so
> that I would never dig into SWAP space on a select *... (all of you who are
> laughing at me, you can just quit reading right now).  Then I planned to
> re-join all of the tables in a VIEW by doing a CREATE VIEW AS SELECT *
> UNION SELECT * UNION...etc.  Then I find out that UNIONs and VIEWs aren't
> implemented together... (I don't see this explicitly stated on the to-do
> list either).
>
> Then I started digging into the source code, trying to see if the query
> parser was the reason that this wasn't implemented...perhaps I could help.
> I don't quite see where it is.

--
Sincerely Yours,
Denis Perchine

----------------------------------
E-Mail: dyp@perchine.com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------
