Re: Large Tables(>1 Gb) - Mailing list pgsql-general

From Dustin Sallings
Subject Re: Large Tables(>1 Gb)
Msg-id Pine.NEB.4.10.10006292038080.2807-100000@foo.west.spy.net
In response to Large Tables(>1 Gb)  (Fred_Zellinger@seagate.com)
List pgsql-general
On Thu, 29 Jun 2000 Fred_Zellinger@seagate.com wrote:

    This doesn't directly answer your question...but do you actually
have a need to select all of a 1GB table?  I've got about 1.5GB of data in
a table, but I can't think of an application that would need to pull it
all out in one query.
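
    If the whole table really is needed, one way to keep the client from
buffering the entire result is a cursor, which lets the backend stream
rows in batches.  A minimal sketch (MYTABLE stands in for Fred's table;
the batch size is arbitrary):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM MYTABLE;
    FETCH 1000 FROM big_cur;   -- repeat until FETCH returns no rows
    CLOSE big_cur;
    COMMIT;

Each FETCH pulls only the next batch, so client memory stays bounded no
matter how large the table is.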

# Date: Thu, 29 Jun 2000 22:26:41 -0500
# From: Fred_Zellinger@seagate.com
# To: pgsql-general@hub.org
# Subject: [GENERAL] Large Tables(>1 Gb)
#
#
# (I have a 500MHz PentIII, with 256 Mb RAM, UW SCSI, running Linux Kernel
# 2.2.9, with libc-2.1.2
# I am running Postgres 7.0 which I compiled myself.)
#
# So, I created a database, a table, and started dumping data into it.  Then
# I added an index on the table.  Life was good.
#
# After a few weeks, my table eclipsed approximately 1Gb, and when I looked
# at it in my PG_DATA/database directory, I noticed that there were two
# files:  MYTABLE and MYTABLE.1.  I was curious why this happened, but I
# figured that Postgres must break up tables over 1Gb into multiple
# files. (Right?)
#
# Then, while running psql, I did a "select * from MYTABLE;"  Well, psql just
# sits there while the hard drive light blinks like crazy, pulling the table
# up into memory.  I have 256Mb of RAM, so this takes awhile.  When I start
# up "top" and watch my process table, the postgres backend is sucking up the
# CPU time pulling the data and the psql frontend is sucking up the memory
# accepting the results.
#
# So, I figured that psql must be piling everything up in a "less"-like
# pager.  So, I kill the current request, do a "\pset pager" and toggle the
# pager off.  I re-run the select *, and the same thing happens.
#
# This time however, I let everything run until my memory taken up by the
# psql process goes over 256Mb, which means that my system RAM is all used
# up.  Then, my whole machine kinda locks up.  My load average hits 5(!) and
# psql starts taking up well over 300Mb.  I am also running X.  As best I can
# figure, my poor machine is getting hammered by physical memory being
# disk-swapped while simultaneously trying to pull up a 1Gb database.  I
# barely have enough CPU power left over for me to telnet in from another box
# and kill psql!
#
# (1)  I don't know what psql thinks it is doing, or why my kernel is letting
# it do it, but...
# (2)  I figure I can fix things....so:
#
# I look around at some backend configuration parameters to see if I can get
# Postgres to do some neat memory stuff (but later realize that it was the
# front-end and not the backend that was eating up memory...I tried pg_dump
# on the database/table, and stuff started spooling right away)
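#
# [The pg_dump behavior described above is expected: it emits table data
# as COPY output, writing rows as it receives them rather than
# accumulating a result set the way psql's "select *" does.  A rough
# sketch of the command, with "mydb" and "MYTABLE" as placeholders for
# the actual names:
#
#     pg_dump -t MYTABLE mydb > mytable_dump.sql
#
# That is why output starts spooling right away even on a 1Gb table.]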
#
# Rather than trying to fix the problem, I decided to subvert it by breaking
# my table into a bunch of little tables, each one less than my RAM size, so
# that I would never dig into SWAP space on a select *....(all of you who are
# laughing at me can just quit reading right now).  Then I planned to
# re-join all of the tables in a VIEW by doing a CREATE VIEW AS SELECT *
# UNION SELECT * UNION...etc.  Then I find out that UNIONs and VIEWs aren't
# implemented together....(I don't see this explicitly stated on the to-do
# list either).
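#
# [For reference, the workaround described above would look roughly like
# this; the part-table names are hypothetical.  Under 7.0 the CREATE VIEW
# step is what fails, since UNION inside a view definition was not yet
# implemented:
#
#     CREATE VIEW mytable_all AS
#         SELECT * FROM mytable_part1
#         UNION ALL
#         SELECT * FROM mytable_part2;
#
# Note that even if the view worked, selecting * from it would still pull
# the full combined result into the client.]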
#
# Then I started digging into the source code, trying to see if the query
# parser was the reason that this wasn't implemented...perhaps I could help.
# I don't quite see where it is.
#
#
# Anyway, just wanted to see if all my assumptions are correct, or if anyone
# has a better explanation for my observation, and/or some solutions.
#
#
# Fred
#

--
dustin sallings                            The world is watching America,
http://2852210114/~dustin/                 and America is watching TV.

