Bruce,
Your assumptions were absolutely on target.
I also appreciate that you split my question into its two parts.
As I was in fact using "psql", that certainly explains my system dropping to
its knees...
Out of curiosity, if I were using something other than "psql", would this
still have been a problem? Or is psql uncommon in this respect?
-Michael
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Bruce Guenter
> Sent: Sunday, November 05, 2000 9:34 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Expectations of MEM requirements for a DB with
> large tables.
>
>
> On Sun, Nov 05, 2000 at 09:17:52PM -0800, Michael
> Miyabara-McCaskey wrote:
> > Anyway, I crashed my system the other day when I did a "select *" from
> > one of my large tables (about 5.5GB in size). Now this is not something
> > that will normally happen, as I would normally have some criteria to
> > reduce the output size, but it got me thinking...
> >
> > Does anyone know what the ratio of data output size (say from a select)
> > to the amount of RAM used is?
>
> You are really asking two questions: how much memory does the back end
> take to execute that query, and how much memory does the front end
> (psql, I assume) take to receive the response.
>
> To answer the first, the back-ends allocate a fixed pool of buffers when
> they start up, and never use more RAM than is in that pool. If they
> need more temporary space (e.g. for sorting), they will create temporary
> files as necessary.
>
> To answer the second, if you do a plain "SELECT *", psql will buffer the
> entire response set into RAM before printing anything out. If you have
> more than a trivial number of records to fetch from the database (and
> 5.5GB is certainly more than trivial), use a cursor and only fetch a few
> hundred rows at a time.
> --
> Bruce Guenter <bruceg@em.ca>                       http://em.ca/~bruceg/
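
P.S. For reference, a minimal sketch of the cursor approach Bruce describes,
as it could be run from psql. The table name "bigtable", the cursor name
"big_cur", and the batch size of 500 are placeholders for illustration; a
plain cursor like this only exists inside the enclosing transaction block.

  BEGIN;
  DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
  FETCH 500 FROM big_cur;   -- repeat this until it returns no more rows
  CLOSE big_cur;
  COMMIT;

Each FETCH only pulls that batch of rows into the client, so memory use on
the front end stays bounded regardless of how large the table is.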