Re: Postgres Connections Requiring Large Amounts of Memory - Mailing list pgsql-performance

From: SZŰCS Gábor
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date:
Msg-id: 004a01c33568$d48235c0$0403a8c0@fejleszt4
In response to: Postgres Connections Requiring Large Amounts of Memory (Dawn Hollingsworth <dmh@airdefense.net>)
List: pgsql-performance
----- Original Message -----
From: "Dawn Hollingsworth" <dmh@airdefense.net>
Sent: Tuesday, June 17, 2003 11:42 AM


> I'm not starting any of my own transactions and I'm not calling stored
> procedures from within stored procedures. The stored procedures do have
> large parameters lists, up to 100. The tables are from 300 to 500

Geez! I don't think it'll help you find the memory leak (if any), but
couldn't you normalize the tables into smaller ones? That may be a pain
when updating (views and rules), but I think it'd be worth it in resources
(time and memory, though maybe not disk space). I wonder what the maximum
number of updated columns is, and how little correlation there is between
their semantics, in a single transaction (i.e. one func call), since there
are "only" 100 params for a proc.
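
Just to sketch what I mean (the table and column names below are made up,
not taken from your schema): split the int counters off the wide table,
and put a view with an update rule on top so the application still sees
one relation.

-- Hypothetical example: one wide table split into narrower pieces.
CREATE TABLE sensor_base (
    sensor_id   int4 PRIMARY KEY,
    updated_at  timestamp
);

CREATE TABLE sensor_counters (
    sensor_id   int4 PRIMARY KEY REFERENCES sensor_base,
    packets_in  int8,
    packets_out int8
    -- ...more of the INT4/INT8 columns...
);

-- The old wide shape stays available for SELECTs through a view.
CREATE VIEW sensor_wide AS
    SELECT b.sensor_id, b.updated_at, c.packets_in, c.packets_out
      FROM sensor_base b JOIN sensor_counters c USING (sensor_id);

-- Updates through the view need a rule; this is the painful part.
CREATE RULE sensor_wide_upd AS ON UPDATE TO sensor_wide
    DO INSTEAD
    UPDATE sensor_counters
       SET packets_in  = NEW.packets_in,
           packets_out = NEW.packets_out
     WHERE sensor_id   = OLD.sensor_id;

Whether maintaining the rules is worth it depends on how many distinct
groups of columns your procs actually touch together.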

> columns. 90% of the columns are either INT4 or INT8.  Some of these
> tables are inherited. Could that be causing problems?

Huh. That still leaves 30-50 columns (the size of a fairly large table for
me) of other types :)

G.
------------------------------- cut here -------------------------------

