Re: Postgres Connections Requiring Large Amounts of Memory - Mailing list pgsql-performance

From: Dawn Hollingsworth
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date:
Msg-id: 1055847810.2833.260.camel@kaos
In response to: Re: Postgres Connections Requiring Large Amounts of Memory (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Postgres Connections Requiring Large Amounts of Memory
List: pgsql-performance

Each stored procedure only updates one row and inserts one row.

I just connected the user interface to the database. It only does selects on startup. Its connection jumped to a memory usage of 256M. It's not getting any larger, but it's not getting any smaller either.

I'm going to compile postgres with the SHOW_MEMORY_STATS. I'm assuming I can just set ShowStats equal to 1. I'll also pare down the application to only use one of the stored procedures for less noise and maybe I can track where memory might be going. And in the meantime I'll get a test going with Postgres 7.3 to see if I get the same behavior.
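For reference, the rebuild described above might look roughly like this. This is only a sketch: the source directory name is made up, and I'm assuming SHOW_MEMORY_STATS is enabled by defining the macro at configure time, as was typical for compile-time-gated stats in the 7.x tree.

```shell
# Sketch: rebuild PostgreSQL 7.x with memory-statistics output compiled in.
# SHOW_MEMORY_STATS is the macro named in the message above; the source
# directory and prefix are placeholders.
cd postgresql-7.3.x
CPPFLAGS=-DSHOW_MEMORY_STATS ./configure --prefix=/usr/local/pgsql
make && make install
# Then set ShowStats to 1 (per the message above) so each backend dumps
# per-memory-context statistics for inspection.
```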

Any other suggestions?

-Dawn

On Tue, 2003-06-17 at 22:03, Tom Lane wrote:
> The only theory I can come up with is that the deferred trigger list is
> getting out of hand.  Since you have foreign keys in all the tables,
> each insert or update is going to add a trigger event to the list of
> stuff to check at commit.  The event entries aren't real large but they
> could add up if you insert or update a lot of stuff in a single
> transaction.  How many rows do you process per transaction?
> 
> 			regards, tom lane
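Tom's deferred-trigger theory can be illustrated with a minimal schema (the table and column names here are invented for illustration, not taken from the application in question):

```sql
-- Hypothetical pair of tables with a foreign-key constraint.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent(id)  -- FK checked by an AFTER trigger
);

INSERT INTO parent VALUES (1);

BEGIN;
-- Every row inserted or updated below queues one trigger event for the
-- foreign-key check. The queue is held in backend memory and is not
-- drained until COMMIT, so it grows with the rows touched per transaction.
INSERT INTO child VALUES (1, 1);
INSERT INTO child VALUES (2, 1);
-- ... one queued event per row, for the whole batch ...
COMMIT;  -- all queued FK checks fire here
```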
