Re: transaction control in pl/pgsql - Mailing list pgsql-general

From Alban Hertroys
Subject Re: transaction control in pl/pgsql
Date
Msg-id 3CB297F1-CAEE-4EAC-8ECB-09130E4D0ED1@solfertje.student.utwente.nl
In response to transaction control in pl/pgsql  (Birgit Laggner <birgit.laggner@vti.bund.de>)
Responses Re: transaction control in pl/pgsql  (Birgit Laggner <birgit.laggner@vti.bund.de>)
List pgsql-general
On 1 Apr 2010, at 12:22, Birgit Laggner wrote:

> Dear list,
>
> I have some data (big size) and I've written a long function in pl/pgsql
> which processes the data in several steps. At a test run my function
> aborted because of memory exhaustion. My guess is, that everything what
> happens during the function transaction is stored in the memory - until
> it's full...

No, transactions operate on the database just like anything else. Running out of memory doing that is not impossible, but you'd need to create some really big transactions to reach those limits (I think I've been quoted the limit being 2^32 instructions per transaction a looong time ago, just to give you an indication).

You're probably running out of memory for another reason. Now to figure out why...

I've seen an issue with deferred constraints causing this, for example. Do you use deferred constraints?
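For reference, something like this is what I mean (table and constraint names are just made up for illustration). With DEFERRABLE INITIALLY DEFERRED the per-row checks get queued until COMMIT, and that queue lives in the backend's memory for the whole transaction:

-- Hypothetical example of a deferred foreign key. Every inserted or
-- updated row queues a check event that only fires at COMMIT, and that
-- queue is kept in memory for the duration of the transaction.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id)
              DEFERRABLE INITIALLY DEFERRED
);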

Another cause that pops up regularly is that people specify too much global work_mem for postgres. work_mem can be allocated for every sort or hash operation, potentially several times per connection, so the more connections you have the more memory goes to work_mem and other resources may receive too little, or you exceed what's available.
What are your relevant postgres settings (max connections, work_mem, shared mem, etc.) and how much physical, total and shared memory does your server have?
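A quick way to see what the server is actually using (these are standard settings, so this should work as-is):

-- Show the settings most relevant to per-backend memory use.
SHOW max_connections;
SHOW work_mem;
SHOW shared_buffers;
SHOW maintenance_work_mem;

-- Or all at once from the catalog:
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('max_connections', 'work_mem',
                'shared_buffers', 'maintenance_work_mem');

Just to give a sense of scale: with max_connections = 100 and work_mem = 64MB, a single sort in every connection can already claim over 6GB.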

Another possibility is that your function stores large amounts of data in variables that are not backed up by database tables. That means all that data will be stored in memory, and even if it goes to swap at some point (not sure it will, it would seriously hurt performance) there is a limit.
If this is the case, maybe you could use temporary tables to process that data instead of trying to do it all in
memory.
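Roughly something like this (function, table and column names are made up, just to illustrate the idea):

-- Hypothetical sketch: stage intermediate results in a temporary table
-- instead of a pl/pgsql array or record variable, so they don't have to
-- fit in RAM. ON COMMIT DROP removes the table at the end of the transaction.
CREATE OR REPLACE FUNCTION process_step_one() RETURNS void AS $$
BEGIN
    CREATE TEMPORARY TABLE step_one_results ON COMMIT DROP AS
        SELECT id, expensive_calculation(payload) AS result
          FROM raw_data;

    -- Later steps read from step_one_results instead of a variable.
    UPDATE raw_data r
       SET processed = s.result
      FROM step_one_results s
     WHERE s.id = r.id;
END;
$$ LANGUAGE plpgsql;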

> So, my idea for solving that problem would be to cut the
> big function into smaller functions. But, I don't want to write 30
> function calls at the end - I would rather like to have one function
> which is calling all these small functions, so I would only have to
> write one sql-query at the end.

Splitting up big functions into smaller functions is always a good idea; that's just good programming practice in general. It won't cut down the size of your transaction though.
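For what it's worth, the wrapper itself is trivial (names made up again); it's just that everything it calls still runs inside the one transaction that called it:

-- Hypothetical wrapper: one callable entry point, several smaller steps.
-- Everything below still executes inside a single transaction.
CREATE OR REPLACE FUNCTION process_all() RETURNS void AS $$
BEGIN
    PERFORM process_step_one();
    PERFORM process_step_two();
    PERFORM process_step_three();
END;
$$ LANGUAGE plpgsql;

-- Called from SQL with a single statement:
-- SELECT process_all();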

> What I fear is either, that, if this
> function calls the other functions, everything is only one transaction
> again and I get memory overflow once more.


If the problem persists, maybe you could post your function somewhere. As it's apparently a rather long function, can you strip it down to something that still causes it to run out of memory but that will be a bit easier for the people on this list to wade through?

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

