Re: Excessive memory used for INSERT - Mailing list pgsql-performance

From Alessandro Ipe
Subject Re: Excessive memory used for INSERT
Date
Msg-id 3672040.FVo2HxYOvt@snow.oma.be
Whole thread Raw
In response to Re: Excessive memory used for INSERT  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Excessive memory used for INSERT
List pgsql-performance
Hi,


My trigger definition is
CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE PROCEDURE msg_function();
so it is indeed a BEFORE trigger.
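
For reference, a minimal sketch of what a BEFORE INSERT upsert-style trigger function can look like — the actual body of msg_function() was not posted in this thread, so the column names (id, payload) below are hypothetical:

CREATE OR REPLACE FUNCTION msg_function() RETURNS trigger AS $$
BEGIN
    -- Hypothetical upsert: if a row with the same key already exists,
    -- update it in place instead of inserting a duplicate.
    UPDATE msg SET payload = NEW.payload WHERE id = NEW.id;
    IF FOUND THEN
        RETURN NULL;  -- suppress the INSERT; the row was updated instead
    END IF;
    RETURN NEW;       -- no existing row; let the INSERT proceed
END;
$$ LANGUAGE plpgsql;

Because a BEFORE trigger fires as each row is processed, nothing is queued per row in server memory, unlike an AFTER trigger.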

To be totally honest, I have really limited knowledge of SQL and PostgreSQL, and all of this was gathered from recipes found on the web...


Regards,


Alessandro.


On Wednesday 17 December 2014 10:41:31 Tom Lane wrote:
> Torsten Zuehlsdorff <mailinglists@toco-domains.de> writes:
> > How many rows is "(SELECT * FROM upsert)" returning? Without knowing
> > more i would guess, that the result-set is very big and that could be
> > the reason for the memory usage.
>
> Result sets are not ordinarily accumulated on the server side.
>
> Alessandro didn't show the trigger definition, but my guess is that it's
> an AFTER trigger, which means that a trigger event record is accumulated
> in server memory for each inserted/updated row.  If you're trying to
> update a huge number of rows in one command (or one transaction, if it's
> a DEFERRED trigger) you'll eventually run out of memory for the event
> queue.
>
> An easy workaround is to make it a BEFORE trigger instead.  This isn't
> really nice from a theoretical standpoint; but as long as you make sure
> there are no other BEFORE triggers that might fire after it, it'll work
> well enough.
>
> Alternatively, you might want to reconsider the concept of updating
> hundreds of millions of rows in a single operation ...
>
>             regards, tom lane
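
Tom's last point — splitting the work instead of touching everything in one command — could be sketched like this (the "processed" flag and batch size are hypothetical, just to illustrate the pattern):

UPDATE msg SET processed = true
WHERE id IN (SELECT id FROM msg
             WHERE NOT processed
             LIMIT 100000);
-- Repeat, committing between batches, until 0 rows are affected.
-- Each transaction then holds a bounded amount of trigger/event state.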


