Thread: plperl doesn't release memory

plperl doesn't release memory

From: Sven Willenberger
I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
that after calling a plperl function, memory does not get released. Two
different systems each show different symptoms:

1) system: FreeBSD 5.3-Stable i386 with 1 GB RAM, dual Xeon P4
processors.
   script: plperl issues an SPI_EXEC_QUERY('select rows from table where
condition');
this loads the result set into memory (to the tune of some 600MB based
on top output). The function iterates through each row to grab some
totals information and spits back a number.
On the 2nd iteration of this function the connection is lost:

Out of memory during request for 1012 bytes, total sbrk() is 291207168
bytes!
Callback called exit.
LOG: server process (PID 12672) exited with exit code 12
LOG: terminating any other active server processes
LOG: received immediate shutdown request
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted at 2005-03-23 17:17:23 EST
LOG: checkpoint record is at 2/4D7F206C
LOG: redo record is at 2/4D7F206C; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 4913594; next OID: 60798748
LOG: database system was not properly shut down; automatic recovery in
progress

2) system: FreeBSD 5.3-Stable amd64 with 8GB RAM, dual opteron
processors
   script: plperl issues an SPI_EXEC_QUERY('select rows from table where
condition') which fetches roughly 18k tuples of rather small size. Each
row is acted upon and, if criteria are met, a reference to the row is
pushed onto an array (reference).
   after several iterations of this script (a dozen or so), a file is
COPYed into the database consisting of some 38k rows and each row is
acted upon by a trigger (plpgsql) -- this process normally takes just
under a minute, but after running the plperl function a dozen or so
times, the run time for the COPY exceeds 3 minutes. Restarting the
PostgreSQL server brings the COPY time back down to the sub-minute
range.

Is it normal for plperl to *not* release any memory? Or perhaps plperl
is not pfreeing or SPI_FINISHing cleanly?

Sven Willenberger


Re: plperl doesn't release memory

From: Tom Lane
Sven Willenberger <sven@dmv.com> writes:
> I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
> that after calling a plperl function memory does not get released.

AFAICT the result of spi_exec_query gets released fine, as soon as it's
no longer referenced within perl.  Perhaps your perl code is written in
such a way that a reference to the hash result value remains live after
the function exit?

I tried this:

create or replace function nrows(text) returns int as $$
  my ($tabname) = @_;
  my $rv = spi_exec_query("select * from $tabname");
  return $rv->{processed};
$$ LANGUAGE plperl;

and ran it repeatedly against a large table.  The memory usage went
up as expected, but back down again as soon as the function exited.
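For contrast, the kind of lingering reference in question would look
something like this (a hypothetical function; the key point is the
package-level variable in place of a lexical):

```sql
-- Hypothetical counterexample: without "my", $last_rv is a package
-- variable that persists in the plperl interpreter between calls, so
-- the entire result hash stays live after the function exits.
CREATE OR REPLACE FUNCTION leaky_nrows(text) RETURNS int AS $$
  my ($tabname) = @_;
  $last_rv = spi_exec_query("select * from $tabname");  # no "my"
  return $last_rv->{processed};
$$ LANGUAGE plperl;
```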

If you think it's actually a plperl bug, please show a self-contained
example.

            regards, tom lane

Re: plperl doesn't release memory

From: Sven Willenberger
On Wed, 2005-03-23 at 18:25 -0500, Tom Lane wrote:
> Sven Willenberger <sven@dmv.com> writes:
> > I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
> > that after calling a plperl function memory does not get released.
>
> AFAICT the result of spi_exec_query gets released fine, as soon as it's
> no longer referenced within perl.  Perhaps your perl code is written in
> such a way that a reference to the hash result value remains live after
> the function exit?
>
> I tried this:
>
> create or replace function nrows(text) returns int as $$
>   my ($tabname) = @_;
>   my $rv = spi_exec_query("select * from $tabname");
>   return $rv->{processed};
> $$ LANGUAGE plperl;
>
> and ran it repeatedly against a large table.  The memory usage went
> up as expected, but back down again as soon as the function exited.
>
> If you think it's actually a plperl bug, please show a self-contained
> example.
>

The function in question that we used for testing is:
-------------------------------------
CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$


my $on_shore = "select
root_decks.id,material_all.npanxx,material_all.carrier,material_all.inter
from root_decks, material_all, lookup
where lookup.state not in (select state from offshore)
and lookup.npanxx = material_all.npanxx
and root_decks.type = 'ie'
and root_decks.carrier = material_all.carrier;";

my $rv = spi_exec_query($on_shore);
my $nrows = $rv->{processed};

return $nrows ;
$$ LANGUAGE plperl;
-------------------------------------

Now, thinking that perhaps the $nrows variable was getting stuck, we
tried the following, which resulted in the exact same memory issue:
-------------------------------------
CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$


my $on_shore = "select
root_decks.id,material_all.npanxx,material_all.carrier,material_all.inter
from root_decks, material_all, lookup
where lookup.state not in (select state from offshore)
and lookup.npanxx = material_all.npanxx
and root_decks.type = 'ie'
and root_decks.carrier = material_all.carrier;";

my $rv = spi_exec_query($on_shore);

return $rv->{processed};
$$ LANGUAGE plperl;
-------------------------------------

The result set is just under 1 million rows and top shows postgres using
some 600MB of memory. After the 3rd run of this function on a 1GB RAM
box, the error mentioned in the original part of this thread occurs and
the database restarts.

Any suggestions on how to trace what is going on? Debug output methods?

Sven


Re: plperl doesn't release memory

From: Tom Lane
Sven Willenberger <sven@dmv.com> writes:
> Any suggestions on how to trace what is going on? Debug output methods?

The first thing to figure out is whether the leak is inside Perl or in
Postgres proper.  If I were trying to do this I'd run the function a
couple times, then attach to the (idle) backend with gdb and do
    call MemoryContextStats(TopMemoryContext)
to dump a summary of Postgres' memory usage to stderr.  If that doesn't
show any remarkable bloat then the problem is inside Perl (and beyond my
ability to do much with).
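A sketch of that debugging session (the batch-mode flags are an
assumption; an interactive gdb session with the same `call` command works
equally well):

```shell
# Attach to the idle backend (find its PID with ps or pg_stat_activity)
# and dump the memory-context summary. The output appears on the
# backend's stderr, i.e. in the server log, not in this terminal.
gdb --batch -p "$BACKEND_PID" \
    -ex 'call MemoryContextStats(TopMemoryContext)' \
    -ex detach
```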

One thing I'm still a bit baffled about is why my test didn't show a
problem; it sure looks identical to yours.  Maybe the issue is Perl
version specific?  I tested using the current FC3 version, which is
perl-5.8.5-9.

            regards, tom lane

Re: plperl doesn't release memory

From: Sven Willenberger
On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote:
> Sven Willenberger <sven@dmv.com> writes:
> > Any suggestions on how to trace what is going on? Debug output methods?
>
> The first thing to figure out is whether the leak is inside Perl or in
> Postgres proper.  If I were trying to do this I'd run the function a
> couple times, then attach to the (idle) backend with gdb and do
>     call MemoryContextStats(TopMemoryContext)
> to dump a summary of Postgres' memory usage to stderr.  If that doesn't
> show any remarkable bloat then the problem is inside Perl (and beyond my
> ability to do much with).
>
> One thing I'm still a bit baffled about is why my test didn't show a
> problem; it sure looks identical to yours.  Maybe the issue is Perl
> version specific?  I tested using the current FC3 version, which is
> perl-5.8.5-9.
>
>             regards, tom lane

I'm not entirely sure how to interpret the results ... a cursory
examination shows 516096 total in CacheMemoryContext, but I don't know
if that reflects the state of "unfreed" memory (or perhaps the 354728
used is unfreed?):

TopMemoryContext: 32768 total in 3 blocks; 7392 free (51 chunks); 25376
used
MessageContext: 8192 total in 1 blocks; 7912 free (1 chunks); 280 used
PortalMemory: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used
CacheMemoryContext: 516096 total in 6 blocks; 161368 free (1 chunks);
354728 used
lookup_state_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
lookup_ocn_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
lookup_lata_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
locate_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_intra_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_inter_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
offshore_pkey: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks);
176 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_class_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks);
248 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0
chunks); 248 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks);
248 used
pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_conversion_default_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks);
176 used
pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176
used
MdSmgr: 8192 total in 1 blocks; 7000 free (0 chunks); 1192 used
DynaHash: 8192 total in 1 blocks; 6776 free (0 chunks); 1416 used
Operator class cache: 8192 total in 1 blocks; 5080 free (0 chunks); 3112
used
CFuncHash: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
smgr relation table: 8192 total in 1 blocks; 3016 free (0 chunks); 5176
used
Portal hash: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
Relcache by OID: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used
Relcache by name: 24576 total in 2 blocks; 13240 free (5 chunks); 11336
used
LockTable (locallock hash): 8192 total in 1 blocks; 4056 free (0
chunks); 4136 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used

Sven


Re: plperl doesn't release memory

From: Tom Lane
Sven Willenberger <sven@dmv.com> writes:
> On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote:
>> The first thing to figure out is whether the leak is inside Perl or in
>> Postgres proper.  If I were trying to do this I'd run the function a
>> couple times, then attach to the (idle) backend with gdb and do
>> call MemoryContextStats(TopMemoryContext)

> Not sure entirely how to interpret the results ... a cursory examination
> shows 516096 total in cachememory but I don't know if that reflects the
> state of "unfreed" memory (or perhaps the 354728 used is unfreed?):

That looks like the normal steady-state condition.  The leak must be
inside Perl then.

[ thinks for a bit... ]  Actually it seems possible that there's a
problem with poor interaction between Postgres and Perl.  During the SPI
query they will both be making pretty substantial memory demands, and it
could be that the underlying malloc library isn't coping gracefully and
is ending up with very fragmented memory.  That could result in
out-of-memory problems when in fact neither package is leaking anything
per se.

What you probably ought to do next is build Postgres with a debugging
malloc library to learn more about who's eating up what.  I am not sure
whether libperl will automatically use the malloc attached to the main
executable or whether you need to whack it around too.  (Come to think
of it, doesn't Perl normally use its very own private malloc?  Maybe
there's an issue right there ...)

            regards, tom lane

Re: plperl doesn't release memory

From: Dan Sugalski
At 1:51 PM -0500 3/24/05, Tom Lane wrote:
>What you probably ought to do next is build Postgres with a debugging
>malloc library to learn more about who's eating up what.  I am not sure
>whether libperl will automatically use the malloc attached to the main
>executable or whether you need to whack it around too.  (Come to think
>of it, doesn't Perl normally use its very own private malloc?  Maybe
>there's an issue right there ...)

Perl can, yeah. If a

    perl -V

shows a "usemymalloc=y" in the output somewhere then perl's using its
own internal malloc and you're definitely never going to release
memory to anything. If it's 'n' then it'll use the default malloc
scheme -- I'm pretty sure for embedding use it uses whatever routines
the embedder defines, but it's been a while since I've poked around
in there.

Anyway, if perl's using its own memory allocator you'll want to
rebuild it to not do that.
--
                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk

Re: plperl doesn't release memory

From: Tom Lane
Dan Sugalski <dan@sidhe.org> writes:
> ... I'm pretty sure for embedding use it uses whatever routines
> the embedder defines, but it's been a while since I've poked around
> in there.

Hmm.  plperl is definitely not doing anything to try to manipulate that
behavior; maybe it should?  Where can we find out about this?

> Anyway, if perl's using its own memory allocator you'll want to
> rebuild it to not do that.

When I tried to test this it seemed that memory did get released at the
conclusion of each query --- at least "top" showed the backend process
size dropping back down.  But, again, I wouldn't be surprised if Sven's
perl installation is configured differently than mine.

            regards, tom lane

Re: plperl doesn't release memory

From: Sven Willenberger
On Thu, 2005-03-24 at 13:51 -0500, Tom Lane wrote:
> Sven Willenberger <sven@dmv.com> writes:
> > On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote:
> >> The first thing to figure out is whether the leak is inside Perl or in
> >> Postgres proper.  If I were trying to do this I'd run the function a
> >> couple times, then attach to the (idle) backend with gdb and do
> >> call MemoryContextStats(TopMemoryContext)
>
> > Not sure entirely how to interpret the results ... a cursory examination
> > shows 516096 total in cachememory but I don't know if that reflects the
> > state of "unfreed" memory (or perhaps the 354728 used is unfreed?):
>
> That looks like the normal steady-state condition.  The leak must be
> inside Perl then.
>
> [ thinks for a bit... ]  Actually it seems possible that there's a
> problem with poor interaction between Postgres and Perl.  During the SPI
> query they will both be making pretty substantial memory demands, and it
> could be that the underlying malloc library isn't coping gracefully and
> is ending up with very fragmented memory.  That could result in
> out-of-memory problems when in fact neither package is leaking anything
> per se.
>
> What you probably ought to do next is build Postgres with a debugging
> malloc library to learn more about who's eating up what.  I am not sure
> whether libperl will automatically use the malloc attached to the main
> executable or whether you need to whack it around too.  (Come to think
> of it, doesn't Perl normally use its very own private malloc?  Maybe
> there's an issue right there ...)
>
>             regards, tom lane
>
Yes, on these systems perl was built with -Dusemymalloc (and
concurrently -Ui_malloc), so there could very well be an issue with
malloc pools going awry. A quick dig reveals that Linux tends to
build perl (by default) with the system malloc, which may explain why
your script did not display this same behavior. I will try to rebuild
perl using the system malloc and see how that affects things.
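For the record, the rebuild amounts to something like the following (a
sketch; exact Configure options vary by platform, and FreeBSD ports
users can toggle the allocator option in the port instead):

```shell
# Rebuild perl without its bundled allocator, so freed memory goes
# back through the system malloc.
sh Configure -de -Uusemymalloc
make
make test
make install
```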

Sven


Re: plperl doesn't release memory

From: Dan Sugalski
At 3:14 PM -0500 3/24/05, Tom Lane wrote:
>Dan Sugalski <dan@sidhe.org> writes:
>>  ... I'm pretty sure for embedding use it uses whatever routines
>>  the embedder defines, but it's been a while since I've poked around
>>  in there.
>
>Hmm.  plperl is definitely not doing anything to try to manipulate that
>behavior; maybe it should?  Where can we find out about this?

I'll have to go dig, but this:

>  > Anyway, if perl's using its own memory allocator you'll want to
>>  rebuild it to not do that.
>
>When I tried to test this it seemed that memory did get released at the
>conclusion of each query --- at least "top" showed the backend process
>size dropping back down.  But, again, I wouldn't be surprised if Sven's
>perl installation is configured differently than mine.

...implies perl's doing the Right Thing, otherwise there'd be no
release of memory to the system.
--
                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk

Re: plperl doesn't release memory

From: Tom Lane
Sven Willenberger <sven@dmv.com> writes:
> Yes, on these systems, perl was build with -Dusemymalloc (and
> concurrently -Ui_malloc) so there could very well be an issue with
> malloc pools going awry. Doing a quick dig reveals that Linux tends to
> build perl (by default) with the system malloc which may explain why
> your script did not display this same behavior.

I can confirm that my copy is not using mymalloc:

$ perl -V | grep -i alloc
    usemymalloc=n, bincompat5005=undef
$

            regards, tom lane

Re: plperl doesn't release memory

From: Greg Stark
Tom Lane <tgl@sss.pgh.pa.us> writes:

> (Come to think of it, doesn't Perl normally use its very own private malloc?
> Maybe there's an issue right there ...)

Perl can be built either way. It should work to have two different malloc's
running side by side as long as the correct free() is always called. Ie, as
long as perl doesn't hand any data structures to postgres expecting postgres
to free it or vice versa.

--
greg

Re: plperl doesn't release memory

From: Sven Willenberger
On Thu, 2005-03-24 at 15:52 -0500, Tom Lane wrote:
> Sven Willenberger <sven@dmv.com> writes:
> > Yes, on these systems, perl was build with -Dusemymalloc (and
> > concurrently -Ui_malloc) so there could very well be an issue with
> > malloc pools going awry. Doing a quick dig reveals that Linux tends to
> > build perl (by default) with the system malloc which may explain why
> > your script did not display this same behavior.
>
> I can confirm that my copy is not using mymalloc:
>
> $ perl -V | grep -i alloc
>     usemymalloc=n, bincompat5005=undef

I went ahead and rebuilt perl using the system malloc instead, and what
I found was that on the function that started this whole topic, the
memory allocation went overboard and postgresql bailed with "out of
memory". Using the perl malloc, apparently postgres/plperl/libperl was
able to manage the memory load although it got "stuck" for the session.

Closing the session (psql) did end up freeing all the memory, at least
from top's perspective (since the process no longer existed) -- running
the query from the command line (psql -c "select function()") worked
over several iterations so I do believe that the memory does get freed
upon closing the connection. In fact we were able to run the query using
this method with 4 simultaneous connections and, although we went heavy
into swap, all four did complete. So the issue can be narrowed down to
a per-connection basis where the amount of memory needed by the function
would normally exceed available memory; the handler for this "overflow"
does something with the extra memory needed such that subsequent
invocations of the function during the same connection end up eventually
creating a malloc error.

(This is inconsistent with the behavior on the 8G opteron system ... but
I will reevaluate the issue I saw there and see if it is related to
something else).

Sven



Re: plperl doesn't release memory

From: Greg Stark
Dan Sugalski <dan@sidhe.org> writes:

> Anyway, if perl's using its own memory allocator you'll want to rebuild it
> to not do that.

You would need to do that if you wanted to use a debugging malloc. But there's
no particular reason to think that you should need to do this just to work
properly.

Two mallocs can work fine alongside each other. They each call mmap or sbrk to
allocate new pages and they each manage the pages they've received. They won't
have any idea why the allocator seems to be skipping pages, but they should be
careful not to touch those pages.

--
greg

Re: plperl doesn't release memory

From: Tom Lane
Greg Stark <gsstark@mit.edu> writes:
> Two mallocs can work fine alongside each other. They each call mmap or
> sbrk to allocate new pages and they each manage the pages they've
> received. They won't have any idea why the allocator seems to be
> skipping pages, but they should be careful not to touch those pages.

However, it's quite likely that such a setup will fail to release memory
back to the OS very effectively, and it could easily suffer bad
fragmentation problems even without thinking about whether the program
break address can be moved back.  I think what Sven is seeing is exactly
fragmentation inefficiency.

            regards, tom lane

Re: plperl doesn't release memory

From: Dan Sugalski
At 6:58 PM -0500 3/24/05, Greg Stark wrote:
>Dan Sugalski <dan@sidhe.org> writes:
>
>>  Anyway, if perl's using its own memory allocator you'll want to rebuild it
>>  to not do that.
>
>You would need to do that if you wanted to use a debugging malloc. But there's
>no particular reason to think that you should need to do this just to work
>properly.
>
>Two mallocs can work fine alongside each other. They each call mmap or sbrk to
>allocate new pages and they each manage the pages they've received. They won't
>have any idea why the allocator seems to be skipping pages, but they should be
>careful not to touch those pages.

Perl will only use a single allocator, so there's not a huge issue
there. It's either the external allocator or the internal one, which
is for the best since you certainly don't want to be handing back
memory to the wrong allocator. That way lies madness and unpleasant
core files.

The bigger issue is that perl's memory allocation system, the one you
get if you build perl with usemymalloc set to yes, never releases
memory back to the system -- once the internal allocator gets a chunk
of memory from the system it's held for the duration of the process.
This is the right answer in many circumstances, and the allocator's
pretty nicely tuned to perl's normal allocation patterns, it's just
not really the right thing in a persistent server situation where
memory usage bounces up and down. It can happen with the system
allocator too, though it's less likely.

One of those engineering tradeoff things, and not much to be done
about it really.
--
                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk

Re: plperl doesn't release memory

From: "FERREIRA William (COFRAMI)"

I have a similar problem.
I'm running PostgreSQL on a P4 with 1 GB of RAM and Windows 2000.
I have a large database and a big processing job taking more than 4 hours.
During the first hour postgresql uses as much memory as virtual memory, which I find strange (it grows to more than 800MB).

And during the execution I get:
out of memory
Failed on request of size 56
And at the end, postgresql uses 300 MB of memory and more than 2GB of virtual memory.

Can this problem be resolved by tuning postgresql settings?
Here are my parameters:
shared_buffers = 1000
work_mem = 131072
maintenance_work_mem = 131072
max_stack_depth = 4096
I tried work_mem with 512MB and with 2MB and I get the same error...

I read the whole thread, but I don't know how to configure perl on Windows...

Thanks in advance,

        Will

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On behalf of Dan Sugalski
Sent: Friday, March 25, 2005 19:34
To: Greg Stark; pgsql-general@postgresql.org
Subject: Re: [GENERAL] plperl doesn't release memory

At 6:58 PM -0500 3/24/05, Greg Stark wrote:
>Dan Sugalski <dan@sidhe.org> writes:
>
>>  Anyway, if perl's using its own memory allocator you'll want to rebuild it
>>  to not do that.
>
>You would need to do that if you wanted to use a debugging malloc. But there's
>no particular reason to think that you should need to do this just to work
>properly.
>
>Two mallocs can work fine alongside each other. They each call mmap or sbrk to
>allocate new pages and they each manage the pages they've received. They won't
>have any idea why the allocator seems to be skipping pages, but they should be
>careful not to touch those pages.

Perl will only use a single allocator, so there's not a huge issue
there. It's either the external allocator or the internal one, which
is for the best since you certainly don't want to be handing back
memory to the wrong allocator. That way lies madness and unpleasant
core files.

The bigger issue is that perl's memory allocation system, the one you
get if you build perl with usemymalloc set to yes, never releases
memory back to the system -- once the internal allocator gets a chunk
of memory from the system it's held for the duration of the process.
This is the right answer in many circumstances, and the allocator's
pretty nicely tuned to perl's normal allocation patterns, it's just
not really the right thing in a persistent server situation where
memory usage bounces up and down. It can happen with the system
allocator too, though it's less likely.

One of those engineering tradeoff things, and not much to be done
about it really.
--
                                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                       teddy bears get drunk

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

This mail has originated outside your organization,
either from an external partner or the Global Internet.
Keep this in mind if you answer this message.

Re: plperl doesn't release memory

From: Sean Davis
As I understand it, a single execution of a pl/perl function will not
be affected by the perl memory issue, so I don't think that is your
problem.

My guess is that you are reading a large query into perl, so the whole
thing will be kept in memory (and you can't use more memory than you
have).  For a large query, this can be a huge amount of memory indeed.
You could use another language like plpgsql that can support
cursors/looping over query results or, in plperl you could use DBI (not
spi_exec_query) and loop over query results.
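A cursor-based version of the earlier row-counting example might look
like this in PL/pgSQL (a hypothetical function, reusing the root_decks
table name from earlier in the thread):

```sql
CREATE OR REPLACE FUNCTION nrows_cursor() RETURNS bigint AS $$
DECLARE
    rec record;
    n   bigint := 0;
BEGIN
    -- A FOR loop over a query fetches rows through a portal in
    -- batches, rather than materializing the whole result set the
    -- way spi_exec_query does.
    FOR rec IN SELECT 1 FROM root_decks LOOP
        n := n + 1;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```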

Hope this helps,
Sean

On Mar 30, 2005, at 9:33 AM, FERREIRA William (COFRAMI) wrote:

> I have a similar problem.
> I'm running PostgreSQL on a P4 with 1 GB of RAM and Windows 2000.
> I have a large database and a big processing job taking more than 4
> hours. During the first hour postgresql uses as much memory as virtual
> memory, which I find strange (it grows to more than 800MB).
>
> And during the execution I get:
> out of memory
> Failed on request of size 56
> And at the end, postgresql uses 300 MB of memory and more than 2GB of
> virtual memory.
>
> Can this problem be resolved by tuning postgresql settings?
> Here are my parameters:
> shared_buffers = 1000
> work_mem = 131072
> maintenance_work_mem = 131072
> max_stack_depth = 4096
> I tried work_mem with 512MB and with 2MB and I get the same error...
>
> I read the whole thread, but I don't know how to configure perl on
> Windows...
>
> Thanks in advance,
>
>          Will
>
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On behalf of Dan Sugalski
> Sent: Friday, March 25, 2005 19:34
> To: Greg Stark; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] plperl doesn't release memory
>
>
>
> At 6:58 PM -0500 3/24/05, Greg Stark wrote:
> >Dan Sugalski <dan@sidhe.org> writes:
> >
> >>  Anyway, if perl's using its own memory allocator you'll want to
> rebuild it
> >>  to not do that.
> >
> >You would need to do that if you wanted to use a debugging malloc.
> But there's
> >no particular reason to think that you should need to do this just to
> work
> >properly.
> >
> >Two mallocs can work fine alongside each other. They each call mmap
> or sbrk to
> >allocate new pages and they each manage the pages they've received.
> They won't
> >have any idea why the allocator seems to be skipping pages, but they
> should be
> >careful not to touch those pages.
>
> Perl will only use a single allocator, so there's not a huge issue
> there. It's either the external allocator or the internal one, which
> is for the best since you certainly don't want to be handing back
> memory to the wrong allocator. That way lies madness and unpleasant
> core files.
>
> The bigger issue is that perl's memory allocation system, the one you
> get if you build perl with usemymalloc set to yes, never releases
> memory back to the system -- once the internal allocator gets a chunk
> of memory from the system it's held for the duration of the process.
> This is the right answer in many circumstances, and the allocator's
> pretty nicely tuned to perl's normal allocation patterns, it's just
> not really the right thing in a persistent server situation where
> memory usage bounces up and down. It can happen with the system
> allocator too, though it's less likely.
>
> One of those engineering tradeoff things, and not much to be done
> about it really.
> --
>                                 Dan
>
> --------------------------------------it's like this-------------------
> Dan Sugalski                          even samurai
> dan@sidhe.org                         have teddy bears and even
>                                        teddy bears get drunk
>
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
>
> This mail has originated outside your organization,
> either from an external partner or the Global Internet.
>  Keep this in mind if you answer this message.


Re: plperl doesn't release memory

From
"GIROIRE Nicolas (COFRAMI)"
Date:

Hi,
I work with William.

In fact, we have already written the procedure in pl/pgsql, but it is too slow, and we rely on arrays, which are native in Perl.
The procedure is recursive and issues queries against PostgreSQL.
Judging by the evolution of memory use, it seems that no memory is ever freed. I think that comes from the fact that the procedure is recursive.

A run of the procedure takes 3 hours and ends with an out-of-memory error.

Can we force pl/perl to free the memory used by a variable?
Or can we configure PostgreSQL to accept this increase in load?
Or is there another idea?

When the procedure crashes, PostgreSQL uses 280 MB of memory and 2 GB of virtual memory, and on the server we get the message "Windows is trying to increase virtual memory".

Thanks in advance,

Nicolas Giroire.


-----Original message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On behalf of Sean Davis
Sent: Wednesday, 30 March 2005 17:01
To: FERREIRA William (COFRAMI)
Cc: Postgresql-General list
Subject: Re: [GENERAL] plperl doesn't release memory

As I understand it, a single execution of a pl/perl function will not
be affected by the perl memory issue, so I don't think that is your
problem.

My guess is that you are reading a large query result into perl, so the
whole thing is kept in memory (and you can't use more memory than you
have).  For a large query, this can be a huge amount of memory indeed.
You could use another language such as plpgsql, which supports
cursors/looping over query results, or, in plperl, you could use DBI
(not spi_exec_query) and loop over the query results.

Hope this helps,
Sean
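
For the plpgsql route Sean mentions, a FOR-IN loop runs over an implicit cursor, so rows are fetched incrementally rather than materialized all at once. A minimal sketch (the function, table, and column names are invented for illustration):

```sql
-- hypothetical example: sum a column without loading the result set into memory
CREATE OR REPLACE FUNCTION sum_amounts() RETURNS numeric AS $$
DECLARE
    r     record;
    total numeric := 0;
BEGIN
    -- the FOR loop uses an implicit cursor: one row at a time
    FOR r IN SELECT amount FROM bigtable LOOP
        total := total + r.amount;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;
```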

On Mar 30, 2005, at 9:33 AM, FERREIRA William (COFRAMI) wrote:

> i have a similar problem
> i'm running PostgreSQL on a PIV with 1GB and Windows 2000 NT
> i have a large database and a big processing job taking more than 4 hours.
> during the first hour postgresql uses as much memory as virtual memory,
> which i find strange (growing to more than 800MB)
>
> and during the execution i get :
> out of memory
> Failed on request of size 56
> and at the end, postgresql uses 300 MB of memory and more than 2GB of
> virtual memory
>
> can this problem be resolved by tuning postgresql settings ?
> here are my parameters :
> shared_buffers = 1000
> work_mem = 131072
> maintenance_work_mem = 131072
> max_stack_depth = 4096
> i tried work_mem with 512MB and 2MB and i get the same error...
>
> i read all the posts, but i don't know how to configure perl on
> Windows...
>
> thanks in advance
>        
>          Will
>

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
      joining column's datatypes do not match


Re: plperl doesn't release memory

From
Martijn van Oosterhout
Date:
On Thu, Mar 31, 2005 at 08:38:09AM +0200, GIROIRE Nicolas (COFRAMI) wrote:
> Can we oblige pl/perl to free memory for variable ?
> Or can we configure postgresql to accept this rise in load ?
> Or another idea ?


Perl uses reference counting, so as long as a string is visible
anywhere (remember closures), it stays around, and it disappears as
soon as it's unreferenced.

If you have large strings or arrays you don't need, you may need to
explicitly undef them. Using shift and pop on arrays doesn't copy the
element, for example. Make sure you're using my on all your variables
so they are cleared on exiting a function.

Also, this doesn't work on circular references, so if you build
structures like that you'll need to explicitly break the chain.
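
A small illustration of the explicit undef Martijn suggests; the variable names and data are invented for the example:

```perl
use strict;
use warnings;

# build a large structure, do the real work with it...
my @rows = map { { id => $_, payload => 'x' x 100 } } 1 .. 10_000;
my $count = scalar @rows;

# ...then release it explicitly: the elements are freed now,
# not at the end of the enclosing function
undef @rows;
```

After the undef, @rows is empty and its elements' memory is back in perl's allocator, even though the variable itself is still in scope.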

Hope this helps,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.

Attachment

Re: plperl doesn't release memory

From
"GIROIRE Nicolas (COFRAMI)"
Date:
Another solution would be to use pl/python, but I don't know anything about this language.

Is that a viable solution? Can pl/python replace pl/perl without losing performance, and does it support sorting an array? Are arrays as native in Python as they are in Perl?


Re: plperl doesn't release memory

From
Harald Fuchs
Date:
In article <20050331082524.GC30965@svana.org>,
Martijn van Oosterhout <kleptog@svana.org> writes:

> Perl uses reference counting, so as long as a string is visible
> anywhere (remember closures), it stays around and disappears as soon as
> it's unreferenced.

> If you have large strings or arrays you don't need, maybe you need to
> explicitly undef them. Using shift and pop on arrays doesn't copy the
> element for example. Make sure you're using my on all your variables so
> they are cleared on exiting a function.

> Also, this doesn't work on circular references, so if you build
> structures like that you'll need to explicitly break the chain.

... or use WeakRef.
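
The WeakRef idea can be sketched with Scalar::Util::weaken, which ships with the core distribution from 5.8 on. The Node class here is invented just to make the reclamation visible:

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

our $destroyed = 0;

package Node;
sub new     { return bless { other => undef }, shift }
sub DESTROY { $main::destroyed++ }

package main;

{
    my $parent = Node->new;
    my $child  = Node->new;
    $parent->{other} = $child;
    $child->{other}  = $parent;   # circular: neither refcount can reach zero
    weaken($child->{other});      # back-reference no longer keeps $parent alive
}
# with the weakened back-reference, both nodes are reclaimed at scope exit
```

Without the weaken() call, $destroyed would stay at 0: the cycle would keep both nodes alive for the life of the process.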

Re: plperl doesn't release memory

From
Dan Sugalski
Date:
At 8:38 AM +0200 3/31/05, GIROIRE Nicolas (COFRAMI) wrote:
>[...]
>Can we oblige pl/perl to free memory for variable ?
>Or can we configure postgresql to accept this rise in load ?
>Or another idea ?

Perl generally frees things up as soon as they're no longer used, but there are a few cases where you'll run into trouble.

The first is the circular reference problem -- because perl uses reference counting, circular data structures won't ever die on their own, something you'll need to watch out for.

You need to make sure the variables actually go out of scope. With a recursive procedure this is a definite worry, since perl cleans up when variables go out of scope, and that doesn't happen until a sub actually exits.

Perl also does some optimistic caching as a performance booster, which is generally a win but sometimes isn't. While perl cleans up the contents of variables, it leaves the allocated structure of a sub's arrays and hashes in place. (Though only once for each sub, so this doesn't get out of hand for recursive invocations of a subroutine.) Not normally a problem, but if you've got a 100M-element array the bits add up.

Finally, make sure you're using a relatively recent perl, one of the 5.8 versions. There were some bugs relating to closures that got patched up -- earlier versions had some reference count issues there so closures and their contents tended not to ever get cleaned up.

*Assuming* you're not actually leaking data with circular structures and the like, or throwing massive amounts of data into globals, there are a few things you can do to keep your memory usage in line.

1) Do *not* pass in large arrays or hashes as parameters. Use references to them instead, to avoid perl's parameter flattening.
2) Kill your data yourself when you're done with it by undef()ing the variables. (Do *not* assign in empty lists or empty strings. That isn't enough.) "undef @foo", for example, will completely clean out the @foo array and leave you with a variable that only takes up 56 bytes or so.
3) Try to keep the number of hash keys you use relatively low. (Not normally an issue, but once you start getting into millions of entries it adds up.) Perl makes individual small allocations for hash keys, and that tends to fragment the free list.
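
Point 1 can be sketched like this; the subs and data are invented for illustration:

```perl
use strict;
use warnings;

# flattening: every element of the array is copied onto perl's argument stack
sub sum_flat {
    my @copy  = @_;
    my $total = 0;
    $total += $_ for @copy;
    return $total;
}

# a single scalar (the reference) is passed; the array is never copied
sub sum_ref {
    my ($rows) = @_;
    my $total  = 0;
    $total += $_ for @$rows;
    return $total;
}

my @data = (1 .. 1000);
die unless sum_flat(@data) == sum_ref(\@data);   # same answer, very different memory traffic
```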

It might be worth a code review to see if you're doing things that are inefficient in general. That tends to be an issue when working with large data sets, since inefficiencies that don't matter with 100 (or 100K) records becomes an issue when you get into massive data sets.

You can also do some memory usage investigation with Devel::Size and some of the other Devel modules. (Though be warned that Devel::Size is pretty profligate itself with memory)



--
                                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
dan@sidhe.org                         have teddy bears and even
                                      teddy bears get drunk

Re: plperl doesn't release memory

From
"GIROIRE Nicolas (COFRAMI)"
Date:

The function is a little big, so I have put it in an attached file.

The function writes to a file at regular intervals in order to empty the variable theClob. The fact that the write happens every 100000 iterations has no particular significance; it is just to flush theClob before it fills up.

We have watched the memory usage, and it never decreases.
If you have questions about the code, let me know.
The code itself is correct: it works on a small example.

Best regards

        COFRAMI
        Nicolas Giroire
        on behalf of AIRBUS France
        for In Flight & Ground Information Services - Development
        Phone : +33 (0)5 67 19 98 74
              Mailto:nicolas.giroire@airbus.com

-----Original message-----
From: Sean Davis [mailto:sdavis2@mail.nih.gov]
Sent: Thursday, 31 March 2005 13:15
To: GIROIRE Nicolas (COFRAMI)
Subject: Re: [GENERAL] plperl doesn't release memory

On Mar 31, 2005, at 1:38 AM, GIROIRE Nicolas (COFRAMI) wrote:

> [...]

Perhaps, if the function isn't too big, you could post it so that we
might see what you are trying to do.  As other folks have mentioned, as
variables go out of scope, the memory is freed.  However, if they don't
go out of scope, they won't be freed until the end of the function.  My
concern, like that of others, is that your variables are not going out
of scope (or being undefined explicitly).  The only way to know is to
go through the code.

Sean


Attachment