Re: BUG #12183: Memory leak in long running sessions - Mailing list pgsql-bugs
From: Valentine Gogichashvili
Subject: Re: BUG #12183: Memory leak in long running sessions
Msg-id: CAP93muWt0LG=O3Y5-e3Xg1VT+73hOFJnt6B-b9mGUxxqgLQ8Nw@mail.gmail.com
In response to: Re: BUG #12183: Memory leak in long running sessions (Tom Lane <tgl@sss.pgh.pa.us>)
Responses:
  Re: BUG #12183: Memory leak in long running sessions
  Re: BUG #12183: Memory leak in long running sessions
List: pgsql-bugs
Hello Tom,

The strongest evidence for me is that the Committed_AS value grows to CommitLimit at some point, the system has no memory left to allocate, and sessions start throwing 'out of memory' exceptions. Killing those old long-running sessions reduces the Committed_AS value (we get up to 60GB free). So I have no explanation other than a leak.

We set up more statistics gathering, monitoring the /proc/PID/statm values ("resident" - "shared") for such processes. We see the value grow and can correlate the growth with calls to one of our stored procedures, which is quite heavy and processes a relatively large set of data. It also creates a 50MB temp file.

Here is an example of the memory growth:

timestamp    size(kb)
1418187057      79888
1418187087      80192
1418187117      85976
1418187147      86100
1418187177      88292
1418187207      88740
1418187237     380524
1418187267     719960
1418187297     555292
1418187327     560488
1418187357     563500
1418187387     569868
1418187417     573800
1418187447     576692
1418187477     582300
1418187507     584240
1418187537     586036
1418187567     586508
1418187597     586852
1418187628     587284
1418187658     589092
1418187688     589392
1418187718     601164
1418187748     602124
1418187778     605472
1418187808     606520
1418187838     608196
1418187868     609612
1418187898     612588
1418187928     614740
1418187958     616572
1418187988     630092
1418188018     630696
1418188048     632240
1418188078     634220
1418188108     636636
1418188138     637192
1418188168     638120
1418188198     640940
1418188229     642532
1418188259     645040
1418188289     827404
1418188319     662092
1418188349     662676
1418188379     662748
1418188409     663364
1418188439     663408
1418188469     663416
1418188499     663600
1418188529     664224
1418188559     664620
1418188589     666088
1418188619     667176
1418188649     669252
1418188680     669732
1418188710     670176
1418188740     670224
1418188770     676104
1418188800     684256
1418188830     804900
1418188860    1151124
1418188890    1151124
1418188920     949300
1418188950     950456
1418188980     950492
1418189010     950764
1418189040     951756
1418189070     953068

Can we collect some more information that would give
more hints on what is going on there?

Regards,

--
Valentine Gogichashvili

On Mon, Dec 8, 2014 at 8:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> valgog@gmail.com writes:
> > We experience a situations, that some of the sessions (in our case the
> > oldest ones) do not give the memory back.
>
> You have not shown any evidence of an actual problem. In particular,
> if you are looking at ps RSS output and claiming that there's a leak,
> you are probably simply wrong. The output shown here looks like normal
> behavior of the RSS stat: it does not count shared memory pages for a
> particular process until that process has touched the individual pages.
> So the usual behavior of long-lived PG processes is that the reported
> RSS starts small and gradually grows until it includes all of shared
> memory ... and that looks like what you've got here, especially since
> the larger RSS numbers are pretty similar to the VSZ numbers which are
> nearly common across all the backends.
>
> If you had some individual processes with RSS/VSZ greatly exceeding
> your shared memory allocation, then I'd believe you had a leak problem.
>
> regards, tom lane
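[Editor's note: the Committed_AS/CommitLimit comparison described above can be reproduced from /proc/meminfo. The following is a minimal sketch, not code from the thread; the function names `parse_meminfo` and `commit_headroom_kb` are illustrative.]

```python
import re

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of {field: kB}.

    Only lines of the form 'Name:   12345 kB' are kept.
    """
    values = {}
    for line in text.splitlines():
        m = re.match(r"(\w+):\s+(\d+)\s*kB", line)
        if m:
            values[m.group(1)] = int(m.group(2))
    return values

def commit_headroom_kb(text):
    """Return CommitLimit - Committed_AS in kB: the remaining commit
    headroom. When this approaches zero, allocations start to fail
    with 'out of memory', as described in the report above."""
    info = parse_meminfo(text)
    return info["CommitLimit"] - info["Committed_AS"]

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    print("Committed_AS:", info.get("Committed_AS"), "kB")
    print("CommitLimit: ", info.get("CommitLimit"), "kB")
```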
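[Editor's note: the "resident" - "shared" figure tracked in the table above can be sampled with a short script like this. A sketch only, assuming Linux's /proc/PID/statm layout (seven space-separated fields, in pages); `private_rss_kb` is an illustrative name.]

```python
import os

PAGE_KB = os.sysconf("SC_PAGE_SIZE") // 1024  # page size in kB

def private_rss_kb(pid):
    """Return (resident - shared) for a process, in kB.

    /proc/PID/statm fields are: size resident shared text lib data dt,
    all counted in pages. resident - shared approximates the memory
    private to the backend, excluding the shared-buffer pages that
    inflate plain RSS as the process touches them over its lifetime.
    """
    with open("/proc/%d/statm" % pid) as f:
        fields = f.read().split()
    resident, shared = int(fields[1]), int(fields[2])
    return (resident - shared) * PAGE_KB

if __name__ == "__main__":
    import time
    # Print one "timestamp size(kb)" sample for this process,
    # in the same shape as the table above.
    print(int(time.time()), private_rss_kb(os.getpid()))
```

Sampling this every 30 seconds per backend PID reproduces the timestamp/size series shown in the message.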