Re: Is this way of testing a bad idea? - Mailing list pgsql-performance

From Tom Lane
Subject Re: Is this way of testing a bad idea?
Date
Msg-id 3115.1156423876@sss.pgh.pa.us
In response to Is this way of testing a bad idea?  ("Fredrik Israelsson" <fredrik.israelsson@eu.biotage.com>)
List pgsql-performance
"Fredrik Israelsson" <fredrik.israelsson@eu.biotage.com> writes:
> Monitoring the processes using top reveals that the total amount of
> memory used slowly increases during the test. When reaching insert
> number 40000, or somewhere around that, memory is exhausted, and the
> system begins to swap. Each of the postmaster processes seems to use a
> constant amount of memory, but the total memory usage increases all the
> same.

That statement is basically nonsense.  If there is a memory leak then
you should be able to pin it on some specific process.

What's your test case exactly, and what's your basis for asserting that
the system starts to swap?  We've seen people fooled by the fact that
some versions of ps report a process's total memory size as including
whatever pages of Postgres' shared memory area the process has actually
chanced to touch.  So as a backend randomly happens to use different
shared buffers its reported memory size grows ... but there's no actual
leak, and no reason why the system would start to swap.  (Unless maybe
you've set an unreasonably high shared_buffers setting?)
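[Editor's note: a quick way to check the shared-memory effect described above, on a modern Linux system with a /proc filesystem. This sketch is not from the original thread; the `Pss` field it relies on requires a reasonably recent kernel. Because ps/top count every shared-buffer page a backend has touched toward its resident size, summing RSS across backends double-counts shared memory; `Pss` splits each shared page evenly among the processes mapping it, giving a fairer per-process figure.]

```shell
# For each postgres process, sum the Pss (proportional set size) entries in
# its smaps file. Unlike RSS, Pss divides shared pages among the processes
# that map them, so a backend that merely touches many shared buffers does
# not appear to "grow".
for pid in $(pgrep postgres || true); do
  pss=$(awk '/^Pss:/ {s += $2} END {print s+0}' "/proc/$pid/smaps" 2>/dev/null)
  echo "pid $pid Pss: ${pss:-?} kB"
done
```

If the per-process Pss figures stay flat while reported RSS climbs, there is no leak; the backends are just touching more of the shared buffer pool.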

Another theory is that you're watching free memory go to zero because
the kernel is filling free memory with copies of disk pages.  This is
not a leak either.  Zero free memory is the normal, expected state of
a Unix system that's been up for any length of time.
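[Editor's note: a sketch illustrating the point above, assuming Linux and /proc/meminfo; not part of the original message. Near-zero "free" memory is normal because the kernel fills idle RAM with page cache, which it drops on demand. A rough estimate of memory that is actually reclaimable is MemFree plus Buffers plus Cached; sustained swap activity (the si/so columns of vmstat), not low MemFree, is the real symptom of memory pressure.]

```shell
# Sum the fields of /proc/meminfo that represent memory the kernel can
# hand back immediately: truly free pages plus buffer and page cache.
awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
     END {printf "approx. available: %d kB\n", sum}' /proc/meminfo
```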

            regards, tom lane
