Thread: vm/swap used until exhausted

vm/swap used until exhausted

From: Zane

Different memory usage 7.4.3 vs 8.0.0beta1

client does:

begin
  bulk inserts into single table via PQexecParams (1.2 million records)
commit

under 7.4.3 memory usage is static
under 8.0.0beta1 the server used increasing memory until vm/swap was depleted
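
For illustration, a minimal libpq client along these lines (the table
and column names are placeholders, not the actual schema or test
program) looks roughly like this:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));

    for (long i = 0; i < 1200000; i++)
    {
        char        buf[32];
        const char *values[1];
        PGresult   *res;

        snprintf(buf, sizeof(buf), "%ld", i);
        values[0] = buf;

        /* one parameterized INSERT per record, all inside one transaction */
        res = PQexecParams(conn,
                           "INSERT INTO t (n) VALUES ($1)",
                           1, NULL, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        PQclear(res);
    }

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}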

7.4.3
last pid:   974;  load averages:  1.44,  1.12,  0.68    up 0+00:24:08  19:05:08
65 processes:  3 running, 62 sleeping
CPU states: 88.0% user,  0.0% nice,  9.7% system,  2.3% interrupt,  0.0% idle
Mem: 96M Active, 16M Inact, 31M Wired, 7008K Cache, 28M Buf, 28M Free
Swap: 357M Total, 37M Used, 319M Free, 10% Inuse

  PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU    CPU COMMAND
  923 pgsql    115    0 16712K 11792K RUN      8:06 65.82% 65.82% postgres
  921 pgsql     96    0  2292K  1320K RUN      0:01  0.00%  0.00% top
  904 pgsql      4    0 16016K  1204K select   0:00  0.00%  0.00% postgres
  877 pgsql      8    0   916K     0K wait     0:00  0.00%  0.00% <sh>
  906 pgsql      4    0  6808K    12K select   0:00  0.00%  0.00% postgres
  905 pgsql      4    0  7764K    12K select   0:00  0.00%  0.00% postgres

8.0.0 beta1

last pid: 11448;  load averages:  1.00,  0.35,  0.23    up 0+04:57:28  23:38:28
64 processes:  2 running, 62 sleeping
CPU states: 77.0% user,  0.0% nice, 15.6% system,  7.4% interrupt,  0.0% idle
Mem: 115M Active, 15M Inact, 42M Wired, 7540K Cache, 28M Buf, 564K Free
Swap: 357M Total, 57M Used, 300M Free, 15% Inuse, 932K Out

  PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU    CPU COMMAND
11448 pgsql    130    0 83564K 78732K RUN      0:58 72.08% 70.56% postgres
11438 pgsql     96    0 13960K 10156K select   0:00  0.00%  0.00% postgres
11436 pgsql     96    0 13952K     0K select   0:00  0.00%  0.00% <postgres>
  877 pgsql      5    0   920K     0K ttyin    0:00  0.00%  0.00% <sh>
11440 pgsql      4    0  4552K     0K select   0:00  0.00%  0.00% <postgres>
11439 pgsql     96    0  5480K  2028K select   0:00  0.00%  0.00% postgres

Re: vm/swap used until exhausted

From: Tom Lane

Zane <Zane@mail4z.com> writes:
> client does:
> begin
>   bulk inserts into single table via PQexecParams (1.2 million records)
> commit

Could we see a concrete test case?  I really don't have time to guess
about what contributing factors might be involved ...

            regards, tom lane

Re: vm/swap used until exhausted

From: Tom Lane

Zane <Zane@mail4z.com> writes:
> Different memory usage 7.4.3 vs 8.0.0beta1
> client does:
> begin
>   bulk inserts into single table via PQexecParams (1.2 million records)
> commit
> under 7.4.3 memory usage is static
> under 8.0.0beta1 the server used increasing memory until vm/swap was depleted

I've looked into this, and the source of the problem is the new
ResourceOwner mechanism we added to manage locks etc. held by
subtransactions.  Each of the INSERT commands takes out another
lock on the target table.  In prior releases this had no effect
except to increment a lock count in shared memory.  In CVS tip,
each lock request is also recorded in a ResourceOwner object,
and it's the accumulation of those that is responsible for the
memory leak.
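
In outline, the per-request bookkeeping behaves something like the toy
structure below; this is only an illustration of the growth pattern,
not the actual ResourceOwner code:

#include <stdlib.h>

/* Toy illustration only: every lock request appends another entry,
 * even when the same lock is already held, so 1.2 million INSERTs
 * against one table end up remembering 1.2 million references. */
typedef struct ToyOwner
{
    int    *lockRefs;           /* one slot per remembered lock request */
    int     nLockRefs;
    int     capacity;
} ToyOwner;

static void
toy_remember_lock(ToyOwner *owner, int lockId)
{
    if (owner->nLockRefs >= owner->capacity)
    {
        owner->capacity = owner->capacity ? owner->capacity * 2 : 64;
        owner->lockRefs = realloc(owner->lockRefs,
                                  owner->capacity * sizeof(int));
    }
    /* grows without bound when the same lockId arrives repeatedly */
    owner->lockRefs[owner->nLockRefs++] = lockId;
}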

To deal with this, I am thinking about creating a new hash table
(local in each backend) that records locks already held, the
ResourceOwner(s) they are held on behalf of, and a lock count
for each one.  Increasing the lock count for a lock already held
would thus not need any additional memory.  Another nice property
is that we could have the shared-memory lock table register only
one lock count per backend; increasing the local lock count for
an already-obtained lock wouldn't require touching shared memory
and thus not require obtaining the LockMgrLock.  (This would be
comparable to the existing mechanism for private vs. shared reference
counts for buffers.)  That might be enough of a win to buy back
the extra time spent maintaining the additional hash table.
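
A rough sketch of what such a per-backend table might hold; the struct
and function names here are invented for illustration, not the
eventual implementation:

/* Invented names, for illustration only. */
typedef struct LocalLockOwnerCount
{
    void   *owner;              /* the ResourceOwner holding the lock */
    int     count;              /* times held on behalf of that owner */
} LocalLockOwnerCount;

typedef struct LocalLockEntry
{
    unsigned int lockKey;       /* identifies the lock; hash key */
    int          nTotal;        /* total times held by this backend */
    int          nOwners;
    LocalLockOwnerCount owners[8];  /* per-ResourceOwner lock counts */
} LocalLockEntry;

/* Re-acquiring a lock this backend already holds only bumps counters:
 * no new per-request memory, and no trip to the shared lock table or
 * the LockMgrLock.  Only the first acquisition would register a single
 * per-backend count in shared memory. */
static void
local_lock_acquire(LocalLockEntry *entry, void *currentOwner)
{
    for (int i = 0; i < entry->nOwners; i++)
    {
        if (entry->owners[i].owner == currentOwner)
        {
            entry->owners[i].count++;
            entry->nTotal++;
            return;
        }
    }
    /* first time this ResourceOwner takes the lock: one new slot
     * (overflow of the fixed array is ignored in this sketch) */
    entry->owners[entry->nOwners].owner = currentOwner;
    entry->owners[entry->nOwners].count = 1;
    entry->nOwners++;
    entry->nTotal++;
}

The split mirrors the buffer reference-count analogy above: cheap local
bookkeeping for repeat acquisitions, shared memory touched only once
per backend.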

This is a bigger change than I'd really like to be making in beta,
but I don't see any other good solution to the memory-leak problem.
Anyone have a better idea?
        regards, tom lane