Re: Out of Memory errors while running pg_dump - Mailing list pgsql-general

From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: Out of Memory errors while running pg_dump
Msg-id: 10447.1202169425@sss.pgh.pa.us
In response to: Re: Out of Memory errors while running pg_dump (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
I wrote:
> ... I'm wondering a bit why
> CacheMemoryContext has so much free space in it, but even if it had none
> you'd still be at risk.

I tried to reproduce this by creating a whole lot of trivial tables and
then pg_dump'ing them:

create table t0 (f1 int primary key); insert into t0 values(0);
create table t1 (f1 int primary key); insert into t1 values(1);
create table t2 (f1 int primary key); insert into t2 values(2);
create table t3 (f1 int primary key); insert into t3 values(3);
create table t4 (f1 int primary key); insert into t4 values(4);
create table t5 (f1 int primary key); insert into t5 values(5);
...
(about 17000 tables before I got bored)
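
A sketch that generates an equivalent batch in one command, for anyone who
would rather not write the statements out by hand (assumes a server with
DO and format(), i.e. 9.1 or later):

do $$
begin
  -- create t0 .. t16999: one int primary-key column, one row apiece
  for i in 0..16999 loop
    execute format('create table t%s (f1 int primary key)', i);
    execute format('insert into t%s values (%s)', i, i);
  end loop;
end
$$;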

I looked at the backend memory stats at the end of the pg_dump run
and found

CacheMemoryContext: 50624864 total in 29 blocks; 608160 free (2 chunks); 50016704 used

which compares awfully favorably to your results of

CacheMemoryContext: 897715768 total in 129 blocks; 457826000 free (2305222 chunks); 439889768 used
CacheMemoryContext: 788990232 total in 147 blocks; 192993824 free (1195074 chunks); 595996408 used
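
(Dumps like these typically come from attaching gdb to the backend and doing
"call MemoryContextStats(TopMemoryContext)"; on 14 and later the same figures
are exposed at the SQL level, e.g. a sketch for the current session:

select name, total_bytes, total_nblocks, free_bytes, free_chunks, used_bytes
from pg_backend_memory_contexts
where name = 'CacheMemoryContext';
)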

Have you really got 200000+ tables?  Even if you do, the amount of wasted
memory in your runs seems really high.  What PG version is this exactly?
Can you show us the exact schemas of some representative tables?
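A quick way to check the table count, counting ordinary tables in pg_class
(system catalogs included, but those are only a few dozen):

select count(*) from pg_class where relkind = 'r';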

            regards, tom lane
