Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects - Mailing list pgsql-admin

From luckyjackgao
Subject Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date
Msg-id 1381476299799-5774252.post@n5.nabble.com
In response to PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects  (Sergey Klochkov <klochkov@iqbuzz.ru>)
List pgsql-admin
Hello

I have run into similar problems with PostgreSQL crashing when it deals with too much data. PostgreSQL tries to finish its work as quickly as it can and will use as much of the machine's resources as it can get.

I later used cgroups to cap its resource usage, so that it cannot consume too much memory too quickly, and PostgreSQL has worked fine since.

I edited the following files:

/etc/cgconfig.conf

mount {
    cpuset    = /cgroup/cpuset;
    cpu    = /cgroup/cpu;
    cpuacct    = /cgroup/cpuacct;
    memory    = /cgroup/memory;
    devices    = /cgroup/devices;
    freezer    = /cgroup/freezer;
    net_cls    = /cgroup/net_cls;
    blkio    = /cgroup/blkio;
}

group test1 {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }

        admin {
            uid = root;
            gid = root;
        }
    }

    memory {
        memory.limit_in_bytes = 300M;
    }
}
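
Once the cgconfig service has loaded this file (the commands for that are below), the group shows up as a directory under the memory controller mount point, and the configured limit can be read back from it. A quick sanity check, assuming the mount points above:

ls /cgroup/memory/test1/
cat /cgroup/memory/test1/memory.limit_in_bytes   # should print 314572800 (300M = 300*1024*1024 bytes)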

/etc/cgrules.conf

postgres      memory           test1/
# End of file
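
Note that cgred only classifies processes started after the rule is in place. An already-running postmaster can be moved into the group by hand with cgclassify from libcgroup; a minimal sketch, where the postmaster.pid location is just an assumption about the install layout:

# The first line of postmaster.pid holds the postmaster's PID
cgclassify -g memory:test1 $(head -1 /var/lib/pgsql/data/postmaster.pid)

Backends forked after that inherit the group automatically.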
Then enable both services at boot, restart them, and log in again as postgres:

chkconfig cgconfig on
chkconfig cgred on
service cgconfig restart
service cgred restart

After that I can see that PostgreSQL works under the 300M memory limit.
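
To confirm that the limit is actually enforced while pg_dump runs, the memory controller exposes the group's membership and usage, for example:

cat /cgroup/memory/test1/tasks                       # PIDs currently in the group
cat /cgroup/memory/test1/memory.usage_in_bytes       # current usage in bytes
cat /cgroup/memory/test1/memory.max_usage_in_bytes   # high-water mark in bytes

When the group reaches its limit, the kernel reclaims or swaps the group's pages rather than letting it exhaust the whole machine's memory.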

Best Regards
jian gao





