Thread: "Out of memory" errors..

"Out of memory" errors..

From: "Lim Berger"
Hi

I am getting the following error while running queries such as "vacuum
analyze TABLE", even on small tables with a piddly 35,000 rows!

The error message:
--
ERROR:  out of memory
DETAIL:  Failed on request of size 67108860.
--

My postgresql.conf is below. I am on a Dual Core server with 4GB of
RAM, which runs MySQL as well (key_buffer for which is at around
800M). So I have allocated shared_buffers for postgresql based on that
number. The server also runs Apache and other stuff, but I have never
had any problem running the vacuum equivalent called "REPAIR TABLE" on
MySQL.

Thanks in advance for any inputs!



------POSTGRESQL.CONF-------
#--- Some tuning ~
#--- http://www.opennms.org/index.php/Performance_tuning
max_connections = 250
shared_buffers = 21000
effective_cache_size = 21000
max_fsm_relations = 1500
max_fsm_pages = 80000
sort_mem = 16348
work_mem = 16348
vacuum_mem = 16348
temp_buffers = 4096
authentication_timeout = 10s
ssl = off

autovacuum = on
vacuum_cost_delay = 50
stats_start_collector = on
stats_row_level = on

#--- For COPY performance
wal_buffers=64
checkpoint_segments=64
checkpoint_timeout=900
fsync = on
maintenance_work_mem = 64MB
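
A quick way to confirm what a running server actually adopted from this
file -- a sketch, assuming a database named "mydb":

    psql mydb -c "SHOW shared_buffers;"
    psql mydb -c "SHOW maintenance_work_mem;"

Note that shared_buffers above is a bare number, which means 8 kB
pages: 21000 pages is roughly 164 MB.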

Re: "Out of memory" errors..

From: Gregory Stark
"Lim Berger" <straightfwd007@gmail.com> writes:

> Hi
>
> I am getting the following error while running queries such as "vacuum
> analyze TABLE", even on small tables with a piddly 35,000 rows!
>
> The error message:
> --
> ERROR:  out of memory
> DETAIL:  Failed on request of size 67108860.
> --
>
> My postgresql.conf is below. I am on a Dual Core server with 4GB of
> RAM, which runs MySQL as well (key_buffer for which is at around
> 800M).

What version of Postgres is this?


--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/13/07, Gregory Stark <stark@enterprisedb.com> wrote:
> "Lim Berger" <straightfwd007@gmail.com> writes:
>
> > Hi
> >
> > I am getting the following error while running queries such as "vacuum
> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >
> > The error message:
> > --
> > ERROR:  out of memory
> > DETAIL:  Failed on request of size 67108860.
> > --
> >
> > My postgresql.conf is below. I am on a Dual Core server with 4GB of
> > RAM, which runs MySQL as well (key_buffer for which is at around
> > 800M).
>
> What version of Postgres is this?
>



=# select version();
                                                version
----------------------------------------
 PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
3.4.6 20060404 (Red Hat 3.4.6-3)
(1 row)


Thanks for any tips!

Re: "Out of memory" errors..

From: Gregory Stark
"Lim Berger" <straightfwd007@gmail.com> writes:

> On 8/13/07, Gregory Stark <stark@enterprisedb.com> wrote:
>> "Lim Berger" <straightfwd007@gmail.com> writes:
>>
>> > Hi
>> >
>> > I am getting the following error while running queries such as "vacuum
>> > analyze TABLE", even on small tables with a piddly 35,000 rows!
>> >
>> > The error message:
>> > --
>> > ERROR:  out of memory
>> > DETAIL:  Failed on request of size 67108860.
>> > --
>> >
>> > My postgresql.conf is below. I am on a Dual Core server with 4GB of
>> > RAM, which runs MySQL as well (key_buffer for which is at around
>> > 800M).
>>
>> What version of Postgres is this?
>
> =# select version();
>                                                 version
> ----------------------------------------
>  PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
> 3.4.6 20060404 (Red Hat 3.4.6-3)
> (1 row)

Hm, this is quite odd.

Could you give more information? You're getting this on lots of different
tables? Could you give more examples? And do you get it on anything other than
vacuum analyze? What does the schema look like?

Do you have any hash indexes? (there was a bug fixed in 8.2.4 with them)
Do you have anything else unusual like tsearch2 or custom C modules loaded?

Has anything unusual happened to this machine such as a server crash or power
failure? Is anything else failing? Can you run a good memory tester like
memtest86? Could you check your dmesg log to see if there are any system
problems?

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

Re: "Out of memory" errors..

From: Tom Lane
"Lim Berger" <straightfwd007@gmail.com> writes:
> ERROR:  out of memory
> DETAIL:  Failed on request of size 67108860.

Apparently, this number:

> maintenance_work_mem = 64MB

is more than your system can actually support.  Which is a bit odd for
any modern-day machine.  I suspect the postmaster is being started with
an unduly small ulimit.

            regards, tom lane

Re: "Out of memory" errors..

From: "Lim Berger"
My responses are below yours. Thanks so much for bearing with me.



On 8/13/07, Gregory Stark <stark@enterprisedb.com> wrote:
> "Lim Berger" <straightfwd007@gmail.com> writes:
>
> > On 8/13/07, Gregory Stark <stark@enterprisedb.com> wrote:
> >> "Lim Berger" <straightfwd007@gmail.com> writes:
> >>
> >> > Hi
> >> >
> >> > I am getting the following error while running queries such as "vacuum
> >> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >> >
> >> > The error message:
> >> > --
> >> > ERROR:  out of memory
> >> > DETAIL:  Failed on request of size 67108860.
> >> > --
> >> >
> >> > My postgresql.conf is below. I am on a Dual Core server with 4GB of
> >> > RAM, which runs MySQL as well (key_buffer for which is at around
> >> > 800M).
> >>
> >> What version of Postgres is this?
> >
> > =# select version();
> >                                                 version
> > ----------------------------------------
> >  PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
> > 3.4.6 20060404 (Red Hat 3.4.6-3)
> > (1 row)
>
> Hm, this is quite odd.
>



> Could you give more information? You're getting this
> on lots of different tables?


Yes, all of them. The example I gave is the smallest table with about
35,000 rows.

By the way, before anything else: the tuning notes at
http://www.powerpostgresql.com/PerfList/ suggest that for autovacuum I
need to recompile and use settings like "-D -v 400 -V 0.4 -a 100 -A
0.3".

Where do I do this? I don't have any such setting in my
postgresql.conf! How can I find out whether autovacuum is set up and
functional, and when it actually kicks in and vacuums?
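
(Aside: the "-D -v 400 ..." flags belong to the old contrib
pg_autovacuum daemon from 7.4/8.0; in 8.2 autovacuum is built in and
driven entirely by postgresql.conf. A minimal check, assuming a
database named "mydb" -- the last_autovacuum/last_autoanalyze columns
should be available in 8.2's stats views:

    psql mydb -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"
    psql mydb -c "SELECT relname, last_autovacuum, last_autoanalyze FROM pg_stat_user_tables;"

Autovacuum on 8.2 needs autovacuum = on plus stats_start_collector = on
and stats_row_level = on, all of which appear in the conf above.)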


> Could you give more examples? And do you get it on anything
> other than vacuum analyze?


More examples of "vacuum analyze" on other tables? Every table gives
the same error. ANALYZE alone works OK. SELECTs work OK. Multiple
UPDATEs in a transaction block work OK. So does INSERT. What other
examples could I furnish? Please help me help you help me :)


> What does the schema look like?
>


You mean the full table definitions? They are below. It's a simple
schema, because I am sharing the workload between MySQL and PGSQL for
now, slowly switching to PGSQL.


                       List of relations
 Schema |          Name           |   Type   |      Owner
--------+-------------------------+----------+-----------------
 public | program                 | table    | MYUSERID
 public | program_id_seq          | sequence | MYUSERID
 public | program_subscribers     | table    | MYUSERID
 public | mini                    | table    | MYUSERID
 public | users                   | table    | MYUSERID
(5 rows)


PROGRAM table has 35,000 rows
PROGRAM_SUBSCRIBERS has 10,000
MINI has about 3 million
USERS has about 200.

On this small DB, I am not sure why memory should run out. Just one
thing -- the "MINI" table gets **huge** concurrent usage, about 10,000
accesses per minute, and it has only three columns, so it is a bit of
a caching table.

It does have an index, though, which our queries use very simply and
efficiently: all of this concurrent use is "=" lookups on the indexed
column. Very small, fast queries.

In MySQL this used to be very fast thanks to its "query cache", but
there is no equivalent inside PGSQL (nothing beyond the usual
filesystem caching, anyway), so I am not sure whether PGSQL is holding
up to the concurrent usage.

To test this admittedly naive hypothesis that PGSQL was buckling under
huge concurrent pressure, I restarted the postgresql process, and the
memory problem is still there -- so I doubt it is caused by the heavy
concurrent access.

PGSQL (seems to) return the results very fast as well.



> Do you have any hash indexes? (there was a bug fixed
> in 8.2.4 with them)


Nope. Very simple tables actually, all with one BTREE index each. The
complex stuff is still in MySQL and totally separate from this. There
are five tables in MySQL with compound indexes on 2 to 5 columns, but
that is beside the point for now.


> Do you have anything else unusual like tsearch2 or
> custom C modules loaded?


Not that I know of, unless they are included by default. How can I
check? If they are not smooshed in, then no, I don't have them
installed.


> Has anything unusual happened to this machine such as a
> server crash or power failure?


Hmm, not really. But I did reboot it last week when it buckled under a
MySQL REPAIR TABLE issue. I wonder how that could be related to PGSQL,
though? Same machine and all. It wasn't a very dramatic crash or
anything.



> Is anything else failing? Can you run a good memory tester like
> memtest86? Could you check your dmesg log to see if there are any system
> problems?



Wow, this is all a bit technical for me. I went to the memtest86 site
and downloaded their binary. Now I will try the untarring and
make/make install steps, but their site is very sparse on info.
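
(memtest86 is a stand-alone boot-time tester rather than a normal
program, so no make/make install is actually needed. The usual route is
to write one of its images to boot media and reboot into it; a sketch,
with a hypothetical image name from their tarball:

    dd if=memtest86.img of=/dev/fd0    # floppy image; or burn the ISO instead

and then boot the machine from that media.)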

But here is the memory info from the server (/proc/meminfo):

---
MemTotal:      4148844 kB
MemFree:        793052 kB
Buffers:        130280 kB
Cached:        2333716 kB
SwapCached:       7304 kB
Active:        2551448 kB
Inactive:       711836 kB
HighTotal:     3276160 kB
HighFree:       237184 kB
LowTotal:       872684 kB
LowFree:        555868 kB
SwapTotal:     2096440 kB
SwapFree:      2084700 kB
Dirty:            1996 kB
Writeback:           0 kB
Mapped:         836816 kB
Slab:            65140 kB
CommitLimit:   4170860 kB
Committed_AS:  2531972 kB
PageTables:       9284 kB
VmallocTotal:   106488 kB
VmallocUsed:      3284 kB
VmallocChunk:   102504 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB
---


The dmesg output shows me this blabber, which I have no idea where to
begin decoding. It'd be great if you could point me in the right
direction?
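
(A reasonable first pass over dmesg is to grep for allocation failures
and the OOM killer:

    dmesg | grep -i -E 'out of memory|oom|allocation failure'

The Mem-info block below is the dump the kernel typically prints
alongside such an event.)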


-----BEGIN----
Mem-info:
DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1
cpu 0 cold: low 0, high 2, batch 1
cpu 1 hot: low 2, high 6, batch 1
cpu 1 cold: low 0, high 2, batch 1
cpu 2 hot: low 2, high 6, batch 1
cpu 2 cold: low 0, high 2, batch 1
cpu 3 hot: low 2, high 6, batch 1
cpu 3 cold: low 0, high 2, batch 1

Normal per-cpu:
cpu 0 hot: low 32, high 96, batch 16
cpu 0 cold: low 0, high 32, batch 16
cpu 1 hot: low 32, high 96, batch 16
cpu 1 cold: low 0, high 32, batch 16
cpu 2 hot: low 32, high 96, batch 16
cpu 2 cold: low 0, high 32, batch 16
cpu 3 hot: low 32, high 96, batch 16
cpu 3 cold: low 0, high 32, batch 16

HighMem per-cpu:
cpu 0 hot: low 32, high 96, batch 16
cpu 0 cold: low 0, high 32, batch 16
cpu 1 hot: low 32, high 96, batch 16
cpu 1 cold: low 0, high 32, batch 16
cpu 2 hot: low 32, high 96, batch 16
cpu 2 cold: low 0, high 32, batch 16
cpu 3 hot: low 32, high 96, batch 16
cpu 3 cold: low 0, high 32, batch 16

Free pages:      417404kB (404864kB HighMem)
Active:458596 inactive:451436 dirty:145735 writeback:48813 unstable:0
free:104351 slab:15369 mapped:411925 pagetables:2938
DMA free:12540kB min:16kB low:32kB high:48kB active:0kB inactive:0kB
present:16384kB pages_scanned:297 all_unreclaimable? yes
protections[]: 0 0 0
Normal free:0kB min:928kB low:1856kB high:2784kB active:271116kB
inactive:511380kB present:901120kB pages_scanned:1749
all_unreclaimable? no
protections[]: 0 0 0
HighMem free:404864kB min:512kB low:1024kB high:1536kB
active:1563284kB inactive:1293200kB present:4063232kB pages_scanned:0
all_unreclaimable? no
protections[]: 0 0 0
DMA: 1*4kB 3*8kB 4*16kB 3*32kB 3*64kB 1*128kB 1*256kB 1*512kB 1*1024kB
1*2048kB 2*4096kB = 12540kB
Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB
0*1024kB 0*2048kB 0*4096kB = 0kB
HighMem: 14382*4kB 24921*8kB 5608*16kB 1802*32kB 1*64kB 0*128kB
0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 404864kB
Swap cache: add 190137, delete 125938, find 44796/57030, race 0+17
0 bounce buffer pages
Free swap:       1817480kB
1245184 pages of RAM
819040 pages of HIGHMEM
207973 reserved pages
469280 pages shared
65337 pages swap cached
-----END-----



Thanks for bearing with me!
LB

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/13/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Lim Berger" <straightfwd007@gmail.com> writes:
> > ERROR:  out of memory
> > DETAIL:  Failed on request of size 67108860.
>
> Apparently, this number:
>
> > maintenance_work_mem = 64MB
>
> is more than your system can actually support.  Which is a bit odd for
> any modern-day machine.  I suspect the postmaster is being started with
> an unduly small ulimit.
>
>                         regards, tom lane



Thanks Tom. Where can I check the "ulimit"? Is it in the config?

I did a "ulimit -a" (found the command through Google, in an archived
posting by you, in fact! -- http://snipr.com/pg_ulimit ) and got the
following output:


~ > ulimit -a
core file size          (blocks, -c) 1000000
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14335
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


Any idea how to configure this? That is where that archived discussion
ends too -- the poster never seemed to get a response to his question
about how to tweak this.

Many thanks!

Re: "Out of memory" errors..

From: Bill Moran
In response to "Lim Berger" <straightfwd007@gmail.com>:

> On 8/13/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > "Lim Berger" <straightfwd007@gmail.com> writes:
> > > ERROR:  out of memory
> > > DETAIL:  Failed on request of size 67108860.
> >
> > Apparently, this number:
> >
> > > maintenance_work_mem = 64MB
> >
> > is more than your system can actually support.  Which is a bit odd for
> > any modern-day machine.  I suspect the postmaster is being started with
> > an unduly small ulimit.
> >
> >                         regards, tom lane
>
> Thanks Tom. Where can I check the "ulimit"? Is it in the config?
>
> I did a "ulimit -a" (found the command through Google, in an archived
> posting by you, in fact! -- http://snipr.com/pg_ulimit ) and got the
> following output:
>
>
> ~ > ulimit -a
> core file size          (blocks, -c) 1000000
> data seg size           (kbytes, -d) unlimited
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1024
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 4096
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 14335
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
>
>
> Any idea how to configure this? That is where that archived discussion
> ends too -- the poster never seemed to get a response to his question
> about how to tweak this.

Make sure you run the command as the same user that PG runs as (usually
"postgres", but sometimes "pgsql"); ulimits can differ from one user to
another.

How to change the limits differs from one OS to another, and (maybe) even
from distro to distro.
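
For example, from a root shell -- assuming the service account is named
"postgres" -- a quick way to see the limits a login shell for that user
ends up with:

    su - postgres -c "ulimit -a"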

--
Bill Moran
http://www.potentialtech.com

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/13/07, Bill Moran <wmoran@potentialtech.com> wrote:
> In response to "Lim Berger" <straightfwd007@gmail.com>:
>
> > On 8/13/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > "Lim Berger" <straightfwd007@gmail.com> writes:
> > > > ERROR:  out of memory
> > > > DETAIL:  Failed on request of size 67108860.
> > >
> > > Apparently, this number:
> > >
> > > > maintenance_work_mem = 64MB
> > >
> > > is more than your system can actually support.  Which is a bit odd for
> > > any modern-day machine.  I suspect the postmaster is being started with
> > > an unduly small ulimit.
> > >
> > >                         regards, tom lane
> >
> > Thanks Tom. Where can I check the "ulimit"? Is it in the config?
> >
> > I did a "ulimit -a" (found the command through Google, in an archived
> > posting by you, in fact! -- http://snipr.com/pg_ulimit ) and got the
> > following output:
> >
> >
> > ~ > ulimit -a
> > core file size          (blocks, -c) 1000000
> > data seg size           (kbytes, -d) unlimited
> > file size               (blocks, -f) unlimited
> > pending signals                 (-i) 1024
> > max locked memory       (kbytes, -l) 32
> > max memory size         (kbytes, -m) unlimited
> > open files                      (-n) 4096
> > pipe size            (512 bytes, -p) 8
> > POSIX message queues     (bytes, -q) 819200
> > stack size              (kbytes, -s) 8192
> > cpu time               (seconds, -t) unlimited
> > max user processes              (-u) 14335
> > virtual memory          (kbytes, -v) unlimited
> > file locks                      (-x) unlimited
> >
> >
> > Any idea how to configure this? That is where that archived discussion
> > ends too -- the poster never seemed to get a response to his question
> > about how to tweak this.
>
> Make sure you run the command as the same user that PG runs as (usually
> "postgres", but sometimes "pgsql"); ulimits can differ from one user to
> another.
>


Thanks. I did "su postgres" and ran the ulimit command again. All
values are the same, except for "open files" which is double in the
case of this user (instead of 4096, it is 8192). Not sure what I can
gather from that?

Re: "Out of memory" errors..

From: Alvaro Herrera
Lim Berger wrote:

> Thanks. I did "su postgres" and ran the ulimit command again. All
> values are the same, except for "open files" which is double in the
> case of this user (instead of 4096, it is 8192). Not sure what I can
> gather from that?

Try "su - postgres" instead (which will run the user start scripts and
may modify the ulimits for that user), but note that the ulimit can also
be changed in the script that starts the Postgres process on system
boot.
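
One way to see what the already-running postmaster itself inherited,
rather than what a fresh shell gets (a sketch; the data directory path
is an assumption, and /proc/<pid>/limits needs a reasonably recent
kernel):

    head -1 /var/lib/pgsql/data/postmaster.pid    # prints the postmaster's PID
    cat /proc/<PID>/limits                        # substitute the PID from above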

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/14/07, Alvaro Herrera <alvherre@commandprompt.com> wrote:
> Lim Berger wrote:
>
> > Thanks. I did "su postgres" and ran the ulimit command again. All
> > values are the same, except for "open files" which is double in the
> > case of this user (instead of 4096, it is 8192). Not sure what I can
> > gather from that?
>
> Try "su - postgres" instead (which will run the user start scripts and
> may modify the ulimits for that user), but note that the ulimit can also
> be changed in the script that starts the Postgres process on system
> boot.



Wow, you are right! The "su - postgres" showed up with wildly
different values! Most notably, the "max user processes" is only 20!!
Whereas in the regular user stuff it was above 14000. Would you know
how to change this in a CentOS Linux machine? Where can I find the
startup settings for postgresql? Full values below:



~ > su - postgres
-bash-3.00$ ulimit -a
core file size          (blocks, -c) 200000
data seg size           (kbytes, -d) 200000
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) 200000
open files                      (-n) 100
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 20
virtual memory          (kbytes, -v) 200000
file locks                      (-x) unlimited
-bash-3.00$

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/14/07, Lim Berger <straightfwd007@gmail.com> wrote:
> On 8/14/07, Alvaro Herrera <alvherre@commandprompt.com> wrote:
> > Lim Berger wrote:
> >
> > > Thanks. I did "su postgres" and ran the ulimit command again. All
> > > values are the same, except for "open files" which is double in the
> > > case of this user (instead of 4096, it is 8192). Not sure what I can
> > > gather from that?
> >
> > Try "su - postgres" instead (which will run the user start scripts and
> > may modify the ulimits for that user), but note that the ulimit can also
> > be changed in the script that starts the Postgres process on system
> > boot.
>
>
>
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Where can I find the
> startup settings for postgresql? Full values below:
>
>
>
> ~ > su - postgres
> -bash-3.00$ ulimit -a
> core file size          (blocks, -c) 200000
> data seg size           (kbytes, -d) 200000
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1024
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) 200000
> open files                      (-n) 100
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 20
> virtual memory          (kbytes, -v) 200000
> file locks                      (-x) unlimited
> -bash-3.00$
>




I tried doing "ulimit -u 9000", for instance, as the postgres user, but
it tells me:

-bash-3.00$ ulimit -u 9000
-bash: ulimit: max user processes: cannot modify limit: Operation not permitted
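
(That failure is expected: an unprivileged process can lower its
limits, and can raise a soft limit up to its hard limit, but only root
can raise a hard limit. The profile's "ulimit -u 20" set both, so the
postgres shell is stuck:

    ulimit -Hu    # show the hard cap on processes
    ulimit -Su    # show the current soft value

The fix has to happen as root, outside or before the per-user profile.)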

Re: "Out of memory" errors..

From: Tom Lane
"Lim Berger" <straightfwd007@gmail.com> writes:
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Where can I find the
> startup settings for postgresql?

Yipes, that's pretty bogus.  The most likely culprit would be a .profile
or .bashrc script belonging to the postgres user --- poke around in its
home directory.

            regards, tom lane

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/14/07, Sander Steffann <s.steffann@computel.nl> wrote:
> Hi Lim,
>
> > "Lim Berger" <straightfwd007@gmail.com> writes:
> >> Wow, you are right! The "su - postgres" showed up with wildly
> >> different values! Most notably, the "max user processes" is only 20!!
> >> Whereas in the regular user stuff it was above 14000. Would you know
> >> how to change this in a CentOS Linux machine? Where can I find the
> >> startup settings for postgresql?
> >
> > Yipes, that's pretty bogus.  The most likely culprit would be a .profile
> > or .bashrc script belonging to the postgres user --- poke around in its
> > home directory.
>
> It might also be in /etc/security/limits.conf.


Thanks. I see these two lines in that file:


postgres        soft    nofile  8192
postgres        hard    nofile  8192

How should I change these values? I am not sure how this reflects the
"ulimit" options.

Thanks!

Re: "Out of memory" errors..

From: "Sander Steffann"
Hi Lim,

> "Lim Berger" <straightfwd007@gmail.com> writes:
>> Wow, you are right! The "su - postgres" showed up with wildly
>> different values! Most notably, the "max user processes" is only 20!!
>> Whereas in the regular user stuff it was above 14000. Would you know
>> how to change this in a CentOS Linux machine? Where can I find the
>> startup settings for postgresql?
>
> Yipes, that's pretty bogus.  The most likely culprit would be a .profile
> or .bashrc script belonging to the postgres user --- poke around in its
> home directory.

It might also be in /etc/security/limits.conf.

Good luck,
Sander



Re: "Out of memory" errors..

From: "Sander Steffann"
Hi Lim,

>> It might also be in /etc/security/limits.conf.
>
> Thanks. I see these two lines in that file:
>
> postgres        soft    nofile  8192
> postgres        hard    nofile  8192
>
> How should I change these values? I am not sure how this reflects the
> "ulimit" options.

Those are limits on the allowed number of open files (ulimit -n). I
think 8192 should be enough for PostgreSQL. The problems you had were
related to other settings, so if only the "nofile" setting is present
here, your strange ulimits do not come from this file :-)
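
(If they did come from here, the lines would take the same shape --
illustrative values, where "nproc" maps to "ulimit -u" and "as", the
address space in kB, maps to "ulimit -v":

postgres        soft    nproc   14335
postgres        hard    nproc   14335
postgres        soft    as      unlimited
postgres        hard    as      unlimited

-- but as noted, they do not.)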

- Sander



Re: "Out of memory" errors..

From: "Lim Berger"
On 8/14/07, Sander Steffann <s.steffann@computel.nl> wrote:
> Hi Lim,
>
> >> It might also be in /etc/security/limits.conf.
> >
> > Thanks. I see these two lines in that file:
> >
> > postgres        soft    nofile  8192
> > postgres        hard    nofile  8192
> >
> > How should I change these values? I am not sure how this reflects the
> > "ulimit" options.
>
> Those are limits on the allowed number of open files (ulimit -n). I
> think 8192 should be enough for PostgreSQL. The problems you had were
> related to other settings, so if only the "nofile" setting is present
> here, your strange ulimits do not come from this file :-)


I think I have located the problem. It is in "/etc/profile" where some
ulimits are added. This is the offending text, I think:


#********************* cPanel Added Limit Protections -- BEGIN

#unlimit so we can run the whoami
ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 1000000 -v unlimited 2>/dev/null

LIMITUSER=$USER
if [ -e "/usr/bin/whoami" ]; then
        LIMITUSER=`/usr/bin/whoami`
fi
if [ "$LIMITUSER" != "root" ]; then
        ulimit -n 100 -u 20 -m 200000 -d 200000 -s 8192 -c 200000 -v 200000 2>/dev/null
else
        ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 1000000 -v unlimited 2>/dev/null
fi
#********************* cPanel Added Limit Protections -- END



I am not much of a shell scripter, so I am afraid of breaking this. How
can I change the line "if [ "$LIMITUSER" != "root" ];" to exempt the
postgres user as well? Can I do something like:

   if [ "$LIMITUSER" != "root" and "$LIMITUSER" != "postgres" ];

Would appreciate any thoughts!
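
(For the record, bash does not accept "and" inside [ ]. The test the
question is reaching for would be spelled either as

    if [ "$LIMITUSER" != "root" ] && [ "$LIMITUSER" != "postgres" ]; then

or, equivalently,

    if [ "$LIMITUSER" != "root" -a "$LIMITUSER" != "postgres" ]; then

-- though see the next reply before going this route.)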

Re: "Out of memory" errors..

From: Tom Lane
"Lim Berger" <straightfwd007@gmail.com> writes:
> I think I have located the problem. It is in "/etc/profile" where some
> ulimits are added. This is the offending text, I think:


> #********************* cPanel Added Limit Protections -- BEGIN

> #unlimit so we can run the whoami
> ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 1000000 -v unlimited 2>/dev/null

> LIMITUSER=$USER
> if [ -e "/usr/bin/whoami" ]; then
>         LIMITUSER=`/usr/bin/whoami`
> fi
> if [ "$LIMITUSER" != "root" ]; then
>         ulimit -n 100 -u 20 -m 200000 -d 200000 -s 8192 -c 200000 -v 200000 2>/dev/null
> else
>         ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 1000000 -v unlimited 2>/dev/null
> fi
> #********************* cPanel Added Limit Protections -- END

> I am not much of a shell scripter so I am afraid of breaking this,

My advice: remove the whole block that you've quoted.  After that, find
out what "cPanel" is, and get rid of that entire piece of brain-damage.
There might be some merit to restrictions as draconian as the above on
an overloaded multi-user machine, but there is no call for anything to
install restrictions like that behind the back of the machine's admin.

            regards, tom lane

Re: "Out of memory" errors..

From: "Lim Berger"
On 8/14/07, Sander Steffann <s.steffann@computel.nl> wrote:
> Hi Lim,
>
> >> It might also be in /etc/security/limits.conf.
> >
> > Thanks. I see these two lines in that file:
> >
> > postgres        soft    nofile  8192
> > postgres        hard    nofile  8192
> >
> > How should I change these values? I am not sure how this reflects the
> > "ulimit" options.
>
> Those are limits on the allowed number of open files (ulimit -n). I
> think 8192 should be enough for PostgreSQL. The problems you had were
> related to other settings, so if only the "nofile" setting is present
> here, your strange ulimits do not come from this file :-)



I have finally figured out how to increase the ulimits for the postgres
user. My new ulimit values are:

------
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14335
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
------


Now if I want a "maintenance_work_mem" of 64MB for PostgreSQL, what
should the "max user processes" setting be in my ulimit, or the "open
files" setting, etc.? Is there a PostgreSQL help or doc page I can read
to see how these values map? I'd like to be better educated in how to
test and tweak these OS-level values!

Thanks.

Re: "Out of memory" errors..

From: "Sander Steffann"
Hi,

> Now if I want a "maintenance_work_mem" of 64M for Postgresql, what
> should the "max user processes" setting be in my ulimit, or the "open
> files" setting etc? Is there a Postgresql help or doc page I can read
> to see how these values map? I'd like to be more educated in how I
> test to tweak these OS level values!

If this is only a PostgreSQL database server, don't limit the postgres user.
Don't tweak these limits unless you know exactly what you are doing.

- Sander

PS: "maintenance_work_mem" is completely unrelated to "max user processes"
or "open files", it's related to the allowed memory size.


Re: "Out of memory" errors..

From: "Lim Berger"
> If this is only a PostgreSQL database server, don't limit the postgres user.
> Don't tweak these limits unless you know exactly what you are doing.


Unfortunately, it is not. It runs other applications too, including
Apache and so on. I tried not setting the ulimits at all, but they seem
to be required by other parts of the system. So I would like to know
the optimal mapping between ulimits and postgres.


> PS: "maintenance_work_mem" is completely unrelated to "max user processes"
> or "open files", it's related to the allowed memory size.
>


Sorry, but this was suggested earlier in this thread. So how should I
make sure that VACUUM ANALYZE on slightly larger tables can run
without hitting out of memory? Would "shared_buffers" in the conf be
relevant? I doubt it.

Re: "Out of memory" errors..

From: "Lim Berger"
>
> Vacuum memory usage is tuned by the "maintenance_work_mem" parameter. I
> suggest you look at
> http://www.postgresql.org/docs/8.2/static/runtime-config-resource.html and
> http://www.postgresql.org/docs/8.2/static/kernel-resources.html#AEN19338.



Thanks Sander, I've read so many of these pages that my head spins. I
see that maintenance_work_mem clearly affects VACUUM. But the other
parameters are less clear, and the docs do not clearly state whether
they take effect only when I restart the postmaster.
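
For the restart question: maintenance_work_mem does not require one. It
can be set per session, and postgresql.conf edits take effect on a
reload; only a few settings (shared_buffers among them) need a full
restart. A sketch, reusing the "mini" table from earlier in the thread:

=# SHOW maintenance_work_mem;
=# SET maintenance_work_mem = '64MB';
=# VACUUM ANALYZE mini;

With the corrected ulimits in place, the 67108860-byte request that
originally failed should now go through.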