Re: postgresql recommendation memory - Mailing list pgsql-performance

From Евгений Селявка
Subject Re: postgresql recommendation memory
Msg-id CAKPhvNaON-vLKmsCoCjFquQ6wD0_kPRR=1v-8petpyRtykQ3Jg@mail.gmail.com
In response to Re: postgresql recommendation memory  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: postgresql recommendation memory  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-performance
Thank you for the advice.

1)
First of all, we use a Java app with a JDBC driver that can pool connections, which is why I don't think it is a good decision to put one more pooler between the app and the DB. Maybe someone has experience with pgbouncer and JDBC and could give good advice on the advantages and disadvantages of this architecture.
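For anyone weighing the two approaches, a minimal pgbouncer sketch is below (the pool sizes and the "appdb" name are illustrative, not from our setup). The known caveat with JDBC is that transaction pooling breaks server-side prepared statements unless the driver is told not to use them (prepareThreshold=0 in the JDBC URL).

; pgbouncer.ini -- illustrative values only
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction      ; best connection reuse, but see the JDBC caveat above
max_client_conn = 350        ; the app can keep opening its 350 connections
default_pool_size = 32       ; only ~2x cores actually reach the server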

2) Yes, this is my error in configuration, and every day or two I decrease work_mem and monitor my system and the postgresql log, trying to find records about temp files. I will decrease work_mem to 16MB or maybe 32MB. But really, I repeat that I have about 100 concurrent connections to my DB; I set this value with a big reserve. I can't change this setting quickly because the DB is in production.
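For reference, this is how I am watching for spills, and how the override could be scoped instead of being global (the "reporting" role is hypothetical, just to show the per-role form):

# postgresql.conf -- log every temp file, so sorts that spill to disk are visible
log_temp_files = 0
work_mem = 16MB

-- SQL: give a bigger budget only where it is really needed
ALTER ROLE reporting SET work_mem = '256MB';   -- hypothetical role
SET work_mem = '256MB';                        -- or per-session, just before one big query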

3)
I also read about disabling the OOM killer, but when I set vm.overcommit_memory=2, my DB worked for a couple of days and then pacemaker stopped it, because I set a wrong value for vm.overcommit_ratio (I set it to 90). When pacemaker tried to execute psql -c 'select 1', the postmaster returned 'out of memory' and pacemaker stopped my production DB. I need to know the correct value for vm.overcommit_ratio, or how the postmaster allocates memory when it forks (maybe a formula or something). If I get answers to these questions, I can pick the right vm.overcommit_ratio.
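As far as I understand the kernel accounting with vm.overcommit_memory=2 (assuming our 64GB of RAM and no swap; please correct me if the formula is wrong):

CommitLimit = SwapTotal + RAM * vm.overcommit_ratio / 100
            = 0 + 64GB * 90 / 100
            = 57.6GB

# compare the limit with what is already committed:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo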

4)
About vm.swappiness, I totally agree; I only raised it for experimental purposes, because I had a problem and my DB froze, and I was playing with different kernel settings trying to pick correct values. In the beginning I had it set to 0 and everything worked fine.
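If I set it back to 0, the standard way to apply and persist the setting is:

# apply immediately
sysctl -w vm.swappiness=0
# persist across reboots
echo 'vm.swappiness = 0' >> /etc/sysctl.conf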

5)
I will plan downtime and decrease max_connections and shared_buffers.
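Something like the following (both settings need a restart; the exact values are placeholders until I confirm them against our workload):

# postgresql.conf
max_connections = 150   # headroom over my ~100 real connections
shared_buffers = 2GB    # per your point 5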

6) I set these values:
vm.dirty_bytes = 67108864, which equals my Smart Array E200i cache size (64MB)
vm.dirty_background_bytes = 16777216, i.e. 1/4 of vm.dirty_bytes
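To keep them across reboots:

# /etc/sysctl.conf
vm.dirty_bytes = 67108864
vm.dirty_background_bytes = 16777216
# then apply with: sysctl -p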


> Basically don't TRY to allocate all the memory, try to leave 75% or so
> free for the OS to allocate as buffers. After getting a baseline for
> performance under load then make bigger changes.

Does this mean that I should set effective_cache_size to 75% of my RAM?
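If so, on this 64GB box that would be something like the line below; as I understand it, effective_cache_size is only a planner hint and allocates nothing, so a high value is safe:

# postgresql.conf
effective_cache_size = 48GB   # ~75% of 64GB RAM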



2013/11/6 Scott Marlowe <scott.marlowe@gmail.com>
On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка <evg.selyavka@gmail.com> wrote:

SNIP

> max_connections = 350
SNIP
> work_mem = 256MB

These two settings together are quite dangerous.

1: Look into a db pooler to get your connections needed down to no
more than 2x # of cores in your machine. I recommend pgbouncer.
2: Your current settings mean that if you max out connections and each
of those connections does a large sort at the same time, they'll try
to allocate 256MB*350, or 89,600MB. If you run one job that can use
that much work_mem, then set it by connection or user for that one
connection or user only. Allowing any process to allocate 256MB is
usually a bad idea, and doubly so if you allow 350 incoming
connections. Dropping work_mem to 16MB means a ceiling of about 5.5GB
of memory if you get swamped and each query averages 1 sort. Note
that a single query CAN run more than 1 sort, so it's not a hard limit
and you could still swamp your machine, but it's less likely.
3: Turn off the OOM killer. On a production DB it is unwelcome and
could cause some serious issues.
4: vm.swappiness = 0 is my normal setting. I also tend to just turn
off swap on big-memory linux boxes because linux virtual memory is
often counterproductive on db servers. Some people might even say it
is broken; I tend to agree. Better to have a process fail to allocate
memory and report it in the logs than have a machine slow to a crawl
under load. But that's your choice. And 64G isn't that big, so you're
in the in-between zone for me on whether to just turn off swap.
5: Turn down shared_buffers to 1 or 2GB.
6: Lower all your vm dirty ratio / size settings so that the machine
never has to write a lot at one time.

Basically don't TRY to allocate all the memory, try to leave 75% or so
free for the OS to allocate as buffers. After getting a baseline for
performance under load then make bigger changes.



--
Best regards, Селявка Евгений
