I had an issue today where the OOM killer terminated one of my postgres
processes.
On my server I have 8 GB of RAM; shared_buffers is 1 GB and work_mem is
24 MB.
I have connection pooling which limits us to 25 connections. Even if I'm
maxed out there, that's 1 GB of shared_buffers plus 25 x 24 MB of work_mem,
only about 1.6 GB of my 8, which seems like it shouldn't be a problem.
Looking through my Postgres logs, I noticed that right around the time of the
OOM incident, I had some queries running with pretty massive IN clauses
(thank you, Ruby/ActiveRecord). One of the queries was about 28 MB in size.
So, I decided to try an experiment. I wrote 2 queries as follows:
1) select pg_sleep(100);
2) with q (s1, s2) as (select pg_sleep(100), 1)
   select * from q where s2 in (1, <about 28 MB worth of comma
   delimited numbers>)
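For anyone who wants to reproduce this, a query of roughly that size can be
generated like so (a sketch only; the placeholder numbers are arbitrary, not
the values the original query used):

```python
# Build a test query with an IN list of roughly 28 MB, in the same spirit
# as query 2 above. The numbers themselves are arbitrary placeholders.
TARGET_BYTES = 28 * 1024 * 1024

parts = []
size = 0
n = 1
while size < TARGET_BYTES:
    s = str(n)
    parts.append(s)
    size += len(s) + 2  # +2 accounts for the ", " separator
    n += 1

query = ("with q (s1, s2) as (select pg_sleep(100), 1)\n"
         "select * from q where s2 in (" + ", ".join(parts) + ")")
print(f"query is {len(query):,} bytes")
```

The resulting text can be saved to a file and fed to psql with `\i` or
`psql -f`.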
I ran those queries via psql and did this:
-sh-4.1$ ps aux | grep -i -E "local|COMMAND" | grep -v ruby
USER     PID   %CPU %MEM VSZ     RSS     TTY STAT START TIME COMMAND
postgres 20896 27.0 28.2 3416812 2132112 ?   Ss   21:18 0:02 postgres: hireology hireology [local] SELECT
postgres 20899  0.0  0.0 1281368 4800    ?   Ss   21:18 0:00 postgres: hireology hireology [local] SELECT
It looks to me like the backend running the big query is using about 2 GB
more memory than the other one (an RSS of 2132112 KB vs. 4800 KB). I could
see why it might use *some* more (like 28 MB more?), but 2 GB more seems
excessive.
So, the question is: why does it use so much more memory? And is there
anything I can do to limit this problem, other than fixing the silly queries?
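(For what it's worth, one workaround I've seen suggested, though I haven't
verified that it avoids the memory blow-up, is to pass the values as a single
bound array parameter with `= ANY(...)` instead of inlining a giant literal
IN list, so the SQL text stays small. A sketch in Python, using psycopg2-style
binding as a stand-in for whatever the Ruby app would do; `ids` is a
hypothetical list of the numbers ActiveRecord was inlining:)

```python
# Sketch: send the values as one array parameter rather than a 28 MB
# literal IN list. psycopg2 adapts a Python list to a Postgres array.
ids = list(range(1, 1_000_001))  # hypothetical stand-in for the real ids

# Instead of "... where s2 in (1, 2, 3, ...)" (huge SQL text), the query
# stays tiny and the ids travel as a single bound parameter:
sql = ("with q (s1, s2) as (select pg_sleep(100), 1) "
       "select * from q where s2 = any(%s)")
# cur.execute(sql, (ids,))  # requires an open psycopg2 cursor

print(len(sql), "bytes of SQL text, regardless of how many ids")
```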
Thanks in advance for any help,
Greig Wise
--
View this message in context:
http://postgresql.nabble.com/Query-with-large-in-clauses-uses-a-lot-of-memory-tp5931716.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.