Thread: [GENERAL] Out of memory/corrupted shared memory problem on server

[GENERAL] Out of memory/corrupted shared memory problem on server

From: Johann Spies
While restoring a dump from our development server (768G RAM) to the
production server (PG 9.6.3 on Debian Stretch with 128G RAM), the
refresh of a Materialized View fails like this:

[local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Time: 11556111.977 ms

In the log:
------------------
2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
terminated by signal 9: Killed
2017-08-24 19:23:26 SAST [7532-19] DETAIL:  Failed process was
running: REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
2017-08-24 19:23:26 SAST [7532-20] LOG:  terminating any other active
server processes
2017-08-24 19:23:26 SAST [16376-1] crest@data_portal WARNING:
terminating connection because of crash of another server process
2017-08-24 19:23:26 SAST [16376-2] crest@data_portal DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally
and possibly corrupted shared memory.
2017-08-24 19:23:26 SAST [16376-3] crest@data_portal HINT:  In a
moment you should be able to reconnect to the database and repeat your
command.
--------------------

This shows a cross-database problem.  The Materialized View is in
database wos, while the other reported problem seems to be in database
data_portal.  We could not determine what caused the problem in
data_portal.  Or was it caused by the out-of-memory problem in the
wos process?

The Materialized View  uses a complex query and  should contain 69 772
381 records.

Monitoring the memory usage while running the REFRESH MATERIALIZED
VIEW command shows a steady increase by the process until it reaches
100% and the backend is killed.

The server has 128G RAM with the following changes to the default
setup (and you can see how we tried to solve the problem by opting for
lower thresholds in many cases):

# http://edoceo.com/howto/postgresql-performance
# https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

# pgtune wizard 21.01.2016:
max_connections = 80
#shared_buffers = 32GB
shared_buffers = 14GB
#effective_cache_size = 96GB
effective_cache_size = 20GB
#work_mem = 4GB
work_mem = 2GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
temp_buffers = 1GB
fsync = true
max_worker_processes = 24        # (change requires restart)
max_parallel_workers_per_gather = 4    # taken from max_worker_processes
checkpoint_flush_after = 256kB
idle_in_transaction_session_timeout = 3600000

# Other:

# max_wal_size = (3 * checkpoint_segments) *16MB
# http://www.postgresql.org/docs/9.5/static/release-9-5.html
max_wal_size = 3GB # Replace checkpoint_segments
huge_pages = try



# - Archiving -
wal_level = archive
wal_sync_method = fdatasync
full_page_writes = on           # recover from partial page writes
wal_buffers = -1

#archive_mode = on      # allows archiving to be done
archive_mode = off      # allows archiving to be done

And in /etc/sysctl.conf:


# http://padmavyuha.blogspot.co.za/2010/12/configuring-shmmax-and-shmall-for.html

# (for 60GB)
kernel.shmall = 15728640
kernel.shmmax = 64424509440
# run "sudo sysctl -p" after editing

We are stuck at the moment and do not know how to proceed from here.
Help will be appreciated.

Regards
Johann

--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

From: Christoph Moench-Tegeder
## Johann Spies (johann.spies@gmail.com):

> ------------------
> 2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
> terminated by signal 9: Killed

That looks like out-of-memory. Check the kernel log/dmesg to verify.

If it's the dreaded OOM-killer, you should check your overcommit
settings (sysctl vm.overcommit_*) and fix them in a way that
the kernel isn't forced to kill processes (that is, reduce overcommit).
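For example, something along these lines in /etc/sysctl.conf (only a
sketch: overcommit_memory=2 disables heuristic overcommit, and the
ratio value here is just an example you would size for your own box):

# make large allocations fail with ENOMEM instead of succeeding and
# later getting the backend SIGKILLed by the OOM killer
vm.overcommit_memory = 2
# commit limit = swap + overcommit_ratio% of RAM; 80 is only an example
vm.overcommit_ratio = 80
# apply with "sudo sysctl -p"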

Finally, in some cases it has been helpful to reduce work_mem -
that way PostgreSQL may be skewed away from memory-intensive
operations (at the cost of processing time and/or disk IO - but
that's still better than having processes killed and getting no
result at all).
You could check the query plan for the query behind your view
(EXPLAIN) for potentially memory-hungry operations.
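For example (a sketch; the view name is taken from your log, and since
EXPLAIN cannot be run on REFRESH itself you would EXPLAIN the view's
defining query):

-- retrieve the materialized view's defining query
SELECT pg_get_viewdef('wos_2017_1.citation_window_mv'::regclass, true);
-- then run EXPLAIN on the query that comes back and look for
-- HashAggregate / Hash / Sort nodes with large estimated row counts:
-- EXPLAIN SELECT ... ;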

> max_worker_processes = 24        # (change requires restart)
> max_parallel_workers_per_gather = 4    # taken from max_worker_processes

In case the query updating the materialized view uses parallel
processing, you could save quite some memory by turning that off
(more processes -> more memory usage -> not helpful in your case).
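If it does, a per-session override right before the refresh would look
roughly like this:

-- disable parallel query for this session only (9.6 setting)
SET max_parallel_workers_per_gather = 0;
REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;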

> # (for 60GB)
> kernel.shmall = 15728640
> kernel.shmmax = 64424509440

This is obsolete since PostgreSQL 9.3 ("Greatly reduce System V shared
memory requirements" says the Release Notes).

Regards,
Christoph

--
Spare Space.


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

From: Tom Lane
Johann Spies <johann.spies@gmail.com> writes:
> While restoring a dump from our development server (768G RAM) to the
> production server (PG 9.6.3 on Debian Stretch with 128G RAM), the
> refresh of a Materialized View fails like this:

> [local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
> server closed the connection unexpectedly

> In the log:
> 2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
> terminated by signal 9: Killed

As Christoph said, this looks a lot like the kernel OOM killer decided
you'd eaten too much memory.

> The Materialized View  uses a complex query and  should contain 69 772
> 381 records.

How complex is "complex"?  I can think of two likely scenarios:
1. You've stumbled across some kind of memory-leak bug in Postgres.
2. The query's just using too much memory.  In this connection, it's
not good that you've got
> work_mem = 2GB
Remember that "work_mem" is "work memory per plan node", so a complex
query could easily chew up a multiple of that number --- and that's
with everything going according to plan.  If, say, the planner
underestimates the number of table entries involved in a hash
aggregation, the actual consumption might be much larger.
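To put rough numbers on that (with a purely hypothetical plan shape):
at work_mem = 2GB, a plan containing three sorts and two hash
aggregates is entitled to about 5 x 2GB = 10GB while behaving exactly
as planned; and if a hash aggregate expected 10 million groups but
sees 100 million, that single node can overshoot its 2GB budget by
roughly a factor of ten.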

My first move would be to reduce work_mem by an order of magnitude
or two.  If that doesn't help, check the plan for the view's query
and see if it contains any hash aggregation steps --- if so, does
"set enable_hashagg = off" help?  (Also, make sure the view's input
tables have been ANALYZEd recently.)
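In session terms, something like this (a sketch; the ANALYZE target
below is a made-up name, substitute the view's real input relations):

-- refresh statistics on the view's inputs first
ANALYZE wos_2017_1.source_mv;            -- hypothetical name
-- then retry with far less work_mem, optionally without hashagg
SET work_mem = '200MB';
SET enable_hashagg = off;
REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;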

If none of that helps, we should investigate the memory-leak-bug
theory.  One thing you could do in that direction is to run
the postmaster with a "ulimit -v" size less than what will trigger
the ire of the OOM killer, so that the query encounters a normal
ENOMEM error rather than SIGKILL when it's eaten too much memory.
That should result in it dumping a memory consumption map to stderr,
which would give some clue where the problem is.  We'd need to see
that map as well as details about your query to make progress.
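For example, if you start the cluster by hand (only a sketch; the
limit value is arbitrary, and a systemd-managed cluster would need the
equivalent LimitAS= setting on the service unit instead):

# cap the per-process address space (value in kB; ~96GB here)
ulimit -v 100663296
pg_ctl -D /path/to/data -l logfile start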

            regards, tom lane


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

From: Johann Spies
On 25 August 2017 at 13:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> How complex is "complex"?  I can think of two likely scenarios:
> 1. You've stumbled across some kind of memory-leak bug in Postgres.
> 2. The query's just using too much memory.  In this connection, it's
> not good that you've got
>> work_mem = 2GB
> Remember that "work_mem" is "work memory per plan node", so a complex
> query could easily chew up a multiple of that number --- and that's
> with everything going according to plan.  If, say, the planner
> underestimates the number of table entries involved in a hash
> aggregation, the actual consumption might be much larger.
>
> My first move would be to reduce work_mem by an order of magnitude
> or two.  If that doesn't help, check the plan for the view's query
> and see if it contains any hash aggregation steps --- if so, does
> "set enable_hashagg = off" help?  (Also, make sure the view's input
> tables have been ANALYZEd recently.)
>
> If none of that helps, we should investigate the memory-leak-bug
> theory.  One thing you could do in that direction is to run
> the postmaster with a "ulimit -v" size less than what will trigger
> the ire of the OOM killer, so that the query encounters a normal
> ENOMEM error rather than SIGKILL when it's eaten too much memory.
> That should result in it dumping a memory consumption map to stderr,
> which would give some clue where the problem is.  We'd need to see
> that map as well as details about your query to make progress.


Thanks Tom and Christoph Moench-Tegeder.

I first tried to refresh it after bringing work_mem down to 1 GB.
It failed again.
The main source of this query (which does a lot of calculations) is
another Materialized View with more than 700 million records.  I then
analyzed that MV, and this morning the good news was:

# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
REFRESH MATERIALIZED VIEW
Time: 27128469.899 ms

Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

From: Tom Lane
Johann Spies <johann.spies@gmail.com> writes:
> On 25 August 2017 at 13:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Remember that "work_mem" is "work memory per plan node", so a complex
>> query could easily chew up a multiple of that number --- and that's
>> with everything going according to plan.  If, say, the planner
>> underestimates the number of table entries involved in a hash
>> aggregation, the actual consumption might be much larger.

> The main source of this query (doing a lot of calculations) is another
> Materialized View
> with more than 700 million records. I then analyzed that MV and this
> morning the good news was:

> # REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
> REFRESH MATERIALIZED VIEW
> Time: 27128469.899 ms

OK, so almost certainly the crash was caused by a hash aggregate
using so much memory that it triggered the OOM killer.  Whether
a hash aggregate's hashtable will stay within work_mem is dependent
on whether the planner correctly predicts the number of entries needed.
Analyzing the input MV must have improved that estimate and led the
planner to choose some other plan.
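If you want to sanity-check such an estimate up front, the per-column
statistics that drive the group-count guess are visible in pg_stats
after an ANALYZE (sketch; the input MV's name is made up here):

-- n_distinct feeds the planner's estimate of how many hash table
-- entries a hash aggregate grouping on these columns will need
SELECT attname, n_distinct
FROM pg_stats
WHERE schemaname = 'wos_2017_1'
  AND tablename = 'source_mv';           -- hypothetical name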

            regards, tom lane