Re: Size estimation of postgres core files - Mailing list pgsql-general

From: Jeremy Finzel
Subject: Re: Size estimation of postgres core files
Date:
Msg-id: CAMa1XUgd79VBYFHs=uTQB+YkXQjw-YxEQtB=USwL4BDPAVJD4g@mail.gmail.com
In response to: Re: Size estimation of postgres core files (Andrew Gierth <andrew@tao11.riddles.org.uk>)
Responses: Re: Size estimation of postgres core files ("Peter J. Holzer" <hjp-pgsql@hjp.at>)
List: pgsql-general
> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a large core file.
>
> --
> Andrew (irc:RhodiumToad)

Based on Alvaro's response, I thought it was reasonably possible that the dump *could* include nearly all of RAM, which was my original question.  If shared_buffers is, say, 50G and the OS has 1T, shared_buffers is only a small portion of that.  But my real question is what we should reasonably assume is possible - that is, how much space should I provision for a volume so that it can hold the core dump in case of a crash?  The time it takes to write the core file would definitely be a concern if it could indeed be that large.
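If it helps frame the question, here is the back-of-the-envelope arithmetic I have in mind - a rough sketch only, where everything except the 50G shared_buffers figure is a placeholder guess rather than a measurement from our system:

GB = 1024 ** 3
shared_buffers      = 50 * GB   # the figure from the example above
other_shared_mem    = 2 * GB    # placeholder: WAL buffers, lock tables, DSM segments
backend_private_mem = 4 * GB    # placeholder: work_mem across active plan nodes, caches, etc.

# Worst case: the crashed backend's private memory plus every shared
# segment it has attached, dominated by shared_buffers.
worst_case = shared_buffers + other_shared_mem + backend_private_mem
print(f"worst-case core size ~ {worst_case / GB:.0f} GB")

# If shared memory is excluded from the dump (coredump_filter), the bound
# drops to roughly the backend's private memory alone.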

Could someone provide more information on exactly how to configure that coredump_filter setting?
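For reference, here is my current understanding written out as a sketch, so please correct me if it is wrong - the bit meanings come from the Linux core(5) man page, and the PID and mask below are placeholders:

# Sketch: change coredump_filter on a running postmaster so that backends
# forked afterwards omit shared memory from their core dumps.
# Bits per core(5): 0x01 anon private, 0x02 anon shared, 0x04 file-backed
# private, 0x08 file-backed shared, 0x10 ELF headers, 0x20 hugetlb private,
# 0x40 hugetlb shared.  The Linux default is 0x33.
POSTMASTER_PID = 12345   # placeholder: the running postmaster's PID
NEW_FILTER = 0x31        # like the default but without anonymous shared
                         # mappings, which is where shared_buffers lives

path = f"/proc/{POSTMASTER_PID}/coredump_filter"

with open(path) as f:
    print("current filter:", f.read().strip())

with open(path, "w") as f:   # requires the postgres user or root
    f.write(f"{NEW_FILTER:x}\n")

# The filter is inherited across fork, so only backends started after this
# change pick it up; a permanent setting would normally be applied to the
# postmaster at startup (e.g. from its service unit or start script).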

We are looking to enable core dumps to aid debugging in case of unexpected crashes, and we are wondering whether there are any general recommendations for balancing the costs and benefits of doing so.
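In case it is useful, this is roughly how I was planning to check what is currently in effect before changing anything (just a quick sketch using the standard Linux interfaces):

import resource

# Core-file size limit inherited by processes started from this environment
# (a soft limit of 0 means no core files are written).
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("RLIMIT_CORE soft/hard:", soft, hard)

# Where the kernel writes core files; a leading '|' means they are piped to
# a helper such as systemd-coredump rather than written to the data directory.
with open("/proc/sys/kernel/core_pattern") as f:
    print("core_pattern:", f.read().strip())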

Thank you!
Jeremy
