On Sat, Jan 4, 2020 at 6:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:
> > I tried starting it from cron and then I got:
> > max_safe_fds = 981, usable_fds = 1000, already_open = 9
>
> Oh! There we have it then.
>
Right.
> I wonder if that's a cron bug (neglecting
> to close its own FDs before forking children) or intentional (maybe
> it uses those FDs to keep tabs on the children?).
>
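For context, the FD hygiene being alluded to is the usual daemon convention: close everything above stderr before exec'ing a child, so the child starts with only descriptors 0-2 open. A minimal sketch of that convention (illustrative only; cron itself is written in C, and the helper name here is made up):

```python
import os

def close_inherited_fds(lowest=3, highest=255):
    """Close every descriptor in [lowest, highest] before exec'ing a
    child, so it inherits only stdin/stdout/stderr (FDs 0-2).
    os.closerange silently skips descriptors that are not open."""
    os.closerange(lowest, highest + 1)
```

If cron skipped this step (or deliberately keeps pipes open to monitor its children), the child would see extra descriptors in its `already_open` count, as observed above.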
So, where do we go from here? Shall we try to identify why cron is
keeping extra FDs open, or do we assume that we can't predict how many
pre-opened files there will be? In the latter case, we would either
want to (a) tweak the test to raise the value of
max_files_per_process, or (b) remove the test entirely. You seem
inclined towards (b), but I have a few things to say about that. We
have another strange failure due to this test on one of Noah's
machines; see my email [1]. I have asked Noah for a stack trace [2].
It is not clear to me whether the committed code has a problem or
whether the test has discovered a different problem in v10 specific to
that platform. The same test has passed for v11, v12, and HEAD on the
same platform.
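As a diagnostic aid for the "can't predict pre-opened files" case, one can roughly reproduce the already_open figure reported above by probing which low-numbered descriptors a process inherits at startup. A minimal sketch (for diagnosis only; PostgreSQL's own accounting in fd.c differs in detail):

```python
import os

def count_open_fds(max_fd=1024):
    """Count descriptors currently open in this process by probing
    each low-numbered FD with fstat(); closed FDs raise OSError
    (EBADF). Run from a shell vs. from a cron job, the difference
    shows how many extra FDs cron hands its children."""
    count = 0
    for fd in range(max_fd):
        try:
            os.fstat(fd)
            count += 1
        except OSError:
            pass
    return count
```

Running such a probe directly from cron on the affected animal would tell us whether the extra descriptors come from cron itself or from something else in the environment.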
[1] - https://www.postgresql.org/message-id/CAA4eK1LMDx6vK8Kdw8WUeW1MjToN2xVffL2kvtHvZg17%3DY6QQg%40mail.gmail.com
[2] - https://www.postgresql.org/message-id/CAA4eK1LJqMuXoCLuxkTr1HidbR8DkgRrVC7jHWDyXT%3DFD2gt6Q%40mail.gmail.com
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com