Tom Lane wrote:
>
> Interestingly, this isn't a big problem on platforms where there is
> a relatively low limit on number of open files per process. A backend
> will run its open file count up to the limit and then stay there
> (wasting a few more virtual-file-descriptor array slots per vacuum
> cycle, but this is such a small memory leak you'd likely never notice).
> But on systems that let a process have thousands of kernel file
> descriptors, there will be no recycling of kernel descriptors as the
> number of virtual descriptors increases.
>
> What's the consensus, hackers? Do we risk sticking Hiroshi's patch into
> 6.5.2, or not? It should definitely go into current, but I'm worried
> about putting it into the stable branch right before a release...
> Vadim, does it look right to you?
Sorry, I have no time to look into it. But there is another solution:
> From: owner-pgsql-hackers@postgreSQL.org
> [mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Vadim Mikheev
> Sent: Monday, June 07, 1999 7:49 PM
> To: Hiroshi Inoue
> Cc: The Hermit Hacker; pgsql-hackers@postgreSQL.org
> Subject: Re: [HACKERS] postgresql-v6.5beta2.tar.gz ...
>
[snip]
> 2. fd.c:pg_nofile()->sysconf(_SC_OPEN_MAX) returns in FreeBSD
> near total number of files that can be opened in system
> (by _all_ users/procs). With total number of opened files
> ~ 2000 I can run your test with 10-20 simultaneous
> xactions for very short time, -:)
>
> Should we limit fd.c:no_files to ~ 256?
> This is port-specific, of course...
No risk at all...
Vadim