Cyrus Rahman <cr@photox.jcmax.com> writes:
> As you can see, a connection open through a vacuum does end up duplicating
> its open file descriptors.
Indeed, phrased in that fashion it's easy to duplicate the problem.
Interestingly, this isn't a big problem on platforms where there is
a relatively low limit on the number of open files per process. A backend
will run its open file count up to the limit and then stay there
(wasting a few more virtual-file-descriptor array slots per vacuum
cycle, but this is such a small memory leak you'd likely never notice).
But on systems that let a process have thousands of kernel file
descriptors, there will be no recycling of kernel descriptors as the
number of virtual descriptors increases.
What's the consensus, hackers? Do we risk sticking Hiroshi's patch into
6.5.2, or not? It should definitely go into current, but I'm worried
about putting it into the stable branch right before a release...
Vadim, does it look right to you?
regards, tom lane