Re: [GENERAL] leaking FD's ? - Mailing list pgsql-general

From Jim Cromie
Subject Re: [GENERAL] leaking FD's ?
Date
Msg-id 37FB992C.7A8D8285@bwn.net
Whole thread Raw
In response to Re: [GENERAL] leaking FD's ?  (Michael Simms <grim@argh.demon.co.uk>)
Responses Re: [GENERAL] leaking FD's ?
List pgsql-general
Michael Simms wrote:

> > > Hi
> > >
> > > I am running a process that does a fair number of selects and updates but
> > > nothing too complex.
> > >
> > > I have the postmaster starting like such:
> > >
> > > /usr/bin/postmaster -o "-F -S 10240" -d 1 -N 128 -B 256 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr
> > >

OK, I looked up the man pages.
I don't have an -N 128 option in my man page (v6.5.1).

>
> > > Now, looking at that, I have 256 shared memory segments, and as such,
> > > I would expect the number of file descriptors used by my backends to
> > > be fairly similar.
> >

Why this expectation?  To my knowledge, shared memory shows up with `ipcs`,
not in a process's file descriptors.

Q: What unix/linux utilities would one use to determine descriptor usage?
Are any of those lost handles unix or inet sockets?  Did you try netstat?
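On Linux, a couple of quick checks answer both questions (a sketch, assuming /proc is mounted; the shell's own PID is used here as a stand-in for a real backend PID):

```shell
# Sketch: inspect a process's open descriptors on Linux via /proc.
# $$ is a stand-in PID; substitute the suspect backend's PID in practice.
PID=$$

# How many descriptors are open?
ls /proc/$PID/fd | wc -l

# What is each one attached to (regular file, pipe, unix/inet socket)?
ls -l /proc/$PID/fd
```

`lsof -p <pid>`, or `netstat -anp` as root, would show the same information in more detail, including whether any of the descriptors are sockets.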

>
> > Each backend keeps up to 64(?) file descriptors open, expecting it may
> > need to access those files in the future, so it uses it as a cache.
>
> Thats fine, except for, as I stated, I was up to 480 at time of writing. As
> time progressed, the number of FDs open maxed out at 1022, which considering
> I have a max of 1024 per process seems to say to me that it was leaking.
> Especially as it became increasingly slower as it went after hitting 1022
> which to me indicates that, as you say, it held fd's open for caching, but
> when it reached its fd limit and still leaked, it had less and less free fds
> to play with.
>
> Sound like a leak to anyone?
>

Actually, it sounds rather like operator error.

In other words: supporting evidence?  More clues available?
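One way to gather such evidence: sample the backend's descriptor count over time and see whether it plateaus (a cache) or climbs without bound (a leak).  A rough sketch, assuming Linux, again with the shell's own PID as a placeholder:

```shell
# Hypothetical monitor: print a process's fd count every few seconds.
PID=$$          # placeholder; use the suspect backend's PID
for i in 1 2 3; do
    echo "$(date '+%H:%M:%S')  $(ls /proc/$PID/fd | wc -l) fds open"
    sleep 2
done
```

A descriptor cache should level off (around 64, per the earlier reply); a count that keeps rising toward the per-process limit points to a leak.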

A priori, I'd think that leaking handles would have been noticed long ago by hundreds of people, almost all of whom are using the standard
fd-open-max.  Pushing the limits up just to keep running sounds like a rather desperate solution.
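For reference, the standard per-process limit can be checked directly (commonly 1024 on Linux of this era):

```shell
# Soft limit on open files for the current shell
ulimit -n

# Hard limit, up to which the soft limit may be raised without root
ulimit -Hn
```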

I don't have enough information to conclude otherwise.  For instance, you haven't said what operating system you are on;
I'll assume Linux, since you rebuilt the kernel with a higher limit.  What did you start out with?  Please be specific here; I hope to learn something
from your experience.

I note on re-reading both your postings that the second one has apparently corrected two numbers; you've dropped a zero and got more
reasonable numbers.  The first numbers were extreme; I'm not knowledgeable enough to say that it couldn't be done, but I wouldn't do it
without good reason.

So, is yours a commercial installation?  Have you done benchmarks on your system to establish what performance gains you've gotten
by enlarging shared memory segments, etc.?  Such a case study would be very helpful to the postgres community in establishing a
Postgres Tuning Guide.
I'll admit I did not do an archive search.

Are you using any of your own extensions, or is it an out-of-the-box postgres ?

What's a fair number?  If your transaction count were one of the world-wide maximums for a postgres installation, your chance of
exposing a bug would be better.

How about your 'nothing too complex'?  Plain vanilla operations are less likely to expose new bugs than all the new features.  Do
you use triggers, refint, etc.?

Granted, something sounds leaky, but you've gotten an answer from someone who regularly answers questions in this forum (Bruce, not
me).  He clearly doesn't know of such a leak, so it's up to you to find it.

I know I can't help you much; I'm just a basic user.
Good luck.

