Thread: open pgsql files

From: Serge Sozonoff
Hello all,

Recently I increased the maximum open files setting to 8192 on my
RedHat Linux 6.1 database server running Postgres version 6.5.3.

Since then I have been observing /proc/sys/fs/file-nr, and I noticed that the
system has already spiked to 8192 open files. I have also seen it sitting
steady at 7356 open files.

I was interested to know whether this is "normal" behavior for Postgres.
Using "lsof", I noticed that most of the open files belonged to postmaster,
and most of them were not sockets.

What do other people experience with "mid-size" db's?

Is there anywhere I can read about tuning shared memory, open files, and
other parameters for a database server running Postgres?

Any help is much appreciated.

Serge
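On the limits themselves: a process can query its own descriptor limits (what `ulimit -n` reports) without reaching for lsof. A minimal sketch using Python's resource module, assuming a POSIX system:

```python
import resource

# Soft limit: the cap currently enforced on this process.
# Hard limit: the ceiling an unprivileged process may raise the soft limit to.
# The system-wide count watched above lives in /proc/sys/fs/file-nr on Linux.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process fd limit: soft={soft}, hard={hard}")
```

Comparing the per-process soft limit against the system-wide file-nr figure shows whether one runaway process or many modest ones are consuming the table.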


Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Ed Loehr
Serge Sozonoff wrote:
>
> Since then I have been observing /proc/sys/fs/file-nr, and I noticed that the
> system has already spiked to 8192 open files. I have also seen it sitting
> steady at 7356 open files.
>
> I was interested to know whether this is "normal" behavior for Postgres.
> Using "lsof", I noticed that most of the open files belonged to postmaster,
> and most of them were not sockets.

Your numbers do not surprise me at all.  Backend pgsql servers live
for the lifetime of the client connection, and they open a similarly
large number of files on my system.  I throttle them by throttling the
life of the apache children who are generally the only clients.

Cheers,
Ed Loehr

Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Lincoln Yeoh
At 03:28 PM 21-02-2000 -0600, Ed Loehr wrote:

>Your numbers do not surprise me at all.  Backend pgsql servers live
>for the lifetime of the client connection, and they open a similarly
>large number of files on my system.  I throttle them by throttling the
>life of the apache children who are generally the only clients.

Oh, the poor little kiddies. Looks like I may have to commit genocide from
time to time as well. <grin>.

But doesn't the backend close the files after it's done with 'em? Or does
it not know when it's done with the files?

I've really nothing against Native Americans; is there a way to throttle or
fix our good ol' elephant instead?

Cheerio,
Link.




Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Bruce Momjian
> At 03:28 PM 21-02-2000 -0600, Ed Loehr wrote:
>
> >Your numbers do not surprise me at all.  Backend pgsql servers live
> >for the lifetime of the client connection, and they open a similarly
> >large number of files on my system.  I throttle them by throttling the
> >life of the apache children who are generally the only clients.
>
> Oh, the poor little kiddies. Looks like I may have to commit genocide from
> time to time as well. <grin>.
>
> But doesn't the backend close the files after it's done with 'em? Or does
> it not know when it's done with the files?
>
> I've really nothing against Native Americans; is there a way to throttle or
> fix our good ol' elephant instead?

It keeps files open in the expectation that it may need them again.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Ed Loehr
Bruce Momjian wrote:
>
> > At 03:28 PM 21-02-2000 -0600, Ed Loehr wrote:
> >
> > >Your numbers do not surprise me at all.  Backend pgsql servers live
> > >for the lifetime of the client connection, and they open a similarly
> > >large number of files on my system.  I throttle them by throttling the
> > >life of the apache children who are generally the only clients.
> >
> > But doesn't the backend close the files after it's done with 'em? Or does
> > it not know when it's done with the files?
> >
> Keeps files open in the expectation it may need them again.

I assumed so.  "It would be nice" if one could constrain the open-file
consumption of the backends directly, in the same manner as sort buffer
sizes, etc. (i.e., by setting a maximum number of files to open).  Some
sort of LRU cache on file handles, maybe... worthy of a TODO item?

Cheers,
Ed Loehr

Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Serge Sozonoff
Hi all,

>> But doesn't the backend close the files after it's done with 'em? Or does
>> it not know when it's done with the files?
>>
>> I've really nothing against Native Americans; is there a way to throttle or
>> fix our good ol' elephant instead?
>
>Keeps files open in the expectation it may need them again.

Well, is there not a way to tell Postgres to close the files when it is
finished?

If I have 40 tables and each table is made up of 6-7 files, including
indexes etc., then that means that per process I could be opening up to
200-240 files!

This means that with 64 db connections I could be hitting 12800-15360 open
files on my system! What is the current Linux limit without a kernel
re-compile? What is the Linux limit with a kernel re-compile?

Why can't I just tell Postgres to close those files, say, 2 minutes after it
is done with them and they have been idle?
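The multiplication can be checked directly. Note that 40 tables at 6-7 files each actually comes to 240-280 per backend; the 200-240 figure above corresponds to roughly 5-6 files per table (the exact per-table count is an estimate either way):

```python
tables = 40
connections = 64

# Per-backend and system-wide worst cases for a range of files per table
# (heap file plus indexes per table).
for files_per_table in (5, 6, 7):
    per_backend = tables * files_per_table
    system_wide = connections * per_backend
    print(files_per_table, per_backend, system_wide)
```

Either way, the system-wide total lands in the 12800-17920 range, well above common kernel defaults of the era.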

Thanks, Serge



Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Bruce Momjian
> If I have 40 tables and each table is made up of 6-7 files, including
> indexes etc., then that means that per process I could be opening up to
> 200-240 files!
>
> This means that with 64 db connections I could be hitting 12800-15360 open
> files on my system! What is the current Linux limit without a kernel
> re-compile? What is the Linux limit with a kernel re-compile?
>
> Why can't I just tell Postgres to close those files, say, 2 minutes after it
> is done with them and they have been idle?

Take a look at /pg/backend/storage/file/fd.c::pg_nofile().  If you
change the line:

    return no_files;

to

    return 32;

PostgreSQL will never have more than 32 files open at the same time.
Setting this too low will prevent the system from starting.  The default
is to allow opening up to the maximum number of open files for a
process.  There is no "close after X minutes of inactivity", though file
descriptors are closed as they reach the pg_nofile limit.

Give it a try and let us know how you like it.  Maybe we can add a
configuration option for this.
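What this amounts to — descriptors being closed as the pg_nofile limit is reached — is an LRU cache of open files. A toy sketch of the idea in Python (not the actual fd.c code, which as far as I recall keeps Vfd entries in an LRU ring and remembers each file's seek position so an evicted file can resume transparently when reopened):

```python
from collections import OrderedDict

class VfdCache:
    """Toy LRU cache of open file handles, capped at max_open.

    When the cap is reached, the least recently used file is closed;
    a later access simply reopens it.
    """
    def __init__(self, max_open):
        self.max_open = max_open
        self.open_files = OrderedDict()   # path -> file object, in LRU order

    def get(self, path):
        f = self.open_files.pop(path, None)
        if f is None:
            if len(self.open_files) >= self.max_open:
                # Evict the least recently used descriptor.
                _, victim = self.open_files.popitem(last=False)
                victim.close()
            f = open(path, "a+")
        self.open_files[path] = f         # re-insert as most recently used
        return f
```

Lowering max_open only bounds concurrently open descriptors; an evicted file is silently reopened on the next access, at the cost of an extra open() call — which matches the "too low hurts, but works" trade-off described above.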



Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Bruce Momjian
> > If I have 40 tables and each table is made up of 6-7 files, including
> > indexes etc., then that means that per process I could be opening up to
> > 200-240 files!
> >
> > This means that with 64 db connections I could be hitting 12800-15360 open
> > files on my system! What is the current Linux limit without a kernel
> > re-compile? What is the Linux limit with a kernel re-compile?
> >
> > Why can't I just tell Postgres to close those files, say, 2 minutes after it
> > is done with them and they have been idle?
>
> Take a look at /pg/backend/storage/file/fd.c::pg_nofile().  If you
> change the line:

This actually brings up a good point.  We currently cache all
descriptors up to the limit the OS will allow for a process.

Is this too aggressive?  Should we limit it to 50% of the maximum?



Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Lincoln Spiteri
Hello,

Even after significantly extending the number of file descriptors in the
kernel, I still get the occasional crash due to too many open files.  I
would say that the current policy is too aggressive under heavy loads.

Regards
Lincoln

On Mon, 28 Feb 2000, pgman@candle.pha.pa.us wrote:

>
> This actually brings up a good point.  We currently cache all
> descriptors up to the limit the OS will allow for a process.
>
> Is this too aggressive?  Should we limit it to 50% of the maximum?
--
------------------------------------------------------------------------------

Lincoln Spiteri

Manufacturing Systems
STMicroelectronics, Malta

e-mail: lincoln.spiteri@st.com

------------------------------------------------------------------------------

Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Ed Loehr
Bruce Momjian wrote:
>
> > > If I have 40 tables and each table is made up of 6-7 files, including
> > > indexes etc., then that means that per process I could be opening up to
> > > 200-240 files!
> > >
> > > This means that with 64 db connections I could be hitting 12800-15360 open
> > > files on my system! What is the current Linux limit without a kernel
> > > re-compile? What is the Linux limit with a kernel re-compile?
> > >
> > > Why can't I just tell Postgres to close those files, say, 2 minutes after it
> > > is done with them and they have been idle?
> >
> > Take a look at /pg/backend/storage/file/fd.c::pg_nofile().  If you
> > change the line:
>
> This actually brings up a good point.  We currently cache all
> descriptors up to the limit the OS will allow for a process.
>
> Is this too aggressive?  Should we limit it to 50% of the maximum?

It seems difficult to guess one setting correctly for everyone.  How
difficult would it be to make the limit configurable, so that folks
could set it either as a hard limit (e.g., 100 open files per backend or
1000 per server) or as a percentage of the OS max?  SET would seem most
convenient from a user's point of view, but even something for configure
would be very useful in managing resources.

Cheers,
Ed Loehr

Re: open pgsql files (was Re: [GENERAL] Mime-Version: 1.0)

From: Tatsuo Ishii
> Even after significantly extending the number of file descriptors in the
> kernel, I still get the occasional crash due to too many open files.  I
> would say that the current policy is too aggressive under heavy loads.
>> This actually brings up a good point.  We currently cache all
>> descriptors up to the limit the OS will allow for a process.
>>
>> Is this too aggressive?  Should we limit it to 50% of the maximum?

We could limit the number of open files per backend by using limit,
ulimit, etc., if all file accesses went through Vfd.  Is there any
reason to use open() directly, for example, in mdblindwrt()?

Also, I have noticed that some files, such as pg_internal.init, need not
be kept open and should be closed after we finish using them, to save
an fd.
--
Tatsuo Ishii
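The limit/ulimit approach mentioned above corresponds to the setrlimit() call, which `ulimit -n` wraps. A sketch of lowering the soft descriptor limit for the current process (POSIX, via Python's resource module; an external cap like this only constrains the backend gracefully if all file accesses go through a layer like Vfd that tolerates running out of descriptors):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower only the soft limit; the hard limit is the ceiling an
# unprivileged process cannot exceed.  Already-open descriptors are
# unaffected -- the new limit applies to subsequent open() calls.
new_soft = min(soft, 256)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

A process that exceeds the soft limit gets EMFILE from open() rather than a crash, which is exactly the condition a Vfd-style layer is built to absorb.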