Re: Too many open files in system FATAL2 - Mailing list pgsql-general

From Shaun Thomas
Subject Re: Too many open files in system FATAL2
Date
Msg-id Pine.LNX.4.33L2.0108311130400.1999-100000@hamster.lee.net
In response to Too many open files in system FATAL2  ("Christian MEUNIER" <webmaster@magelo.com>)
List pgsql-general
On Thu, 30 Aug 2001, Christian MEUNIER wrote:

> the following happened yesterday:
>
> postmaster: StreamConnection: accept: Too many open files in system
> postmaster: StreamConnection: accept: Too many open files in system
> postmaster: StreamConnection: accept: Too many open files in system
> 2001-08-30 03:04:27 FATAL 2:  InitOpen(logfile 3 seg 199) failed: Too many
> open files in system
> Server process (pid 21508) exited with status 512 at Thu Aug 30 03:04:27
> 2001
> Terminating any active server processes...

Most unix systems have a preset, system-wide limit on the number of open
file handles shared by every running application.  If you're running a
lot of other applications on your server alongside postgres, they may be
consuming the file handles that postgres needs.

Or, your database may just be making enough connections that it's
consuming all open file handles.  Whatever OS you're using, check
the manual to see how to add more file handles.  This may involve
recompiling the kernel.
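
For example, on Linux you can usually check and raise the system-wide
limit without recompiling anything.  This is just a sketch (the exact
files and a sensible value depend on your kernel and distribution;
65536 below is only an illustrative number):

  # current system-wide limit, and how many handles are actually in use
  cat /proc/sys/fs/file-max
  cat /proc/sys/fs/file-nr

  # raise the limit on the running system
  echo 65536 > /proc/sys/fs/file-max    # or: sysctl -w fs.file-max=65536

  # make it stick across reboots (read by sysctl at boot on most distros)
  echo "fs.file-max = 65536" >> /etc/sysctl.conf

  # also check the per-process limit in the shell that starts the postmaster
  ulimit -n

Other unixes have an equivalent knob under a different name, so check
your platform's documentation.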

Your other problem might be a deadlock.  If postgres gets deadlocked in a
transaction, or something holds a lock during a vacuum, each subsequent
connection will connect, try a query, and then wait indefinitely in an
idle state while still holding its file handles.  This keeps up until
there are possibly hundreds of postgres connections (if you allow that
many) tying up more and more file handles until there are none left.
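
A quick way to see whether that is what's happening (a rough check; it
assumes your platform lets postgres update its process titles, which
most do) is to look at the backends from the shell:

  # list the backends and what each one claims to be doing
  # (use "ps -ef" instead on SysV-style systems)
  ps auxww | grep postgres

  # a pile of backends showing "idle in transaction" or "waiting",
  # all stuck behind one long-lived transaction, is the classic sign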

In any case, I'd check the other apps first.  Then see if the kernel is
compiled with an adequate number of file handles.  Then check through
your application for deadlock conditions and for vacuums run inside
transactions.  (Don't do that, by the way.)

If you have a high-traffic DB with lots of inserts, updates, and
deletes, your indexes might be disgustingly out of sync, turning your
DB into a slow memory, CPU, and file-handle hogging dog.  Postgres has
a REINDEX command; run that on your DB and see if the problem goes away.
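
Something like this, run during a quiet period since REINDEX locks the
table while it rebuilds it (the database and table names here are just
placeholders):

  # rebuild the indexes on your most heavily updated tables
  psql yourdb -c "REINDEX TABLE your_busy_table"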

--
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas                INN Database Programmer              |
| Phone: (309) 743-0812          Fax  : (309) 743-0830                |
| Email: sthomas@townnews.com    AIM  : trifthen                      |
| Web  : hamster.lee.net                                              |
|                                                                     |
|     "Most of our lives are about proving something, either to       |
|      ourselves or to someone else."                                 |
|                                           -- Anonymous              |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+


