Re: Performance Woes - Mailing list pgsql-performance
From | Scott Mohekey
Subject | Re: Performance Woes
Date |
Msg-id | e07118820705091850s1941e5dek32f57ea3ed1c9d04@mail.gmail.com
In response to | Re: Performance Woes (Jeff Davis <pgsql@j-davis.com>)
List | pgsql-performance
Just adding a bit of relevant information:
We have the kernel file-max setting set to 297834 (256 per 4 MB of RAM).
/proc/sys/fs/file-nr reports roughly 13000 allocated handles, and the free count is always zero.
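These kernel counters can be read directly from procfs. A minimal sketch (Linux; the paths are the standard procfs locations, and the awk arithmetic is just illustration of how the file-nr fields relate):

```shell
# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max
# Three fields: allocated handles, allocated-but-unused handles, and the ceiling
cat /proc/sys/fs/file-nr
# Handles actually in use = allocated minus allocated-but-unused
awk '{print "in use:", $1 - $2}' /proc/sys/fs/file-nr
```

A second field of zero, as reported above, means every allocated handle is in use.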
--
Scott Mohekey
Systems Administrator
Telogis
Intelligent Location Technologies
NOTICE:
This message (including any attachments) contains CONFIDENTIAL INFORMATION intended for a specific individual and purpose, and is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this message, or the taking of any action based on it, is strictly prohibited.
On 10/05/07, Jeff Davis <pgsql@j-davis.com> wrote:
On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
> > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile, fd.c:471
> > 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms
> > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query, postgres.c:1090
> >
> > So we decreased the max_files_per_process to 800. This took care
> > of the error **BUT** about quadrupled the IO wait that is happening
> > on the machine. It went from a peak of about 50% to peaks of over
> > 200% (4-processor machines, 4 GB RAM, RAID). The load on the
> > machine remained constant.
> >
>
> Sounds to me like you just need to up the total amount of open files
> allowed by the operating system.
It looks more like the opposite; here are the docs for
max_files_per_process:
"Sets the maximum number of simultaneously open files allowed to each
server subprocess. The default is one thousand files. If the kernel is
enforcing a safe per-process limit, you don't need to worry about this
setting. But on some platforms (notably, most BSD systems), the kernel
will allow individual processes to open many more files than the system
can really support when a large number of processes all try to open that
many files. If you find yourself seeing "Too many open files" failures,
try reducing this setting. This parameter can only be set at server
start."
To me, that means that his machine is allowing the new FD to be created,
but then can't really support that many, so it gives an error.
Ralph, how many connections do you have open at once? It seems like the
machine perhaps just can't handle that many FDs in all of those
processes at once.
That is a lot of tables. Maybe a different OS will handle it better?
Maybe there's some way that you can use fewer connections and then the
OS could still handle it?
Regards,
Jeff Davis
---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster
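One concrete way to answer Jeff's question about per-process FD pressure is to count open descriptors via procfs. A sketch (Linux; the shell's own PID `$$` stands in here for a backend PID, and the `pgrep` line is an illustrative way to count backends):

```shell
# Open file descriptors held by one process (substitute a backend PID for $$)
ls /proc/$$/fd | wc -l
# A rough total is that count times the number of backends, e.g.:
# pgrep -c postgres
```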