Thread: mod_perl, postgres, and the file system: limitations

mod_perl, postgres, and the file system: limitations

From: "Thomas F. O'Connell"
ok, so i've been using postgres in a production environment for some
time now, and for a long time things were peachy.

recently, however, there have been some disturbing trends.

during peak usage periods, i've run into a series of walls, each of
which i have some control over and am trying to set reasonably:

- the number of mod_perl clients allowed to attempt a postgres
  connection at any given time
- the number of postgres backends i'm willing to spawn at any time
- the number of files the system allows to be open at once

the first two affect users directly: requests get denied if either
limit is set too low. that's not too big a deal, but i'd like to strike
a balance that serves as many requests as possible without overly
taxing the filesystem.
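
(i gather the usual mod_perl answer here is persistent connections via
Apache::DBI, so that peak backends track Apache's MaxClients rather
than raw request volume. a minimal sketch of the startup file i'm
experimenting with -- the dsn, user, and password are placeholders:)

    # startup.pl -- pulled in from httpd.conf with: PerlRequire startup.pl
    # Apache::DBI must be loaded before DBI so that DBI->connect()
    # calls are transparently cached, one connection per Apache child.
    use strict;
    use Apache::DBI ();
    use DBI ();

    # open the connection when each child starts, so the connect cost
    # is paid once per child rather than on its first request.
    Apache::DBI->connect_on_init(
        "dbi:Pg:dbname=mydb", "mydbuser", "secret",
        { AutoCommit => 1, PrintError => 1 },
    );

    1;

with something like that in place, the postgres backend limit only has
to cover MaxClients (per distinct connect string), which would make the
first two walls much easier to set consistently.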

and i guess the filesystem is the source of most of my questions. i ran
into the "too many open files" error a few times and raised the limit.
after that, though, along with an increase in the first two limits
above, the system became taxed to the point of unresponsiveness, with
five digits' worth of files open (i lost the ability to count them
because load was way too high).
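
(as an aside: when load is too high for lsof, a cheap way to get a
count on linux is to walk /proc directly, or just read the first field
of /proc/sys/fs/file-nr. a rough sketch of the former:)

    #!/usr/bin/perl -w
    # count open file descriptors system-wide by walking /proc/*/fd.
    # much cheaper than lsof under load; run as root to see everything.
    use strict;

    my $count = 0;
    for my $fd_dir (glob "/proc/[0-9]*/fd") {
        opendir(my $dh, $fd_dir) or next;   # process may have exited
        $count += grep { $_ ne '.' && $_ ne '..' } readdir $dh;
        closedir $dh;
    }
    print "open file descriptors: $count\n";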

so i guess this is more of an infrastructure issue than anything else.
here, then, are my questions:

1) what is a reasonable number of files to allow postgres to open in a
production environment? let's say there could be 200-500 postgres
backends running. how many files does postgres use for each active
backend? i haven't been able to find a good estimate in the
documentation.
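
for concreteness, the arithmetic i'm trying to pin down looks like
this; the per-backend cap is the number i can't find (i believe newer
versions expose it as max_files_per_process in postgresql.conf, but the
figure below is an assumption):

    #!/usr/bin/perl -w
    # back-of-the-envelope worst case for postgres's open files.
    # both inputs are assumptions to be replaced with real values.
    use strict;

    my $backends          = 500;    # peak backends, from the range above
    my $files_per_backend = 1000;   # ASSUMED per-backend descriptor cap

    my $worst_case = $backends * $files_per_backend;
    print "worst-case open files: $worst_case\n";   # 500000

    # the kernel-wide table (fs.file-max on linux) has to comfortably
    # exceed this, or the per-backend cap has to come down.

even at a few hundred backends, that would easily explain five digits'
worth of open files.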

2) does anyone have any good benchmarks for the actual physical needs
of a production postgres server, i.e., amount of disk space, RAM, and
processor speed?

3) since postgres is, at this point, monolithic (meaning there is no
production-ready way to replicate across multiple servers), is there a
benchmark for how many clients and backends people are running on a
single machine to strike the best balance between machine performance
and the number of public requests it can answer?

i know this is a lot, but so far none of the books or documentation
really covers tuning, and if anyone has any suggestions, i'd love to
hear them.

right now, i'm plagued with a postgres setup that works fine except
under stress, and i guess what i'm saying is that i can't figure out
how to cope with the stress!

thanks!

-tfo

Re: mod_perl, postgres, and the file system: limitations

From: "Thomas F. O'Connell"
In article <tfo-182ADD.15141812122001@news.hub.org>,
 "Thomas F. O'Connell" <tfo@monsterlabs.com> wrote:

> recently, however, there have been some disturbing trends.

additionally, i've been seeing the error "pq_recvbuf: unexpected EOF on
client connection" quite a bit during periods of heavy postgres activity.

according to Tom (Lane), the only known causes of this are a psql
scenario and an ODBC problem. i'm using mod_perl, so i don't know what
could be causing these clients to die ungracefully.
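
(one thing i may try in the meantime: make each Apache child disconnect
explicitly when it exits, so an ordinary child recycle can't show up as
an unexpected EOF. a sketch for mod_perl 1.x -- the My::ChildExit name
is made up; it would be wired in with "PerlChildExitHandler
My::ChildExit" in httpd.conf:)

    package My::ChildExit;
    # close every cached DBI connection when an Apache child exits,
    # so postgres sees a clean termination instead of a dropped socket.
    use strict;
    use DBI ();
    use Apache::Constants qw(OK);

    sub handler {
        DBI->disconnect_all;   # asks each loaded DBI driver to disconnect
        return OK;
    }

    1;

that still wouldn't cover a child that dies outright, of course, which
may be exactly what's happening.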

any thoughts?

thanks!

-tfo

Re: mod_perl, postgres, and the file system: limitations

From: Doug McNaught
"Thomas F. O'Connell" <tfo@monsterlabs.com> writes:

> additionally, i've been seeing the error "pq_recvbuf: unexpected EOF on
> client connection" quite a bit during periods of heavy postgres activity.
>
> according to Tom (Lane), the only known causes of this are a psql
> scenario and an ODBC problem. i'm using mod_perl, so i don't know what
> could be causing these clients to die ungracefully.

It's possible that something in mod_perl is crashing that Apache
backend--anything in the webserver logs?

-Doug
--
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863