On Thu, 21 Mar 2002, Martijn van Oosterhout wrote:
> On Wed, Mar 20, 2002 at 06:04:17PM -0500, Joshua Hoover wrote:
> > I'm running PostgreSQL 7.1.3 on Red Hat Linux 7.1 and believe there is a
> > problem with my PostgreSQL server. I have a PHP application on a separate
> > server accessing the PostgreSQL server. The PostgreSQL server seems to be
> > getting hammered, as even simple queries on indexed columns are taking
> > FOREVER. When I run top, here I normally see at least 50 entries similar to
> > these for postmaster:
> >
> > 19336 postgres 9 0 92960 90M 92028 S 0.0 9.0 0:18 postmaster
> > 19341 postgres 9 0 87996 85M 87140 S 0.0 8.5 0:09 postmaster
> > 19355 postgres 9 0 87984 85M 87112 S 11.6 8.5 0:09 postmaster
> > 19337 postgres 9 0 87952 85M 87092 S 0.0 8.5 0:09 postmaster
>
> 90MB per process? wow. Can you look in the server logs to see which query is
> taking all the time?
>
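On the logging question: 7.1 doesn't log statements by default, so there may simply be nothing useful in the log yet. If I remember the option names right (this is from memory, so check the docs), something like this in postgresql.conf turns it on; restart or SIGHUP the postmaster afterwards:

```
# postgresql.conf -- 7.1-era option names, from memory
debug_print_query = true   # write each statement to the server log
log_timestamp = true       # timestamp each log line
log_pid = true             # include the backend PID, to match against top
```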
I can't help with the problem, but is 90MB such a shock? I can get close to that
just by running something like:
SELECT * FROM big_table
WHERE time > 'sometime'
  AND time < 'someothertime'
  AND name IN ('first', 'second', 'third', 'fourth', 'fifth')
ORDER BY time;
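For what it's worth, sticking EXPLAIN in front of a query like that shows where the cost goes (same placeholder table and values as above):

```sql
-- same hypothetical query, just prefixed with EXPLAIN
EXPLAIN
SELECT * FROM big_table
WHERE time > 'sometime'
  AND time < 'someothertime'
  AND name IN ('first', 'second', 'third', 'fourth', 'fifth')
ORDER BY time;
```

When the ORDER BY can't be satisfied from an index, the Sort node at the top of the plan is usually where the memory disappears.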
Indeed, I got blasé about running such a thing; the last time, rather than the
backend instance dying, it froze my kernel. I haven't done it again.
BTW, the killer was the fifth name; up to that point things got large but
stayed within the capabilities of the machine. I tried everything I could think
of to get resource limits applied to the backend processes (short of editing
and recompiling from source), but nothing worked. There was no change when
switching from an IN test to a string of ORs.
(That was PostgreSQL 6.5.1, I think; since upgraded to 7.2 on FreeBSD 3.3-STABLE.)
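One thing I never got around to trying properly: a ulimit in the script that starts the postmaster should be inherited by every backend it forks. Something along these lines (my own untested sketch; the number is made up, and -v works on Linux shells at least):

```shell
#!/bin/sh
# Hypothetical startup wrapper: cap per-process virtual memory before
# launching the postmaster, so every backend it forks inherits the cap.
# 131072 KB = 128 MB is an arbitrary example figure.
ulimit -v 131072

# Children inherit the limit -- read it back from a subshell to check:
sh -c 'ulimit -v'    # prints 131072

# then start the server as usual, e.g.:
#   su -l postgres -c 'pg_ctl start -D /usr/local/pgsql/data'
```

There's also sort_mem in postgresql.conf (the backend's -S option), which caps memory per sort step; though if the growth happens in the planner rather than the sort, I'd guess it wouldn't help.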
Why am I saying this? No idea. I'm just not sure why a 90MB footprint for a DB
backend would be so shocking.
Nigel J.Andrews
Logictree Systems Limited