Re: postgresql meltdown on PlanetMath.org - Mailing list pgsql-performance

From Sean Chittenden
Subject Re: postgresql meltdown on PlanetMath.org
Date
Msg-id 20030316061208.GA62529@perrin.int.nxad.com
In response to postgresql meltdown on PlanetMath.org  (Aaron Krowne <akrowne@vt.edu>)
Responses Re: postgresql meltdown on PlanetMath.org
List pgsql-performance
> As the topic suggests, I am having fairly critical troubles with
> postgresql on PlanetMath.org (a site which I run).  You can go there and
> try to pull up some entries and you will see the problem: everything is
> incredibly slow.

Have you read the following?

http://developer.postgresql.org/docs/postgres/performance-tips.html

> First: the machine.  The machine is not too spectactular, but it is not
> so bad that the performance currently witnessed should be happening.  It
> is a dual PIII-650 with 512MB of RAM and a 20gb IDE drive (yes, DMA is
> on).  There is plenty of free space on the drive.

This shouldn't be an issue for the load you describe.  Even a P-100
should be able to keep up; it depends more on the queries you're
running than on the hardware.

> Now, the optimisations I have tried:

*) Stick with btrees.
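If you've been experimenting with other index types, plain btree is the safe default.  Something along these lines (table and column names are placeholders, not from your schema):

```sql
-- btree is PostgreSQL's default index type, so USING btree is implicit
CREATE INDEX entries_title_idx ON entries USING btree (title);
```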

> - I then tried increasing the machines shared memory max to 75% of the
>   physical memory, and scaled postgresql's buffers accordingly.  This
>   also sped things up for a while, but again resulted in eventual
>   degeneration.  Even worse, there were occasional crashes due to
>   running out of memory that (according to my calculations) shouldn't
>   have been happening.

*) Don't do this; go back to near-default levels.  I bet this is
hurting your setup.  Giving most of RAM to shared buffers just
double-caches pages the OS is already caching, and starves everything
else.

> - Lastly, I tried reducing the shared memory max and limiting postgresql
>   to more conservative values, although still not to the out-of-box
>   values.  Right now shared memory max on the system is 128mb,
>   postgres's shared buffers are at 64mb, sort_mem is at 16mb, and
>   effective cache size is at 10mb.

*) You shouldn't have to do this either.

> For perspective, the size of the PlanetMath database dump is 24mb.
> It should be able to fit in memory easily, so I'm not sure what I'm
> doing wrong regarding the caching.

I hate to say this, but this sounds like a config error.  :-/

> For the most trivial request, Postgresql takes up basically all the
> CPU for the duration of the request.  The load average of the
> machine is over-unity at all times, sometimes as bad as being the
> 30's.  None of this happens without postgres running, so it is
> definitely the culprit.

*) Send an EXPLAIN statement as specified here:

http://developer.postgresql.org/docs/postgres/performance-tips.html#USING-EXPLAIN
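For instance, against one of your slow pages (query and table name are made up here):

```sql
-- EXPLAIN ANALYZE runs the query and reports actual row counts and timings,
-- which shows where the planner's estimates diverge from reality
EXPLAIN ANALYZE SELECT * FROM entries WHERE title = 'Pythagorean theorem';
```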

> The site averages about one hit every twenty seconds.  This should not
> be an overwhelming load, especially for what is just pulling up cached
> information 99% of the time.

*) Have you done a vacuum analyze?

http://developer.postgresql.org/docs/postgres/populate.html#POPULATE-ANALYZE
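If you haven't, this alone often explains "fast at first, then degenerates" behavior: dead tuples pile up and the planner's statistics go stale.

```sql
-- reclaims space from dead rows and refreshes planner statistics
-- for every table in the current database
VACUUM ANALYZE;
```

Running it regularly from cron is the usual fix for a steadily updated database.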

> Given this scenario, can anyone advise?  I am particularly puzzled
> as to why everything I tried initially helped, but always
> degenerated rather rapidly to a near standstill.  It seems to me
> that everything should be able to be cached in memory with no
> problem, perhaps I need to force this more explicitly.

*) Send the EXPLAIN output and we can work from there.

> My next step, if I cannot fix this, is to try mysql =(

Bah, don't throw down the gauntlet, it's pretty clear this is a local
issue and not a problem with the DB.  :)

-sc

--
Sean Chittenden
