I'm not sure whether you have already done this, but try a VACUUM ANALYZE and
also dropping and recreating the indexes.
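Something along these lines (a rough sketch -- the table and index names
below are only placeholders, adjust them to match your own schema):

    -- refresh the planner's statistics after the nightly delete
    VACUUM ANALYZE your_table;

    -- rebuild a possibly bloated index by dropping and recreating it
    DROP INDEX your_table_dt_idx;
    CREATE INDEX your_table_dt_idx ON your_table (dt);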
Good luck.
----- Original Message -----
From: "Wiecha, Martin" <wiecha@airdata.ag>
To: <pgsql-admin@postgresql.org>
Sent: Wednesday, October 31, 2001 4:57 PM
Subject: [ADMIN] performance problem
> Hi everybody,
>
> I'm having a serious performance problem. We're using PostgreSQL (7.1.3)
> for data collection (approx. 3,000,000 new records per day). Every night
> all records older than 3 days are deleted, and after that a vacuumdb is
> run. The table has about 7-8 M records and takes about 2.5 GB on the
> file-system. I noticed that the time consumed by vacuum has increased from
> about 5 minutes to 11 minutes over the last 5 weeks, which I assumed was
> because the tables on the system have grown over that period.
>
> The whole system worked well for several weeks. Starting last night,
> everything slowed down. Simple SQL commands (Select date(dt), sum(bytes)
> from table where source between 'xxx.xxx.xxx.xxx' and 'xxx.xxx.xxx.xxx'
> and dt >= '10-30-2001' and dt < '10-31-2001 00:00:00' group by date(dt))
> that used to complete within a few seconds now take minutes to compute...
> There were no recent changes to the system. The file-system has no errors.
>
> PostgreSQL is installed on Red Hat 7.1 with kernel 2.4.2-2. The hardware
> is a PIII-500 with 256MB RAM and a 10GB IDE HD.
>
> Does anyone have an idea what the reason could be and how it could be fixed?
>
> Thanks in advance!
>
> Martin
>