Performance degradation at client site - Mailing list pgsql-performance

From Bill Chandler
Subject Performance degradation at client site
Msg-id 20050131171918.20665.qmail@web51410.mail.yahoo.com
List pgsql-performance
Hello,

Our client is seeing continual performance degradation on
updates and queries against a large database.  Any help
is appreciated.

The client is running PostgreSQL 7.4.2 on Solaris, on a
650 MHz SPARCv9 CPU with 2 GB of RAM.

We have the following tables:

EVENT_TBL
evt_id       bigserial, unique
d1           numeric(13)
obj_id       numeric(6)
d2           numeric(13)
val          varchar(22)
correction   numeric(1)
delta        numeric(13)

CONTROL_TBL
obj_id       numeric(6), unique
name         varchar(22), unique
dtype        numeric(2)
dfreq        numeric(2)

Indexes:
EVENT_TBL.d1 (non-clustered)
EVENT_TBL.obj_id (non-clustered)
CONTROL_TBL.obj_id (non-clustered)
CONTROL_TBL.name (clustered)
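
In DDL terms I believe this corresponds roughly to the
following; the index and constraint names are my guesses,
except for the two that appear in the query plans below:

create table event_tbl (
    evt_id      bigserial unique,  -- implicitly creates index event_tbl_evt_id_key
    d1          numeric(13),
    obj_id      numeric(6),
    d2          numeric(13),
    val         varchar(22),
    correction  numeric(1),
    delta       numeric(13)
);

create table control_tbl (
    obj_id      numeric(6) unique,
    name        varchar(22) unique,
    dtype       numeric(2),
    dfreq       numeric(2)
);

create index event_tbl_d1_idx on event_tbl (d1);
create index event_tbl_obj_id_idx on event_tbl (obj_id);
create index control_tbl_obj_id_idx on control_tbl (obj_id);
cluster control_tbl_name_key on control_tbl;  -- the "clustered" index on name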

Update processes run continually throughout the day;
rows are inserted but none are deleted.  The EVENT_TBL
is currently very large, with over 5 million rows.  The
CONTROL_TBL is fairly small, with around 4,000 rows.
We do a "VACUUM ANALYZE" on each table after each
update has completed and its changes are committed.
Each night we drop all the indexes and recreate them.
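
Concretely, the maintenance cycle looks roughly like this
(again, index names other than the ones in the plans below
are my guesses):

vacuum analyze event_tbl;   -- after each committed update batch
vacuum analyze control_tbl;

-- nightly:
drop index event_tbl_d1_idx;
drop index event_tbl_obj_id_idx;
drop index control_tbl_obj_id_idx;
create index event_tbl_d1_idx on event_tbl (d1);
create index event_tbl_obj_id_idx on event_tbl (obj_id);
create index control_tbl_obj_id_idx on control_tbl (obj_id);

Note that the implicit index on evt_id
(event_tbl_evt_id_key) is not among the ones we drop and
recreate, which is what prompts the question below.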

Do I understand correctly that when you create a unique
SERIAL column an index is automatically created on that
column?  If so, does that sound like a possible culprit?
We are not reindexing that index at all, so could it be
suffering from index bloat?  Do we need to periodically
run the following explicitly?

reindex index event_tbl_evt_id_key;
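
As a crude bloat check, I assume we could compare the
index's page count against the table's in pg_class
(relpages is only updated by VACUUM/ANALYZE, so it
reflects the last run):

select relname, relpages, reltuples
from pg_class
where relname in ('event_tbl', 'event_tbl_evt_id_key');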

Even seemingly simple commands are taking forever.
For example:

select evt_id from event_tbl where evt_id=1;

takes over a minute to complete.
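
One thing I wonder about here: evt_id is a bigint, and my
understanding is that the 7.4 planner will not use an
index when a bigint column is compared to a plain integer
literal (the literal is typed int4).  If that is what is
happening, writing the constant with an explicit cast (or
quoting it) should allow an index scan:

select evt_id from event_tbl where evt_id = 1::bigint;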


Here is a slightly more complicated example, along with
its explain output (dtype and dfreq are the CONTROL_TBL
columns listed above):

select events.evt_id, ctrl.name, events.d1, events.val,
       events.d2, events.correction, ctrl.dtype, ctrl.dfreq
from event_tbl events, control_tbl ctrl
where events.obj_id = ctrl.obj_id
  and events.evt_id > 3690000
order by events.evt_id
limit 2000;

                     QUERY PLAN
-----------------------------------------------------------------
 Limit  (cost=0.00..6248.56 rows=2000 width=118)
   ->  Nested Loop  (cost=0.00..7540780.32 rows=2413606 width=118)
         ->  Index Scan using event_tbl_evt_id_key on event_tbl events  (cost=0.00..237208.57 rows=2413606 width=63)
               Filter: (evt_id > 3690000)
         ->  Index Scan using control_tbl_obj_id_idx on control_tbl ctrl  (cost=0.00..3.01 rows=1 width=75)
               Index Cond: ("outer".obj_id = ctrl.obj_id)
(6 rows)

This takes minutes to return 2000 rows.
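
What also catches my eye is that the plan shows
"Filter: (evt_id > 3690000)" rather than an Index Cond,
which I read as the index being walked only to satisfy the
ORDER BY while every row is checked against the filter.
If that is the same bigint-vs-integer-literal issue as
above, the cast variant would be:

where events.obj_id = ctrl.obj_id
  and events.evt_id > 3690000::bigint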

Thank you in advance.

Bill



