Thread: Performance degradation at client site

Performance degradation at client site

From: Bill Chandler
Hello,

Client is seeing continual performance degradation on
updates and queries from a large database.  Any help
appreciated.

Client is using PostgreSQL 7.4.2 on a SPARCv9 650 MHz
CPU, 2 GB RAM, running Solaris.

We have the following tables:

EVENT_TBL
evt_id       bigserial, unique
d1           numeric(13)
obj_id       numeric(6)
d2           numeric(13)
val          varchar(22)
correction   numeric(1)
delta        numeric(13)

CONTROL_TBL
obj_id       numeric(6), unique
name         varchar(22), unique
dtype        numeric(2)
dfreq        numeric(2)

Indexes:
EVENT_TBL.d1 (non-clustered)
EVENT_TBL.obj_id (non-clustered)
CONTROL_TBL.obj_id (non-clustered)
CONTROL_TBL.name (clustered)

Update processes run continually throughout the day in
which rows are inserted but none deleted.  The
EVENT_TBL is currently very big, w/ over 5 million
rows.  The CONTROL_TBL is fairly small w/ around 4000
rows.  We're doing a "VACUUM ANALYZE" on each table
after each update has been completed and changes
committed.  Each night we drop all the indexes and
recreate them.
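
For reference, a sketch of that nightly maintenance (the index names here are assumptions based on the plan output below; the actual names at the client site may differ):

```sql
-- Nightly maintenance sketch (PostgreSQL 7.4): drop and recreate the
-- user-created indexes after the day's inserts, then refresh stats.
DROP INDEX event_tbl_d1_idx;
DROP INDEX event_tbl_obj_id_idx;
CREATE INDEX event_tbl_d1_idx ON event_tbl (d1);
CREATE INDEX event_tbl_obj_id_idx ON event_tbl (obj_id);

-- Note: this does NOT rebuild event_tbl_evt_id_key, the index backing
-- the UNIQUE constraint on evt_id -- which is the index asked about below.
VACUUM ANALYZE event_tbl;
VACUUM ANALYZE control_tbl;
```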

Do I understand correctly, however, that when you
create a unique SERIAL column an index is
automatically created on that column?  If so, does
that sound like a possible culprit?  We are not doing
any reindexing on that index at all.  Could it be
suffering from index bloat?  Do we need to
periodically and explicitly run the command:

reindex index event_tbl_evt_id_key;

Even seemingly simple commands are taking forever.
For example:

select evt_id from event_tbl where evt_id=1;

takes over a minute to complete.


Here is a slightly more complicated example along with
its explain output:

select events.evt_id, ctrl.name, events.d1,
       events.val, events.d2, events.correction,
       ctrl.type, ctrl.freq
from event_tbl events, control_tbl ctrl
where events.obj_id = ctrl.obj_id
  and events.evt_id > 3690000
order by events.evt_id limit 2000;

                     QUERY PLAN
-----------------------------------------------------------------
 Limit  (cost=0.00..6248.56 rows=2000 width=118)
   ->  Nested Loop  (cost=0.00..7540780.32 rows=2413606 width=118)
         ->  Index Scan using event_tbl_evt_id_key on event_tbl events  (cost=0.00..237208.57 rows=2413606 width=63)
               Filter: (evt_id > 3690000)
         ->  Index Scan using control_tbl_obj_id_idx on control_tbl ctrl  (cost=0.00..3.01 rows=1 width=75)
               Index Cond: ("outer".obj_id = ctrl.obj_id)
(6 rows)

This takes minutes to return 2000 rows.

Thank you in advance.

Bill




Re: Performance degradation at client site

From: Tom Lane
Bill Chandler <billybobc1210@yahoo.com> writes:
> Update processes run continually throughout the day in
> which rows are inserted but none deleted.

What about row updates?

> Even seemingly simple commands are taking forever.
> For example:
> select evt_id from event_tbl where evt_id=1;
> takes over a minute to complete.

Since evt_id is a bigint, you need to write that as

select evt_id from event_tbl where evt_id=1::bigint;

or various other locutions that have the same effect.  What you have is
a bigint-vs-int comparison, which is not indexable in releases before 8.0.
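
As a sketch, the larger query from the original post rewritten with the cast (using the dtype/dfreq column names from the CONTROL_TBL definition):

```sql
-- Casting the literal to bigint lets the 7.4 planner use the index on
-- evt_id for the range condition rather than filtering every row.
select events.evt_id, ctrl.name, events.d1,
       events.val, events.d2, events.correction,
       ctrl.dtype, ctrl.dfreq
from event_tbl events, control_tbl ctrl
where events.obj_id = ctrl.obj_id
  and events.evt_id > 3690000::bigint
order by events.evt_id limit 2000;
```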

The same problem is occurring in your other example.

            regards, tom lane

Re: Performance degradation at client site

From: PFC
> Do I understand correctly, however, that when you
> create a unique SERIAL column an index is
> automatically created on that column?  If so, does
> that sound like a possible culprit?  We are not doing
> any reindexing on that index at all.  Could it be
> suffering from index bloat?  Do we need to
> periodically explicity run the command:

    SERIAL creates a sequence, not an index.
    UNIQUE and PRIMARY KEY do create indexes.
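
A minimal illustration of the distinction (hypothetical table names):

```sql
-- SERIAL alone creates a sequence and a DEFAULT, but no index:
create table t1 (id serial);

-- Adding UNIQUE (or PRIMARY KEY) is what creates the btree index --
-- this is where an index like event_tbl_evt_id_key comes from:
create table t2 (id bigserial unique);
```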


    Regards.

Re: Performance degradation at client site

From: Bill Chandler
Tom,

Thank you!  I will have the client try that.  What
about the event_tbl_evt_id_key index question?  Could
that also be causing me difficulties?  Should I
periodically reindex it?

thanks,

Bill

--- Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Bill Chandler <billybobc1210@yahoo.com> writes:
> > Update processes run continually throughout the
> day in
> > which rows are inserted but none deleted.
>
> What about row updates?
>
> > Even seemingly simple commands are taking forever.
>
> > For example:
> > select evt_id from event_tbl where evt_id=1;
> > takes over a minute to complete.
>
> Since evt_id is a bigint, you need to write that as
>
> select evt_id from event_tbl where evt_id=1::bigint;
>
> or various other locutions that have the same
> effect.  What you have is
> a bigint-vs-int comparison, which is not indexable
> in releases before 8.0.
>
> The same problem is occurring in your other example.
>
>             regards, tom lane
>

