Thread: database speed

database speed

From
"Chris Stokes"
Date:
Hi all,

We currently use PG for the local database on our POS workstations in shops. Data is then extracted and sent back to
head office. Also, product updates etc. are regularly sent to the shops. I have noticed over time that the shop PG
databases get slower and slower. If I do a pg_dump, recreate the database and then restore the data, all speed problems
are gone and the database folder shrinks in size substantially.

From this information, could anyone tell me what the likely causes are, and what we may or may not be doing in our
extractions and updates that is not good PG insert, update, delete procedure?

Thanks

Chris Stokes
Senior Systems Consultant
Bass Software

Phone: +61 3 8415 9305 (Direct)
Fax: +61 3 9427 1752
Mobile: +61 409 851 447

chriss@basssoftware.com
www.basssoftware.com

Level 5 534 Church Street
Richmond, Victoria, 3121, Australia




Re: database speed

From
Doug McNaught
Date:
"Chris Stokes" <ChrisS@basssoftware.com> writes:

> WE currently use PG for the local database on our POS workstations
> in shops. Data is then extracted and sent back to head office. Also
> product updates etc are regularly sent to the shops. I have noticed
> over time that the shop PG databases get slower and slower. If I do
> a pg_dump, recreate the database and then restore the data - all
> speed problems are gone and the database folder shrinks in size
> substantially.
>
> From this information could anyone tell me what are the likely
> causes and what we may or may not be doing in our extractions and
> updates that are not good PG insert, update, delete procedures

You should periodically be doing:

a) VACUUM (at least once a day, probably) and
b) REINDEX

Are you doing both of these?  See the docs if you're not familiar with
them.

The REINDEX is needed because VACUUM doesn't free up index space in
some circumstances.  7.4 (currently in late beta) will fix this.
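
In practice both steps can go in one nightly pass; a minimal sketch, where "orders" is a made-up name standing in for any heavily updated table:

```sql
-- Nightly maintenance pass (pre-7.4 era).
VACUUM ANALYZE;        -- reclaim dead row versions and refresh planner statistics
REINDEX TABLE orders;  -- rebuild the table's indexes to recover index space
```

REINDEX locks the table while it runs, so schedule it for a quiet period.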

-Doug

Re: database speed

From
"Chris Stokes"
Date:
>You should periodically be doing:

>a) VACUUM (at least once a day, probably) and
>b) REINDEX

>Are you doing both of these?  See the docs if you're not familiar with
>them.

>The REINDEX is needed because VACUUM doesn't free up index space in
>some circumstances.  7.4 (currently in late beta) will fix this.

Sorry Doug,

Yes, I am doing a vacuum regularly - in fact, when the speed of the DB becomes slower, the vacuum takes longer and
longer too.

I have never used REINDEX.

I wondered if it might be a row chaining issue of some sort ?!?

Chris

Re: database speed

From
Doug McNaught
Date:
"Chris Stokes" <ChrisS@BassSoftware.com> writes:

> >The REINDEX is needed because VACUUM doesn't free up index space in
> >some circumstances.  7.4 (currently in late beta) will fix this.
>
> Sorry Doug,
>
> Yes I am doing a vacuum regularly - in fact, when the speed of the
> DB becomes slower, the vacuum takes longer and longer too.
>
> I have never used the REINDEX
>
> I wondered if it might be a row chaining issue of some sort ?!?

The index bloat problem can occur when you have an indexed SERIAL
column whose value always increases and you delete older rows on a
regular basis.  VACUUM recycles the row storage, but the index never
shrinks.

You can check: next time you get the speed/data growth problem, find
which files in the database directory are growing, and use 'oid2name'
(in the source tree under 'contrib') to find out what they are.  Odds
are they are indexes and REINDEXing their table will fix the problem.

If you are running a recent version of PG (7.3 and maybe 7.2 as well)
and have a very active database, you may also need to increase your
free space map (FSM) size in postgresql.conf, and possibly run VACUUM
FULL once a week or so.
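
The 7.3-era FSM settings live in postgresql.conf; the values below are illustrative only, not tuned recommendations:

```
# postgresql.conf (7.3-era parameter names; values are examples, not advice)
max_fsm_relations = 1000   # how many tables and indexes the free space map tracks
max_fsm_pages = 20000      # how many pages with reusable free space it remembers
```

If the FSM is too small, VACUUM cannot remember all the reclaimable space and the files keep growing regardless of how often you vacuum.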

-Doug

Re: database speed

From
Martijn van Oosterhout
Date:
On Fri, Oct 31, 2003 at 11:54:55AM +1100, Chris Stokes wrote:
> Hi all,
>
> WE currently use PG for the local database on our POS workstations in
> shops. Data is then extracted and sent back to head office. Also product
> updates etc are regularly sent to the shops. I have noticed over time that
> the shop PG databases get slower and slower. If I do a pg_dump, recreate
> the database and then restore the data - all speed problems are gone and
> the database folder shrinks in size substantially.
>
> From this information could anyone tell me what are the likely causes and
> what we may or may not be doing in our extractions and updates that are
> not good PG insert, update, delete procedures

The three important steps for maintenance of a PostgreSQL database:

1. ANALYZE
2. VACUUM [FULL]
3. REINDEX

Hope this helps,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> "All that is needed for the forces of evil to triumph is for enough good
> men to do nothing." - Edmund Burke
> "The penalty good people pay for not being interested in politics is to be
> governed by people worse than themselves." - Plato


Re: database speed

From
Greg Stark
Date:
"Chris Stokes" <ChrisS@BassSoftware.com> writes:

> I wondered if it might be a row chaining issue of some sort ?!?

Just to address this point:

Row Chaining is how Oracle deals with updates when the new record takes more
space than is available in the original block. Since Oracle does in-place
updates this is a real problem. It has to put a pointer in the first block to
a second overflow block. A busy table in which records often grow and shrink
can become slow because of having to follow all these pointers.

Postgres is very different. Updates in Postgres aren't in-place; every update
in Postgres is a delete and insert. Therefore there's no row-chaining problem,
in fact records can be packed (pctfree 0 pctused 100).

There are analogous problems though. It's easier to keep a heavily updated
table "clean" with postgres's approach but there are still a lot of cases to
consider. The free space can be fragmented (vacuum full can help that), the
data can be very poorly distributed (cluster can help that temporarily).
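
The non-in-place behaviour is easy to see from SQL, since every row version carries a physical address (ctid); "t" here is a throwaway example table:

```sql
CREATE TABLE t (id int, note text);
INSERT INTO t VALUES (1, 'original');
SELECT ctid FROM t WHERE id = 1;            -- e.g. (0,1)
UPDATE t SET note = 'changed' WHERE id = 1;
SELECT ctid FROM t WHERE id = 1;            -- e.g. (0,2): a new row version was
                                            -- written; the old one is dead space
                                            -- until VACUUM reclaims it
```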

--
greg

Re: database speed

From
Vivek Khera
Date:
>>>>> "DM" == Doug McNaught <doug@mcnaught.org> writes:

DM> The index bloat problem can occur when you have an indexed SERIAL
DM> column whose value always increases, and you delete older rows on a
DM> regular basis.  VACUUM recycles the row storage, but the index never
DM> shrinks.

I get this on tables which are updated a lot, even if I don't do any
mass deletes of older rows.  The indexes on other columns also bloat
rapidly on some tables.  It is a significant problem, especially since
reindex locks the tables for significant amounts of time.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: database speed

From
Dennis Gearon
Date:
Vivek Khera wrote:

>>>>>> "DM" == Doug McNaught <doug@mcnaught.org> writes:
>
>DM> The index bloat problem can occur when you have an indexed SERIAL
>DM> column whose value always increases, and you delete older rows on a
>DM> regular basis.  VACUUM recycles the row storage, but the index never
>DM> shrinks.
>
>I get this on tables which are updated a lot, even if I don't do any
>mass deletes of older rows.  The indexes on other columns also bloat
>rapidly on some tables.  It is a significant problem, especially since
>reindex locks the tables for significant amounts of time.
>
Is it possible for a trigger to fork off a 'reindex' command, with the
trigger terminating and leaving the reindex running?

Using PL/pgSQL?

Could a counter be kept in some table for each row, and after a certain
number of updates/deletes, reindex be run automatically, on a more
frequent schedule?


Re: database speed

From
Doug McNaught
Date:
Dennis Gearon <gearond@fireserve.net> writes:

> Is it possible for a trigger to fork off a 'reindex' command, with the
> trigger terminating and leaving the reindex running?

Even if possible, ick.

I'd say the best thing to do would to have such a trigger set a flag
(value in a one-row table) and have a cron job that checks for that
flag and runs REINDEX.
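
A sketch of that arrangement, with made-up names throughout ("reindex_flag", "orders") and 7.3-era quoted function bodies:

```sql
-- One-row flag table; a trigger on busy tables sets it.
CREATE TABLE reindex_flag (needed boolean);
INSERT INTO reindex_flag VALUES (false);

CREATE FUNCTION mark_reindex() RETURNS trigger AS '
BEGIN
    UPDATE reindex_flag SET needed = true;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
' LANGUAGE 'plpgsql';

-- Note: firing on every row adds overhead; this is a sketch, not a tuned design.
CREATE TRIGGER orders_dirty AFTER UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE mark_reindex();
```

The cron job then checks the flag, runs REINDEX TABLE orders if it is set, and resets the flag.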

-Doug

Re: database speed

From
Network Administrator
Date:
Quoting Doug McNaught <doug@mcnaught.org>:

> "Chris Stokes" <ChrisS@BassSoftware.com> writes:
>
> > >The REINDEX is needed because VACUUM doesn't free up index space in
> > >some circumstances.  7.4 (currently in late beta) will fix this.
> >
> > Sorry Doug,
> >
> > Yes I am doing a vacuum regularly - in fact, when the speed of the
> > DB becomes slower, the vacuum takes longer and longer too.
> >
> > I have never used the REINDEX
> >
> > I wondered if it might be a row chaining issue of some sort ?!?
>
> The index bloat problem can occur when you have an indexed SERIAL
> column whose value always increases, and you delete older rows on a
> regular basis.  VACUUM recycles the row storage, but the index never
> shrinks.
>
> You can check: next time you get the speed/data growth problem, find
> which files in the database directory are growing, and use 'oid2name'
> (in the source tree under 'contrib') to find out what they are.  Odds
> are they are indexes and REINDEXing their table will fix the problem.
>
> If you are running a recent version of PG (7.3 and maybe 7.2 as well)
> and have a very active database, you may also need to increase your
> free space map (FSM) size in postgresql.conf, and possibly run VACUUM
> FULL once a week or so.
>
> -Doug
>


Speaking of recent versions... in 7.4, doesn't the CLUSTER command address this
problem?

--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com