Re: vacuum and 24/7 uptime - Mailing list pgsql-general

From dennis@zserve.com
Subject Re: vacuum and 24/7 uptime
Date
Msg-id 200107121826.f6CIQfW18221@mail.ldssingles.com
In response to Re: vacuum and 24/7 uptime  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-general
What types of reads and writes actually have to take place during a vacuum? Does it
make a difference if there is enough shared memory to fit the entire table being
vacuumed? After I insert the records, the table size is about 685 MB, which I
calculate to be about 86K pages. The most shared memory we have tried is 12288
buffers (96 MB). If one buffer corresponds to one page, then this table needs far
more buffers than we have tried so far. Am I totally off on this, or is there merit
to my logic?
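
For reference, the page count PostgreSQL has recorded for a table (updated by
VACUUM/ANALYZE) can be read from pg_class.relpages. A minimal sketch, using a
placeholder table name and assuming the default 8 kB page size (one shared buffer
caches one page):

    -- 'big_table' is a placeholder name; relpages * 8 kB approximates the
    -- on-disk size of the table.
    SELECT relname, relpages
    FROM pg_class
    WHERE relname = 'big_table';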

The following may help shed light on the subject.

When I insert about 1 million records into a table and then vacuum that table,
VACUUM still takes a tremendous amount of time, even after I've increased the
shared memory. Later vacuums don't take nearly as long as the initial one, but they
still lock the table for much longer than is acceptable in our system.

After I perform the initial vacuum (which has taken as long as two or more hours
for that one table), I notice two behaviors related to the shared memory.

First, with the postmaster left at the default shared memory, vacuuming the table
took a considerable amount of time. Increasing the shared memory to about 64 MB
sped the process up quite a bit; trying 96 MB didn't seem to change much.
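
Those sizes correspond roughly to the following postgresql.conf values (a sketch
only; 8 kB per buffer, numbers derived from the sizes above, not recommendations):

    # the default shared_buffers setting is much smaller than either of these
    shared_buffers = 8192     # ~64 MB; vacuums sped up noticeably
    #shared_buffers = 12288   # ~96 MB; little further improvement observed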

The first vacuum after doing the insert, though, doesn't seem to be affected by the
amount of shared memory. PG prints a little vacuum information about the table and
then prints the following for a very long time.

DEBUG:  removing transaction log file 0000000B00000085
DEBUG:  removing transaction log file 0000000B00000083
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  removing transaction log file 0000000B00000087
DEBUG:  removing transaction log file 0000000B00000088
DEBUG:  removing transaction log file 0000000B00000086
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  removing transaction log file 0000000B0000008A
DEBUG:  removing transaction log file 0000000B0000008B
DEBUG:  removing transaction log file 0000000B00000089
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES


(The DEBUG output continues this way for approximately 1 to 2 hours.)

Finally, vacuum finishes.
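
Given the "consider increasing WAL_FILES" hint in the log, one workaround we may
try is raising the WAL settings in postgresql.conf. This is only a sketch, assuming
those parameters exist in the release being run; the values are illustrative
guesses, not tested recommendations:

    # Pre-allocate more WAL segments so XLogWrite does not have to create them
    # one at a time in the middle of the vacuum (per the DEBUG hint above).
    wal_files = 16            # number of log files created in advance
    wal_buffers = 16          # WAL buffers in shared memory
    checkpoint_segments = 8   # if present: more WAL allowed between checkpoints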

This behavior is specific to 7.2devel compiled from CVS on 7/11. When we tried it
with REL7.1.2, the timing was similar, but the DEBUG output only showed the
new-log-file messages and never the removals, and we ran out of disk space before
vacuum completed.

We hope this is fixed in 7.2, but we would also like to know if there is any way we
can get around it now. Are we doing something wrong that is making vacuum take so
long? I thought vacuum should never take 2.5 hours, even if the table is pretty
big.

Some more info if you're still interested.

The table that has 1 million rows has four ints, four varchar(20)s, one text, and a
boolean. It has a primary key and two indexes, each on two fields, roughly as
sketched below.
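
A hypothetical sketch of that layout, just to make the shape concrete; the column
and index names are invented and the real key choices differ:

    -- Illustrative only: names are made up, not the real schema.
    CREATE TABLE big_table (
        id     integer PRIMARY KEY,   -- one of the four ints
        int_b  integer,
        int_c  integer,
        int_d  integer,
        vc_a   varchar(20),
        vc_b   varchar(20),
        vc_c   varchar(20),
        vc_d   varchar(20),
        body   text,
        flag   boolean
    );
    CREATE INDEX big_table_idx1 ON big_table (int_b, vc_a);
    CREATE INDEX big_table_idx2 ON big_table (int_c, vc_b);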

We have one more table that has 2.8 million records. That one only has two fields
and one index, and its size is only 180 MB compared to 685 MB for the first table.
Vacuum takes care of this table in a reasonable amount of time. Maybe the number of
pages in the table is the real key here?

Any suggestions much appreciated :-)
Thanks
Dennis



> > otherwise enhanced to execute more quickly and/or not lock tables?
>
> We hope to eliminate that downtime in 7.2.
>
> --
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 853-3000
>   +  If your life is a hard drive,     |  830 Blythe Avenue
>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
>

