Thread: Autovacuum is running forever

Autovacuum is running forever

From
"Sachchida Ojha"
Date:

Is there any way to stop autovacuum if it is running longer than 10 minutes or so?

Is it a good idea to kill autovacuum if it is running longer than expected?

In my OLTP system, we are inserting, updating, and deleting data every second.

Autovacuum started and never finished, slowing down the whole system.

Any help?

Thanks

Regards

sachi

Re: Autovacuum is running forever

From
Michael Glaesemann
Date:
On Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:
> In my OLTP system, we are inserting, updating and deleting the data
> every second.
>
> Autovacuum started and never finished slowing down the whole system.
There's the possibility that your autovacuum settings aren't
aggressive enough for your system, so it's never able to catch up.
Without knowing details it's hard to say for certain. What are your
autovacuum settings and other details about the load on your system?
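
For reference, a quick way to pull the vacuum-related settings from a
running server (rather than digging through postgresql.conf) is a query
against the standard pg_settings view, along these lines:

SELECT name, setting
FROM pg_settings
WHERE name LIKE '%vacuum%' OR name LIKE '%fsm%';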

Michael Glaesemann
grzm seespotcode net



Re: Autovacuum is running forever

From
"Sachchida Ojha"
Date:
Total RAM in the system is 2GB

#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------

vacuum_cost_delay                = 200      # 0-1000 milliseconds
vacuum_cost_page_hit             = 1        # 0-10000 credits
vacuum_cost_page_miss            = 10       # 0-10000 credits
vacuum_cost_page_dirty           = 20       # 0-10000 credits
vacuum_cost_limit                = 200      # 0-10000 credits
autovacuum                       = on
autovacuum_naptime               = 3600
#autovacuum_vacuum_threshold     = 1000
#autovacuum_analyze_threshold    = 500
#autovacuum_vacuum_scale_factor  = 0.4
#autovacuum_analyze_scale_factor = 0.2
#autovacuum_vacuum_cost_delay    = -1
#autovacuum_vacuum_cost_limit    = -1

Some other parameters:

max_connections                  = 350
shared_buffers                   = 400MB    # set to 16 MB min 16 or max_connections*2, 8KB each
temp_buffers                     = 8MB      # min 100, 8KB each
max_prepared_transactions        = 5        # can be 0 or more
work_mem                         = 4MB      # min 64, size in KB
maintenance_work_mem             = 256MB    # min 1024, size in KB
max_stack_depth                  = 2048     # min 100, size in KB
max_fsm_pages                    = 2000000  # min max_fsm_relations*16, 6 bytes each
max_fsm_relations                = 1000     # min 100, ~70 bytes each
max_files_per_process            = 1000     # min 25
bgwriter_delay                   = 200      # 10-10000 milliseconds between rounds
bgwriter_lru_percent             = 1.0      # 0-100% of LRU buffers scanned/round
bgwriter_lru_maxpages            = 5        # 0-1000 buffers max written/round
bgwriter_all_percent             = 0.333    # 0-100% of all buffers scanned/round
bgwriter_all_maxpages            = 5        # 0-1000 buffers max written/round


Regards
Sachchida


-----Original Message-----
From: Michael Glaesemann [mailto:grzm@seespotcode.net]
Sent: Tuesday, August 21, 2007 5:36 PM
To: Sachchida Ojha
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum is running forever


On Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:
> In my OLTP system, we are inserting, updating and deleting the data
> every second.
>
> Autovacuum started and never finished slowing down the whole system.
There's the possibility that your autovacuum settings aren't aggressive
enough for your system, so it's never able to catch up.
Without knowing details it's hard to say for certain. What are your
autovacuum settings and other details about the load on your system?

Michael Glaesemann
grzm seespotcode net



Re: Autovacuum is running forever

From
"Sachchida Ojha"
Date:
Our model is to provide black box solutions to our customers.

By black box, I mean the application system, web server, and database all
run on the same machine.

We are running our sensor on 10 assets (Windows machines) and sending asset
data to the server every 15 minutes. There are some other user operations
going on against those assets at the same time.

On the server:

CPU utilization ranges from 10-75%
Memory utilization ranges from 15-50%




Regards
Sachchida


-----Original Message-----
From: Michael Glaesemann [mailto:grzm@seespotcode.net]
Sent: Tuesday, August 21, 2007 5:36 PM
To: Sachchida Ojha
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum is running forever


On Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:
> In my OLTP system, we are inserting, updating and deleting the data
> every second.
>
> Autovacuum started and never finished slowing down the whole system.
There's the possibility that your autovacuum settings aren't aggressive
enough for your system, so it's never able to catch up.
Without knowing details it's hard to say for certain. What are your
autovacuum settings and other details about the load on your system?

Michael Glaesemann
grzm seespotcode net



Re: Autovacuum is running forever

From
Alvaro Herrera
Date:
Sachchida Ojha wrote:

> vacuum_cost_delay                               = 200

That is absurdly high.  A setting of 10 is more likely to be useful.

> autovacuum_naptime                              = 3600

That is probably too high as well, particularly if you have "updates
and deletes every second".

> #autovacuum_vacuum_scale_factor         = 0.4
> #autovacuum_analyze_scale_factor = 0.2

These too.  Try 0.1 for both and see how it goes.

In short, you need autovacuum to run _way more often_ than it currently does.
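
Applied to the posted config, that advice might look roughly like the
following. (The naptime value below is just the stock default of one
minute, not something prescribed in the thread; all of these numbers are
worth experimenting with.)

vacuum_cost_delay                = 10      # was 200
autovacuum_naptime               = 60      # seconds between autovacuum runs; was 3600
autovacuum_vacuum_scale_factor   = 0.1     # was commented out at 0.4
autovacuum_analyze_scale_factor  = 0.1     # was commented out at 0.2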

--
Alvaro Herrera                        http://www.advogato.org/person/alvherre
"Un poeta es un mundo encerrado en un hombre" (Victor Hugo)

Re: Autovacuum is running forever

From
"Sachchida Ojha"
Date:
Is there any data corruption/damage to the database if we forcefully kill
autovacuum using a cron job (if it is running longer than a predefined
time frame)?


Regards
Sachchida


-----Original Message-----
From: Sachchida Ojha
Sent: Tuesday, August 21, 2007 5:46 PM
To: 'Michael Glaesemann'
Cc: 'pgsql-performance@postgresql.org'
Subject: RE: [PERFORM] Autovacuum is running forever

Our model is to provide black box solutions to our customers.

By black box, I mean the application system, web server, and database all
run on the same machine.

We are running our sensor on 10 assets (Windows machines) and sending asset
data to the server every 15 minutes. There are some other user operations
going on against those assets at the same time.

On the server:

CPU utilization ranges from 10-75%
Memory utilization ranges from 15-50%




Regards
Sachchida


-----Original Message-----
From: Michael Glaesemann [mailto:grzm@seespotcode.net]
Sent: Tuesday, August 21, 2007 5:36 PM
To: Sachchida Ojha
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum is running forever


On Aug 21, 2007, at 16:26 , Sachchida Ojha wrote:
> In my OLTP system, we are inserting, updating and deleting the data
> every second.
>
> Autovacuum started and never finished slowing down the whole system.
There's the possibility that your autovacuum settings aren't aggressive
enough for your system, so it's never able to catch up.
Without knowing details it's hard to say for certain. What are your
autovacuum settings and other details about the load on your system?

Michael Glaesemann
grzm seespotcode net



Re: Autovacuum is running forever

From
"Scott Marlowe"
Date:
On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:



> vacuum_cost_delay                               = 200
> vacuum_cost_page_hit                    = 1
> vacuum_cost_page_miss                   = 10
> vacuum_cost_page_dirty                  = 20
> vacuum_cost_limit                               = 200
>
> autovacuum                                              = on
> autovacuum_naptime                              = 3600
>
> maintenance_work_mem            = 256MB                 # min 1024, size

That's a REALLY long naptime.  Better to let autovacuum decide if you
need vacuum more often, and just increase the vacuum_cost_delay and
decrease vacuum_cost_limit so that vacuum doesn't slam your I/O.

Maintenance work mem, on the other hand, is plenty big, and your fsm
settings seem large enough to handle your freed space.

But making vacuum wait so long between runs may be slowly bloating
your data store, and then vacuum becomes more and more expensive
because it takes longer and longer to run.
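
One detail worth noting from the posted config: the autovacuum-specific
cost settings are commented out at -1, which means autovacuum simply
inherits vacuum_cost_delay and vacuum_cost_limit. If you want manual
VACUUMs and autovacuum throttled differently, you can set them separately;
the numbers below are only illustrative:

autovacuum_vacuum_cost_delay     = 20      # -1 (the default) inherits vacuum_cost_delay
autovacuum_vacuum_cost_limit     = 200     # -1 (the default) inherits vacuum_cost_limit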

Re: Autovacuum is running forever

From
"Scott Marlowe"
Date:
On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using cron job (if it is running longer than a
> predefined time frame).

Not really, but vacuum will just have to run that long again, plus some,
the next time it starts up.

Again, it's better to run vacuum more often, not less often, and keep
the cost_delay high enough that it doesn't interfere with your I/O.
However, that will make it run even longer.
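
If you do decide to cancel a long-running autovacuum by hand, the safe
way is to send it a query cancel via pg_cancel_backend() rather than
kill -9. A minimal sketch, assuming the autovacuum process is visible in
pg_stat_activity on your version (the column names here are the 8.x-era
procpid/current_query; adjust the match to whatever you actually see):

SELECT pg_cancel_backend(procpid)        -- requires superuser
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%'
  AND now() - query_start > interval '10 minutes';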

Re: Autovacuum is running forever

From
"Scott Marlowe"
Date:
On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using cron job (if it is running longer than a
> predefined time frame).

Oh, and I'd look at your I/O subsystem.  You might want to look at
putting a $300 hardware RAID card with battery-backed cache and 4 or so
disks in a RAID-10 into each of them.  It sounds to me like you could
use more I/O for your vacuuming.  Vacuuming isn't CPU intensive, but it
can be I/O intensive.

Re: Autovacuum is running forever

From
"Sachchida Ojha"
Date:
We have only two disks (40GB each). One disk is used for the OS, app
server, and application; the second disk is used for the PostgreSQL
database. It's a dual-CPU machine with 2GB of RAM.


Regards
Sachchida


-----Original Message-----
From: Scott Marlowe [mailto:scott.marlowe@gmail.com]
Sent: Tuesday, August 21, 2007 6:05 PM
To: Sachchida Ojha
Cc: Michael Glaesemann; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum is running forever

On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using cron job (if it is running longer than a
> predefined time frame).

Oh, and I'd look at your I/O subsystem.  You might want to look at
putting a $300 hardware RAID card with battery-backed cache and 4 or so
disks in a RAID-10 into each of them.  It sounds to me like you could
use more I/O for your vacuuming.  Vacuuming isn't CPU intensive, but it
can be I/O intensive.

Re: Autovacuum is running forever

From
"Scott Marlowe"
Date:
On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> We are having only two disk (40GB each). One disk is used for OS, App
> Server, and application. Second disk is used for postgresql database.
> It's a dual cpu machine having 2 GB of ram.

Even a single disk with a battery-backed caching controller will
generally run things like updates and inserts much faster, and will
usually perform much better under load, than a bare single disk.

I'd at least look at mirroring them for redundancy and better read performance.

Re: Autovacuum is running forever

From
"Scott Marlowe"
Date:
On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> Thanks to all of you. I have changed the settings and reloaded the
> config. Let me run this system overnight. I will update this forum if
> new settings works for me. I am also asking management to upgrade the
> hardware.

You need to run vacuum verbose on the database (not an individual
table) and note the output at the end.  It will tell you how bloated
your current db is.  If vacuums have been delayed for too long, you
may need to VACUUM FULL and/or REINDEX the bloated tables and
indexes to reclaim the lost space.
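
That is, from psql connected to the database in question, run something
like:

VACUUM VERBOSE;

The free-space-map summary quoted below appears at the very end of that
output.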

Assuming that there's not too much dead space, or that if there is
you've used vacuum full / reindexdb to reclaim it, then vacuum running
regularly and in the background should fix this issue...

The output of vacuum verbose you're looking for is like this:

DETAIL:  A total of 2096 page slots are in use (including overhead).
2096 page slots are required to track all free space.
Current limits are:  20000 page slots, 1000 relations, using 182 KB.

If it comes back with some huge number of page slots needed (like in the
millions) to track all the dead tuples, you'll need that vacuum full /
reindex.  A certain amount of dead space is OK, even a good thing, since
you don't have to extend your table / index files to insert.  10-30% dead
space is normal; anything around 100% or heading up from there is bad.

You'll also want to look through the rest of the vacuum verbose output
for things like this:

INFO:  vacuuming "abc.zip_test"
INFO:  index "zip_test_pkey" now contains 1000000 row versions in 3076 pages
DETAIL:  8589 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.37s/0.23u sec elapsed 28.23 sec.
INFO:  "zip_test": removed 8589 row versions in 55 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "zip_test": found 8589 removable, 1000000 nonremovable row
versions in 6425 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 1.36s/0.34u sec elapsed 100.52 sec.

If the number of rows removed and the pages they held were a large
percentage of the table, then you'll likely need to reindex them to
get the space back.  Or cluster on an index.
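
For a badly bloated table, the reclaim step described above would look
something like the following (table and index names are placeholders;
note that VACUUM FULL and CLUSTER both take an exclusive lock, so plan
for a maintenance window):

VACUUM FULL VERBOSE bloated_table;            -- compact the heap
REINDEX TABLE bloated_table;                  -- rebuild the now-bloated indexes
-- or, alternatively, rewrite the table in the order of one of its indexes:
CLUSTER bloated_table_pkey ON bloated_table;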

Re: Autovacuum is running forever

From
"Sachchida Ojha"
Date:
Thanks to all of you. I have changed the settings and reloaded the
config. Let me run this system overnight. I will update this forum if
the new settings work for me. I am also asking management to upgrade
the hardware.

Thanks a lot.


Regards
Sachchida


-----Original Message-----
From: Scott Marlowe [mailto:scott.marlowe@gmail.com]
Sent: Tuesday, August 21, 2007 6:05 PM
To: Sachchida Ojha
Cc: Michael Glaesemann; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum is running forever

On 8/21/07, Sachchida Ojha <sojha@secure-elements.com> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using cron job (if it is running longer than a
> predefined time frame).

Oh, and I'd look at your I/O subsystem.  You might want to look at
putting a $300 hardware RAID card with battery-backed cache and 4 or so
disks in a RAID-10 into each of them.  It sounds to me like you could
use more I/O for your vacuuming.  Vacuuming isn't CPU intensive, but it
can be I/O intensive.