Thread: vacuum of permanently updating database

vacuum of permanently updating database

From: Oleg Bartunov
Date:
I've already posted my question about a NOTICE message I'm getting
from vacuum but didn't get any response :-(

Today I decided to do some experiments to reproduce my problem.

I run two independent processes:

1. send parallel requests to the apache server in a loop. On each request the
   server does the following:

   LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE
   UPDATE hits SET count=count+1,last_access=now() WHERE msg_id=1468

2. vacuum table hits from a shell script:

#!/bin/sh
i=0
while true; do
  /usr/local/pgsql/bin/psql -tq discovery < vacuum_hits.sql
  rc=$?
  i=$((i+1))
  echo "Vacuuming: $i, RC=$rc"
  sleep 10
done

where vacuum_hits.sql is:

begin work;
drop index hits_pkey;
create unique index hits_pkey on hits(msg_id);
end work;
vacuum analyze hits(msg_id);


Sometimes I get the message:

NOTICE:  Index hits_pkey: NUMBER OF INDEX' TUPLES (173) IS NOT THE SAME AS HEAP' (174)

also several times I get:

ERROR:  Can't create lock file.  Is another vacuum cleaner running?
        If not, you may remove the pg_vlock file in the
        /usr/local/pgsql/data//base/discovery directory

I had to remove this file by hand.


I understand the experiment is a bit artificial, but I'd like
to know what I'm doing wrong and what the safest way is to vacuum a
table which is permanently being updated. Actually, I get about
12 requests/sec on my home P166, 64Mb, Linux 2.2.12 - and each request
does plenty of database work. I have to vacuum the table because at
this rate I see very quick performance degradation.

This is PostgreSQL 6.5.2 on Linux.
Regards,
    Oleg

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



RE: [HACKERS] vacuum of permanently updating database

From: "Hiroshi Inoue"
Date: Sat, 16 Oct 1999 10:06:36 +0900
Hi

Could you try the current tree?

As far as I can see, there are 2 possibilities.

1. The relation cache invalidation mechanism has been much improved
   by Tom in the current tree. In your case, index tuples may be
   inserted into an invalid index relation and vanish.
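
   As a sketch of the suspected interleaving (my guess, reusing the
   statements from your scripts; not a verified trace):

       -- backend A (vacuum script)     -- backend B (apache request)
       begin work;
       drop index hits_pkey;
       create unique index hits_pkey
           on hits(msg_id);
       end work;
                                        UPDATE hits SET count=count+1,
                                            last_access=now()
                                            WHERE msg_id=1468;
                                        -- B's stale relcache entry may
                                        -- still point at the old
                                        -- hits_pkey, so its index tuple
                                        -- is lost while the heap tuple
                                        -- survives (173 vs 174).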
 

2. If vacuum aborts after its internal commit, the transaction
   status is changed to ABORT. This causes an inconsistency.
   I have changed this not to happen in the current tree.
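
   In other words, the abort path in xact.c records an ABORT for the
   xid in pg_log even when vacuum's internal commit has already
   recorded a COMMIT. A minimal sketch of the guard (the actual patch
   is at the end of this thread):

       /* Don't clobber an internal commit: skip writing the abort
        * record if pg_log already shows this xid as committed. */
       if (SharedBufferChanged && !TransactionIdDidCommit(xid))
           TransactionIdAbort(xid);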
 


In the CURRENT tree, you may have to change vacuum_hits.sql
as follows:

    drop index hits_pkey;
    vacuum analyze hits(msg_id);
    create unique index hits_pkey on hits(msg_id);

Probably DROP INDEX can't be executed inside a transaction block.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp

> I've already posted my question about a NOTICE message I'm getting
> from vacuum but didn't get any response :-(
>
> [snip]



RE: [HACKERS] vacuum of permanently updating database

From: Oleg Bartunov
Date: Saturday, October 16, 1999 11:27 PM
Hiroshi,

thank you for the message. I'll try the current tree, but if
it's a bug (probably?), why not try to fix it for 6.5.3?
Regards,
   Oleg


On Sat, 16 Oct 1999, Hiroshi Inoue wrote:

> Date: Sat, 16 Oct 1999 10:06:36 +0900
> From: Hiroshi Inoue <Inoue@tpf.co.jp>
> To: Oleg Bartunov <oleg@sai.msu.su>
> Cc: hackers@postgreSQL.org
> Subject: RE: [HACKERS] vacuum of permanently updating database 
> 
> [snip]

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



RE: [HACKERS] vacuum of permanently updating database

From: "Hiroshi Inoue"
Date:
> -----Original Message-----
> From: Oleg Bartunov [mailto:oleg@sai.msu.su]
> Sent: Saturday, October 16, 1999 11:27 PM
> To: Hiroshi Inoue
> Cc: hackers@postgreSQL.org
> Subject: RE: [HACKERS] vacuum of permanently updating database 
> 
> 
> Hiroshi,
> 
> thank you for the message. I'll try the current tree, but if
> it's a bug (probably?), why not try to fix it for 6.5.3?
>

Yes, it's a bug.
But as for the 1st bug, it requires a lot of changes to fix.
It seems Bruce and Tom have thought that it's dangerous
to apply them to REL6_5.

As for the 2nd bug, it is easily fixed by the following patch,
which skips TransactionIdAbort() when pg_log already shows the
transaction as committed. If there's no objection, I would commit
it into REL6_5.

[snip] 

> > 
> > Could you try the current tree?
> > 
> > As far as I can see, there are 2 possibilities.
> > 
> > 1. The relation cache invalidation mechanism has been much improved
> >     by Tom in the current tree. In your case, index tuples may be
> >     inserted into an invalid index relation and vanish.
> > 
> > 2. If vacuum aborts after its internal commit, the transaction
> >     status is changed to ABORT. This causes an inconsistency.
> >     I have changed this not to happen in the current tree.
> > 

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp

*** xact.c    1999/09/10 07:57:08    1.4
--- xact.c    1999/09/10 08:25:15
***************
*** 736,742 ****
       * this transaction id in the pg_log relation. We skip it
       * if no one shared buffer was changed by this transaction.
       */
!     if (SharedBufferChanged)
          TransactionIdAbort(xid);
  
      ResetBufferPool();
--- 736,742 ----
       * this transaction id in the pg_log relation. We skip it
       * if no one shared buffer was changed by this transaction.
       */
!     if (SharedBufferChanged && !TransactionIdDidCommit(xid))
          TransactionIdAbort(xid);
  
      ResetBufferPool();