RE: [HACKERS] vacuum of permanently updating database - Mailing list pgsql-hackers

From Hiroshi Inoue
Subject RE: [HACKERS] vacuum of permanently updating database
Date
Msg-id NDBBIJLOILGIKBGDINDFAEELCAAA.Inoue@tpf.co.jp
In response to vacuum of permanently updating database  (Oleg Bartunov <oleg@sai.msu.su>)
Responses RE: [HACKERS] vacuum of permanently updating database
List pgsql-hackers
Hi

Could you try the current tree?

As far as I can see, there are two possibilities.

1. The relation cache invalidation mechanism was much improved
   by Tom in the current tree.  In your case, index tuples may
   be inserted into an invalid index relation and vanish.

2. If vacuum aborts after its internal commit, the transaction
   status is changed to ABORT.  This causes an inconsistency.
   I have changed this in the current tree so that it no longer happens.


In the CURRENT tree, you may have to change vacuum_hits.sql
as follows:

drop index hits_pkey;
vacuum analyze hits(msg_id);
create unique index hits_pkey on hits(msg_id);

Probably DROP INDEX can't be executed inside a transaction block.
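
In other words, the whole revised vacuum_hits.sql would contain just
those three statements, with the begin work/end work from your original
script removed - a sketch that only keeps the index definition you
already use:

  -- run outside an explicit transaction block (begin/end removed)
  -- drop the index before vacuuming
  drop index hits_pkey;
  -- vacuum the table and collect statistics on msg_id
  vacuum analyze hits(msg_id);
  -- recreate the unique index afterwards
  create unique index hits_pkey on hits(msg_id);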

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp

>
> I've already posted my question about the NOTICE message I'm getting
> from vacuum but didn't get any response :-(
>
> Today I decided to do some experiments to reproduce my problem.
>
> I run two independent processes:
>
> 1. send parallel requests to the apache server in a loop. For each request
>    the server does the following:
>
>    LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE
>    UPDATE hits SET count=count+1,last_access=now() WHERE msg_id=1468
>
> 2. vacuum the table hits from a shell script:
>
> #!/bin/sh
> while true ;do
> /usr/local/pgsql/bin/psql -tq discovery <vacuum_hits.sql
>  rc=$?
>  i=$((i+1))
>  echo Vacuuming: $i, RC=$rc
>  sleep 10;
> done
>
> where vacuum_hits.sql:
>
> begin work;
> drop index hits_pkey;
> create unique index hits_pkey on hits(msg_id);
> end work;
> vacuum analyze hits(msg_id);
>
>
> Sometimes I get the message:
>
> NOTICE:  Index hits_pkey: NUMBER OF INDEX' TUPLES (173) IS NOT
> THE SAME AS HEAP' (174)
>
> also several times I get:
> ERROR:  Can't create lock file.  Is another vacuum cleaner running?
>         If not, you may remove the pg_vlock file in the
> /usr/local/pgsql/data//base/discovery
>         directory
>
> I had to remove this file by hand.
>
>
> I understand that the experiment is a little bit artificial, but I'd like
> to know what I'm doing wrong and what's the safest way to vacuum a
> table which is permanently being updated. Actually, I get about
> 12 requests/sec on my home P166, 64Mb, Linux 2.2.12 - each request does
> plenty of database work.  I have to vacuum the table because at this rate
> I get very quick performance degradation.
>
> This is 6.5.2, Linux
>
>     Regards,
>
>         Oleg
>
> _____________________________________________________________
> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> Sternberg Astronomical Institute, Moscow University (Russia)
> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
> phone: +007(095)939-16-83, +007(095)939-23-83
>
>
> ************
>


