Re: Dead Space Map for vacuum - Mailing list pgsql-hackers

From: Heikki Linnakangas
Subject: Re: Dead Space Map for vacuum
Msg-id: 45943894.7080504@enterprisedb.com
In response to: Re: Dead Space Map for vacuum (Gavin Sherry <swm@linuxworld.com.au>)
Responses: Re: Dead Space Map for vacuum
List: pgsql-hackers
Gavin Sherry wrote:
> On Thu, 28 Dec 2006, Heikki Linnakangas wrote:
> 
>> ITAGAKI Takahiro wrote:
>> I experimented with a different DSM design last winter. I got busy with
>> other things and never posted it AFAIR, but the idea was to store a
>> bitmap in the special area of every 32768th heap page. That had some advantages:
>>
>> * doesn't require a new dedicated shared memory area that needs to be
>> allocated and tuned.
>> * doesn't introduce a (single) new global lwlock that might become a hotspot.
>> * WAL logging is quite simple, since the bitmaps are on normal pages
>> handled by the buffer manager.
> 
> I too have experimented with this idea. 4 DSM pages means 2 bits per
> block. What were you using the other bit for? When I discussed this some
> time ago with Jan Wieck, we recommended we track blocks in the FSM with
> this extra bit.

I only used 1 bit, just like in Itagaki's approach. The bitmap in the 
special area only occupied half a page, the rest was available for 
normal heap tuples. That might seem strange, but it avoids having to 
allocate 2 pages for small tables with just a handful of rows. It also 
spreads out the lock contention a bit.
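
The "half a page" figure falls straight out of the arithmetic: 32768 
pages at one bit each is 4096 bytes, exactly half of the default 8 kB 
block. A trivial sketch, assuming the default BLCKSZ; the macro names 
are illustrative only:

#define BLCKSZ                 8192    /* default PostgreSQL block size */
#define DSM_PAGES_PER_BITMAP   32768   /* heap pages covered by one bitmap */
#define DSM_BITMAP_BYTES       (DSM_PAGES_PER_BITMAP / 8)    /* 4096 bytes */

/* DSM_BITMAP_BYTES == BLCKSZ / 2, so the bitmap fills exactly half the
 * page and the other half stays usable for ordinary heap tuples. */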

I think I wasn't clear in my explanation. If a table had fewer than 32k 
pages, it had just one bitmap, with each bit representing a heap page. 
The bitmap was stored in the special area of the first page. If a table 
was larger, the bitmap on the 1st page represented the first 32k pages, 
the bitmap on page 32768 represented the next 32k pages, and so on. In 
other words, the DSM bit for page number X was always on page number 
(X / 32768) * 32768. Vacuum had to check all the bitmaps.
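
To make the mapping concrete, here's a minimal sketch of the lookup, 
assuming 0-based block numbers and one bit per heap page; the function 
and macro names are made up for illustration, not taken from a real patch:

#include <stdint.h>

#define DSM_PAGES_PER_BITMAP 32768

typedef uint32_t BlockNumber;

/* Heap page whose special-area bitmap covers 'blkno': the first page of
 * its 32768-page group. */
static BlockNumber
dsm_bitmap_page(BlockNumber blkno)
{
    return (blkno / DSM_PAGES_PER_BITMAP) * DSM_PAGES_PER_BITMAP;
}

/* Byte and bit offsets of 'blkno' within that bitmap (1 bit per page). */
static void
dsm_bit_position(BlockNumber blkno, uint32_t *byteno, uint32_t *bitno)
{
    uint32_t off = blkno % DSM_PAGES_PER_BITMAP;

    *byteno = off / 8;
    *bitno  = off % 8;
}

Vacuum then visits every DSM_PAGES_PER_BITMAP'th page and scans its 
bitmap, which is the "check all the bitmaps" part above.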

>> BTW: Yesterday I realized that bitmap indexes could be retail vacuumed
>> safely. You'll still need to visit all bitmaps to find the dead bit, but
>> you only need to check the bitmap page that corresponds to the tid of the
>> dead tuple.
> 
> Cool. The problem is solving it for the other AMs, particularly btree.

Yeah..

>>> | http://archives.postgresql.org/pgsql-hackers/2004-03/msg00957.php
>>> | Maintain 2 bits per block that tell if the block has been vacuumed of all
>>> | dead tuples since the last time it was dirtied, and if all its tuples are
>>> | completely frozen.
>>>
>>> We use 1 bit per block, so we cannot separate pages that need only
>>> vacuum from pages that need only freeze. The reason is that we cannot
>>> determine where to record the before- and after-versions of updated
>>> tuples. If the transaction is committed, the before-version should be
>>> recorded in the needs-vacuum bitmap and the after-version in the
>>> needs-freeze bitmap; but on rollback, it is the other way around. We
>>> cannot judge which to do until the end of the transaction.
>> Yeah, that makes the DSM less useful. Are you thinking of freezing more
>> aggressively because of that? Or doing a full-scanning vacuum every now
>> and then to freeze?
> 
> I don't see any problem with moving to two status bits per block in the
> DSM-in-heap model, so we can track this better. What do you think?

Well, the obvious drawback is that two bits take more space than one 
bit. But it feels ok to me as well.
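
For what it's worth, two status bits per block still pack four blocks 
into each byte. A minimal sketch of such an encoding, assuming a plain 
byte-array bitmap; the flag and function names are purely illustrative:

#include <stdbool.h>
#include <stdint.h>

#define DSM_NEEDS_VACUUM  0x01   /* block may contain dead tuples */
#define DSM_NEEDS_FREEZE  0x02   /* block may contain unfrozen tuples */

typedef uint32_t BlockNumber;

/* Two bits per block, so four blocks share each byte. */
static uint8_t
dsm_get_status(const uint8_t *map, BlockNumber blkno)
{
    return (map[blkno / 4] >> ((blkno % 4) * 2)) & 0x03;
}

static void
dsm_set_flags(uint8_t *map, BlockNumber blkno, uint8_t flags)
{
    map[blkno / 4] |= (uint8_t) ((flags & 0x03) << ((blkno % 4) * 2));
}

/* A normal vacuum would visit only DSM_NEEDS_VACUUM blocks; a freeze
 * pass would additionally visit DSM_NEEDS_FREEZE blocks. */
static bool
dsm_block_needs_visit(const uint8_t *map, BlockNumber blkno, bool freezing)
{
    uint8_t status = dsm_get_status(map, blkno);

    return (status & DSM_NEEDS_VACUUM) != 0 ||
           (freezing && (status & DSM_NEEDS_FREEZE) != 0);
}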

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

