Re: 8.3.0 Core with concurrent vacuum fulls - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Re: 8.3.0 Core with concurrent vacuum fulls
Date
Msg-id 47D03DC5.8010301@enterprisedb.com
In response to Re: 8.3.0 Core with concurrent vacuum fulls  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: 8.3.0 Core with concurrent vacuum fulls
List pgsql-hackers
Tom Lane wrote:
> "Pavan Deolasee" <pavan.deolasee@gmail.com> writes:
>> On Wed, Mar 5, 2008 at 9:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> [ thinks some more... ]  I guess we could use a flag array dimensioned
>>> MaxHeapTuplesPerPage to mark already-processed tuples, so that you
>>> wouldn't need to search the existing arrays but just index into the flag
>>> array with the tuple's offsetnumber.
> 
>> We can actually combine this and the page copying ideas. Instead of copying
>> the entire page, we can just copy the line pointers array and work on the copy.
> 
> I think that just makes things more complex and fragile.  I like
> Heikki's idea, in part because it makes the normal path and the WAL
> recovery path guaranteed to work alike.  I'll attach my work-in-progress
> patch for this --- it doesn't do anything about the invalidation
> semantics problem but it does fix the critical-section-too-big problem.

FWIW, the patch looks fine to me. By inspection; I didn't test it.

I'm glad we got away with a single "marked" array. I was afraid we would 
need to consult the unused/redirected/dead arrays separately.

Do you have a plan for the invalidation problem? I think we could just 
not remove the redirection line pointers in catalog tables.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

