Re: [PATCHES] HOT WIP Patch - version 3.2

From Pavan Deolasee
Subject Re: [PATCHES] HOT WIP Patch - version 3.2
Date
Msg-id 2e78013d0702271023q3b3fe39bja9ef7dbcf2c627e4@mail.gmail.com
In response to Re: [PATCHES] HOT WIP Patch - version 3.2  (Heikki Linnakangas <heikki@enterprisedb.com>)
List pgsql-hackers

On 2/27/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
> Pavan Deolasee wrote:
> > - What do we do with the LP_DELETEd tuples at VACUUM time?
> > In this patch, we are collecting them and vacuuming them like
> > any other dead tuples. But is that the best thing to do?
>
> Since they don't need index cleanups, it's a waste of
> maintenance_work_mem to keep track of them in the dead tuples array.
> Let's remove them in the 1st phase. That means trading the shared lock
> for a vacuum-level lock on pages with LP_DELETEd tuples. Or if we want
> to get fancy, we could skip LP_DELETEd tuples in the 1st phase for pages
> that had dead tuples on them, and scan and remove them in the 2nd phase
> when we have to acquire the vacuum-level lock anyway.

I liked the idea of not collecting the LP_DELETEd tuples in the first
pass. We also prune the HOT-update chains on the page in the first
pass; maybe that can be moved to the second pass as well. We need to
work carefully through the race conditions between VACUUM, pruning,
and tuple reuse, though.
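
To make that concrete, here is a minimal sketch of the first-pass
reclaim. This is an illustration, not the patch's code; it assumes
the itemid.h macros (ItemIdDeleted/ItemIdSetUnused) are what the
patch uses for LP_DELETEd slots, and it leaves defragmentation to
the caller, which must hold the vacuum-level lock:

#include "postgres.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"
#include "storage/off.h"

/* Reclaim LP_DELETEd slots in the first VACUUM pass. */
static bool
lazy_reclaim_deleted_slots(Page page)
{
    OffsetNumber offnum;
    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
    bool        reclaimed = false;

    for (offnum = FirstOffsetNumber;
         offnum <= maxoff;
         offnum = OffsetNumberNext(offnum))
    {
        ItemId      itemid = PageGetItemId(page, offnum);

        /* Only LP_DELETEd slots; no index entries reference them */
        if (!ItemIdDeleted(itemid))
            continue;

        /* Safe to free without a second (index-cleanup) pass */
        ItemIdSetUnused(itemid);
        reclaimed = true;
    }

    /*
     * If anything was reclaimed, the caller defragments the page
     * (PageRepairFragmentation) while holding the vacuum-level lock.
     */
    return reclaimed;
}

Since nothing here touches the indexes, the dead-tuples array never
sees these slots and maintenance_work_mem is spared.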
 

> > - While searching for an LP_DELETEd tuple, we start from the
> > first offset and return the first slot which is big enough
> > to store the tuple. Is there a better search algorithm
> > (sorting/randomizing)? Should we go for best-fit instead
> > of first-fit?
>
> Best-fit seems better to me. It's pretty cheap to scan for LP_DELETEd
> line pointers, but wasting space can lead to cold updates and get much
> more expensive.

Ok. I will give it a shot once the basic things are ready.
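
For what it's worth, a best-fit search could look roughly like the
following; PageGetBestFitDeletedSlot() is a made-up name, and the body
just reuses the standard bufpage/itemid macros:

/* Return the smallest LP_DELETEd slot that still fits newsize. */
static OffsetNumber
PageGetBestFitDeletedSlot(Page page, Size newsize)
{
    OffsetNumber offnum;
    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
    OffsetNumber best = InvalidOffsetNumber;
    Size        bestsize = MaxHeapTupleSize + 1;    /* beats any slot */

    for (offnum = FirstOffsetNumber;
         offnum <= maxoff;
         offnum = OffsetNumberNext(offnum))
    {
        ItemId      itemid = PageGetItemId(page, offnum);
        Size        size;

        if (!ItemIdDeleted(itemid))
            continue;

        size = ItemIdGetLength(itemid);

        /* Keep the smallest slot that is still big enough */
        if (size >= newsize && size < bestsize)
        {
            best = offnum;
            bestsize = size;
            if (size == newsize)    /* exact fit; stop early */
                break;
        }
    }

    return best;
}

The scan is still a single pass over the line pointers, so the only
extra cost over first-fit is not stopping at the first match.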
 

> You could also prune the chains on the page to make room for the update,
> and if you can get a vacuum lock you can also defrag the page.

Yes, that's a good suggestion as well. I am already doing that in the
patch I am working on right now.
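
In outline, the update path would then do something like this
(schematic only: heap_page_prune_chains() is a placeholder name, and
the buffer content-lock dance around ConditionalLockBufferForCleanup()
is glossed over):

if (PageGetFreeSpace(page) < newtupsize)
{
    /* First try pruning HOT chains to free LP_DELETEd space */
    heap_page_prune_chains(relation, buffer);   /* placeholder name */

    /*
     * Still not enough room?  If nobody else has the page pinned,
     * take a vacuum-level lock and defragment it.
     */
    if (PageGetFreeSpace(page) < newtupsize &&
        ConditionalLockBufferForCleanup(buffer))
        PageRepairFragmentation(page);  /* defrag; exact call schematic */
}

/* If the page is still too full, fall back to a cold update. */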
 

> > - Should we have metadata on the heap page to track the
> > number of LP_DELETEd tuples, the number of HOT-update chains in
> > the page, and any other information that can help us optimize
> > search/prune operations?
>
> I don't think the CPU overhead is that significant; we only need to do
> the search/prune when a page gets full. We can add flags later if we
> feel like it, but let's keep it simple for now.


I am making good progress with the line-pointer redirection stuff.
It's showing tremendous value in keeping the table and index sizes
under control. But we need to measure the CPU overhead as well,
and optimize there if required.
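
For anyone following along: the idea is that when the root tuple of a
HOT chain is pruned, its line pointer becomes a redirect to the live
chain member, so existing index entries stay valid and no new index
inserts are needed. Schematically (ItemIdSetRedirect() is my assumed
spelling of the macro; the WIP patch may differ):

/*
 * rootoffnum: the slot the index entries point at (dead chain root)
 * newoffnum:  the slot of the live tuple in the HOT chain
 */
ItemId      rootlp = PageGetItemId(page, rootoffnum);

/* Free the root tuple's space but keep its slot as a redirect */
ItemIdSetRedirect(rootlp, newoffnum);   /* assumed macro name */

/*
 * An index scan landing on rootoffnum now follows the redirect to
 * newoffnum instead of a dead tuple, so the root tuple's space can
 * be recycled without touching the indexes.
 */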



Thanks,
Pavan

--

EnterpriseDB     http://www.enterprisedb.com
