Thread: Escaping the ARC patent

Escaping the ARC patent

From: Tom Lane
I've been doing a bit of research on $subj, and coming to the conclusion
that the ARC patent is a lot narrower than it might appear.  In fact
most of the parts of the algorithm that we actually want have prior art.
I looked in particular at Johnson and Shasha's well-known "2Q" paper,
published in 1994 (http://citeseer.ist.psu.edu/63909.html).  This paper
describes the use of two lists, which they call A1 and Am (as opposed to
ARC's T1 and T2) but the basic principle is the same: a page goes into
A1 on first use, and doesn't get to Am unless used a second time before
aging out of the cache.  2Q also includes a list of pages that have
recently been in the cache but no longer are.  So the actually
patentable parts of ARC are just some rather narrow decisions about the
management of these lists, in particular the use of a target T1len to
dynamically adapt the sizes of the lists.  The 2Q paper proposes using
fixed fractions of the total available space for each list --- and it
includes statistics showing that the algorithm isn't excessively
sensitive to the exact values used, so ARC's claimed "self tuning"
advantage isn't all that great after all.

These conclusions are borne out by a close reading of the patent
application (which is at

http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220040098541%22.PGNR.&OS=DN/20040098541&RS=DN/20040098541
if you want to look for yourself).  Claim 1 reads

1. A method for adaptively managing pages in a cache memory with a
variable workload, comprising: maintaining the cache memory into a first
list L1 and a second list L2; wherein the cache memory has a capacity to
store c pages; and adaptively distributing the workload between the
first list L1 and the second list L2, to a total capacity of c pages.

Given the prior art, the critical word in this sentence is "adaptively";
take that out and you have nothing that wasn't published long before.
If we remove the adaptivity --- ie, just use a fixed division of list
sizes --- we escape claim 1 and all the other claims that depend on it.

The only other claim that isn't dependent on claim 1 or a restatement of
it is

45. A method for adaptively managing pages in a memory, comprising:
defining a cache memory; defining a cache directory; organizing the
cache directory into four disjoint lists of pages: list T1, list T2,
list B1, and list B2; and wherein the cache memory contains pages that
are members of any of the list T1 or the list T2.

So if we use non-variable sizes of T1/T2 and don't use the four-way
list structure to manage remembrance of pages-formerly-in-cache,
we escape the patent.  But we still have scan resistance, which is the
main thing that ARC was going to buy us.  Pages that are scanned only
once don't get out of A1 and so aren't able to swamp out pages
referenced multiple times.

After reading the 2Q paper my inclination is to use exactly Johnson and
Shasha's "simplified 2Q" algorithm, which uses just A1 and Am with no
remembrance of formerly cached pages.  Their "full 2Q" algorithm strikes
me as a tad bizarre because it will only promote a page into Am after it
has fallen out of A1, ie, it takes two physical reads of the page to
get into Am.  That's just weird.  I think that pages should advance from
A1 into Am on second reference.  Given that, you don't need any
remembrance of pages that were formerly in A1, which basically halves
the memory overhead of the ARC algorithm.
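[Editor's note: a minimal sketch of that promote-on-second-reference variant may help; this is a toy model, not PostgreSQL's freelist.c code, and the class name and fixed 25% A1 fraction are invented for illustration.]

```python
from collections import OrderedDict

class Simplified2Q:
    """Toy sketch of a simplified-2Q cache: A1 holds pages seen once
    (FIFO order), Am holds pages referenced again (LRU order).  The
    division between the lists is a fixed fraction of total capacity,
    i.e. there is no ARC-style adaptivity."""

    def __init__(self, capacity, a1_fraction=0.25):  # fraction is illustrative
        self.capacity = capacity
        self.a1_max = max(1, int(capacity * a1_fraction))
        self.a1 = OrderedDict()  # first-time pages, FIFO
        self.am = OrderedDict()  # re-referenced pages, LRU

    def access(self, page):
        """Return True on a cache hit, False on a miss."""
        if page in self.am:                  # hot page: refresh LRU position
            self.am.move_to_end(page)
            return True
        if page in self.a1:                  # second reference: promote to Am
            del self.a1[page]
            self._make_room()
            self.am[page] = True
            return True
        self._make_room()                    # miss: admit into A1
        self.a1[page] = True
        return False

    def _make_room(self):
        while len(self.a1) + len(self.am) >= self.capacity:
            # Evict from A1 once it exceeds its fixed share, else from Am.
            if len(self.a1) >= self.a1_max:
                self.a1.popitem(last=False)  # oldest one-timer
            elif self.am:
                self.am.popitem(last=False)  # least recently used hot page
            else:
                self.a1.popitem(last=False)
```

In this sketch a long sequential scan only ever cycles pages through A1, so pages that made it into Am survive the scan, which is the scan resistance described above.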

An advantage of heading in this direction (as opposed to, say, LRU/k
or other algorithms) is that this represents a direct simplification
of the ARC code we have now.  We can probably implement it almost
entirely by deletions from freelist.c, with little newly written code.
That gives me a whole lot more confidence that the result will be
reliable enough to back-patch into 8.0.*.

Comments?
        regards, tom lane


Re: Escaping the ARC patent

From: "Jonah H. Harris"
I'm familiar with the 2Q algorithm.  I also remember seeing, I believe, 
a public domain 2Q implementation floating around somewhere.

Tom Lane wrote:

>I've been doing a bit of research on $subj, and coming to the conclusion
>that the ARC patent is a lot narrower than it might appear.  In fact
>most of the parts of the algorithm that we actually want have prior art.
>[...]



Re: Escaping the ARC patent

From: Tom Lane
"Jonah H. Harris" <jharris@tvi.edu> writes:
> I'm familiar with the 2Q algorithm.  I also remember seeing, I believe, 
> a public domain 2Q implementation floating around somewhere.

No doubt, but I think the more conservative way to get there is to
proceed by trimming down the working code we already have.  Adapting
someone else's code to fit into our backend infrastructure would involve
a fair amount of modifications (memory management and error handling at
the very least) with consequent risks of introducing bugs.

Still, it wouldn't be bad to have someone else's implementation at hand
as a comparison point.  Do you have a link handy?
        regards, tom lane


Re: Escaping the ARC patent

From: "Jonah H. Harris"
I'll dive into my bookmarks and see if I can find it.

Tom Lane wrote:

>Still, it wouldn't be bad to have someone else's implementation at hand
>as a comparison point.  Do you have a link handy?



Re: Escaping the ARC patent

From: "Simon Riggs"
>Tom Lane wrote:
> I've been doing a bit of research on $subj, and coming to the conclusion
> that the ARC patent is a lot narrower than it might appear.  In fact
> most of the parts of the algorithm that we actually want have prior art.

Yes, it appears that way to me also.

> The 2Q paper proposes using
> fixed fractions of the total available space for each list --- and it
> includes statistics showing that the algorithm isn't excessively
> sensitive to the exact values used, so ARC's claimed "self tuning"
> advantage isn't all that great after all.

Well, after a few months of watching performance numbers fly by, I have
observed that the ARC algorithm produces a very swift reduction to a
fairly static situation for a static workload.  A closer look at the ARC
paper suggests, IMHO, that the ARC algorithm is no better for handling
general workloads; its swift adaptation to strange workloads is what
gives it the slight edge it has over those earlier techniques.

Later work by the same authors shows that holding the T1 list open by a
fixed amount is actually better for database workloads anyway (I believe
the technique is called CAR; sorry, no link).  So the ARC authors have
themselves shown that ARC is not the ultimate cache management
algorithm for a DBMS anyway.

Removing the adaptation code will slightly improve performance anyway.

> An advantage of heading in this direction (as opposed to, say, LRU/k
> or other algorithms) is that this represents a direct simplification
> of the ARC code we have now.  We can probably implement it almost
> entirely by deletions from freelist.c, with little newly written code.
> That gives me a whole lot more confidence that the result will be
> reliable enough to back-patch into 8.0.*.
>
> Comments?

That sounds like a very good approach.

I'd be inclined to move towards that quickly, so we can return to other
issues which can only be addressed once these code changes are in, such
as BufMgrLock contention and bgwriter tuning, neither of which ARC
addresses anyway.

Best Regards, Simon Riggs



Re: Escaping the ARC patent

From: "Jonah H. Harris"
I found the reference I had seen.  The engine was the Multicache 
Simulation Environment written in C++.  I can't find the code to it 
anymore but I've contacted the author for a copy.

Jonah H. Harris wrote:

> I'll dive into my bookmarks and see if I can find it.




Re: Escaping the ARC patent

From: Bruce Momjian
Tom Lane wrote:
> Given the prior art, the critical word in this sentence is "adaptively";
> take that out and you have nothing that wasn't published long before.
> If we remove the adaptivity --- ie, just use a fixed division of list
> sizes --- we escape claim 1 and all the other claims that depend on it.
> 
> The only other claim that isn't dependent on claim 1 or a restatement of
> it is
> 
> 45. A method for adaptively managing pages in a memory, comprising:
> defining a cache memory; defining a cache directory; organizing the
> cache directory into four disjoint lists of pages: list T1, list T2,
> list B1, and list B2; and wherein the cache memory contains pages that
> are members of any of the list T1 or the list T2.
> 
> So if we use non-variable sizes of T1/T2 and don't use the four-way
> list structure to manage remembrance of pages-formerly-in-cache,
> we escape the patent.  But we still have scan resistance, which is the
> main thing that ARC was going to buy us.  Pages that are scanned only
> once don't get out of A1 and so aren't able to swamp out pages
> referenced multiple times.

So are you saying you are making T1, T2, B1, and B2 a fixed percentage
of the buffer cache rather than making them adjust over time?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  + If your life is a hard drive,     |  13 Roberts Road
  + Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Escaping the ARC patent

From: Tom Lane
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> So are you saying you are making T1, T2, B1, and B2 a fixed percentage
> of the buffer cache rather than making them adjust over time?

B2 goes away entirely (if we keep four lists we violate claim 45) and
the other lists become fixed length, yes.

We could also contemplate making them variable length according to some
other set of rules than ARC's, but then you get into having to parse the
other sixty-odd claims of the patent and decide what is a "different
enough" rule.

At the moment I'm not seeing evidence that a variable policy beats a
fixed policy anyway.  Unless someone comes up with a benchmark showing a
substantial advantage for ARC over 2Q, I think we should just declare
victory over this problem.  We have plenty of other tasks on our plates.
        regards, tom lane


Re: Escaping the ARC patent

From: Bruce Momjian
Tom Lane wrote:
> [...]
> At the moment I'm not seeing evidence that a variable policy beats a
> fixed policy anyway.  Unless someone comes up with a benchmark showing a
> substantial advantage for ARC over 2Q, I think we should just declare
> victory over this problem.  We have plenty of other tasks on our plates.

Agreed.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  + If your life is a hard drive,     |  13 Roberts Road
  + Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Escaping the ARC patent

From: "Jim C. Nasby"
On Fri, Feb 04, 2005 at 11:27:40AM -0500, Tom Lane wrote:
> [...]
> At the moment I'm not seeing evidence that a variable policy beats a
> fixed policy anyway.  Unless someone comes up with a benchmark showing a
> substantial advantage for ARC over 2Q, I think we should just declare
> victory over this problem.  We have plenty of other tasks on our plates.

I think it would be useful to have a means to adjust the queue sizes
dynamically from a database connection. If the optimum queue sizes
depend on the workload this would allow things like batch processes to
tweak the queue sizes for better performance when they're running. It
would also facilitate performing the testing you mention.
-- 
Jim C. Nasby, Database Consultant               decibel@decibel.org 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"


Re: Escaping the ARC patent

From: Tom Lane
"Jim C. Nasby" <decibel@decibel.org> writes:
> I think it would be useful to have a means to adjust the queue sizes
> dynamically from a database connection. If the optimum queue sizes
> depend on the workload this would allow things like batch processes to
> tweak the queue sizes for better performance when they're running.

That strikes me as a bad idea --- what will cause the queue size to
revert to normal, if the batch process fails before resetting it?

In any case, the only mechanism we have available for such things is
modifying postgresql.conf and then SIGHUPping the postmaster; there is
no other way to adjust a parameter that all backends must see as having
the same value.  So even if we invent a GUC parameter for this, it's not
going to be something your batch process can lightly fool around with.
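[Editor's note: a toy model of that mechanism, for illustration only; the dicts stand in for postgresql.conf and the running settings, and this is not backend code.]

```python
import os
import signal

# Toy model of "edit postgresql.conf, then SIGHUP the postmaster":
# a parameter that all backends must see as having the same value only
# changes when the config is re-read in response to an explicit signal.
settings = {"a1_list_size": 256}       # the value everyone currently sees
config_file = {"a1_list_size": 512}    # stands in for the edited conf file

def on_sighup(signum, frame):
    settings.update(config_file)       # "re-read the configuration file"

signal.signal(signal.SIGHUP, on_sighup)
os.kill(os.getpid(), signal.SIGHUP)    # roughly what `pg_ctl reload` does
```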
        regards, tom lane


Re: Escaping the ARC patent

From: Philip Warner
At 09:02 AM 5/02/2005, Tom Lane wrote:
>That strikes me as a bad idea --- what will cause the queue size to
>revert to normal, if the batch process fails before resetting it?

Just an idle thought, but each connection to the DB could add a fixed
amount to some queueing parameter.  The amount added would be set per
backend, and the client could use a SET variable to adjust the standard
amount for its own backend.  When the client dies/disconnects, the
queueing parameter (whatever it is) would be reduced appropriately.
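[Editor's note: a purely hypothetical sketch of that idle thought; the names are invented and nothing like this exists in the backend. Each connection contributes its own per-backend amount to the shared parameter, and the contribution is retired automatically when the connection ends, even if the client dies mid-job.]

```python
import contextlib

shared = {"queue_param": 100}           # hypothetical shared queueing parameter

@contextlib.contextmanager
def connection(extra=10):               # per-backend amount, adjustable via "SET"
    shared["queue_param"] += extra      # contribution added at connect time
    try:
        yield shared
    finally:
        shared["queue_param"] -= extra  # reverted on disconnect OR failure

with connection(extra=50):
    assert shared["queue_param"] == 150  # batch job sees the larger value
assert shared["queue_param"] == 100      # back to normal after disconnect
```

The `finally` clause is what answers the "what resets it if the batch process fails?" objection: the reduction happens whether the connection closes cleanly or not.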



----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 03 5330 3172          |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp.mit.edu:11371       |/ 



Re: Escaping the ARC patent

From: "Bort, Paul"
> Just an idle thought, but each connection to the DB could add a fixed
> amount to some queueing parameter. The amount added to be set
> per backend, and the client could use a SET variable to adjust the
> standard amount for its own backend. When the client dies/disconnects,
> the queueing parameter (whatever it is) would be reduced appropriately.

Wouldn't that require a SIGHUP on the postmaster with every connection?
(Because all of the backends need to know about the new buffer count)

Wouldn't that be bad?