ARC Memory Usage analysis

From
Simon Riggs
Date:
I've been using the ARC debug options to analyse memory usage on the
PostgreSQL 8.0 server. This is a precursor to more complex performance
analysis work on the OSDL test suite.

I've simplified some of the ARC reporting into a single log line, which
is enclosed here as a patch on freelist.c. This includes reporting of:
- the total memory in use, which wasn't previously reported
- the cache hit ratio, which was slightly incorrectly calculated
- a useful-ish value for looking at the "B" lists in ARC
(This is a patch against cvstip, but I'm not sure whether this has
potential for inclusion in 8.0...)

The total memory in use is useful because it allows you to tell whether
shared_buffers is set too high. If it is set too high, then memory usage
will continue to grow slowly up to the max, without any corresponding
increase in cache hit ratio. If shared_buffers is too small, then memory
usage will climb quickly and linearly to its maximum.
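
For example (values invented purely for illustration), the new single
summary line looks like:

  DEBUG:  shared_buffers used=    9547 cache hits=  92% turbulence=   0%

A "used" figure that keeps creeping upwards with a flat hit ratio suggests
shared_buffers is oversized; a figure that races straight to the maximum
suggests it is undersized.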

The last one I've called "turbulence" in an attempt to ascribe some
useful meaning to B1/B2 hits - I've tried a few other measures though
without much success. Turbulence is the hit ratio of B1+B2 lists added
together. By observation, this is zero when ARC gives smooth operation,
and goes above zero otherwise. Typically, turbulence occurs when
shared_buffers is too small for the working set of the database/workload
combination and ARC repeatedly re-balances the lengths of T1/T2 as a
result of "near-misses" on the B1/B2 lists. Turbulence doesn't usually
cut in until the cache is fully utilized, so there is usually some delay
after startup.
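
In terms of the counters freelist.c already keeps, the figures reported
above reduce to just (a simplified sketch of what the attached patch does):

    buf_used = T1_LENGTH + T2_LENGTH;   /* shared buffers actually in use */
    all_hit  = t1_hit + t2_hit;         /* real cache hits, as % of lookups */
    b_hit    = b1_hit + b2_hit;         /* "turbulence": B1+B2 near-misses */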

We also recently discussed that I would add some further memory analysis
features for 8.1, so I've been trying to figure out how.

The idea that B1, B2 represent something really useful doesn't seem to
have been borne out - though I'm open to persuasion there.

I originally envisaged a "shadow list" operating in extension of the
main ARC list. This will require some re-coding, since the variables and
macros are all hard-coded to a single set of lists. No complaints, just
it will take a little longer than we all thought (for me, that is...)

My proposal is to alter the code to allow an array of memory linked
lists. The actual list would be [0] - other additional lists would be
created dynamically as required i.e. not using IFDEFs, since I want this
to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
work. This will then allow reporting against the additional lists, so
that cache hit ratios can be seen with various other "prototype"
shared_buffer settings.
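
Roughly what I have in mind (a sketch only - the names are made up, not
the current freelist.c structures):

    #define STRAT_NUM_LISTS 4           /* T1, B1, T2, B2 */

    typedef struct
    {
        int     listHead[STRAT_NUM_LISTS];
        int     listTail[STRAT_NUM_LISTS];
        int     listSize[STRAT_NUM_LISTS];
        long    num_hit[STRAT_NUM_LISTS];
        long    num_lookup;
        int     t1_target;
    } StrategySet;

    /* strategySets[0] drives real buffer replacement; [1..n] are "prototype"
     * directories sized as if shared_buffers were set differently, updated
     * on every lookup purely so their hit ratios can be reported. */
    static StrategySet *strategySets;
    static int          numStrategySets;    /* 1 + prototypes from the GUC */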

Any thoughts?

--
Best Regards, Simon Riggs


Re: [HACKERS] ARC Memory Usage analysis

From
Jan Wieck
Date:
On 10/22/2004 2:50 PM, Simon Riggs wrote:

> I've been using the ARC debug options to analyse memory usage on the
> PostgreSQL 8.0 server. This is a precursor to more complex performance
> analysis work on the OSDL test suite.
>
> I've simplified some of the ARC reporting into a single log line, which
> is enclosed here as a patch on freelist.c. This includes reporting of:
> - the total memory in use, which wasn't previously reported
> - the cache hit ratio, which was slightly incorrectly calculated
> - a useful-ish value for looking at the "B" lists in ARC
> (This is a patch against cvstip, but I'm not sure whether this has
> potential for inclusion in 8.0...)
>
> The total memory in use is useful because it allows you to tell whether
> shared_buffers is set too high. If it is set too high, then memory usage
> will continue to grow slowly up to the max, without any corresponding
> increase in cache hit ratio. If shared_buffers is too small, then memory
> usage will climb quickly and linearly to its maximum.
>
> The last one I've called "turbulence" in an attempt to ascribe some
> useful meaning to B1/B2 hits - I've tried a few other measures though
> without much success. Turbulence is the hit ratio of B1+B2 lists added
> together. By observation, this is zero when ARC gives smooth operation,
> and goes above zero otherwise. Typically, turbulence occurs when
> shared_buffers is too small for the working set of the database/workload
> combination and ARC repeatedly re-balances the lengths of T1/T2 as a
> result of "near-misses" on the B1/B2 lists. Turbulence doesn't usually
> cut in until the cache is fully utilized, so there is usually some delay
> after startup.
>
> We also recently discussed that I would add some further memory analysis
> features for 8.1, so I've been trying to figure out how.
>
> The idea that B1, B2 represent something really useful doesn't seem to
> have been borne out - though I'm open to persuasion there.
>
> I originally envisaged a "shadow list" operating in extension of the
> main ARC list. This will require some re-coding, since the variables and
> macros are all hard-coded to a single set of lists. No complaints, just
> it will take a little longer than we all thought (for me, that is...)
>
> My proposal is to alter the code to allow an array of memory linked
> lists. The actual list would be [0] - other additional lists would be
> created dynamically as required i.e. not using IFDEFs, since I want this
> to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
> work. This will then allow reporting against the additional lists, so
> that cache hit ratios can be seen with various other "prototype"
> shared_buffer settings.

All the existing lists live in shared memory, so that dynamic approach
suffers from the fact that the memory has to be allocated during ipc_init.
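
I.e. whatever extra directories you want have to be counted into the
shared memory request made at that point, along the lines of (names
invented for illustration, not actual code; ShadowListEntry stands in for
whatever a directory entry would be):

    size_t
    StrategyShadowShmemSize(void)
    {
        /* shadow_list_count / shadow_list_size: hypothetical GUCs giving the
         * number of prototype directories and the entries each one tracks */
        return (size_t) shadow_list_count * shadow_list_size * sizeof(ShadowListEntry);
    }

The area itself would then be grabbed once at startup (e.g. via
ShmemInitStruct()), after which a GUC could only switch the preallocated
lists on or off, not change how many exist.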

What do you think about my other theory to make C actually 2x effective
cache size and NOT to keep T1 in shared buffers but to assume T1 lives
in the OS buffer cache?


Jan

>
> Any thoughts?
>
>
>
> ------------------------------------------------------------------------
>
> Index: freelist.c
> ===================================================================
> RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/freelist.c,v
> retrieving revision 1.48
> diff -d -c -r1.48 freelist.c
> *** freelist.c    16 Sep 2004 16:58:31 -0000    1.48
> --- freelist.c    22 Oct 2004 18:15:38 -0000
> ***************
> *** 126,131 ****
> --- 126,133 ----
>       if (StrategyControl->stat_report + DebugSharedBuffers < now)
>       {
>           long        all_hit,
> +                     buf_used,
> +                     b_hit,
>                       b1_hit,
>                       t1_hit,
>                       t2_hit,
> ***************
> *** 155,161 ****
>           }
>
>           if (StrategyControl->num_lookup == 0)
> !             all_hit = b1_hit = t1_hit = t2_hit = b2_hit = 0;
>           else
>           {
>               b1_hit = (StrategyControl->num_hit[STRAT_LIST_B1] * 100 /
> --- 157,163 ----
>           }
>
>           if (StrategyControl->num_lookup == 0)
> !             all_hit = buf_used = b_hit = b1_hit = t1_hit = t2_hit = b2_hit = 0;
>           else
>           {
>               b1_hit = (StrategyControl->num_hit[STRAT_LIST_B1] * 100 /
> ***************
> *** 166,181 ****
>                         StrategyControl->num_lookup);
>               b2_hit = (StrategyControl->num_hit[STRAT_LIST_B2] * 100 /
>                         StrategyControl->num_lookup);
> !             all_hit = b1_hit + t1_hit + t2_hit + b2_hit;
>           }
>
>           errcxtold = error_context_stack;
>           error_context_stack = NULL;
> !         elog(DEBUG1, "ARC T1target=%5d B1len=%5d T1len=%5d T2len=%5d B2len=%5d",
>                T1_TARGET, B1_LENGTH, T1_LENGTH, T2_LENGTH, B2_LENGTH);
> !         elog(DEBUG1, "ARC total   =%4ld%% B1hit=%4ld%% T1hit=%4ld%% T2hit=%4ld%% B2hit=%4ld%%",
>                all_hit, b1_hit, t1_hit, t2_hit, b2_hit);
> !         elog(DEBUG1, "ARC clean buffers at LRU       T1=   %5d T2=   %5d",
>                t1_clean, t2_clean);
>           error_context_stack = errcxtold;
>
> --- 168,187 ----
>                         StrategyControl->num_lookup);
>               b2_hit = (StrategyControl->num_hit[STRAT_LIST_B2] * 100 /
>                         StrategyControl->num_lookup);
> !             all_hit = t1_hit + t2_hit;
> !                b_hit = b1_hit + b2_hit;
> !             buf_used = T1_LENGTH + T2_LENGTH;
>           }
>
>           errcxtold = error_context_stack;
>           error_context_stack = NULL;
> !         elog(DEBUG1, "shared_buffers used=%8ld cache hits=%4ld%% turbulence=%4ld%%",
> !              buf_used, all_hit, b_hit);
> !         elog(DEBUG2, "ARC T1target=%5d B1len=%5d T1len=%5d T2len=%5d B2len=%5d",
>                T1_TARGET, B1_LENGTH, T1_LENGTH, T2_LENGTH, B2_LENGTH);
> !         elog(DEBUG2, "ARC total   =%4ld%% B1hit=%4ld%% T1hit=%4ld%% T2hit=%4ld%% B2hit=%4ld%%",
>                all_hit, b1_hit, t1_hit, t2_hit, b2_hit);
> !         elog(DEBUG2, "ARC clean buffers at LRU       T1=   %5d T2=   %5d",
>                t1_clean, t2_clean);
>           error_context_stack = errcxtold;
>
>
>
> ------------------------------------------------------------------------
>
>


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #

Re: [HACKERS] ARC Memory Usage analysis

From
Simon Riggs
Date:
On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:
> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>
> >
> > My proposal is to alter the code to allow an array of memory linked
> > lists. The actual list would be [0] - other additional lists would be
> > created dynamically as required i.e. not using IFDEFs, since I want this
> > to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
> > work. This will then allow reporting against the additional lists, so
> > that cache hit ratios can be seen with various other "prototype"
> > shared_buffer settings.
>
> All the existing lists live in shared memory, so that dynamic approach
> suffers from the fact that the memory has to be allocated during ipc_init.
>

[doh] - dreaming again. Yes of course, server startup it is then. [That
way, we can include the memory for it at server startup, then allow the
GUC to be turned off after a while to avoid another restart?]

> What do you think about my other theory to make C actually 2x effective
> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> in the OS buffer cache?

Summarised like that, I understand it.

My observation is that performance varies significantly between startups
of the database, which does indicate that the OS cache is working well.
So, yes it does seem as if we have a 3 tier cache. I understand you to
be effectively suggesting that we go back to having just a 2-tier cache.

I guess we've got two options:
1. Keep ARC as it is, but just allocate much of the available physical
memory to shared_buffers, so you know that effective_cache_size is low
and that it's either in T1 or it's on disk.
2. Alter ARC so that we experiment with the view that T1 is in the OS
and T2 is in shared_buffers, we don't bother keeping T1. (as you say)

Hmmm...I think I'll pass on trying to judge its effectiveness -
simplifying things is likely to make it easier to understand and predict
behaviour. It's well worth trying, and it seems simple enough to make a
patch that keeps T1target at zero.

i.e. Scientific method: conjecture + experimental validation = theory

If you make up a patch, probably against BETA4, Josh and I can include it
in the performance testing that I'm hoping we can do over the next few weeks.

Whatever makes 8.0 a high performance release is well worth it.

Best Regards,

Simon Riggs


Re: [HACKERS] ARC Memory Usage analysis

From
Jan Wieck
Date:
On 10/22/2004 4:21 PM, Simon Riggs wrote:

> On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:
>> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>>
>> >
>> > My proposal is to alter the code to allow an array of memory linked
>> > lists. The actual list would be [0] - other additional lists would be
>> > created dynamically as required i.e. not using IFDEFs, since I want this
>> > to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
>> > work. This will then allow reporting against the additional lists, so
>> > that cache hit ratios can be seen with various other "prototype"
>> > shared_buffer settings.
>>
>> All the existing lists live in shared memory, so that dynamic approach
>> suffers from the fact that the memory has to be allocated during ipc_init.
>>
>
> [doh] - dreaming again. Yes of course, server startup it is then. [That
> way, we can include the memory for it at server startup, then allow the
> GUC to be turned off after a while to avoid another restart?]
>
>> What do you think about my other theory to make C actually 2x effective
>> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
>> in the OS buffer cache?
>
> Summarised like that, I understand it.
>
> My observation is that performance varies significantly between startups
> of the database, which does indicate that the OS cache is working well.
> So, yes it does seem as if we have a 3 tier cache. I understand you to
> be effectively suggesting that we go back to having just a 2-tier cache.

Effectively yes, just with the difference that we keep a pseudo T1 list
and hope that what we are tracking there is what the OS is caching. As
said before, if the effective cache size is set properly, that is what
should happen.

>
> I guess we've got two options:
> 1. Keep ARC as it is, but just allocate much of the available physical
> memory to shared_buffers, so you know that effective_cache_size is low
> and that it's either in T1 or it's on disk.
> 2. Alter ARC so that we experiment with the view that T1 is in the OS
> and T2 is in shared_buffers, we don't bother keeping T1. (as you say)
>
> Hmmm...I think I'll pass on trying to judge its effectiveness -
> simplifying things is likely to make it easier to understand and predict
> behaviour. It's well worth trying, and it seems simple enough to make a
> patch that keeps T1target at zero.

Not keeping T1target at zero, because that would keep T2 at the size of
shared_buffers. What I suspect is that in the current calculation the
T1target is underestimated. It is incremented on B1 hits, but B1 is only
of T2 size. What it currently tells us is what got pushed from T1 into the
OS cache. It could well be that it would work much more effectively if it
fuzzily told us what got pushed out of the OS cache to disk.
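
For reference, the adaptation rule from the ARC paper is roughly this
(a sketch of the published algorithm, not a quote of freelist.c):

    /* hit in B1: T1 was evicted too eagerly, so grow the T1 target */
    t1_target = Min(c, t1_target + Max(b2_len / Max(b1_len, 1), 1));

    /* hit in B2: T2 was evicted too eagerly, so shrink the T1 target */
    t1_target = Max(0, t1_target - Max(b1_len / Max(b2_len, 1), 1));

So if B1 is systematically kept too short, the first rule fires too
rarely and the T1target stays underestimated, as described above.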


Jan

>
> i.e. Scientific method: conjecture + experimental validation = theory
>
> If you make up a patch, probably against BETA4, Josh and I can include it
> in the performance testing that I'm hoping we can do over the next few weeks.
>
> Whatever makes 8.0 a high performance release is well worth it.
>
> Best Regards,
>
> Simon Riggs


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #

Re: [HACKERS] ARC Memory Usage analysis

From
Tom Lane
Date:
Jan Wieck <JanWieck@Yahoo.com> writes:
> What do you think about my other theory to make C actually 2x effective
> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> in the OS buffer cache?

What will you do when initially fetching a page?  It's not supposed to
go directly into T2 on first use, but we're going to have some
difficulty accessing a page that's not in shared buffers.  I don't think
you can equate the T1/T2 dichotomy to "is in shared buffers or not".

You could maybe have a T3 list of "pages that aren't in shared buffers
anymore but we think are still in OS buffer cache", but what would be
the point?  It'd be a sufficiently bad model of reality as to be pretty
much useless for stats gathering, I'd think.

            regards, tom lane

Re: [HACKERS] ARC Memory Usage analysis

From
Simon Riggs
Date:
On Fri, 2004-10-22 at 21:45, Tom Lane wrote:
> Jan Wieck <JanWieck@Yahoo.com> writes:
> > What do you think about my other theory to make C actually 2x effective
> > cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> > in the OS buffer cache?
>
> What will you do when initially fetching a page?  It's not supposed to
> go directly into T2 on first use, but we're going to have some
> difficulty accessing a page that's not in shared buffers.  I don't think
> you can equate the T1/T2 dichotomy to "is in shared buffers or not".
>

Yes, there are issues there. I want Jan to follow his thoughts through.
This is important enough that it's worth it - there are only a few people
even attempting this.

> You could maybe have a T3 list of "pages that aren't in shared buffers
> anymore but we think are still in OS buffer cache", but what would be
> the point?  It'd be a sufficiently bad model of reality as to be pretty
> much useless for stats gathering, I'd think.
>

The OS cache is in many ways a wild horse, I agree. Jan is trying to
think of ways to harness it, whereas I had mostly ignored it - but it's
there. Raw disk usage never allowed this opportunity.

For high performance systems, we can assume that the OS cache is ours to
play with - what will we do with it? We need to use it for some
purposes, yet would like to ignore it for others.

--
Best Regards, Simon Riggs


Re: [HACKERS] ARC Memory Usage analysis

From
Jan Wieck
Date:
On 10/22/2004 4:09 PM, Kenneth Marshall wrote:

> On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
>> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>>
>> >I've been using the ARC debug options to analyse memory usage on the
>> >PostgreSQL 8.0 server. This is a precursor to more complex performance
>> >analysis work on the OSDL test suite.
>> >
>> >I've simplified some of the ARC reporting into a single log line, which
>> >is enclosed here as a patch on freelist.c. This includes reporting of:
>> >- the total memory in use, which wasn't previously reported
>> >- the cache hit ratio, which was slightly incorrectly calculated
>> >- a useful-ish value for looking at the "B" lists in ARC
>> >(This is a patch against cvstip, but I'm not sure whether this has
>> >potential for inclusion in 8.0...)
>> >
>> >The total memory in use is useful because it allows you to tell whether
>> >shared_buffers is set too high. If it is set too high, then memory usage
>> >will continue to grow slowly up to the max, without any corresponding
>> >increase in cache hit ratio. If shared_buffers is too small, then memory
>> >usage will climb quickly and linearly to its maximum.
>> >
>> >The last one I've called "turbulence" in an attempt to ascribe some
>> >useful meaning to B1/B2 hits - I've tried a few other measures though
>> >without much success. Turbulence is the hit ratio of B1+B2 lists added
>> >together. By observation, this is zero when ARC gives smooth operation,
>> >and goes above zero otherwise. Typically, turbulence occurs when
>> >shared_buffers is too small for the working set of the database/workload
>> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a
>> >result of "near-misses" on the B1/B2 lists. Turbulence doesn't usually
>> >cut in until the cache is fully utilized, so there is usually some delay
>> >after startup.
>> >
>> >We also recently discussed that I would add some further memory analysis
>> >features for 8.1, so I've been trying to figure out how.
>> >
>> >The idea that B1, B2 represent something really useful doesn't seem to
>> >have been borne out - though I'm open to persuasion there.
>> >
>> >I originally envisaged a "shadow list" operating in extension of the
>> >main ARC list. This will require some re-coding, since the variables and
>> >macros are all hard-coded to a single set of lists. No complaints, just
>> >it will take a little longer than we all thought (for me, that is...)
>> >
>> >My proposal is to alter the code to allow an array of memory linked
>> >lists. The actual list would be [0] - other additional lists would be
>> >created dynamically as required i.e. not using IFDEFs, since I want this
>> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
>> >work. This will then allow reporting against the additional lists, so
>> >that cache hit ratios can be seen with various other "prototype"
>> >shared_buffer settings.
>>
>> All the existing lists live in shared memory, so that dynamic approach
>> suffers from the fact that the memory has to be allocated during ipc_init.
>>
>> What do you think about my other theory to make C actually 2x effective
>> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
>> in the OS buffer cache?
>>
>>
>> Jan
>>
> Jan,
>
> From the articles that I have seen on the ARC algorithm, I do not think
> that using the effective cache size to set C would be a win. The design
> of the ARC process is to allow the cache to optimize its use in response
> to the actual workload. It may be the best use of the cache in some cases
> to have the entire cache allocated to T1 and similarly for T2. In fact,
> the ability to alter the behavior as needed is one of the key advantages.

Only the "working set" of the database, that is the pages that are very
frequently used, are worth holding in shared memory at all. The rest
should be copied in and out of the OS disc buffers.

The problem is, with a too small directory ARC cannot guesstimate what
might be in the kernel buffers. Nor can it guesstimate what recently was
in the kernel buffers and got pushed out from there. That results in a
way too small B1 list, and therefore we don't get B1 hits when in fact
the data was found in memory. B1 hits is what increases the T1target,
and since we are missing them with a too small directory size, our
implementation of ARC is probably using a T2 size larger than the
working set. That is not optimal.

If we would replace the dynamic T1 buffers with a max_backends*2 area of
shared buffers, use a C value representing the effective cache size and
limit the T1target on the lower bound to effective cache size - shared
buffers, then we basically moved the T1 cache into the OS buffers.
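
In code terms, something like this (variable names invented for
illustration; only NBuffers and MaxBackends are real):

    t1_fixed_buffers = 2 * MaxBackends;     /* the only T1 pages kept in shared memory */
    c = effective_cache_pages;              /* ARC directory sized from effective cache size */
    t1_target = Max(t1_target,              /* never let the target drop below what we */
                    effective_cache_pages - NBuffers);   /* assume the kernel is caching */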

This all only holds water, if the OS is allowed to swap out shared
memory. And that was my initial question, how likely is it to find this
to be true these days?


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #

Re: [HACKERS] ARC Memory Usage analysis

From
Kenneth Marshall
Date:
On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
> On 10/22/2004 2:50 PM, Simon Riggs wrote:
>
> >I've been using the ARC debug options to analyse memory usage on the
> >PostgreSQL 8.0 server. This is a precursor to more complex performance
> >analysis work on the OSDL test suite.
> >
> >I've simplified some of the ARC reporting into a single log line, which
> >is enclosed here as a patch on freelist.c. This includes reporting of:
> >- the total memory in use, which wasn't previously reported
> >- the cache hit ratio, which was slightly incorrectly calculated
> >- a useful-ish value for looking at the "B" lists in ARC
> >(This is a patch against cvstip, but I'm not sure whether this has
> >potential for inclusion in 8.0...)
> >
> >The total memory in use is useful because it allows you to tell whether
> >shared_buffers is set too high. If it is set too high, then memory usage
> >will continue to grow slowly up to the max, without any corresponding
> >increase in cache hit ratio. If shared_buffers is too small, then memory
> >usage will climb quickly and linearly to its maximum.
> >
> >The last one I've called "turbulence" in an attempt to ascribe some
> >useful meaning to B1/B2 hits - I've tried a few other measures though
> >without much success. Turbulence is the hit ratio of B1+B2 lists added
> >together. By observation, this is zero when ARC gives smooth operation,
> >and goes above zero otherwise. Typically, turbulence occurs when
> >shared_buffers is too small for the working set of the database/workload
> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a
> >result of "near-misses" on the B1/B2 lists. Turbulence doesn't usually
> >cut in until the cache is fully utilized, so there is usually some delay
> >after startup.
> >
> >We also recently discussed that I would add some further memory analysis
> >features for 8.1, so I've been trying to figure out how.
> >
> >The idea that B1, B2 represent something really useful doesn't seem to
> >have been borne out - though I'm open to persuasion there.
> >
> >I originally envisaged a "shadow list" operating in extension of the
> >main ARC list. This will require some re-coding, since the variables and
> >macros are all hard-coded to a single set of lists. No complaints, just
> >it will take a little longer than we all thought (for me, that is...)
> >
> >My proposal is to alter the code to allow an array of memory linked
> >lists. The actual list would be [0] - other additional lists would be
> >created dynamically as required i.e. not using IFDEFs, since I want this
> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
> >work. This will then allow reporting against the additional lists, so
> >that cache hit ratios can be seen with various other "prototype"
> >shared_buffer settings.
>
> All the existing lists live in shared memory, so that dynamic approach
> suffers from the fact that the memory has to be allocated during ipc_init.
>
> What do you think about my other theory to make C actually 2x effective
> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> in the OS buffer cache?
>
>
> Jan
>
Jan,

From the articles that I have seen on the ARC algorithm, I do not think
that using the effective cache size to set C would be a win. The design
of the ARC process is to allow the cache to optimize its use in response
to the actual workload. It may be the best use of the cache in some cases
to have the entire cache allocated to T1 and similarly for T2. In fact,
the ability to alter the behavior as needed is one of the key advantages.

--Ken

Re: [HACKERS] ARC Memory Usage analysis

From
Tom Lane
Date:
Jan Wieck <JanWieck@Yahoo.com> writes:
> This all only holds water, if the OS is allowed to swap out shared
> memory. And that was my initial question, how likely is it to find this
> to be true these days?

I think it's more likely than not that the OS will consider shared
memory to be potentially swappable.  On some platforms there is a shmctl
call you can make to lock your shmem in memory, but (a) we don't use it
and (b) it may well require privileges we haven't got anyway.

This has always been one of the arguments against making shared_buffers
really large, of course --- if the buffers aren't all heavily used, and
the OS decides to swap them to disk, you are worse off than you would
have been with a smaller shared_buffers setting.


However, I'm still really nervous about the idea of using
effective_cache_size to control the ARC algorithm.  That number is
usually entirely bogus.  Right now it is only a second-order influence
on certain planner estimates, and I am afraid to rely on it any more
heavily than that.

            regards, tom lane

Re: [HACKERS] ARC Memory Usage analysis

From
Simon Riggs
Date:
On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> The problem is, with a too small directory ARC cannot guesstimate what
> might be in the kernel buffers. Nor can it guesstimate what recently was
> in the kernel buffers and got pushed out from there. That results in a
> way too small B1 list, and therefore we don't get B1 hits when in fact
> the data was found in memory. B1 hits is what increases the T1target,
> and since we are missing them with a too small directory size, our
> implementation of ARC is probably using a T2 size larger than the
> working set. That is not optimal.

I think I have seen that the T1 list shrinks "too much", but I need more
tests...with some good test results.

The effectiveness of ARC relies upon the balance between the often
conflicting requirements of "recency" and "frequency". It seems
possible, even likely, that pgsql's version of ARC may need some subtle
changes to rebalance it - if we are unlucky enough to find cases where
it genuinely is out of balance. Many performance tests are required,
together with a few ideas on extra parameters to include....hence my
support of Jan's ideas.

That's also why I called the B1+B2 hit ratio "turbulence" because it
relates to how much oscillation is happening between T1 and T2. In
physical systems, we expect the oscillations to be damped, but there is
no guarantee that we have a nearly critically damped oscillator. (Note
that the absence of turbulence doesn't imply that T1+T2 is optimally
sized, just that it is balanced).

[...and all though the discussion has wandered away from my original
patch...would anybody like to commit, or decline the patch?]

> If we would replace the dynamic T1 buffers with a max_backends*2 area of
> shared buffers, use a C value representing the effective cache size and
> limit the T1target on the lower bound to effective cache size - shared
> buffers, then we basically moved the T1 cache into the OS buffers.

Limiting the minimum size of T1len to be 2* maxbackends sounds like an
easy way to prevent overbalancing of T2, but I would like to follow up
on ways to have T1 naturally stay larger. I'll do a patch with this idea
in, for testing. I'll call this "T1 minimum size" so we can discuss it.

Any other patches are welcome...

It could be that B1 is too small and so we could use a larger value of C
to keep track of more blocks. I think what is being suggested is two
GUCs: shared_buffers (as is), plus another one, larger, which would
allow us to track what is in shared_buffers and what is in OS cache.
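
i.e. something along these lines in postgresql.conf (the second name is
purely illustrative - no such GUC exists today):

    shared_buffers     = 10000      # buffers we actually allocate, as now
    arc_directory_size = 80000      # hypothetical: blocks whose history ARC
                                    # tracks, roughly shared_buffers + OS cache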

I have comments on "effective cache size" below....

On Mon, 2004-10-25 at 17:03, Tom Lane wrote:
> Jan Wieck <JanWieck@Yahoo.com> writes:
> > This all only holds water, if the OS is allowed to swap out shared
> > memory. And that was my initial question, how likely is it to find this
> > to be true these days?
>
> I think it's more likely than not that the OS will consider shared
> memory to be potentially swappable.  On some platforms there is a shmctl
> call you can make to lock your shmem in memory, but (a) we don't use it
> and (b) it may well require privileges we haven't got anyway.

Are you saying we shouldn't, or we don't yet? I simply assumed that we
did use that function - surely it must be at least an option? RHEL
supports this at least....

It may well be that we don't have those privileges, in which case we
turn off the option. Often, we (or I?) will want to install a dedicated
server, so we should have all the permissions we need, in which case...

> This has always been one of the arguments against making shared_buffers
> really large, of course --- if the buffers aren't all heavily used, and
> the OS decides to swap them to disk, you are worse off than you would
> have been with a smaller shared_buffers setting.

Not really, just an argument against making them *too* large. Large
*and* utilised is OK, so we need ways of judging optimal sizing.

> However, I'm still really nervous about the idea of using
> effective_cache_size to control the ARC algorithm.  That number is
> usually entirely bogus.  Right now it is only a second-order influence
> on certain planner estimates, and I am afraid to rely on it any more
> heavily than that.

...ah yes, effective_cache_size.

The manual describes effective_cache_size as if it had something to do
with the OS, and some of this discussion has picked up on that.

effective_cache_size is used in only two places in the code (both in the
planner), as an estimate for calculating the cost of a) nonsequential
access and b) index access, mainly as a way of avoiding overestimates of
access costs for small tables.

There is absolutely no implication in the code that effective_cache_size
measures anything in the OS; what it gives is an estimate of the number
of blocks that will be available from *somewhere* in memory (i.e. in
shared_buffers OR OS cache) for one particular table (the one currently
being considered by the planner).

Crucially, the "size" referred to is the size of the *estimate*, not the
size of the OS cache (nor the size of the OS cache + shared_buffers). So
setting effective_cache_size = total memory available or setting
effective_cache_size = total memory - shared_buffers are both wildly
irrelevant things to do, as is any assumption that directly links memory
size to that parameter. So talking about "effective_cache_size" as if it
were the OS cache isn't the right thing to do.

...It could be that we use a very high % of physical memory as
shared_buffers - in which case the effective_cache_size would represent
the contents of shared_buffers.

Note also that the planner assumes that all tables are equally likely to
be in cache. Increasing effective_cache_size in postgresql.conf seems
destined to give the wrong answer in planning unless you absolutely
understand what it does.

I will submit a patch to correct the description in the manual.

Further comments:
The two estimates appear to use effective_cache_size differently:
a) assumes that a table of size effective_cache_size will be 50% in
cache
b) assumes that effective_cache_size blocks are available, so a table of
size == effective_cache_size will be 100% available
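
For example, with effective_cache_size = 10000 (8K pages, ~80MB): under
a), a 10000-page table is costed as if 5000 of its pages were already in
memory; under b), an index scan over that same 10000-page table is costed
as if all 10000 pages could be found in memory.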

IMHO the GUC should be renamed "estimated_cached_blocks", with the old
name deprecated to force people to re-read the manual description of
what effective_cache_size means and then set accordingly.....all of that
in 8.0....

--
Best Regards, Simon Riggs


Re: [HACKERS] ARC Memory Usage analysis

From
Simon Riggs
Date:
On Tue, 2004-10-26 at 09:49, Simon Riggs wrote:
> On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> > The problem is, with a too small directory ARC cannot guesstimate what
> > might be in the kernel buffers. Nor can it guesstimate what recently was
> > in the kernel buffers and got pushed out from there. That results in a
> > way too small B1 list, and therefore we don't get B1 hits when in fact
> > the data was found in memory. B1 hits is what increases the T1target,
> > and since we are missing them with a too small directory size, our
> > implementation of ARC is propably using a T2 size larger than the
> > working set. That is not optimal.
>
> I think I have seen that the T1 list shrinks "too much", but need more
> tests...with some good test results
>
> > If we would replace the dynamic T1 buffers with a max_backends*2 area of
> > shared buffers, use a C value representing the effective cache size and
> > limit the T1target on the lower bound to effective cache size - shared
> > buffers, then we basically moved the T1 cache into the OS buffers.
>
> Limiting the minimum size of T1len to be 2* maxbackends sounds like an
> easy way to prevent overbalancing of T2, but I would like to follow up
> on ways to have T1 naturally stay larger. I'll do a patch with this idea
> in, for testing. I'll call this "T1 minimum size" so we can discuss it.
>

Don't know whether you've seen this latest update on the ARC idea:
Sorav Bansal and Dharmendra S. Modha,
CAR: Clock with Adaptive Replacement,
    in Proceedings of the USENIX Conference on File and Storage Technologies
    (FAST), pages 187--200, March 2004.
[I picked up the .pdf here http://citeseer.ist.psu.edu/bansal04car.html]

In that paper Bansal and Modha introduce an update to ARC called CART
which they say is more appropriate for databases. Their idea is to
introduce a "temporal locality window" as a way of making sure that
blocks called twice within a short period don't fall out of T1, though
don't make it into T2 either. Strangely enough the "temporal locality
window" is made by increasing the size of T1... in an adpative way, of
course.

If we were going to put a limit on the minimum size of T1, then this
would put a minimal "temporal locality window" in place....rather than
the increased complexity they go to in order to make T1 larger. I note
test results from both the ARC and CAR papers that show that T2 usually
represents most of C, so the observation that T1 is very small is not
atypical. That implies that the cost of managing the temporal locality
window in CART is usually wasted, even though it does cut in as an
overall benefit: the results show that CART is better than ARC over the
whole range of cache sizes tested (16MB to 4GB) and workloads (apart
from 1 out of 22).

If we were to implement a minimum size of T1, related as suggested to
number of users, then this would provide a reasonable approximation of
the temporal locality window. This wouldn't prevent the adaptation of T1
to be higher than this when required.

Jan has already optimised ARC for PostgreSQL by the addition of a
special lookup on transactionId required to optimise for the double
cache lookup of select/update that occurs on a T1 hit. It seems likely
that this could be removed as a result of having a larger T1.

I'd suggest limiting T1 to be a value of:
shared_buffers <= 1000        T1limit = max_backends * 0.75
shared_buffers <= 2000        T1limit = max_backends
shared_buffers <= 5000        T1limit = max_backends * 1.5
shared_buffers >  5000        T1limit = max_backends * 2
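
As a straight transcription of that table (sketch only):

    static int
    t1_minimum_size(int shared_buffers, int max_backends)
    {
        if (shared_buffers <= 1000)
            return (max_backends * 3) / 4;
        if (shared_buffers <= 2000)
            return max_backends;
        if (shared_buffers <= 5000)
            return (max_backends * 3) / 2;
        return max_backends * 2;
    }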

I'll try some tests with both
- minimum size of T1
- update optimisation removed

Thoughts?

--
Best Regards, Simon Riggs


Re: [HACKERS] ARC Memory Usage analysis

From
Mark Wong
Date:
On Mon, Oct 25, 2004 at 11:34:25AM -0400, Jan Wieck wrote:
> On 10/22/2004 4:09 PM, Kenneth Marshall wrote:
>
> > On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:
> >> On 10/22/2004 2:50 PM, Simon Riggs wrote:
> >>
> >> >I've been using the ARC debug options to analyse memory usage on the
> >> >PostgreSQL 8.0 server. This is a precursor to more complex performance
> >> >analysis work on the OSDL test suite.
> >> >
> >> >I've simplified some of the ARC reporting into a single log line, which
> >> >is enclosed here as a patch on freelist.c. This includes reporting of:
> >> >- the total memory in use, which wasn't previously reported
> >> >- the cache hit ratio, which was slightly incorrectly calculated
> >> >- a useful-ish value for looking at the "B" lists in ARC
> >> >(This is a patch against cvstip, but I'm not sure whether this has
> >> >potential for inclusion in 8.0...)
> >> >
> >> >The total memory in use is useful because it allows you to tell whether
> >> >shared_buffers is set too high. If it is set too high, then memory usage
> >> >will continue to grow slowly up to the max, without any corresponding
> >> >increase in cache hit ratio. If shared_buffers is too small, then memory
> >> >usage will climb quickly and linearly to its maximum.
> >> >
> >> >The last one I've called "turbulence" in an attempt to ascribe some
> >> >useful meaning to B1/B2 hits - I've tried a few other measures though
> >> >without much success. Turbulence is the hit ratio of B1+B2 lists added
> >> >together. By observation, this is zero when ARC gives smooth operation,
> >> >and goes above zero otherwise. Typically, turbulence occurs when
> >> >shared_buffers is too small for the working set of the database/workload
> >> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a
> >> >result of "near-misses" on the B1/B2 lists. Turbulence doesn't usually
> >> >cut in until the cache is fully utilized, so there is usually some delay
> >> >after startup.
> >> >
> >> >We also recently discussed that I would add some further memory analysis
> >> >features for 8.1, so I've been trying to figure out how.
> >> >
> >> >The idea that B1, B2 represent something really useful doesn't seem to
> >> >have been borne out - though I'm open to persuasion there.
> >> >
> >> >I originally envisaged a "shadow list" operating in extension of the
> >> >main ARC list. This will require some re-coding, since the variables and
> >> >macros are all hard-coded to a single set of lists. No complaints, just
> >> >it will take a little longer than we all thought (for me, that is...)
> >> >
> >> >My proposal is to alter the code to allow an array of memory linked
> >> >lists. The actual list would be [0] - other additional lists would be
> >> >created dynamically as required i.e. not using IFDEFs, since I want this
> >> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
> >> >work. This will then allow reporting against the additional lists, so
> >> >that cache hit ratios can be seen with various other "prototype"
> >> >shared_buffer settings.
> >>
> >> All the existing lists live in shared memory, so that dynamic approach
> >> suffers from the fact that the memory has to be allocated during ipc_init.
> >>
> >> What do you think about my other theory to make C actually 2x effective
> >> cache size and NOT to keep T1 in shared buffers but to assume T1 lives
> >> in the OS buffer cache?
> >>
> >>
> >> Jan
> >>
> > Jan,
> >
> > From the articles that I have seen on the ARC algorithm, I do not think
> > that using the effective cache size to set C would be a win. The design
> > of the ARC process is to allow the cache to optimize its use in response
> > to the actual workload. It may be the best use of the cache in some cases
> > to have the entire cache allocated to T1 and similarly for T2. In fact,
> > the ability to alter the behavior as needed is one of the key advantages.
>
> Only the "working set" of the database, that is the pages that are very
> frequently used, are worth holding in shared memory at all. The rest
> should be copied in and out of the OS disc buffers.
>
> The problem is, with a too small directory ARC cannot guesstimate what
> might be in the kernel buffers. Nor can it guesstimate what recently was
> in the kernel buffers and got pushed out from there. That results in a
> way too small B1 list, and therefore we don't get B1 hits when in fact
> the data was found in memory. B1 hits is what increases the T1target,
> and since we are missing them with a too small directory size, our
> implementation of ARC is probably using a T2 size larger than the
> working set. That is not optimal.
>
> If we would replace the dynamic T1 buffers with a max_backends*2 area of
> shared buffers, use a C value representing the effective cache size and
> limit the T1target on the lower bound to effective cache size - shared
> buffers, then we basically moved the T1 cache into the OS buffers.
>
> This all only holds water, if the OS is allowed to swap out shared
> memory. And that was my initial question, how likely is it to find this
> to be true these days?
>
>
> Jan
>

I've asked our linux kernel guys some quick questions and they say
you can lock mmapped memory and sys v shared memory with mlock and
SHM_LOCK, resp.  Otherwise the OS will swap out memory as it sees
fit, whether or not it's shared.
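
For illustration, the calls in question look like this (both normally
need root, CAP_IPC_LOCK or a suitable RLIMIT_MEMLOCK to succeed):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/shm.h>

    static void
    lock_buffers(void *mmap_addr, size_t mmap_len, int shmid)
    {
        if (mlock(mmap_addr, mmap_len) != 0)        /* pin an mmap()ed region */
            perror("mlock");
        if (shmctl(shmid, SHM_LOCK, NULL) != 0)     /* pin a SysV shm segment */
            perror("shmctl(SHM_LOCK)");
    }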

Mark