> On Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:
>> A tunable does not impact existing behavior
> Why not put the tunable parameter into the patch and then show the test results with it in? If there is no overhead, we should then be able to see that.
I did a patch where I define lock_wakeup_algorithm, with a default value of 0 and a valid range of 0 to 32. It covers three types of algorithms, giving 32 different permutations in all:

lock_wakeup_algorithm = 0  => default wakeup logic (only 1 exclusive, or all sequential shared)
lock_wakeup_algorithm = 1  => wake up all sequential exclusives, or all sequential shared
2 <= lock_wakeup_algorithm <= 32  => wake up the first n waiters, irrespective of exclusive or shared
I did a quick test with the patch. Unfortunately it improves my numbers even with the default setting of 0 (not sure whether I should be pleased or sad). There is definitely no overhead; in fact it seems to help performance a bit. (NOTE: the logic is the same for the default setting, but the implementation is slightly different.)
My pre-patch numbers typically peaked around 136,000 tpm. With the patch:

lock_wakeup_algorithm=0:
PEAK: 962: 512: Medium Throughput: 161121.000 Avg Medium Resp: 0.051

lock_wakeup_algorithm=1, my peak increases to:
PEAK: 1560: 832: Medium Throughput: 176577.000 Avg Medium Resp: 0.086
(Couldn't recreate the 184K+ result; need to check that.)
I still haven't tested the remaining values 2-32, but you get the point: the patch is quite flexible, supporting various permutations with no overhead.
Do give it a try on your own setup, play with the values, and compare against your original builds.