Re: Scaling shared buffer eviction - Mailing list pgsql-hackers
From: Amit Kapila
Subject: Re: Scaling shared buffer eviction
Msg-id: CAA4eK1KX1Jz87WFzYipjwx=GhUL2k3ZK2j=gkYztu3m_5kkAMw@mail.gmail.com
In response to: Re: Scaling shared buffer eviction (Andres Freund <andres@2ndquadrant.com>)
List: pgsql-hackers
On Tue, Oct 14, 2014 at 3:32 PM, Andres Freund <andres@2ndquadrant.com> wrote:
> On 2014-10-14 15:24:57 +0530, Amit Kapila wrote:
> > After that I observed that contention for LW_SHARED has reduced
> > for this load, but it didn't help much in terms of performance, so I again
> > rechecked the profile and this time most of the contention is moved
> > to spinlock used in dynahash for buf mapping tables, please refer
> > the profile (for 128 client count; Read only load) below:
> >
> > bgreclaimer patch + wait free lw_shared acquisition patches -
> > ------------------------------------------------------------------------------------------
>
> This profile is without -O2 again. I really don't think it makes much
> sense to draw much inference from an unoptimized build.
Profile data with -O2 is below. It shows that the top contributors
are calls to BufTableLookup and the spinlock contention caused by
BufTableInsert and BufTableDelete. To resolve the spinlock contention,
a patch along the lines of the one above might prove useful (although
I still have to evaluate it); a rough sketch of that idea follows the
profile data at the end of this mail. I would also like to take
LWLOCK_STATS data once before proceeding further.
Do you have any other ideas?
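(On the LWLOCK_STATS point: I mean the compile-time counter facility;
a minimal way to enable it for such a run, assuming a source build, is
to define the macro before compiling, e.g. via CPPFLAGS=-DLWLOCK_STATS
or by adding it to pg_config_manual.h:)

    /*
     * With this defined, each backend counts per-LWLock shared/exclusive
     * acquisitions and blocks, and dumps the counts to stderr at exit.
     */
    #define LWLOCK_STATS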
11.17% swapper [unknown] [H] 0x00000000011e0328
+ 4.62% postgres postgres [.] hash_search_with_hash_value
+ 4.35% pgbench [kernel.kallsyms] [k] .find_busiest_group
+ 3.71% postgres postgres [.] s_lock
2.56% postgres [unknown] [H] 0x0000000001500120
+ 2.23% pgbench [kernel.kallsyms] [k] .idle_cpu
+ 1.97% postgres postgres [.] LWLockAttemptLock
+ 1.73% postgres postgres [.] LWLockRelease
+ 1.47% postgres [kernel.kallsyms] [k] .__copy_tofrom_user_power7
+ 1.44% postgres postgres [.] GetSnapshotData
+ 1.28% postgres postgres [.] _bt_compare
+ 1.04% swapper [kernel.kallsyms] [k] .int_sqrt
+ 1.04% postgres postgres [.] AllocSetAlloc
+ 0.97% pgbench [kernel.kallsyms] [k] .default_wake_function
Detailed Data
----------------
- 4.62% postgres postgres [.] hash_search_with_hash_value
- hash_search_with_hash_value
- 2.19% BufTableLookup
- 2.15% BufTableLookup
ReadBuffer_common
- ReadBufferExtended
- 1.32% _bt_relandgetbuf
- 0.73% BufTableDelete
- 0.71% BufTableDelete
ReadBuffer_common
ReadBufferExtended
- 0.69% BufTableInsert
- 0.68% BufTableInsert
ReadBuffer_common
ReadBufferExtended
0.66% hash_search_with_hash_value
- 4.35% pgbench [kernel.kallsyms] [k] .find_busiest_group
- .find_busiest_group
- 4.28% .find_busiest_group
- 4.26% .load_balance
- 4.26% .idle_balance
- .__schedule
- 4.26% .schedule_hrtimeout_range_clock
.do_select
.core_sys_select
- 3.71% postgres postgres [.] s_lock
- s_lock
- 3.19% hash_search_with_hash_value
- 3.18% hash_search_with_hash_value
- 1.60% BufTableInsert
ReadBuffer_common
- ReadBufferExtended
- 1.57% BufTableDelete
ReadBuffer_common
- ReadBufferExtended
- 0.93% index_fetch_heap
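To make the freelist idea concrete, below is a self-contained sketch
(compile with gcc -pthread) of striping a hash table's entry freelist
across several spinlocks chosen by hash value, so that concurrent
insert/delete traffic which today serializes on dynahash's single
freelist lock gets spread over many locks. This is only an
illustration of the striping technique, not the actual patch, and all
names in it are made up:

    /*
     * Sketch: partition one spinlock-protected freelist into
     * NUM_FREELISTS independently locked lists, picked by hash value.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_FREELISTS 32

    typedef struct Entry
    {
        struct Entry *next;
        unsigned int hashvalue;
    } Entry;

    typedef struct FreeList
    {
        pthread_spinlock_t mutex;   /* protects only this freelist */
        Entry      *head;
    } FreeList;

    static FreeList freelists[NUM_FREELISTS];

    /* spread entries (and hence lock traffic) across the freelists */
    #define FREELIST_IDX(hashvalue) ((hashvalue) % NUM_FREELISTS)

    static Entry *
    alloc_entry(unsigned int hashvalue)
    {
        FreeList   *fl = &freelists[FREELIST_IDX(hashvalue)];
        Entry      *e;

        pthread_spin_lock(&fl->mutex);
        e = fl->head;
        if (e != NULL)
            fl->head = e->next;
        pthread_spin_unlock(&fl->mutex);

        if (e == NULL)              /* freelist empty: grow the table */
        {
            e = malloc(sizeof(Entry));
            if (e == NULL)
                exit(1);
        }
        e->hashvalue = hashvalue;
        return e;
    }

    static void
    free_entry(Entry *e)
    {
        FreeList   *fl = &freelists[FREELIST_IDX(e->hashvalue)];

        pthread_spin_lock(&fl->mutex);
        e->next = fl->head;
        fl->head = e;
        pthread_spin_unlock(&fl->mutex);
    }

    int
    main(void)
    {
        for (int i = 0; i < NUM_FREELISTS; i++)
            pthread_spin_init(&freelists[i].mutex, PTHREAD_PROCESS_PRIVATE);

        Entry      *e = alloc_entry(12345);

        free_entry(e);
        printf("entry cycled through freelist %u\n", FREELIST_IDX(12345u));
        return 0;
    }

Whether the real patch would partition by hash value or some other key
is still to be evaluated; the sketch is just to show the shape of it.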