Re: ExclusiveLock - Mailing list pgsql-hackers
| From | Simon Riggs |
|---|---|
| Subject | Re: ExclusiveLock |
| Date | |
| Msg-id | 1099999131.3979.259.camel@localhost.localdomain |
| In response to | Re: ExclusiveLock (Tom Lane <tgl@sss.pgh.pa.us>) |
| List | pgsql-hackers |
On Mon, 2004-11-08 at 21:37, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > Recent runs of DBT-2 show very occasional ExclusiveLock(s) being held
> > by transactions, sometimes waiting to be granted.
>
> I think you are right that these reflect heap or btree-index extension
> operations. Those do not actually take locks on the *table* however,
> but locks on a single page within it (which are completely orthogonal to
> table locks and don't conflict). The pg_locks output leaves something
> to be desired, because you can't tell the difference between table and
> page locks.

Good. Thought it was worth discussion...

> It's odd that your example does not appear to show someone else holding
> a conflicting lock.

There is... I didn't copy the whole lock table output... here it is:

    relname    |  pid  |       mode       | granted
---------------+-------+------------------+---------
 new_order     | 21735 | AccessShareLock  | t
 new_order     | 21735 | RowExclusiveLock | t
 orders        | 21715 | AccessShareLock  | t
 orders        | 21715 | RowExclusiveLock | t
 pg_class      | 23254 | AccessShareLock  | t
 order_line    | 21715 | AccessShareLock  | t
 order_line    | 21715 | RowExclusiveLock | t
 order_line    | 21735 | ExclusiveLock    | f
 new_order     | 21715 | AccessShareLock  | t
 new_order     | 21715 | RowExclusiveLock | t
 customer      | 21715 | AccessShareLock  | t
 pk_order_line | 21735 | AccessShareLock  | t
 pk_order_line | 21735 | RowExclusiveLock | t
 item          | 21715 | AccessShareLock  | t
 orders        | 21735 | AccessShareLock  | t
 orders        | 21735 | RowExclusiveLock | t
 order_line    | 21735 | AccessShareLock  | t
 order_line    | 21735 | RowExclusiveLock | t
 stock         | 21715 | AccessShareLock  | t
 stock         | 21715 | RowExclusiveLock | t
 order_line    | 21715 | ExclusiveLock    | t
 pk_order_line | 21715 | RowExclusiveLock | t
 pg_locks      | 23254 | AccessShareLock  | t
 district      | 21715 | AccessShareLock  | t
 district      | 21715 | RowShareLock     | t
 district      | 21715 | RowExclusiveLock | t
 warehouse     | 21715 | AccessShareLock  | t
 customer      | 21735 | AccessShareLock  | t
 customer      | 21735 | RowExclusiveLock | t
(29 rows)

Pids 21715 and 21735 are conflicting. There's also another example where
the lock table output is > 1400 rows, with two lock requests pending.
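(For reference, a listing in this shape can be produced by joining pg_locks
to pg_class; the queries below are only a minimal sketch, assuming the
pg_locks view of this era exposes relation, pid, mode and granted, with
relation joining to pg_class.oid for the name:)

-- lock listing in the same shape as above
SELECT c.relname, l.pid, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
 ORDER BY c.relname, l.pid;

-- only the requests still waiting to be granted (granted = f rows,
-- such as pid 21735's ExclusiveLock on order_line above)
SELECT c.relname, l.pid, l.mode
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
 WHERE NOT l.granted;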
The oprofile for this run looks like this (but is not, of course, a snapshot
at a point in time, like the lock list):

CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples  %        app name               symbol name
170746   42.7220  vmlinux-2.6.8.1-osdl2  ia64_pal_call_static
18934     4.7374  libc-2.3.2.so          (no symbols)
10691     2.6750  postgres               FunctionCall2
9814      2.4555  postgres               hash_seq_search
8654      2.1653  postgres               SearchCatCache
7389      1.8488  postgres               AllocSetAlloc
6122      1.5318  postgres               hash_search
5707      1.4279  postgres               OpernameGetCandidates
4901      1.2263  postgres               StrategyDirtyBufferList
4627      1.1577  postgres               XLogInsert
4424      1.1069  postgres               pglz_decompress
4371      1.0937  vmlinux-2.6.8.1-osdl2  __copy_user
3796      0.9498  vmlinux-2.6.8.1-osdl2  finish_task_switch
3483      0.8715  postgres               LWLockAcquire
3458      0.8652  postgres               eqjoinsel
3001      0.7509  vmlinux-2.6.8.1-osdl2  get_exec_dcookie
2824      0.7066  postgres               AtEOXact_CatCache
2745      0.6868  postgres               _bt_compare
2730      0.6831  postgres               nocachegetattr
2715      0.6793  postgres               SearchCatCacheList
2659      0.6653  postgres               MemoryContextAllocZeroAligned
2604      0.6515  postgres               yyparse
2553      0.6388  postgres               eqsel
2127      0.5322  postgres               deconstruct_array
1921      0.4806  postgres               hash_any
1919      0.4801  postgres               int4eq
1855      0.4641  postgres               LWLockRelease
1839      0.4601  postgres               StrategyBufferLookup
1777      0.4446  postgres               GetSnapshotData
1729      0.4326  postgres               heap_getsysattr
1595      0.3991  postgres               DLMoveToFront
1586      0.3968  postgres               MemoryContextAlloc
1485      0.3716  vmlinux-2.6.8.1-osdl2  try_atomic_semop
1455      0.3641  postgres               anonymous symbol from section .plt
1409      0.3525  postgres               lappend
1352      0.3383  postgres               heap_release_fetch
1270      0.3178  postgres               PinBuffer
1141      0.2855  postgres               DirectFunctionCall1
1132      0.2832  postgres               base_yylex
982       0.2457  postgres               pgstat_initstats
957       0.2394  vmlinux-2.6.8.1-osdl2  __make_request
926       0.2317  postgres               AllocSetFree
892       0.2232  vmlinux-2.6.8.1-osdl2  try_to_wake_up
874       0.2187  postgres               _bt_checkkeys
870       0.2177  postgres               fmgr_isbuiltin
853       0.2134  postgres               ReadBufferInternal
852       0.2132  postgres               pfree
850       0.2127  postgres               _bt_moveright
848       0.2122  vmlinux-2.6.8.1-osdl2  do_cciss_request
766       0.1917  postgres               ExecTargetList
734       0.1837  postgres               SearchSysCache
730       0.1827  postgres               PGSemaphoreLock
706       0.1766  postgres               expression_tree_walker
684       0.1711  postgres               ExecEvalVar
674       0.1686  postgres               StrategyGetBuffer
669       0.1674  postgres               ResourceOwnerForgetCatCacheRef
660       0.1651  postgres               lcons
614       0.1536  vmlinux-2.6.8.1-osdl2  find_get_page
586       0.1466  postgres               _bt_restscan
582       0.1456  postgres               MemoryContextAllocZero
551       0.1379  postgres               LockRelease
551       0.1379  postgres               heap_formtuple
540       0.1351  postgres               OidFunctionCall3
537       0.1344  postgres               check_stack_depth
527       0.1319  postgres               ExecutePlan
521       0.1304  postgres               CatalogCacheComputeHashValue
510       0.1276  postgres               buildRelationAliases
508       0.1271  vmlinux-2.6.8.1-osdl2  find_get_pages_tag
504       0.1261  postgres               btgettuple
499       0.1249  postgres               IndexNext
454       0.1136  postgres               ExecInitExpr
453       0.1133  postgres               ExecProcNode
447       0.1118  postgres               LockAcquire

I note that an important one has dropped down the list:

1         2.5e-04  postgres              AtEOXact_Buffers

and this one is nowhere at all now: UnlockBuffers.

StrategyDirtyBufferList is too high, so we can change that.

As a follow-on: we've got freelists for reuse of space. Do freelists work
for index/heap extension also, or does everybody read the same info to get
the next block... i.e. are we being space conservative rather than
emphasising concurrency? It would be good to have one freelist per CPU...

--
Best Regards, Simon Riggs