RE: Postgres is not able to handle more than 4k tables!? - Mailing list pgsql-hackers
From | tsunakawa.takay@fujitsu.com
---|---
Subject | RE: Postgres is not able to handle more than 4k tables!?
Date |
Msg-id | TYAPR01MB2990DFF5F45BA2A008A759ECFE640@TYAPR01MB2990.jpnprd01.prod.outlook.com
In response to | Re: Postgres is not able to handle more than 4k tables!? (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
List | pgsql-hackers
From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>
> Looks like it is not true (at least for PG9.6):
>
> #0  0x00007fa6d30da087 in semop () from /lib64/libc.so.6
> #1  0x0000000000682241 in PGSemaphoreLock (sema=sema@entry=0x7fa66f5655d8) at pg_sema.c:387
> #2  0x00000000006ec6eb in LWLockAcquire (lock=lock@entry=0x7f23b544f800, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1338
> #3  0x00000000006e5560 in LockAcquireExtended (locktag=locktag@entry=0x7ffd94883fa0, lockmode=lockmode@entry=1,
>     sessionLock=sessionLock@entry=0 '\000', dontWait=dontWait@entry=0 '\000',
>     reportMemoryError=reportMemoryError@entry=1 '\001', locallockp=locallockp@entry=0x7ffd94883f98) at lock.c:962
> #4  0x00000000006e29f6 in LockRelationOid (relid=87103837, lockmode=1) at lmgr.c:113
> #5  0x00000000004a9f55 in relation_open (relationId=87103837, lockmode=lockmode@entry=1) at heapam.c:1131
> #6  0x00000000004bdc66 in index_open (relationId=<optimized out>, lockmode=lockmode@entry=1) at indexam.c:151
> #7  0x000000000067be58 in get_relation_info (root=root@entry=0x3a1a758, relationObjectId=72079078,
>     inhparent=<optimized out>, rel=rel@entry=0x3a2d460) at plancat.c:183
> #8  0x000000000067ef45 in build_simple_rel (root=root@entry=0x3a1a758, relid=2,
>     reloptkind=reloptkind@entry=RELOPT_BASEREL) at relnode.c:148
>
> Please notice lockmode=1 (AccessShareLock)

Ouch, there exists another sad hardcoded value: the maximum number of locks that can be acquired through the fast-path mechanism.

[LockAcquireExtended]
    /*
     * Attempt to take lock via fast path, if eligible.  But if we remember
     * having filled up the fast path array, we don't attempt to make any
     * further use of it until we release some locks.  It's possible that some
     * other backend has transferred some of those locks to the shared hash
     * table, leaving space free, but it's not worth acquiring the LWLock just
     * to check.  It's also possible that we're acquiring a second or third
     * lock type on a relation we have already locked using the fast-path, but
     * for now we don't worry about that case either.
     */
    if (EligibleForRelationFastPath(locktag, lockmode) &&
        FastPathLocalUseCount < FP_LOCK_SLOTS_PER_BACKEND)
    {

/*
 * We allow a small number of "weak" relation locks (AccessShareLock,
 * RowShareLock, RowExclusiveLock) to be recorded in the PGPROC structure
 * rather than the main lock table.  This eases contention on the lock
 * manager LWLocks.  See storage/lmgr/README for additional details.
 */
#define FP_LOCK_SLOTS_PER_BACKEND   16

16 looks easily exceeded even in a not-so-long OLTP transaction, especially if the table is partitioned.  I wonder if we're caught in the hell of lock manager partition lock contention without knowing it.  I'm afraid other pitfalls are lurking when there are many relations.

Regards
Takayuki Tsunakawa
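[Editor's illustration] A minimal sketch of how the 16-slot limit can be observed from SQL. It assumes a server with declarative partitioning (PostgreSQL 10+, unlike the 9.6 backtrace above) and psql; the table name t and the 20 partitions are made up for the example. The point is that a plain SELECT on a table with more than 16 relations to lock spills the overflow into the shared lock table, which pg_locks reports as fastpath = false.

```sql
-- Hypothetical setup: a parent table with 20 range partitions.
CREATE TABLE t (id int, payload text) PARTITION BY RANGE (id);
SELECT format('CREATE TABLE t_%s PARTITION OF t FOR VALUES FROM (%s) TO (%s)',
              i, i * 1000, (i + 1) * 1000)
FROM generate_series(0, 19) AS i
\gexec

BEGIN;
-- The planner takes AccessShareLock on the parent and every partition.
SELECT count(*) FROM t;

-- Only the first FP_LOCK_SLOTS_PER_BACKEND (16) relation locks can stay on the
-- fast path; the rest go to the shared lock table and show fastpath = f.
SELECT mode, fastpath, count(*)
FROM pg_locks
WHERE pid = pg_backend_pid() AND locktype = 'relation'
GROUP BY mode, fastpath;
COMMIT;
```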