Re: [HACKERS] Open 6.5 items - Mailing list pgsql-hackers

From Vadim Mikheev
Subject Re: [HACKERS] Open 6.5 items
Date
Msg-id 374E7069.CA880C1@krs.ru
In response to Re: [HACKERS] Open 6.5 items  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [HACKERS] Open 6.5 items
List pgsql-hackers
Tom Lane wrote:
> 
> If I recall the dynahash.c code correctly, a null return value
> indicates either damage to the structure of the table (ie someone
> stomped on memory that didn't belong to them) or running out of memory
> to add entries to the table.  The latter should be impossible if we

These are quite different cases and should trigger different reactions.
If the table structure is corrupted, then abort() is the only proper thing.
If we just ran out of memory, then elog(ERROR) is enough.
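The two reactions can be sketched like this (the enum and dispatcher names are hypothetical, purely for illustration; the real problem is that dynahash's hash_search() returns a bare NULL in both cases, so the caller cannot tell them apart):

```c
#include <stdlib.h>

/* Hypothetical failure classification for a NULL return from
 * hash_search().  Not actual backend code. */
typedef enum { HASH_CORRUPTED, HASH_OUT_OF_MEMORY } HashFailure;

static const char *proper_reaction(HashFailure why)
{
    switch (why)
    {
        case HASH_CORRUPTED:
            /* Someone stomped on shared memory: state is
             * untrustworthy, so crash and let the postmaster
             * reinitialize shared memory. */
            return "abort()";
        case HASH_OUT_OF_MEMORY:
            /* Table is merely full: only the current transaction
             * need fail; the system is still intact. */
            return "elog(ERROR)";
    }
    return "unreachable";
}
```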

> sized shared memory correctly.  Perhaps the table size estimation code
> has been obsoleted by recent changes?

lock.h:

/* ----------------------
 * The following defines are used to estimate how much shared
 * memory the lock manager is going to require.
 * See LockShmemSize() in lock.c.
 *
 * NLOCKS_PER_XACT - The number of unique locks acquired in a transaction
 * NLOCKENTS - The maximum number of lock entries in the lock table.
 * ----------------------
 */
 
#define NLOCKS_PER_XACT         40
                                ^^
Isn't it too low?

#define NLOCKENTS(maxBackends)  (NLOCKS_PER_XACT*(maxBackends))
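With those defines the reserved entry count is a plain product; for example, with an assumed maxBackends of 32 (an illustrative value, not a shipped default), that is 40 * 32 = 1280 lock-table entries:

```c
/* Defines as quoted above from lock.h. */
#define NLOCKS_PER_XACT         40
#define NLOCKENTS(maxBackends)  (NLOCKS_PER_XACT*(maxBackends))

/* Illustrative helper (not in the backend): how many lock-table
 * entries are reserved for a given number of backends. */
static int lock_entries_reserved(int maxBackends)
{
    return NLOCKENTS(maxBackends);
}
```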

And now - LockShmemSize() in lock.c:
   /* lockHash table */
   size += hash_estimate_size(NLOCKENTS(maxBackends),
                              ^^^^^^^^^^^^^^^^^^^^^^
                              SHMEM_LOCKTAB_KEYSIZE,
                              SHMEM_LOCKTAB_DATASIZE);

   /* xidHash table */
   size += hash_estimate_size(maxBackends,
                              ^^^^^^^^^^^
                              SHMEM_XIDTAB_KEYSIZE,
                              SHMEM_XIDTAB_DATASIZE);

Why is just maxBackends used here? NLOCKENTS should be used too
(each transaction lock requires its own xidHash entry).
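Under that reading the xidHash estimate is off by a factor of NLOCKS_PER_XACT. A rough sketch of the current versus the proposed sizing (hash_estimate_size() is simplified here to a plain product; the real estimator in dynahash also accounts for bucket and segment overhead, and the key/data sizes passed in the test are illustrative):

```c
/* Defines as quoted above from lock.h. */
#define NLOCKS_PER_XACT         40
#define NLOCKENTS(maxBackends)  (NLOCKS_PER_XACT*(maxBackends))

/* Grossly simplified stand-in for dynahash's estimator: the real
 * hash_estimate_size() also adds bucket and segment overhead. */
static long hash_estimate_size(long nentries, long keysize, long datasize)
{
    return nentries * (keysize + datasize);
}

/* xidHash sizing as currently coded in LockShmemSize(). */
static long xidhash_size_current(int maxBackends, long key, long data)
{
    return hash_estimate_size(maxBackends, key, data);
}

/* xidHash sizing as proposed: one entry per transaction lock,
 * not one per backend. */
static long xidhash_size_proposed(int maxBackends, long key, long data)
{
    return hash_estimate_size(NLOCKENTS(maxBackends), key, data);
}
```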

Vadim

