On 3 June 2013 19:41, Jim Nasby <jim@nasby.net> wrote:
> On 6/2/13 4:45 AM, Simon Riggs wrote:
>>> Will this add too much cost where it doesn't help? I don't know what to
>>> predict there. There's the obvious case of trivial transactions with no
>>> more than one referential integrity check per FK, but there's also the
>>> case of a transaction with many FK checks all searching different keys.
>>> If the hash hit rate (key duplication rate) is low, the hash can consume
>>> considerably more memory than the trigger queue without preventing many
>>> RI queries. What sort of heuristic could we use to avoid pessimizing
>>> such cases?
>>
>> I've struggled with that for a while now. Probably all we can say is
>> that there might be one, and if there is not, then manual decoration
>> of the transaction will be the way to go.
>
> Just an idea... each backend could keep a store that indicates what FKs this
> would help with. For example, any time we hit a transaction that exercises
> the same FK more than once, we stick the OID of the FK constraint (or maybe
> of the two tables) into a hash that's in that backend's top memory context.
> (Or if we want to be real fancy, shared mem).

Yes, that principle would work. We could just store that flag on the
relcache entry for the table; it only requires a little bookkeeping to
implement the heuristic. I'm sure other ways exist as well.
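
To make the bookkeeping concrete, here is a minimal sketch (not from any
posted patch) of what the per-backend tracking could look like if done with
a standalone dynahash rather than a relcache field; names like
RIFKUsageEntry, ri_fk_usage and ri_fk_worth_caching are made up for
illustration:

/*
 * Hypothetical sketch only -- not from any posted patch.
 *
 * Per-backend bookkeeping: remember which FK constraints have been
 * exercised more than once within a single transaction, so that later
 * transactions can decide whether caching RI checks for that FK is
 * likely to pay off.
 */
#include "postgres.h"

#include "utils/hsearch.h"
#include "utils/memutils.h"

typedef struct RIFKUsageEntry
{
	Oid			fkconstraint;	/* pg_constraint OID of the FK */
	int			hits_this_xact;	/* RI checks seen in the current xact */
	bool		worth_caching;	/* checked >1 time in some past xact */
} RIFKUsageEntry;

static HTAB *ri_fk_usage = NULL;

/*
 * Called once per RI check.  Returns true once this FK has shown the
 * "same FK checked more than once per transaction" pattern at least
 * once in this backend's lifetime.
 */
static bool
ri_fk_worth_caching(Oid fkconstraint)
{
	RIFKUsageEntry *entry;
	bool		found;

	if (ri_fk_usage == NULL)
	{
		HASHCTL		ctl;

		MemSet(&ctl, 0, sizeof(ctl));
		ctl.keysize = sizeof(Oid);
		ctl.entrysize = sizeof(RIFKUsageEntry);
		ctl.hash = oid_hash;
		ctl.hcxt = TopMemoryContext;	/* survive across transactions */
		ri_fk_usage = hash_create("RI FK usage", 64, &ctl,
								  HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
	}

	entry = (RIFKUsageEntry *) hash_search(ri_fk_usage, &fkconstraint,
										   HASH_ENTER, &found);
	if (!found)
	{
		entry->hits_this_xact = 0;
		entry->worth_caching = false;
	}

	if (++entry->hits_this_xact > 1)
		entry->worth_caching = true;

	return entry->worth_caching;
}

/* To be called from a (sub)transaction-end callback. */
static void
ri_fk_usage_xact_end(void)
{
	HASH_SEQ_STATUS status;
	RIFKUsageEntry *entry;

	if (ri_fk_usage == NULL)
		return;

	hash_seq_init(&status, ri_fk_usage);
	while ((entry = (RIFKUsageEntry *) hash_seq_search(&status)) != NULL)
		entry->hits_this_xact = 0;
}

Resetting hits_this_xact from a transaction-end callback keeps the
"same FK twice in one transaction" test accurate while still remembering,
for the life of the backend, which FKs have proven worth caching. Hanging
a single boolean off the relcache entry instead, as suggested above, would
avoid the extra hash lookup entirely.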
--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services