Rafal Pietrak wrote:
> On Tue, 2006-05-23 at 15:56 +0000, Chris Browne wrote:
>>> The use that I have for this at the moment, and I can think of many
>>> other uses, is that I want to populate a statistics table each time
>>> that a table is updated. But the code to populate the table takes 10
>>> seconds to run. I don't want the user to have to wait 10 seconds to
>>> add a record.
>> This seems a case for using NOTIFY/LISTEN.
>>
>> - You have a process connected to the database that runs LISTEN,
>> causing it to listen for a particular message.
>>
>> LISTEN regen_statistics;
>>
>> - Your trigger submits a notification:
>>
>> NOTIFY regen_statistics;
>
> Some time ago I ran into quite a similar problem, but LISTEN/NOTIFY was
> not a solution.
>
> In that case I needed to *disconnect* and never bother about the outcome
> of a long running background trigger.
You can "disconnect" after doing NOTIFY, as long as you commit the transaction
you called NOTIFY in. Otherwise your notification gets rolled back, just like
your other database updates. But since the updates never happened in that case,
your aggregate table won't need rebuilding either...
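To make the trigger side concrete, here is a minimal sketch of how the NOTIFY
could be wired up (the table name "orders", the function name, and the channel
"regen_statistics" from Chris's example are placeholders, not anything from the
original poster's schema). The NOTIFY is only delivered to listeners once the
surrounding transaction commits, which is exactly the property discussed above:

```sql
-- Hypothetical trigger function: just signals the listening daemon.
CREATE OR REPLACE FUNCTION queue_stats_rebuild() RETURNS trigger AS $$
BEGIN
    NOTIFY regen_statistics;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire once per statement; the INSERT itself returns immediately,
-- and the 10-second rebuild happens in the listening process instead.
CREATE TRIGGER orders_stats_notify
    AFTER INSERT OR UPDATE ON orders
    FOR EACH STATEMENT EXECUTE PROCEDURE queue_stats_rebuild();
```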
> So if I may re-phrase the question: "is there a way to have a trigger
> that, when launched, can check if it's already running in the background for
> some other INSERT, return immediately if so, but move into the background for
> a long-lasting job if not?"
Not easily, because under the transaction isolation rules one transaction
has no way of finding out what another concurrent transaction might be doing.

But your background daemon can check for this. Before starting an "aggregate run",
it would need to store the currently-visible transaction ids. If a new queue entry
is created while the first job is still running, the daemon either starts a new job
directly after the first one finishes (if the transaction that created the queue
entry wasn't visible when the first job started), or simply deletes the new queue
entry (if the previous run already saw the changes introduced by the transaction
that created it).
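The decision rule above can be simulated outside the database. The sketch below
(all names hypothetical, and the visible-xid sets passed in by hand rather than
read from PostgreSQL) shows the daemon snapshotting the visible transaction ids
at the start of a run, then classifying later queue entries as "already covered"
or "needs another run":

```python
class AggregateDaemon:
    """Toy model of the dedup logic: a run sees exactly the transactions
    that were visible when it started."""

    def __init__(self):
        self.snapshot = set()  # xids visible when the current run started

    def start_run(self, visible_xids):
        # Record which transactions' changes this run will include.
        self.snapshot = set(visible_xids)

    def needs_rerun(self, creating_xid):
        # True  -> entry was created by a transaction the running job
        #          cannot see, so another run is required afterwards.
        # False -> the running job already covers it; delete the entry.
        return creating_xid not in self.snapshot


daemon = AggregateDaemon()
daemon.start_run(visible_xids=[100, 101, 102])
print(daemon.needs_rerun(101))  # entry from a transaction the run sees
print(daemon.needs_rerun(105))  # entry committed after the run started
```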
> The functionality required is "lazy" updating of aggregate tables, which
> need not be 100% accurate, but should be kept in sync whenever possible.
Why isn't the solution outlined above sufficient?

greetings, Florian Pflug