Re: table as log (multiple writers and readers) - Mailing list pgsql-general

From David Wilson
Subject Re: table as log (multiple writers and readers)
Date
Msg-id e7f9235d0804161227o65d1b3eap23e70dda6854415c@mail.gmail.com
In response to Re: table as log (multiple writers and readers)  ("Vance Maverick" <vmaverick@pgp.com>)
Responses Re: table as log (multiple writers and readers)  (Joris Dobbelsteen <joris@familiedobbelsteen.nl>)
List pgsql-general
(I originally missed replying to all here; sorry about the duplicate,
Vance, but figured others might be interested.)

On Wed, Apr 16, 2008 at 1:55 PM, Vance Maverick <vmaverick@pgp.com> wrote:
>
>  Another approach would be to queue the log entries in a "staging" table,
>  so that a single process could move them into the log.  This is fairly
>  heavyweight, but it would guarantee the consistent sequencing of the log
>  as seen by a reader (even if the order of entries in the log didn't
>  always reflect the true commit sequence in the staging table).  I'm
>  hoping someone knows a cleverer trick.


Consider a loop like the following:

advisory lock staging table
if (entries in table)
   copy entries to main log table as a single transaction
release advisory lock on staging table
read out and handle most recent log entries from main table

The advisory lock is automatically released on client disconnect, and
doing the whole copy within one transaction should prevent any
partial copies on failure.
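A concrete sketch of that loop in SQL (table and column names here are
hypothetical - a staging table log_staging and a main table log_main,
each with an id column - and the advisory lock key 1 is arbitrary):

BEGIN;
SELECT pg_advisory_lock(1);          -- serialize movers; auto-released on disconnect

-- snapshot the staging rows, copy them, then delete exactly the rows
-- copied (keyed by id), so inserts that commit mid-move are simply
-- left for the next pass rather than deleted uncopied
CREATE TEMP TABLE batch AS SELECT * FROM log_staging;
INSERT INTO log_main SELECT * FROM batch;
DELETE FROM log_staging s USING batch b WHERE s.id = b.id;
DROP TABLE batch;

SELECT pg_advisory_unlock(1);
COMMIT;

-- then read out the most recent entries from the main table
SELECT * FROM log_main ORDER BY id DESC LIMIT 100;

Note the temp-table step: a bare "INSERT ... SELECT; DELETE ..." pair
could, under READ COMMITTED, delete staging rows that committed between
the two statements without ever copying them, so the sketch deletes
only the ids it actually moved.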

It doesn't matter that there are concurrent inserts to the staging
table because the staging table is always wiped all at once and
transferred in a synchronous fashion to the main table. You also can't
lose data, because it's always in one of the two tables.

--
- David T. Wilson
david.t.wilson@gmail.com
