Can Postgres Not Do This Safely ?!? - Mailing list pgsql-general

From Karl Pickett
Subject Can Postgres Not Do This Safely ?!?
Msg-id AANLkTi=rRkR-EgPkMLTVVukJV3X-eUCbk8fkva6X0HZY@mail.gmail.com
Responses Re: Can Postgres Not Do This Safely ?!?  (Peter Geoghegan <peter.geoghegan86@gmail.com>)
Re: Can Postgres Not Do This Safely ?!?  (Craig Ringer <craig@postnewspapers.com.au>)
Re: Can Postgres Not Do This Safely ?!?  (Adrian Klaver <adrian.klaver@gmail.com>)
Re: Can Postgres Not Do This Safely ?!?  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-general
Hello Postgres Hackers,

We have a simple 'event log' table that is insert only (by multiple
concurrent clients).  It has an integer primary key.  We want to do
incremental queries of this table every 5 minutes or so, i.e. "select
* from events where id > LAST_ID_I_GOT" to insert into a separate
reporting database.  The problem is, this simple approach has a race
that will forever skip uncommitted events.  E.g., if the row with id
5000 was committed before the row with id 4999, and we fetch 5000, we
will never go back and get 4999 when it finally commits.  How can we
solve this?  Basically it's a phantom-row problem, but one that spans
transactions.
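To make the race concrete, here is a sketch with two concurrent writer sessions and one reader (table name and ids are illustrative, matching the example above):

```sql
-- Session A: allocates id 4999 from the sequence, but commits late.
BEGIN;
INSERT INTO events (payload) VALUES ('a');   -- nextval() -> 4999
-- ... transaction still open ...

-- Session B: allocates id 5000 and commits immediately.
INSERT INTO events (payload) VALUES ('b');   -- nextval() -> 5000

-- Reader runs now (LAST_ID_I_GOT = 4998): sees only 5000,
-- because 4999 is uncommitted and invisible to its snapshot.
SELECT * FROM events WHERE id > 4998;        -- returns row 5000 only
-- Reader records LAST_ID_I_GOT = 5000.

-- Session A finally commits; row 4999 now exists,
-- but every future query uses "id > 5000" and skips it forever.
COMMIT;
```

Sequence values are handed out at insert time, not commit time, so id order and commit order can diverge whenever writers overlap.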

I looked at checking the internal 'xmin' column but the docs say that
is 32 bit, and something like 'txid_current_snapshot' returns a 64 bit
value.  I don't get it.  All I want is to make sure I skip over any
rows that are newer than the oldest currently running transaction.
Has nobody else run into this before?
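One approach sometimes suggested for this situation is to record the inserting transaction's 64-bit txid on each row, and have the reader only take rows whose transaction is older than every transaction still in flight at snapshot time. The 64-bit txid functions exist in stock Postgres; the schema and cursor handling here are a hypothetical sketch, not a tested recipe:

```sql
-- Hypothetical schema: tag each event with the inserting transaction's
-- epoch-qualified 64-bit txid (avoids the 32-bit xmin wraparound issue).
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    txid    bigint NOT NULL DEFAULT txid_current(),
    payload text
);

-- Reader: txid_snapshot_xmin() is the oldest transaction still running
-- in our snapshot; any row with a smaller txid is definitely committed
-- (or aborted), so no later-committing row can slip in behind it.
SELECT *
FROM events
WHERE id > :last_id_i_got                               -- :last_id_i_got is a placeholder
  AND txid < txid_snapshot_xmin(txid_current_snapshot())
ORDER BY id;
```

The trade-off is that the reader lags behind the oldest open write transaction rather than behind commit time, but it never permanently skips a row the way the plain `id > LAST_ID_I_GOT` query can.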

Thank you very much.

--
Karl Pickett
