On 2013-02-07 11:15:46 -0800, Jeff Janes wrote:
> On Thu, Feb 7, 2013 at 10:09 AM, Pavan Deolasee
> <pavan.deolasee@gmail.com> wrote:
> >
> > Right. I don't have the database handy at this moment, but earlier in
> > the day I ran some queries against it and found that most of the
> > duplicates which are not accessible via indexes have xmin very close
> > to 2100345903. In fact, many of them are from a consecutive range.
>
> Does anyone have suggestions on how to hack the system to make it
> fast-forward the current transaction id? It would certainly make
> testing this kind of thing faster if I could make transaction id
> increment by 100 each time a new one is generated. Then wrap-around
> could be approached in minutes rather than hours.
I had various plpgsql functions to do that, but those still took quite
some time. Since I needed it again just now, I spent a few minutes
hacking up a contrib module to do the job.
I doubt it really makes sense as a contrib module on its own, though?
postgres=# select * FROM burnxids(500000); select * FROM burnxids(500000);
 burnxids
----------
  5380767
(1 row)

Time: 859.807 ms
 burnxids
----------
  5880767
(1 row)

Time: 717.700 ms
It isn't done in a particularly nice way:
/* refuse to run inside a transaction that already has an xid assigned */
if (GetTopTransactionIdIfAny() != InvalidTransactionId)
    elog(ERROR, "can't burn xids in a transaction with xid");

/* allocate, and immediately throw away, nxids transaction ids */
for (i = 0; i < nxids; i++)
{
    last = GetNewTransactionId(false);
}

/* don't keep the last xid marked as assigned to this backend */
MyPgXact->xid = InvalidTransactionId;
but it seems to work ;)
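The SQL-level glue would be the usual one-liner, roughly (the int4
signature is a guess from the psql output above, not necessarily what the
module actually declares):

CREATE FUNCTION burnxids(nxids int4) RETURNS int4
    AS 'MODULE_PATHNAME', 'burnxids'
    LANGUAGE C STRICT;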
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services