Re: Transaction ID wraparound: problem and proposed solution - Mailing list pgsql-hackers

From Vadim Mikheev
Subject Re: Transaction ID wraparound: problem and proposed solution
Date
Msg-id 016601c046ed$db6819c0$b87a30d0@sectorbase.com
In response to RE: Transaction ID wraparound: problem and proposed solution  ("Mikheev, Vadim" <vmikheev@SECTORBASE.COM>)
List pgsql-hackers
> > So, we'll have to abort some long running transaction.
> 
> Well, yes, some transaction that continues running while ~ 500 million
> other transactions come and go might give us trouble.  I wasn't really
> planning to worry about that case ;-)

Agreed, I just don't like to rely on assumptions -:)

> > Required frequency of *successful* vacuum over *all* tables.
> > We would have to remember something in pg_class/pg_database
> > and somehow force vacuum over "too-long-unvacuumed-tables"
> > *automatically*.
> 
> I don't think this is a problem now; in practice you couldn't possibly
> go for half a billion transactions without vacuuming, I'd think.

Why not?
And once again - assumptions are not good in the transaction area.

> If your plans to eliminate regular vacuuming become reality, then this
> scheme might become less reliable, but at present I think there's plenty
> of safety margin.
>
> > If undo would be implemented then we could delete pg_log between
> > postmaster startups - startup counter is remembered in pages, so
> > seeing old startup id in a page we would know that there are only
> > long ago committed xactions (ie only visible changes) there
> > and avoid xid comparison. But ... there will be no undo in 7.1.
> > And I foresee problems with WAL based BAR implementation if we'll
> > follow proposed solution: redo restores original xmin/xmax - how
> > to "freeze" xids while restoring DB?
> 
> So, we might eventually have a better answer from WAL, but not for 7.1.
> I think my idea is reasonably non-invasive and could be removed without
> much trouble once WAL offers a better way.  I'd really like to have some
> answer for 7.1, though.  The sort of numbers John Scott was quoting to
> me for Verizon's paging network throughput make it clear that we aren't
> going to survive at that level with a limit of 4G transactions per
> database reload.  Having to vacuum everything on at least a
> 1G-transaction cycle is salable, dump/initdb/reload is not ...

Understandable. And we can probably get BAR too, if we require a full
backup every WRAPLIMIT/2 (or better, WRAPLIMIT/4) transactions.

Vadim



