On 07/31/2015 02:46 PM, Heikki Linnakangas wrote:
> On 07/31/2015 12:29 AM, Josh Berkus wrote:
>> On 07/30/2015 07:24 AM, Heikki Linnakangas wrote:
>>>
>>> I think we should move to 64-bit XIDs in in-memory structs: snapshots,
>>> the proc array, etc. And expand the clog to handle 64-bit XIDs. But keep the
>>> xmin/xmax fields on heap pages at 32-bits, and add an epoch-like field
>>> to the page header so that logically the xmin/xmax fields on the page
>>> are 64 bits wide, but physically stored in 32 bits. That's possible as
>>> long as no two XIDs on the same page are more than 2^31 XIDs apart. So
>>> you still need to freeze old tuples on the page when that's about to
>>> happen, but it would make it possible to have more than 2^32 XID
>>> transactions in the clog. You'd never be forced to do anti-wraparound
>>> vacuums, you could just let the clog grow arbitrarily large.
>>
>> When I introduced the same idea a few years back, having the clog get
>> arbitrarily large was cited as a major issue. I was under the
>> impression that clog size had some major performance impacts.
>
> Well, sure, if you don't want the clog to grow arbitrarily large, then
> you need to freeze. And most people would want to freeze regularly, to
> keep the clog size in check. The point is that you wouldn't *have* to do
> so at any particular time. You would never be up against the wall, in
> the "you must freeze now or your database will shut down" situation.
Well, we still have to freeze *eventually*. Just not for 122,000 years
at current real transaction rates. In 2025, though, we'll be having
this conversation again because of people doing 100 billion transactions
per second. ;-)
> I'm not sure what performance impact a very large clog might have. It
> takes some disk space (1 GB per 4 billion XIDs), and caching it takes
> some memory. And there is a small fixed number of CLOG buffers in shared
> memory. But I don't think there's any particularly nasty problem there.
Well, one way to find out, clearly.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com