Thread: CSN snapshots in hot standby

CSN snapshots in hot standby

From
Heikki Linnakangas
Date:
You cannot run queries on a Hot Standby server until the standby has 
seen a running-xacts record. Furthermore if the subxids cache had 
overflowed, you also need to wait for those transactions to finish. That 
is usually not a problem, because we write a running-xacts record after 
each checkpoint, and most systems don't use so many subtransactions that 
the cache would overflow. Still, you can run into it if you're unlucky, 
and it's annoying when you do.

It occurred to me that we could replace the known-assigned-xids 
machinery with CSN snapshots. We've talked about CSN snapshots many 
times in the past, and I think it would make sense on the primary too, 
but for starters, we could use it just during Hot Standby.

With CSN-based snapshots, you don't have the limitation with the 
fixed-size known-assigned-xids array, and overflowed sub-XIDs are not a 
problem either. You can always enter Hot Standby and start accepting 
queries as soon as the standby is in a physically consistent state.

I dusted up and rebased the last CSN patch that I found on the mailing 
list [1], and modified it so that it's only used during recovery. That 
makes some things simpler and less scary. There are no changes to how 
transaction commit happens in the primary, the CSN log is only kept 
up-to-date in the standby, when commit/abort records are replayed. The 
CSN of each transaction is the LSN of its commit record.
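
To spell that out, here's a minimal sketch of the visibility rule this gives 
you. (XidVisibleInCSNSnapshot() and the CSNLogGetCSN() helper are made-up 
names for illustration, not necessarily the patch's actual API.)

/*
 * An XID is visible to a snapshot iff its commit record's LSN is at or
 * before the LSN at which the snapshot was taken.
 */
static bool
XidVisibleInCSNSnapshot(TransactionId xid, XLogRecPtr snapshot_lsn)
{
    XLogRecPtr  csn = CSNLogGetCSN(xid);    /* commit LSN, or invalid */

    if (csn == InvalidXLogRecPtr)
        return false;           /* no commit record replayed yet */

    return csn <= snapshot_lsn;
}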

The CSN approach is much simpler than the existing known-assigned-XIDs 
machinery, as you can see from "git diff --stat" with this patch:

  32 files changed, 773 insertions(+), 1711 deletions(-)

With CSN snapshots, we don't need the known-assigned-XIDs machinery, and 
we can get rid of the xact-assignment records altogether. We no longer 
need the running-xacts records for Hot Standby either, but I wasn't able 
to remove that because it's still used by logical replication, in 
snapbuild.c. I have a feeling that that could somehow be simplified too, 
but didn't look into it.

This is obviously v18 material, so I'll park this at the July commitfest 
for now. There are a bunch of little FIXMEs in the code, and it needs 
performance testing, but overall I was surprised how easy this was.

(We ran into this issue particularly hard with Neon, because with Neon 
you don't need to perform WAL replay at standby startup. However, when 
you don't perform WAL replay, you don't get to see the running-xact 
record after the checkpoint either. If the primary is idle, it doesn't 
generate new running-xact records, and the standby cannot start Hot 
Standby until the next time something happens in the primary. It's 
always a potential problem with overflowed sub-XIDs cache, but the lack 
of WAL replay made it happen even when there are no subtransactions 
involved.)

[1] https://www.postgresql.org/message-id/2020081009525213277261%40highgo.ca

-- 
Heikki Linnakangas
Neon (https://neon.tech)
Attachment

Re: CSN snapshots in hot standby

From
Kirill Reshke
Date:
Hi,

On Thu, 4 Apr 2024 at 22:21, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
> You cannot run queries on a Hot Standby server until the standby has
> seen a running-xacts record. Furthermore if the subxids cache had
> overflowed, you also need to wait for those transactions to finish. That
> is usually not a problem, because we write a running-xacts record after
> each checkpoint, and most systems don't use so many subtransactions that
> the cache would overflow. Still, you can run into it if you're unlucky,
> and it's annoying when you do.
>
> It occurred to me that we could replace the known-assigned-xids
> machinery with CSN snapshots. We've talked about CSN snapshots many
> times in the past, and I think it would make sense on the primary too,
> but for starters, we could use it just during Hot Standby.
>
> With CSN-based snapshots, you don't have the limitation with the
> fixed-size known-assigned-xids array, and overflowed sub-XIDs are not a
> problem either. You can always enter Hot Standby and start accepting
> queries as soon as the standby is in a physically consistent state.
>
> I dusted up and rebased the last CSN patch that I found on the mailing
> list [1], and modified it so that it's only used during recovery. That
> makes some things simpler and less scary. There are no changes to how
> transaction commit happens in the primary, the CSN log is only kept
> up-to-date in the standby, when commit/abort records are replayed. The
> CSN of each transaction is the LSN of its commit record.
>
> The CSN approach is much simpler than the existing known-assigned-XIDs
> machinery, as you can see from "git diff --stat" with this patch:
>
>   32 files changed, 773 insertions(+), 1711 deletions(-)
>
> With CSN snapshots, we don't need the known-assigned-XIDs machinery, and
> we can get rid of the xact-assignment records altogether. We no longer
> need the running-xacts records for Hot Standby either, but I wasn't able
> to remove that because it's still used by logical replication, in
> snapbuild.c. I have a feeling that that could somehow be simplified too,
> but didn't look into it.
>
> This is obviously v18 material, so I'll park this at the July commitfest
> for now. There are a bunch of little FIXMEs in the code, and it needs
> performance testing, but overall I was surprised how easy this was.
>
> (We ran into this issue particularly hard with Neon, because with Neon
> you don't need to perform WAL replay at standby startup. However, when
> you don't perform WAL replay, you don't get to see the running-xact
> record after the checkpoint either. If the primary is idle, it doesn't
> generate new running-xact records, and the standby cannot start Hot
> Standby until the next time something happens in the primary. It's
> always a potential problem with overflowed sub-XIDs cache, but the lack
> of WAL replay made it happen even when there are no subtransactions
> involved.)
>
> [1] https://www.postgresql.org/message-id/2020081009525213277261%40highgo.ca
>
> --
> Heikki Linnakangas
> Neon (https://neon.tech)

Great. I really like the idea of getting rid of KnownAssignedXids instead of optimizing it (if optimizations are even possible).

> + /*
> + * TODO: We must mark CSNLOG first
> + */
> + CSNLogSetCSN(xid, parsed->nsubxacts, parsed->subxacts, lsn);
> +

As far as I understand, we simply use the LSN of the commit WAL record as the transaction's CSN. Ok.
This works for standby snapshots, but this patch may also be really useful for distributed PostgreSQL solutions that use a CSN
for distributed database snapshots (across multiple shards). Such solutions need to set the CSN to some other value (a timestamp from TrueTime, Clock-SI, or whatever).
So, maybe we need some hooks here? Or maybe we can take the CSN from an extension somehow; for example, we could define
some interface that extensions can implement. Does this sound reasonable to you?
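
To make the idea a bit more concrete, here is a purely hypothetical shape for 
such a hook. None of these names exist in the patch or in core; only 
CSNLogSetCSN() and the parsed commit-record fields are taken from the patch's 
redo path quoted above.

typedef XLogRecPtr (*csn_assign_hook_type) (TransactionId xid,
                                            XLogRecPtr commit_lsn);
extern PGDLLIMPORT csn_assign_hook_type csn_assign_hook;

/* In the commit-redo path, roughly: */
static void
RecordCommitCSN(TransactionId xid, xl_xact_parsed_commit *parsed, XLogRecPtr lsn)
{
    XLogRecPtr  csn = lsn;                  /* default: LSN-as-CSN */

    if (csn_assign_hook != NULL)
        csn = csn_assign_hook(xid, lsn);    /* extension-provided CSN */

    CSNLogSetCSN(xid, parsed->nsubxacts, parsed->subxacts, csn);
}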

Also, I attached a patch which adds some more todos.

Attachment

Re: CSN snapshots in hot standby

From
"Andrey M. Borodin"
Date:

> On 5 Apr 2024, at 02:08, Kirill Reshke <reshkekirill@gmail.com> wrote:
>
> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.

I really like the idea of CSN-provider-as-extension.
But it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.
So, from my perspective, having LSN-as-CSN is already a huge step forward.


Best regards, Andrey Borodin.


Re: CSN snapshots in hot standby

From
Heikki Linnakangas
Date:
On 05/04/2024 13:49, Andrey M. Borodin wrote:
>> On 5 Apr 2024, at 02:08, Kirill Reshke <reshkekirill@gmail.com> wrote:

Thanks for taking a look, Kirill!

>> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.
> 
> I really like the idea of CSN-provider-as-extension.
> But it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.
> So, from my perspective, having LSN-as-CSN is already huge step forward.

Yeah, I really don't want to expand the scope of this.

Here's a new version. Rebased, and lots of comments updated.

I added a tiny cache of the CSN lookups into SnapshotData, which can 
hold the values of 4 XIDs that are known to be visible to the snapshot, 
and 4 invisible XIDs. This is pretty arbitrary, but the idea is to have 
something very small to speed up the common cases that 1-2 XIDs are 
repeatedly looked up, without adding too much overhead.
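
For illustration, the shape of the cache is roughly this (the field and 
function names below are made up, not the actual ones in the patch):

#define XID_VIS_CACHE_SIZE 4

typedef struct XidVisCache
{
    TransactionId known_visible[XID_VIS_CACHE_SIZE];
    TransactionId known_invisible[XID_VIS_CACHE_SIZE];
    int         next_visible;       /* next slot to overwrite */
    int         next_invisible;
} XidVisCache;

/* Returns true on a cache hit, setting *visible; false means "consult the CSN log". */
static bool
XidVisCacheLookup(XidVisCache *cache, TransactionId xid, bool *visible)
{
    for (int i = 0; i < XID_VIS_CACHE_SIZE; i++)
    {
        if (TransactionIdEquals(cache->known_visible[i], xid))
        {
            *visible = true;
            return true;
        }
        if (TransactionIdEquals(cache->known_invisible[i], xid))
        {
            *visible = false;
            return true;
        }
    }
    return false;
}

/* Remember a CSN lookup result, overwriting the oldest entry round-robin. */
static void
XidVisCacheRemember(XidVisCache *cache, TransactionId xid, bool visible)
{
    if (visible)
    {
        cache->known_visible[cache->next_visible] = xid;
        cache->next_visible = (cache->next_visible + 1) % XID_VIS_CACHE_SIZE;
    }
    else
    {
        cache->known_invisible[cache->next_invisible] = xid;
        cache->next_invisible = (cache->next_invisible + 1) % XID_VIS_CACHE_SIZE;
    }
}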


I did some performance testing of the visibility checks using these CSN 
snapshots. The tests run SELECTs with a SeqScan in a standby, over a 
table where all the rows have xmin/xmax values that are still 
in-progress in the primary.

Three test scenarios:

1. large-xact: one large transaction inserted all the rows. All rows 
have the same XMIN, which is still in progress

2. many-subxacts: one large transaction inserted each row in a separate 
subtransaction. All rows have a different XMIN, but they're all 
subtransactions of the same top-level transaction. (This causes the 
subxids cache in the proc array to overflow)

3. few-subxacts: All rows are inserted, committed, and vacuum-frozen. 
Then, using 10 separate subtransactions, DELETE the rows in an 
interleaved fashion. The XMAX values cycle like this "1, 2, 3, 4, 5, 6, 
7, 8, 9, 10, 1, 2, 3, 4, 5, ...". The point of this is that these 
sub-XIDs fit in the subxids cache in the procarray, but the pattern 
defeats the simple 4-element cache that I added.

The test script I used is attached. I repeated it a few times with 
master and the patches here, and picked the fastest runs for each. Just 
eyeballing the results, there's about ~10% variance in these numbers. 
Smaller is better.

Master:

large-xact: 4.57732510566711
many-subxacts: 18.6958119869232
few-subxacts: 16.467698097229

Patched:

large-xact: 10.2999930381775
many-subxacts: 11.6501438617706
few-subxacts: 19.8457028865814

With cache:

large-xact: 3.68792295455933
many-subxacts: 13.3662350177765
few-subxacts: 21.4426419734955

The 'large-xacts' results show that the CSN lookups are slower than the 
binary search on the 'xids' array. Not a surprise. The 4-element cache 
fixes the regression, which is also not a surprise.

The 'many-subxacts' results show that the CSN lookups are faster than 
the current method in master, when the subxids cache has overflowed. 
That makes sense: on master, we always perform a lookup in pg_subtrans 
if the subxids cache has overflowed, which is more or less the same 
overhead as the CSN lookup. But we avoid the binary search on the xids 
array after that.

The 'few-subxacts' results show a regression when the 4-element cache is not 
effective. I think that's acceptable, the CSN approach has many 
benefits, and I don't think this is a very common scenario. But if 
necessary, it could perhaps be alleviated with more caching, or by 
trying to compensate by optimizing elsewhere.

-- 
Heikki Linnakangas
Neon (https://neon.tech)

Attachment

Re: CSN snapshots in hot standby

From
Kirill Reshke
Date:
On Wed, 14 Aug 2024 at 01:13, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>
> On 05/04/2024 13:49, Andrey M. Borodin wrote:
> >> On 5 Apr 2024, at 02:08, Kirill Reshke <reshkekirill@gmail.com> wrote:
>
> Thanks for taking a look, Kirill!
>
> >> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.
> >
> > I really like the idea of CSN-provider-as-extension.
> > But it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.
> > So, from my perspective, having LSN-as-CSN is already huge step forward.
>
> Yeah, I really don't want to expand the scope of this.
>
> Here's a new version. Rebased, and lots of comments updated.
>
> I added a tiny cache of the CSN lookups into SnapshotData, which can
> hold the values of 4 XIDs that are known to be visible to the snapshot,
> and 4 invisible XIDs. This is pretty arbitrary, but the idea is to have
> something very small to speed up the common cases that 1-2 XIDs are
> repeatedly looked up, without adding too much overhead.
>
>
> I did some performance testing of the visibility checks using these CSN
> snapshots. The tests run SELECTs with a SeqScan in a standby, over a
> table where all the rows have xmin/xmax values that are still
> in-progress in the primary.
>
> Three test scenarios:
>
> 1. large-xact: one large transaction inserted all the rows. All rows
> have the same XMIN, which is still in progress
>
> 2. many-subxacts: one large transaction inserted each row in a separate
> subtransaction. All rows have a different XMIN, but they're all
> subtransactions of the same top-level transaction. (This causes the
> subxids cache in the proc array to overflow)
>
> 3. few-subxacts: All rows are inserted, committed, and vacuum frozen.
> Then, using 10 in separate subtransactions, DELETE the rows, in an
> interleaved fashion. The XMAX values cycle like this "1, 2, 3, 4, 5, 6,
> 7, 8, 9, 10, 1, 2, 3, 4, 5, ...". The point of this is that these
> sub-XIDs fit in the subxids cache in the procarray, but the pattern
> defeats the simple 4-element cache that I added.
>
> The test script I used is attached. I repeated it a few times with
> master and the patches here, and picked the fastest runs for each. Just
> eyeballing the results, there's about ~10% variance in these numbers.
> Smaller is better.
>
> Master:
>
> large-xact: 4.57732510566711
> many-subxacts: 18.6958119869232
> few-subxacts: 16.467698097229
>
> Patched:
>
> large-xact: 10.2999930381775
> many-subxacts: 11.6501438617706
> few-subxacts: 19.8457028865814
>
> With cache:
>
> large-xact: 3.68792295455933
> many-subxacts: 13.3662350177765
> few-subxacts: 21.4426419734955
>
> The 'large-xacts' results show that the CSN lookups are slower than the
> binary search on the 'xids' array. Not a surprise. The 4-element cache
> fixes the regression, which is also not a surprise.
>
> The 'many-subxacts' results show that the CSN lookups are faster than
> the current method in master, when the subxids cache has overflowed.
> That makes sense: on master, we always perform a lookup in pg_subtrans,
> if the suxids cache has overflowed, which is more or less the same
> overhead as the CSN lookup. But we avoid the binary search on the xids
> array after that.
>
> The 'few-subxacts' shows a regression, when the 4-element cache is not
> effective. I think that's acceptable, the CSN approach has many
> benefits, and I don't think this is a very common scenario. But if
> necessary, it could perhaps be alleviated with more caching, or by
> trying to compensate by optimizing elsewhere.
>
> --
> Heikki Linnakangas
> Neon (https://neon.tech)

Thanks for the update. I will try to find time for perf-testing this.
First, some random suggestions. Sorry for being too nit-picky.

1) in 0002
> +/*
> + * Number of shared CSNLog buffers.
> + */
> +static Size
> +CSNLogShmemBuffers(void)
> +{
> + return Min(32, Max(16, NBuffers / 512));
> +}

Should we make this a GUC?
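
Hypothetically, something like this (the csnlog_buffers variable is made up; 
it would be registered through the usual GUC machinery, with 0 meaning 
"auto-size from shared_buffers"):

int     csnlog_buffers = 0;

static Size
CSNLogShmemBuffers(void)
{
    if (csnlog_buffers > 0)
        return csnlog_buffers;
    return Min(32, Max(16, NBuffers / 512));
}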

2) In 0002 CSNLogShmemInit:

> + //SlruPagePrecedesUnitTests(CsnlogCtl, SUBTRANS_XACTS_PER_PAGE);

remove this?

3) In 0002 InitCSNLogPage:

> + SimpleLruZeroPage(CsnlogCtl, pageno);
We can use ZeroCSNLogPage here. That would justify the existence of this
function a little bit more.

4) In 0002:
> +++ b/src/backend/replication/logical/snapbuild.c
> @@ -27,7 +27,7 @@
>  * removed. This is achieved by using the replication slot mechanism.
>  *
>  * As the percentage of transactions modifying the catalog normally is fairly
> - * small in comparisons to ones only manipulating user data, we keep track of
> + * small in comparison to ones only manipulating user data, we keep track of
>  * the committed catalog modifying ones inside [xmin, xmax) instead of keeping
>  * track of all running transactions like it's done in a normal snapshot. Note
>  * that we're generally only looking at transactions that have acquired an

This change is unrelated to 0002 patch, let's just push it as a separate change.


Overall, 0002 looks straightforward, though big. I wonder, however, how
we can test that this change does not lead to any unpleasant problems,
like observing uncommitted changes on replicas, corruption, and so on.
Maybe some basic injection-point-based TAP test is desirable here?


-- 
Best regards,
Kirill Reshke



Re: CSN snapshots in hot standby

From
Andres Freund
Date:
Hi,

On 2024-08-13 23:13:39 +0300, Heikki Linnakangas wrote:
> I added a tiny cache of the CSN lookups into SnapshotData, which can hold
> the values of 4 XIDs that are known to be visible to the snapshot, and 4
> invisible XIDs. This is pretty arbitrary, but the idea is to have something
> very small to speed up the common cases that 1-2 XIDs are repeatedly looked
> up, without adding too much overhead.
> 
> 
> I did some performance testing of the visibility checks using these CSN
> snapshots. The tests run SELECTs with a SeqScan in a standby, over a table
> where all the rows have xmin/xmax values that are still in-progress in the
> primary.
> 
> Three test scenarios:
> 
> 1. large-xact: one large transaction inserted all the rows. All rows have
> the same XMIN, which is still in progress
> 
> 2. many-subxacts: one large transaction inserted each row in a separate
> subtransaction. All rows have a different XMIN, but they're all
> subtransactions of the same top-level transaction. (This causes the subxids
> cache in the proc array to overflow)
> 
> 3. few-subxacts: All rows are inserted, committed, and vacuum frozen. Then,
> using 10 in separate subtransactions, DELETE the rows, in an interleaved
> fashion. The XMAX values cycle like this "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1,
> 2, 3, 4, 5, ...". The point of this is that these sub-XIDs fit in the
> subxids cache in the procarray, but the pattern defeats the simple 4-element
> cache that I added.

I'd like to see some numbers for a workload with many overlapping top-level
transactions. In contrast to 2), HEAD wouldn't need to do subtrans lookups,
whereas this patch would need to do CSN lookups. And a four-entry cache
probably wouldn't help very much.


> +/*
> + * Record commit LSN of a transaction and its subtransaction tree.
> + *
> + * xid is a single xid to set status for. This will typically be the top level
> + * transaction ID for a top level commit.
> + *
> + * subxids is an array of xids of length nsubxids, representing subtransactions
> + * in the tree of xid. In various cases nsubxids may be zero.
> + *
> + * commitLsn is the LSN of the commit record.  This is currently never called
> + * for aborted transactions.
> + */
> +void
> +CSNLogSetCSN(TransactionId xid, int nsubxids, TransactionId *subxids,
> +             XLogRecPtr commitLsn)
> +{
> +    int            pageno;
> +    int            i = 0;
> +    int            offset = 0;
> +
> +    Assert(TransactionIdIsValid(xid));
> +
> +    pageno = TransactionIdToPage(xid);    /* get page of parent */
> +    for (;;)
> +    {
> +        int            num_on_page = 0;
> +
> +        while (i < nsubxids && TransactionIdToPage(subxids[i]) == pageno)
> +        {
> +            num_on_page++;
> +            i++;
> +        }

Hm - is there any guarantee / documented requirement that subxids is sorted?


> +        CSNLogSetPageStatus(xid,
> +                            num_on_page, subxids + offset,
> +                            commitLsn, pageno);
> +        if (i >= nsubxids)
> +            break;
> +
> +        offset = i;
> +        pageno = TransactionIdToPage(subxids[offset]);
> +        xid = InvalidTransactionId;
> +    }
> +}

Hm. Maybe I'm missing something, but what prevents a concurrent transaction from
checking the visibility of a subtransaction between marking the subtransaction
committed and marking the main transaction committed? If the subtransaction and
main transaction are on the same page that won't be possible, but if they are
on different ones it does seem possible?

Today XidInMVCCSnapshot() will use pg_subtrans to find the top transaction in
case of a suboverflowed snapshot, but with this patch that's not the case
anymore.  Which afaict will mean that repeated snapshot computations could
give different results for the same query?



Greetings,

Andres Freund



Re: CSN snapshots in hot standby

From
Heikki Linnakangas
Date:
On 29/10/2024 18:33, Heikki Linnakangas wrote:
> I added two tests to the test suite:
>                                  master     patched
> insert-all-different-xids:     0.00027    0.00019 s / iteration
> insert-all-different-subxids:  0.00023    0.00020 s / iteration
> 
> insert-all-different-xids: Open 1000 connections, insert one row in 
> each, and leave the transactions open. In the replica, select all the rows
> 
> insert-all-different-subxids: The same, but with 1 transaction with 1000 
> subxids.
> 
> The point of these new tests is to test the scenario where the cache 
> doesn't help and just adds overhead, because each XID is looked up only 
> once. Seems to be fine. Surprisingly good actually; I'll do some more 
> profiling on that to understand why it's even faster than 'master'.

Ok, I did some profiling and it makes sense:

In the insert-all-different-xids test on 'master', we spend about 60% of 
CPU time in XidInMVCCSnapshot(), doing pg_lfind32() over the subxip 
array. We should probably sort the array and use a binary search if it's 
large or something...
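
Something like this, roughly (just a sketch: it assumes a sorted copy of the 
subxip array is kept alongside the snapshot, and that the XIDs in it don't 
straddle a wraparound, so plain uint32 order works):

static bool
XidInSortedSubxip(TransactionId xid, const TransactionId *subxip, int subxcnt)
{
    int         lo = 0;
    int         hi = subxcnt - 1;

    while (lo <= hi)
    {
        int         mid = lo + (hi - lo) / 2;

        if (subxip[mid] == xid)
            return true;
        else if (subxip[mid] < xid)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return false;
}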

With these patches, instead of the pg_lfind32() over subxip array, we 
perform one CSN SLRU lookup instead, and the page is cached. There's 
locking overhead etc. with that, but it's still cheaper than the 
pg_lfind32().

In the insert-all-different-subxids test on 'master', the subxip array 
is overflowed, so we call SubTransGetTopmostTransaction() on each XID. 
That performs two pg_subtrans lookups for each XID, first for the 
subxid, then for the parent. With these patches, we perform just one 
SLRU lookup, in pg_csnlog, which is faster.

> Now the downside of this new cache: Since it has no size limit, if you 
> keep looking up different XIDs, it will keep growing until it holds all 
> the XIDs between the snapshot's xmin and xmax. That can take a lot of 
> memory in the worst case. Radix tree is pretty memory efficient, but 
> holding, say 1 billion XIDs would probably take something like 500 MB of 
> RAM (the radix tree stores 64-bit words with 2 bits per XID, plus the 
> radix tree nodes). That's per snapshot, so if you have a lot of
> connections, maybe even with multiple snapshots each, that can add up.
> 
> I'm inclined to accept that memory usage. If we wanted to limit the size 
> of the cache, would need to choose a policy on how to truncate it 
> (delete random nodes?), what the limit should be etc. But I think it'd 
> be rare to hit those cases in practice. If you have a one billion XID 
> old transaction running in the primary, you probably have bigger 
> problems already.
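
(To make the "2 bits per XID" above concrete: 32 XIDs pack into each 64-bit 
word, with the radix tree keyed by xid / 32. The enum values and helpers 
below are just my illustration, not the actual patch code.)

#define XIDS_PER_CACHE_WORD 32

typedef enum
{
    XIDVIS_UNKNOWN = 0,         /* not cached yet */
    XIDVIS_IN_PROGRESS = 1,
    XIDVIS_VISIBLE = 2,
    XIDVIS_INVISIBLE = 3
} XidVisStatus;

static inline XidVisStatus
cache_word_get(uint64 word, TransactionId xid)
{
    int         shift = (xid % XIDS_PER_CACHE_WORD) * 2;

    return (XidVisStatus) ((word >> shift) & 3);
}

static inline uint64
cache_word_set(uint64 word, TransactionId xid, XidVisStatus status)
{
    int         shift = (xid % XIDS_PER_CACHE_WORD) * 2;

    word &= ~(UINT64CONST(3) << shift);
    word |= (uint64) status << shift;
    return word;
}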

I'd love to hear some thoughts on this caching behavior. Is it 
acceptable to let the cache grow, potentially to very large sizes in the 
worst cases? Or do we need to make it more complicated and implement 
some eviction policy?

-- 
Heikki Linnakangas
Neon (https://neon.tech)



Re: CSN snapshots in hot standby

From
John Naylor
Date:
On Tue, Oct 29, 2024 at 11:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>                              master     patched
> few-xacts:                 0.0041      0.0041 s / iteration
> many-xacts:                0.0042      0.0042 s / iteration
> many-xacts-wide-apart:     0.0043      0.0045 s / iteration

Hi Heikki,

I have some thoughts about the behavior of the cache that might not be
apparent in this test:

The tree is only as tall as need be to store the highest non-zero
byte. On a newly initialized cluster, the current txid is small. The
first two test cases here will result in a tree with height of 2. The
last one will have a height of 3, and its runtime looks a bit higher,
although that could be just noise or touching more cache lines. It
might be worth it to try a test run while forcing the upper byte of
the keys to be non-zero (something like "key | (1<<30), so that the
tree always has a height of 4. That would match real-world conditions
more closely. If need be, there are a couple things we can do to
optimize node dispatch and touch fewer cache lines.

> I added two tests to the test suite:
>                                  master     patched
> insert-all-different-xids:     0.00027    0.00019 s / iteration
> insert-all-different-subxids:  0.00023    0.00020 s / iteration

> The point of these new tests is to test the scenario where the cache
> doesn't help and just adds overhead, because each XID is looked up only
> once. Seems to be fine. Surprisingly good actually; I'll do some more
> profiling on that to understand why it's even faster than 'master'.

These tests use a sequential scan. For things like primary key
lookups, I wonder if the overhead of creating and destroying the
tree's memory contexts for the (not used again) cache would be
noticeable. If so, it wouldn't be too difficult to teach radix tree to
create the larger contexts lazily.

> Now the downside of this new cache: Since it has no size limit, if you
> keep looking up different XIDs, it will keep growing until it holds all
> the XIDs between the snapshot's xmin and xmax. That can take a lot of
> memory in the worst case. Radix tree is pretty memory efficient, but
> holding, say 1 billion XIDs would probably take something like 500 MB of
> RAM (the radix tree stores 64-bit words with 2 bits per XID, plus the
> radix tree nodes). That's per snapshot, so if you have a lot of
> connections, maybe even with multiple snapshots each, that can add up.
>
> I'm inclined to accept that memory usage. If we wanted to limit the size
> of the cache, would need to choose a policy on how to truncate it
> (delete random nodes?), what the limit should be etc. But I think it'd
> be rare to hit those cases in practice. If you have a one billion XID
> old transaction running in the primary, you probably have bigger
> problems already.

I don't have a good sense of whether it needs a limit or not, but if
we decide to add one as a precaution, maybe it's enough to just blow
the cache away when reaching some limit? Being smarter than that would
need some work.

--
John Naylor
Amazon Web Services



Re: CSN snapshots in hot standby

From
John Naylor
Date:
On Tue, Dec 3, 2024 at 9:25 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>
> On 20/11/2024 15:33, John Naylor wrote:
> I did find one weird thing that makes a big difference: I originally
> used AllocSetContextCreate(..., ALLOCSET_DEFAULT_SIZES) for the radix
> tree's memory context. With that, XidInMVCCSnapshot() takes about 19% of
> the CPU time in that test. When I changed that to ALLOCSET_SMALL_SIZES,
> it falls down to the 4% figure. And weird enough, in both cases the time
> seems to be spent in the malloc() call from SlabContextCreate(), not
> AllocSetContextCreate(). I think doing this particular mix of large and
> small allocations with malloc() somehow poisons its free list or
> something. So this is probably heavily dependent on the malloc()
> implementation. In any case, ALLOCSET_SMALL_SIZES is clearly a better
> choice here, even without that effect.

Hmm, interesting. That passed context is needed for 4 things:
1. allocated values (not used here for 64-bit, and 32-bit could be
made to work the same way)
2. iteration state (not used here)
3. a convenient place to put slab child contexts so we can free them easily
4. a place to put the "control object" -- this is really only needed
for shared memory and I have a personal todo to embed it rather than
allocate it for the local memory case.

Removing the need for a passed context for callers that don't need it
is additional possible future work.

Anyway, 0005 looks good to me.

--
John Naylor
Amazon Web Services