On Mon, Aug 16, 2021 at 05:15:36PM -0700, Peter Geoghegan wrote:
> It doesn't make sense to have a local cache for a shared resource --
> that's the problem. You actually need some kind of basic locking or
> lease system, so that 10 backends don't all decide at the same time
> that one particular heap block is fully empty, and therefore a good
> target block for that backend alone. It's as if the backends believe
> that they're special little snowflakes, and that no other backend
> could possibly be thinking the same thing at the same time about the
> same heap page. And so when TPC-C does its initial bulk insert,
> distinct orders are already shuffled together in a hodge-podge, just
> because concurrent bulk inserters all insert on the same heap pages.
OK, I am trying to think of something simple we could test to see the
benefit, with few downsides.  I assume the case you are considering is
a table with ten 8kB pages, where one page is 80% full and the others
are 81% full; if several backends start adding rows at the same time,
they will all choose the 80%-full page.
What if we change how we select a page as follows (a C sketch appears
after the example below):
1. find the page with the most free space
2. find all pages with up to 10% less free space than page #1
3. count the number of pages found in step #2
4. compute proc_id modulo the count from step #3, and use that as an
   offset into the list of pages from step #2
For example:
1. the page with the most free space is 95% free
2. pages 2, 4, 6, 8, and 10 are between 86% and 95% free
3. that is five pages
4. proc_id 14293 % 5 = 3, so use offset 3 into the list from step #2,
   i.e., the fourth page, page 8
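Here is a minimal C sketch of that selection logic.  The struct and
function names are hypothetical, not the actual FSM code, and I am
reading "up to 10% less free space" as relative to the maximum, which
matches the 86%-95% range above:

	#include <stddef.h>

	typedef struct
	{
		size_t	blkno;		/* block number of the candidate page */
		double	free_frac;	/* fraction of page that is free, 0.0-1.0 */
	} PageFreeInfo;

	/*
	 * Pick a target page for this backend: find the maximum free
	 * space, collect every page within 10% of that maximum, then
	 * index into the candidate set with proc_id % count.
	 * Assumes npages > 0.
	 */
	static size_t
	choose_target_page(const PageFreeInfo *pages, size_t npages,
					   int proc_id)
	{
		double	max_free = 0.0;
		double	cutoff;
		size_t	ncand = 0;
		size_t	pick, seen, i;

		/* Step 1: find the most free space on any page. */
		for (i = 0; i < npages; i++)
			if (pages[i].free_frac > max_free)
				max_free = pages[i].free_frac;

		/* Steps 2-3: count pages with up to 10% less free space. */
		cutoff = max_free * 0.90;
		for (i = 0; i < npages; i++)
			if (pages[i].free_frac >= cutoff)
				ncand++;

		/* Step 4: spread backends across the candidates by proc id. */
		pick = (size_t) proc_id % ncand;

		seen = 0;
		for (i = 0; i < npages; i++)
		{
			if (pages[i].free_frac >= cutoff)
			{
				if (seen == pick)
					return pages[i].blkno;
				seen++;
			}
		}
		return pages[0].blkno;	/* unreachable when npages > 0 */
	}

With the numbers above (95% maximum, five candidates, proc_id 14293),
this returns the fourth candidate, page 8.  Backends with different
proc ids land on different candidates, so concurrent bulk inserters
stop piling onto the same heap page.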
This should spread out page usage more evenly, while still favoring
pages with more free space.  Yes, this is simplistic, but it seems to
have few downsides, and I would be interested to see how much it helps.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.