Hi,
since there are known benefits to using smaller
blocks for OLTP and larger blocks for OLAP, and if that ever happens
I guess those regression tests will probably still need to run with
the traditional size.
Yes, in Greenplum and Apache Cloudberry, we've been using a default block size of 32KB for many years.
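For what it's worth, a quick first pass over the diffs helps separate planner-cost noise from real failures. A minimal sketch (the inlined sample diff and the grep pattern are illustrative; with a real build you would read the file pg_regress writes to src/test/regress/regression.diffs):

```shell
# Illustrative only: a tiny sample diff stands in for the real
# regression.diffs file produced by `make check`.
cat > /tmp/sample.diffs <<'EOF'
--- expected/select.out
+++ results/select.out
-         cost=0.00..35.50 rows=100
+         cost=0.00..17.75 rows=100
EOF

# Count changed lines that involve planner cost estimates; with a
# 32KB block size these usually reflect different page counts per
# relation, not wrong query results.
grep -c 'cost=' /tmp/sample.diffs
```

Hunks that survive such filtering (wrong row output, errors, crashes) are the ones worth looking at closely.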
On Feb 12, 2026 at 05:05 +0800, Thomas Munro <thomas.munro@gmail.com> wrote:
On Thu, Feb 12, 2026 at 7:19 AM Yasir <yasir.hussain.shah@gmail.com> wrote:
I recently configured PostgreSQL with a custom blocksize:
./configure --with-blocksize=32
make && make check
OR
make && make check-world
This produced many regression failures. I'm wondering, are such failures typical/expected when altering the default block size?
Our experience shows that when changing the block size, most of the regression test differences are expected — they often reflect output variations (like buffer counts, cost estimates, or physical storage details) rather than functional bugs.
That said, each failure really needs to be examined case by case.
If the change causes a server crash, wrong query results, or any kind of data corruption, that’s definitely a red flag and worth investigating seriously.
So while many of the failures you saw are likely harmless, it's still good practice to verify the ones that look suspicious.
--
Zhang Mingli
HashData