Re: Regression failures after changing PostgreSQL blocksize - Mailing list pgsql-hackers

From Zhang Mingli
Subject Re: Regression failures after changing PostgreSQL blocksize
Date
Msg-id 54d9a790-9da1-490f-9e44-8e76bb60aa9b@Spark
In response to Re: Regression failures after changing PostgreSQL blocksize  (Thomas Munro <thomas.munro@gmail.com>)
Responses Re: Regression failures after changing PostgreSQL blocksize
List pgsql-hackers
Hi,

> since there are known benefits to using smaller
> blocks for OLTP and larger blocks for OLAP, and if that ever happens
> I guess those regression tests will probably still need to run with
> the traditional size.

Yes. In Greenplum and Apache Cloudberry, we've been using a default block size of 32KB for many years.

On Feb 12, 2026 at 05:05 +0800, Thomas Munro <thomas.munro@gmail.com> wrote:
On Thu, Feb 12, 2026 at 7:19 AM Yasir <yasir.hussain.shah@gmail.com> wrote:
I recently configured PostgreSQL with a custom blocksize:

./configure --with-blocksize=32
make && make check
OR
make && make check-world

This produced many regression failures. I'm wondering: are such failures typical/expected when altering the default block size?
Our experience is that most of the regression test differences after a block size change are expected: they typically reflect output variations (buffer counts, cost estimates, physical storage details) rather than functional bugs.
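
For example, here is a minimal sketch of where those differences come from, run against a scratch database (blk_demo is just an illustrative table name):

psql -c "SHOW block_size;"   # 32768 on a --with-blocksize=32 build, 8192 by default
psql -c "CREATE TABLE blk_demo AS SELECT g AS i FROM generate_series(1, 100000) g;"
psql -c "SELECT pg_relation_size('blk_demo') / current_setting('block_size')::bigint AS pages;"
# With 32KB blocks the table occupies roughly a quarter of the pages it would
# at 8KB, so relpages, cost estimates, and buffer counts in test output all shift.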

That said, each failure really needs to be examined case by case.
If the change causes a server crash, wrong query results, or any kind of data corruption, that's a red flag and worth investigating seriously.
So while many of the failures you saw are likely harmless, it's still good practice to verify the ones that look suspicious.
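
If it helps, the differences from make check are collected in src/test/regress/regression.diffs (other suites run by check-world write their own copies in their test directories); a quick pass over that file usually shows which category each failure falls into:

cd src/test/regress
less regression.diffs    # unified diffs of expected vs. actual test output
less regression.out      # per-test pass/fail log from the run
# Diffs that only touch costs, row estimates, or page/buffer numbers are the
# "expected" kind; errors, crashes, or wrong results deserve a closer look.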


--
Zhang Mingli
HashData
