BLCKSZ fun facts

From: Peter Eisentraut
The smallest BLCKSZ that you can compile is 256.  But ...

The smallest BLCKSZ that actually works is 1024, because of this code in 
guc.c:
   case GUC_UNIT_BLOCKS:       val /= (BLCKSZ / 1024);

Maybe it's worth adding an #error here to prevent smaller sizes from 
being used?

The smallest BLCKSZ that passes the regression tests is 4096.  With 
smaller settings you get half a dozen ordering differences, which 
seems OK.

The shared memory configuration code in initdb doesn't know about 
BLCKSZ, so with smaller sizes you get a smaller shared buffer cache.  
Maybe that is worth fixing sometime.

Aside from that my pgbench testing clearly shows that block sizes larger 
than 2048 become progressively slower.  Go figure.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/


Re: BLCKSZ fun facts

From: Tom Lane
Peter Eisentraut <peter_e@gmx.net> writes:
> Aside from that my pgbench testing clearly shows that block sizes larger 
> than 2048 become progressively slower.  Go figure.

I believe that pgbench only stresses the "small writes" case, so
perhaps this result isn't too surprising.  You'd want to look at a mix
of small and bulk updates before drawing any final conclusions.
        regards, tom lane


Re: BLCKSZ fun facts

From: Kenneth Marshall
On Tue, Nov 28, 2006 at 12:08:59PM -0500, Tom Lane wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
> > Aside from that my pgbench testing clearly shows that block sizes larger 
> > than 2048 become progressively slower.  Go figure.
> 
> I believe that pgbench only stresses the "small writes" case, so
> perhaps this result isn't too surprising.  You'd want to look at a mix
> of small and bulk updates before drawing any final conclusions.
> 
>             regards, tom lane
> 
It has certainly been the case in every benchmark that I have ever seen,
from RAID controllers to filesystem layouts, that the sweet spot in the
trade-off between small and large block sizes was 8k. Other
considerations, such as the need to cover a very large filespace or to
support many small (<< 1024 byte) files, could tip the scales towards
larger or smaller block sizes.

Ken