I set my BLCKSZ to 30720 (30k -- so as not to be on the absolute bleeding
edge) and everything has been running perfectly. It's on my development
machine, but I hit it hard for a straight week just to make sure it was
going to be stable. I just used some scripts of my own making to run
enormous numbers of INSERT, UPDATE, and DELETE queries -- I think over the
week I ran several hundred thousand (this was a few months back). Still,
it worked fine for me.
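For what it's worth, those scripts boiled down to a loop along the lines
of the libpq sketch below. This is from memory rather than the original
script; the table name "stress_test", its columns, the loop count, and the
connection string are all invented, so adapt them to your own schema.

/*
 * From-memory sketch of the kind of hammering I mean (libpq).
 * Assumes a table like: CREATE TABLE stress_test (id int, payload text);
 */
#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "failed: %s: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    char        sql[256];
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* hammer the backend with INSERT/UPDATE/DELETE cycles */
    for (i = 0; i < 100000; i++)
    {
        snprintf(sql, sizeof(sql),
                 "INSERT INTO stress_test VALUES (%d, 'x')", i);
        run(conn, sql);
        snprintf(sql, sizeof(sql),
                 "UPDATE stress_test SET payload = 'y' WHERE id = %d", i);
        run(conn, sql);
        snprintf(sql, sizeof(sql),
                 "DELETE FROM stress_test WHERE id = %d", i);
        run(conn, sql);
    }

    PQfinish(conn);
    return 0;
}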
I see the following in config.h -- have you read this?
/*
* RELSEG_SIZE is the maximum number of blocks allowed in one disk file.
* Thus, the maximum size of a single file is RELSEG_SIZE * BLCKSZ;
* relations bigger than that are divided into multiple files.
*
* CAUTION: RELSEG_SIZE * BLCKSZ must be less than your OS' limit on file
* size. This is typically 2Gb or 4Gb in a 32-bit operating system. By
* default, we make the limit 1Gb to avoid any possible integer-overflow
* problems within the OS. A limit smaller than necessary only means we
* divide a large relation into more chunks than necessary, so it seems
* best to err in the direction of a small limit. (Besides, a power-of-2
* value saves a few cycles in md.c.)
*
* CAUTION: you had best do an initdb if you change either BLCKSZ or
* RELSEG_SIZE.
*/
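One thing worth double-checking after a BLCKSZ change is that RELSEG_SIZE
still keeps each segment under that limit. Assuming the 1Gb cap is
expressed as 0x40000000 bytes (my reading of the comment above, not
something I've verified in every release), the arithmetic for a
30720-byte block works out like this:

/* hypothetical sanity check -- not the shipped config.h definitions */
#define BLCKSZ      30720
#define RELSEG_SIZE (0x40000000 / BLCKSZ)       /* = 34952 blocks */
/* 34952 * 30720 = 1073725440 bytes, just under 1Gb (1073741824) */

And, as the CAUTION says, a fresh initdb is needed after changing either
value.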
-Mitch
----- Original Message -----
From: "Steve Wolfe" <steve@iboats.com>
To: "general-help postgresql" <pgsql-general@postgresql.org>
Sent: Wednesday, September 27, 2000 11:12 AM
Subject: Re: [GENERAL] Increased BLKSZ, but now pgsql seg-faults?
> > I've increased the BLKSZ to 32K to support large tuples and then I did
> > a full re-compile (gmake clean, configure, gmake all, gmake install)
> > and the database accepts my large tuples. But now when I come in
> > through psql, I get a segmentation fault inside of psql when I try to
> > display the table with the large tuples (i.e. \d largetable).
>
> The developers have said several times here that increasing BLKSZ has
> bad side-effects. : )
>
> steve