blocksize problem - Mailing list pgsql-general

From: Palle Girgensohn
Subject: blocksize problem
Date:
Msg-id: 387BF673.EDC3BBF6@partitur.se
List: pgsql-general

Hi!

I have a strange problem with FreeBSD 3.4-RELEASE, and postgresql 6.5.2. It sometimes produces signal 11 (see below).

We have a table, bigtext, which is used when some text is too large to store in one single tuple. The splitting is done in our application's C++ code, and seems to work fine. Recently, I realized that the upper limit in recent versions of postgres is actually far below 8000 bytes, more like just above 5000. I believe I reported this a month or so ago, but haven't heard much about it. I can't find it in the archives, so maybe I never got around to it...
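
To illustrate, the scheme looks roughly like this (the chunk size here is made up for the example; the real splitting logic lives in the C++ code, which I haven't shown):

-- Oversized text is split client-side into numbered pieces keyed by
-- (id, seqid); the ~4000-byte chunk size is an assumption for illustration.
INSERT INTO bigtext VALUES (2347, 0, '...first ~4000 bytes of the text...');
INSERT INTO bigtext VALUES (2347, 1, '...next chunk...');
-- The application reassembles the text by reading the pieces in order:
SELECT "text" FROM bigtext WHERE id = 2347 ORDER BY seqid;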

Anyway, here's an example. The file bigtext.sql contains some tuples and has been \copied from a database that has problems. The largest tuples are approx 5000 bytes. I start by loading it into a temporary db:

girgen=> create table bigtext (id int, seqid int , "text" text);
girgen=> \copy bigtext from bigtext.sql
girgen=> select id,seqid,length("text") as len from bigtext order by len desc;

  id|seqid| len
----+-----+----
2347|    0|4890
2357|    0|4817
2357|    2|4797
2357|    1|4789
2347|    1|3917
2357|    3|1428
(6 rows)

girgen=> vacuum;
VACUUM
girgen=> vacuum analyze;
ERROR:  Tuple is too big: size 9736

girgen=> vacuum verbose analyze bigtext;
NOTICE:  CreatePortal: portal <vacuum> already exists
NOTICE:  --Relation bigtext--
NOTICE:  Pages 5: Changed 0, Reapped 0, Empty 0, New 0; Tup 6: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 1472, MaxLen 4934; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/3 sec.
ERROR:  Tuple is too big: size 9736

As you can see, the size is twice as big as the tuple. How come? What is the real limit? How come it isn't the same limit as the filesystem blocksize, 8192? This seems to happen for pg_dump, vacuum and probably some other commands.
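
One thing I notice: the two largest values in the table sum to almost exactly the reported size, so perhaps two large values get packed into one internal tuple somewhere. That is just a guess; here is the arithmetic:

-- The two biggest values from the listing above, picked by hand:
select length(a."text") + length(b."text") as combined
from bigtext a, bigtext b
where a.id = 2347 and a.seqid = 0   -- len 4890
  and b.id = 2357 and b.seqid = 0;  -- len 4817
-- combined = 9707; add a few dozen bytes of tuple overhead and you are
-- very close to the 9736 reported in the error.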

Now, if I run this on certain machines, the frontend crashes with signal 11. This might happen only on machines using vinum, a software disk RAID system (see <http://www.lemis.com/vinum.html>). I have not verified this yet, though, and it seems unlikely, imho.


BTW, here's another example. Same table, but on another occasion:

foobar=> vacuum analyze;
ERROR:  Tuple is too big: size 8144, max size 8140
foobar=> select id,seqid,length("text") as len from bigtext order by len desc;
  id|seqid| len
----+-----+----
9592|    1|5149
9590|    1|2939
9590|    1|2909
9593|    1|2693
9595|    1|2600
9595|    1|1285
9595|    1|1285
9567|    2| 293
(8 rows)

<modify database...>

foobar=> select id,seqid,length("text") as len from bigtext order by len desc;
  id|seqid| len
----+-----+----
9592|    1|5148
9590|    1|2939
9590|    1|2909
9593|    1|2693
9595|    1|2600
9595|    1|1285
9595|    1|1285
9567|    2| 293
(8 rows)

foobar=> vacuum analyze;
VACUUM


So... What shall I set as the upper limit for this table? After running this last example, I set it to 5000, but according to the new results at the beginning of this text, should I go even lower? Why can I insert stuff into the db if it can't handle it? I read everywhere that the limit is the blocksize, 8192 by default... Seems this is not quite true in all cases. Is postgres doing some copying that limits the size of individual fields?
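
If my pairing guess above is right, something like this would flag the offending combinations before vacuum trips over them (the 8140-byte limit is taken from the second error message; the pairing idea itself is pure speculation):

-- Flag any pair of values whose combined length exceeds 8140 bytes:
select a.id, a.seqid, b.id, b.seqid,
       length(a."text") + length(b."text") as combined
from bigtext a, bigtext b
where a.oid < b.oid
  and length(a."text") + length(b."text") > 8140;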

Regards,
Palle Girgensohn
