Robert Bruccoleri (bruc@stone.congen.com) reports a bug with a severity of 3.
(The lower the number, the more severe the bug.)
Short Description
Large data field causes a backend crash.
Long Description
In testing TOAST in PostgreSQL 7.1beta4, I was curious to see
how big a field could actually be handled. I created a simple table
with one text field, seq, and tried using the COPY command to
fill it with a value of length 194325306 characters. It crashed
the system with the following messages:
test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
!(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Server process (pid 2109589) exited with status 134 at Mon Feb 5 15:20:42 2001
Terminating any active server processes...
The Data Base System is in recovery mode
----------------------------------------------------------------------
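For reference, the session above boils down to the following minimal reproduction. This is a sketch assembled from the description: the table name (test), column (seq), and file path are taken from the report itself; the exact DDL is assumed, and producing a data file whose single line is 194325306 characters long is left to an external script.

test=# create table test (seq text);
test=# copy test from '/stf/bruc/RnD/genscan/foo.test';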
I have tried a field of length 52000000 characters, and that worked
fine (very impressive!).
The system should reject an oversized record gracefully rather than crashing.
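For context, the trap corresponds to PostgreSQL's per-allocation size guard. Below is a paraphrased sketch reconstructed from the condition quoted in the trap message; the macro names follow src/include/utils/memutils.h, and the 7.1-era limit is assumed from the quoted constant, so this is not the exact source.

/* Any single palloc request must satisfy this check; mcxt.c asserts
 * AllocSizeIsValid(size) and traps otherwise, which is the failure
 * shown in the session above. */
#define MaxAllocSize            ((Size) 0xfffffff)   /* 268435455 bytes, 256 MB - 1 */
#define AllocSizeIsValid(size)  (0 < (size) && (size) <= MaxAllocSize)

The failing request of 268435456 bytes (0x10000000, exactly 2^28) is one byte over this limit, which is consistent with an input buffer that grows by doubling while the 194325306-character field is read: the next power of two above 194325306 is 2^28. By the same reasoning, a 52000000-character field would top out at a 2^26-byte (67108864) buffer, comfortably under the limit, which would explain why that case succeeds.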
Sample Code
No file was uploaded with this report.