Hi, all
I have found out what the problem is, although not (yet) the solution.
Executive summary:
------------------
The scan.l code is not flexing as intended. This means that, for most
production installations, the max token size is around 64kB.
Technical summary:
------------------
The problem is that scan.l is compiling to scan.c with YY_USES_REJECT being
defined. When YY_USES_REJECT is defined, the token buffer is NOT
expandable, and the parser will fail if expansion is attempted. However,
YY_USES_REJECT should not be defined, and I'm trying to work out why it is.
I have posted to the flex mailing list, and expect a reply within the next
day or so.
The bottom line:
------------------
The token limit seems to be effectively the value of YY_BUF_SIZE in scan.l,
until I submit a patch which should make it unlimited.
MikeA
>> -----Original Message-----
>> From: Natalya S. Makushina [mailto:mak@rtsoft.msk.ru]
>> Sent: Tuesday, August 17, 1999 3:15 PM
>> To: 'pgsql-hackers@postgreSQL.org'
>> Subject: [HACKERS] Problem with query length
>>
>>
>> -----------------------------------------------------------------
>> I posted this mail to psql-general, but I haven't gotten
>> any answer yet.
>> -----------------------------------------------------------------
>>
>> When I tried to insert text (length about 4000 chars) into a
>> text field, the backend crashed with status 139. This error
>> happens when the length of the SQL query is more than 4095
>> chars. I am using PostgreSQL 6.4.2 on Linux.
>>
>> My questions are:
>> 1. Is the problem with the text field or with the length of the SQL query?
>> 2. Does PostgreSQL have any limits on SQL query length?
>> I checked the archives but only found references to the 8K
>> limit. Any help would be greatly appreciated.
>> Thanks for your help
>> Natalya Makushina
>> mak@rtsoft.msk.ru