I have finished another round of work for indefinitely-long queries.
We can now do things like SELECT textlen(' ... 200K string here ... ')
--- and get the right answer :-). Still can't actually *store* that
200K string in a table though.
Here are the other loose ends I'm aware of:
pg_dump has a whole bunch of fixed-size buffers, which means it will
fail to dump extremely complex table definitions &etc. This is
definitely a "must fix" item. Michael Ansley is working on it.
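For illustration only (this is not Michael's patch, just a sketch of the
general shape of the fix, assuming libpq's PQExpBuffer or any similar
growable string buffer is available, and with invented names), the idea
is to build queries in a buffer that resizes itself rather than in a
fixed char array:

	#include "libpq-fe.h"
	#include "pqexpbuffer.h"

	static void
	dumpOneTable(PGconn *conn, const char *tblname)
	{
		/* old way, overflows on sufficiently complex definitions:
		 *    char query[MAX_QUERY_SIZE];
		 *    sprintf(query, "SELECT ... %s ...", tblname);
		 */
		PQExpBuffer query = createPQExpBuffer();
		PGresult   *res;

		appendPQExpBuffer(query,
						  "SELECT * FROM pg_class WHERE relname = '%s'",
						  tblname);
		/* query->data may now be arbitrarily long */
		res = PQexec(conn, query->data);
		PQclear(res);
		destroyPQExpBuffer(query);
	}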
ecpg's lexer still causes YY_USES_REJECT to be defined, even though the
main lexer does not. Per previous discussions, this means it's unable
to deal with individual lexical tokens exceeding 16K or so. I am not
sure this is worth worrying about. For example, if you break up a
string constant into multiple lines,
	'here is a'
	' really really'
	' really really long string'
then the 16K limit only applies to each line individually (I think).
And data values that you aren't writing literally in the ECPG source
code aren't constrained either. Still, if it's easy to alter the ECPG
lexical definition to avoid using REJECT, it might be worth doing.
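To make the host-variable point concrete, here is a hypothetical ECPG
sketch (database, table, and column names are invented): a value
supplied through a host variable never appears as a single lexical
token in the .pgc source, so the 16K token limit doesn't come into
play at all.

	#include <string.h>

	int
	main(void)
	{
		EXEC SQL BEGIN DECLARE SECTION;
			char	longval[100000];
		EXEC SQL END DECLARE SECTION;

		EXEC SQL CONNECT TO testdb;

		/* build a ~100K value at runtime */
		memset(longval, 'x', sizeof(longval) - 1);
		longval[sizeof(longval) - 1] = '\0';

		/* the value travels as a host variable, not a source-code literal */
		EXEC SQL INSERT INTO notes (body) VALUES (:longval);

		EXEC SQL COMMIT;
		EXEC SQL DISCONNECT;
		return 0;
	}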
The ODBC interface contains a lot of apparently-no-longer-valid
assumptions about maximum query length; these need to be looked at
by someone who's familiar with ODBC, which I am not. Note that some
of its limits are associated with maximum tuple length, which means
they're not broken quite yet --- but it would be a good idea to
flag the changes that will be needed when we have long tuples.
These symbols in ODBC need to be looked at and possibly eliminated
(a rough sketch of the general sort of fix follows the list):
SQL_PACKET_SIZE MAX_MESSAGE_LEN MAX_QUERY_SIZE ERROR_MESSAGE_LENGTH
MAX_STATEMENT_LEN TEXT_FIELD_SIZE MAX_VARCHAR_SIZE DRV_VARCHAR_SIZE
DRV_LONGVARCHAR_SIZE MAX_CONNECT_STRING MAX_FIELDS
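As a rough, hypothetical illustration of the kind of change those
symbols imply (the function below is invented, not actual driver code):
wherever the driver copies a statement into a buffer sized by
MAX_QUERY_SIZE or MAX_STATEMENT_LEN, it should allocate from the actual
statement length instead.

	#include <stdlib.h>
	#include <string.h>

	static char *
	copy_statement(const char *stmt)
	{
		/* old way:
		 *    static char buf[MAX_QUERY_SIZE];
		 *    strncpy(buf, stmt, MAX_QUERY_SIZE);
		 */
		size_t	len = strlen(stmt);
		char   *buf = (char *) malloc(len + 1);

		if (buf == NULL)
			return NULL;		/* caller must cope with out-of-memory */
		memcpy(buf, stmt, len + 1);
		return buf;
	}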
The Python interface needs to eliminate its fixed-size query buffers
(look for MAX_BUFFER_SIZE). I'm not touching this since I don't
have Python installed to test with.
And that's about it. Hard limits on query length are history!
regards, tom lane