Hi Tom,
> Write your query some other way, for example using "x IN (list)" or other shortcut syntaxes.
As a user, how can I know that 10000 list entries in "x IN (A, B, ...)" will not also run into some arbitrary hardcoded limit that is an implementation detail of the parser?
How is the user supposed to have confidence that the parser handles many commas better than many parens?
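For concreteness, here is a sketch (in Python; the n = 10000 figure is illustrative, not a documented limit) of the two forms in question. Both are reasonable things for generated code to emit, and nothing tells me which internal limit each one hits first:

```python
n = 10000  # illustrative; the actual limit is undocumented

# Deeply parenthesized expression: trips an internal parser-stack limit.
nested = "SELECT " + "(" * n + "1" + ")" * n

# The suggested rewrite: a flat IN-list, which stresses commas instead
# of parens. Does it scale to the same n? The docs don't say.
in_list = "SELECT 1 WHERE 1 IN (" + ", ".join(["1"] * n) + ")"
```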
None of that seems to be documented anywhere (`max_stack_depth` is, but the code fails before it even gets that far, and `YYMAXDEPTH` isn't documented at all).
In the absence of such docs, one ends up building systems that later fail at arbitrary limits, e.g. when a user clicks some larger number of checkboxes and those get assembled into a batch query.
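Right now the only way to find the limit is to probe for it empirically. A minimal sketch, assuming psycopg2 and a throwaway connection (the DSN and search bound are made up); it binary-searches for the deepest parenthesis nesting the server will still parse:

```python
import psycopg2

def parses_at_depth(conn, depth):
    """Return True if a query nested `depth` parens deep still parses."""
    query = "SELECT " + "(" * depth + "1" + ")" * depth
    try:
        with conn.cursor() as cur:
            cur.execute(query)
            cur.fetchall()
        return True
    except psycopg2.Error:
        conn.rollback()  # the failed statement aborts the transaction
        return False

def max_nesting_depth(conn, hi=1 << 20):
    """Binary-search the largest accepted depth in [1, hi]."""
    lo = 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if parses_at_depth(conn, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

conn = psycopg2.connect("dbname=test")  # hypothetical DSN
print(max_nesting_depth(conn))
```

And whatever number this prints is exactly the kind of value that can silently change with the next Bison or Postgres release.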
There might be a different query form that works for that number, but a Postgres user cannot know which construct will work unless they read GNU Bison's source code.
If I build some workaround today, e.g. splitting the query into multiple queries of at most N entries each, how do I know it will still work in the future, e.g. if Postgres upgrades its Bison version or switches to a different parser?
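The obvious workaround is client-side batching, but the batch size N is pure guesswork. A sketch, again with psycopg2 (the batch size of 1000 and the table/column names are arbitrary assumptions, which is exactly the problem):

```python
def fetch_in_batches(conn, ids, batch_size=1000):  # 1000 = hope-based guess
    """Fetch rows matching `ids` in chunks, to stay under unknown limits."""
    rows = []
    with conn.cursor() as cur:
        for start in range(0, len(ids), batch_size):
            chunk = ids[start:start + batch_size]
            # One placeholder per list entry; the list stays flat.
            placeholders = ", ".join(["%s"] * len(chunk))
            cur.execute(f"SELECT * FROM t WHERE x IN ({placeholders})", chunk)
            rows.extend(cur.fetchall())
    return rows
```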
That such arbitrary, small, non-scaling limits are hardcoded is a problem in itself:
One cannot configure Postgres to allow queries that the hardware is perfectly capable of handling.
It would be a bit like GCC refusing to compile a file because it contains more than 10000 words.
> Limits are a fact of life.
I agree limits can be a fact of life, but life is better if you know what those limits are, or if you can set them, versus having to guess and hope (it's especially difficult to build robust systems with "hope-based programming").