Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).
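For context, the intended change is from a probe-then-write pattern to a single
statement, roughly like the sketch below (the table and column names are just
illustrative):

-- current approach: SELECT first, then UPDATE or INSERT
SELECT 1 FROM sample_tbl WHERE id = 42;
UPDATE sample_tbl SET val = 7 WHERE id = 42;      -- if the SELECT found a row
INSERT INTO sample_tbl (id, val) VALUES (42, 7);  -- otherwise

-- intended approach: a single "upsert", which requires a unique index on id
INSERT INTO sample_tbl (id, val) VALUES (42, 7)
ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;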
I'm adding unique indices to allow "upsert" (and to support pg_repack, among
other things), but I'm running into a problem when a table has >830 columns.
(We have some tables which are at the 1600-column limit, and have previously
worked around that limit using arrays or multiple tables.)
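For what it's worth, the array workaround looks something like this (names
invented for illustration): related values are packed into a single array
attribute so the column count stays well below the limit.

-- one array attribute instead of ~1000 individual columns
CREATE TABLE metrics_packed (
    site_id     int          NOT NULL,
    recorded_at timestamptz  NOT NULL,
    readings    float8[],
    PRIMARY KEY (site_id, recorded_at)
);

-- individual values are then addressed by subscript rather than by column name
SELECT readings[17] FROM metrics_packed WHERE site_id = 1;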
I tried to work around the problem on the client side by using pygresql with
inline=True (instead of the default PREPAREd statements), but both modes hit
the same error.
I created a test script which demonstrates the problem (attached).
It seems to me that there's currently no way to "upsert" such a wide table; is
that right?
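In case it helps, here is a rough SQL-only sketch of the failure, separate from
the attached script (table and column names are invented): it creates a table
with 900 integer columns plus a key column, then attempts the upsert.

DO $$
DECLARE
    ncols int := 900;   -- anything much above ~830 columns reproduces it
    cols  text;
    vals  text;
    sets  text;
BEGIN
    -- build the column list, a matching VALUES list, and the SET list
    SELECT string_agg(format('c%s int', i), ', ' ORDER BY i) INTO cols
      FROM generate_series(1, ncols) i;
    EXECUTE format('CREATE TABLE wide_tbl (id int PRIMARY KEY, %s)', cols);

    SELECT string_agg(format('c%s', i), ', ' ORDER BY i) INTO cols
      FROM generate_series(1, ncols) i;
    SELECT string_agg(i::text, ', ' ORDER BY i) INTO vals
      FROM generate_series(1, ncols) i;
    SELECT string_agg(format('c%s = EXCLUDED.c%s', i, i), ', ' ORDER BY i) INTO sets
      FROM generate_series(1, ncols) i;

    -- this is the statement that fails with
    -- "target lists can have at most 1664 entries"
    EXECUTE format(
        'INSERT INTO wide_tbl (id, %s) VALUES (1, %s)
         ON CONFLICT (id) DO UPDATE SET %s',
        cols, vals, sets);
END
$$;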
In the source I see the following, which I read as meaning that the INSERT
column list and the ON CONFLICT DO UPDATE SET assignments are counted against
the same target-list limit, so an upsert tops out at roughly half of the
1664-attribute maximum:
./src/include/access/htup_details.h:#define MaxTupleAttributeNumber 1664 /* 8 * 208 */
./src/backend/parser/parse_node.c- /*
./src/backend/parser/parse_node.c- * Check that we did not produce too many resnos; at the very least we
./src/backend/parser/parse_node.c- * cannot allow more than 2^16, since that would exceed the range of a
./src/backend/parser/parse_node.c: * AttrNumber. It seems safest to use MaxTupleAttributeNumber.
./src/backend/parser/parse_node.c- */
./src/backend/parser/parse_node.c: if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
./src/backend/parser/parse_node.c- ereport(ERROR,
./src/backend/parser/parse_node.c- (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
./src/backend/parser/parse_node.c: errmsg("target lists can have at most %d entries",
./src/backend/parser/parse_node.c: MaxTupleAttributeNumber)));
Thanks in advance for any suggestions.
Justin