Thread: BUG #17561: Server crashes on executing row() with very long argument list
BUG #17561: Server crashes on executing row() with very long argument list
From
PG Bug reporting form
Date:
The following bug has been logged on the website:

Bug reference:      17561
Logged by:          Egor Chindyaskin
Email address:      kyzevan23@mail.ru
PostgreSQL version: 14.4
Operating system:   Ubuntu 22.04
Description:

When executing the following query:

(echo "SELECT row("; for ((i=1;i<100001;i++)); do echo "'$i',$i,"; done;
echo "'0',0);"; ) | psql

I got server crash with the following backtrace

Core was generated by `postgres: egorchin egorchin [local] SELECT '.
Program terminated with signal SIGABRT, Aborted.
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=139924478532480) at ./nptl/pthread_kill.c:44
44      ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=139924478532480) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=139924478532480) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=139924478532480, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007f42b4dad476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007f42b4d937f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x0000557e1694f850 in ExceptionalCondition (conditionName=conditionName@entry=0x557e169b2e62 "attributeNumber >= 1", errorType=errorType@entry=0x557e169b0e7f "BadArgument", fileName=fileName@entry=0x557e169b2d7c "tupdesc.c", lineNumber=lineNumber@entry=598) at assert.c:69
#6  0x0000557e1642790a in TupleDescInitEntry (desc=desc@entry=0x7f42a4c8b050, attributeNumber=attributeNumber@entry=-32768, attributeName=attributeName@entry=0x0, oidtypeid=23, typmod=typmod@entry=-1, attdim=attdim@entry=0) at tupdesc.c:598
#7  0x0000557e1664c509 in ExecTypeFromExprList (exprList=0x7f42a7830cf0) at execTuples.c:2009
#8  0x0000557e1662e8ad in ExecInitExprRec (node=node@entry=0x7f42a7830c40, state=state@entry=0x557e17ab7dc8, resv=resv@entry=0x557e17ab7dd0, resnull=resnull@entry=0x557e17ab7dcd) at execExpr.c:1915
#9  0x0000557e1662cd36 in ExecInitExprInternal (node=node@entry=0x7f42a7830c40, parent=parent@entry=0x0, ext_params=ext_params@entry=0x0, caseval=caseval@entry=0x0, casenull=casenull@entry=0x0) at execExpr.c:114
#10 0x0000557e1662cda0 in ExecInitExpr (node=node@entry=0x7f42a7830c40, parent=parent@entry=0x0) at execExpr.c:162
#11 0x0000557e1672b2aa in evaluate_expr (expr=expr@entry=0x7f42a7830c40, result_type=2249, result_typmod=result_typmod@entry=-1, result_collation=result_collation@entry=0) at clauses.c:4890
#12 0x0000557e1672c45f in eval_const_expressions_mutator (node=0x7f42a7830c40, context=<optimized out>) at clauses.c:3152
#13 0x0000557e166b9717 in expression_tree_mutator (node=0x7f42a7830588, mutator=mutator@entry=0x557e1672b4f8 <eval_const_expressions_mutator>, context=context@entry=0x7ffe21d656a0) at nodeFuncs.c:3343
#14 0x0000557e1672dab9 in simplify_function (funcid=3155, result_type=114, result_typmod=-1, result_collid=result_collid@entry=0, input_collid=input_collid@entry=0, args_p=args_p@entry=0x7ffe21d654a0, funcvariadic=false, process_args=true, allow_non_const=true, context=0x7ffe21d656a0) at clauses.c:3976
#15 0x0000557e1672b77a in eval_const_expressions_mutator (node=0x7f42a7830948, context=0x7ffe21d656a0) at clauses.c:2481
#16 0x0000557e166b94cd in expression_tree_mutator (node=node@entry=0x7f42a78309a0, mutator=mutator@entry=0x557e1672b4f8 <eval_const_expressions_mutator>, context=context@entry=0x7ffe21d656a0) at nodeFuncs.c:3258
#17 0x0000557e1672cbcd in eval_const_expressions_mutator (node=0x7f42a78309a0, context=0x7ffe21d656a0) at clauses.c:3604
#18 0x0000557e166b9717 in expression_tree_mutator (node=node@entry=0x7f42a78309f8, mutator=mutator@entry=0x557e1672b4f8 <eval_const_expressions_mutator>, context=context@entry=0x7ffe21d656a0) at nodeFuncs.c:3343
#19 0x0000557e1672cbcd in eval_const_expressions_mutator (node=0x7f42a78309f8, context=context@entry=0x7ffe21d656a0) at clauses.c:3604
#20 0x0000557e1672cdaa in eval_const_expressions (root=root@entry=0x557e179ce3f8, node=<optimized out>) at clauses.c:2162
#21 0x0000557e1670b211 in preprocess_expression (root=root@entry=0x557e179ce3f8, expr=<optimized out>, kind=kind@entry=1) at planner.c:1124
#22 0x0000557e167140a2 in subquery_planner (glob=glob@entry=0x557e179cec50, parse=parse@entry=0x7f42a9fb2838, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0) at planner.c:792
#23 0x0000557e16714da6 in standard_planner (parse=0x7f42a9fb2838, query_string=<optimized out>, cursorOptions=2048, boundParams=<optimized out>) at planner.c:406
#24 0x0000557e1671535b in planner (parse=parse@entry=0x7f42a9fb2838, query_string=query_string@entry=0x7f42aace4050 "SELECT row_to_json(row(\n'1',1,\n'2',2,\n'3',3,\n'4',4,\n'5',5,\n'6',6,\n'7',7,\n'8',8,\n'9',9,\n'10',10,\n'11',11,\n'12',12,\n'13',13,\n'14',14,\n'15',15,\n'16',16,\n'17',17,\n'18',18,\n'19',19,\n'20',20,\n'21',21,\n'22',"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at planner.c:277
#25 0x0000557e16804c20 in pg_plan_query (querytree=querytree@entry=0x7f42a9fb2838, query_string=query_string@entry=0x7f42aace4050 "SELECT row_to_json(row(\n'1',1,\n'2',2,\n'3',3,\n'4',4,\n'5',5,\n'6',6,\n'7',7,\n'8',8,\n'9',9,\n'10',10,\n'11',11,\n'12',12,\n'13',13,\n'14',14,\n'15',15,\n'16',16,\n'17',17,\n'18',18,\n'19',19,\n'20',20,\n'21',21,\n'22',"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at postgres.c:883
#26 0x0000557e16804cdd in pg_plan_queries (querytrees=0x7f42a7830aa8, query_string=query_string@entry=0x7f42aace4050 "SELECT row_to_json(row(\n'1',1,\n'2',2,\n'3',3,\n'4',4,\n'5',5,\n'6',6,\n'7',7,\n'8',8,\n'9',9,\n'10',10,\n'11',11,\n'12',12,\n'13',13,\n'14',14,\n'15',15,\n'16',16,\n'17',17,\n'18',18,\n'19',19,\n'20',20,\n'21',21,\n'22',"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at postgres.c:975
#27 0x0000557e168051c1 in exec_simple_query (query_string=query_string@entry=0x7f42aace4050 "SELECT row_to_json(row(\n'1',1,\n'2',2,\n'3',3,\n'4',4,\n'5',5,\n'6',6,\n'7',7,\n'8',8,\n'9',9,\n'10',10,\n'11',11,\n'12',12,\n'13',13,\n'14',14,\n'15',15,\n'16',16,\n'17',17,\n'18',18,\n'19',19,\n'20',20,\n'21',21,\n'22',"...) at postgres.c:1169
#28 0x0000557e1680711f in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4544
#29 0x0000557e1675a808 in BackendRun (port=port@entry=0x557e179f9f20) at postmaster.c:4504
#30 0x0000557e1675d887 in BackendStartup (port=port@entry=0x557e179f9f20) at postmaster.c:4232
#31 0x0000557e1675dac0 in ServerLoop () at postmaster.c:1806
#32 0x0000557e1675f08f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x557e179c8370) at postmaster.c:1478
#33 0x0000557e1669e9b5 in main (argc=3, argv=0x557e179c8370) at main.c:202
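For context on the assertion failure above: PostgreSQL's AttrNumber type is a signed 16-bit integer, so once a ROW() expression has more columns than int16 can represent, the attribute counter wraps to a negative value (32768 becomes -32768, the value visible in frame #6) before TupleDescInitEntry() checks "attributeNumber >= 1". A minimal standalone C sketch of the wraparound (illustrative only, not PostgreSQL source; the typedef merely mirrors the backend's):

#include <stdint.h>
#include <stdio.h>

/* AttrNumber in PostgreSQL is an int16; this standalone demo shows how an
 * attribute counter above 32767 wraps negative, tripping the
 * "attributeNumber >= 1" assertion seen in the backtrace. */
typedef int16_t AttrNumber;

int
main(void)
{
	int			ncolumns = 40000;	/* more columns than int16 can hold */

	for (int resno = 1; resno <= ncolumns; resno++)
	{
		AttrNumber	attno = (AttrNumber) resno;	/* narrowing conversion wraps */

		if (attno < 1)
		{
			/* prints: column 32768 is numbered -32768 as an AttrNumber */
			printf("column %d is numbered %d as an AttrNumber\n", resno, attno);
			return 0;
		}
	}
	printf("no wraparound\n");
	return 0;
}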
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Francisco Olarte
Date:
Egor:

On Fri, 29 Jul 2022 at 11:42, PG Bug reporting form <noreply@postgresql.org> wrote:

... Your query generating script:

> When executing the following query:
> (echo "SELECT row("; for ((i=1;i<100001;i++)); do echo "'$i',$i,"; done;
> echo "'0',0);"; ) | psql
> I got server crash with the following backtrace

... does not seem to match the query in your BT:

> query_string=query_string@entry=0x7f42aace4050 "SELECT
> row_to_json(row(\n'1',1,\n'2',2,\n'3',3,\n'4',4,\n'5',5,\n'6',6,\n'7',7,\n'8',8,\n'9',9,\n'10',10,\n'11',11,\n'12',12,\n'13',13,\n'14',14,\n'15',15,\n'16',16,\n'17',17,\n'18',18,\n'19',19,\n'20',20,\n'21',21,\n'22',"...,

Although it seems the extra row_to_json should not matter.

FOS.
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Alvaro Herrera
Date:
On 2022-Jul-29, PG Bug reporting form wrote:

> When executing the following query:
> (echo "SELECT row("; for ((i=1;i<100001;i++)); do echo "'$i',$i,"; done;
> echo "'0',0);"; ) | psql
> I got server crash with the following backtrace
...
> #5 0x0000557e1694f850 in ExceptionalCondition
> (conditionName=conditionName@entry=0x557e169b2e62 "attributeNumber >= 1",
> errorType=errorType@entry=0x557e169b0e7f "BadArgument",
> fileName=fileName@entry=0x557e169b2d7c "tupdesc.c",
> lineNumber=lineNumber@entry=598) at assert.c:69
> #6 0x0000557e1642790a in TupleDescInitEntry
> (desc=desc@entry=0x7f42a4c8b050,
> attributeNumber=attributeNumber@entry=-32768,
> attributeName=attributeName@entry=0x0, oidtypeid=23, typmod=typmod@entry=-1,
> attdim=attdim@entry=0)
> at tupdesc.c:598
> #7 0x0000557e1664c509 in ExecTypeFromExprList (exprList=0x7f42a7830cf0) at
> execTuples.c:2009

Hah, of course.  I suppose we'd need something like this ... haven't
looked for other problem spots.

--
Álvaro Herrera         48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
Attachment
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Fri, Jul 29, 2022 at 6:14 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hah, of course. I suppose we'd need something like this ... haven't
looked for other problem spots.
Yeah, that's what we need to do. I think the check condition should be
something like:
if (cur_resno - 1 > MaxAttrNumber)
Thanks
Richard
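Alvaro's attachment is not reproduced in this excerpt. As an illustration only (an assumption about its shape, not the actual patch), a guard of the kind Richard describes, applied to the expression list before ExecTypeFromExprList() starts handing attribute numbers to TupleDescInitEntry(), might look roughly like this, with the error wording borrowed from the existing heaptuple.c check quoted later in the thread:

	/* Sketch only (assumption), not Alvaro's attached patch */
	if (list_length(exprList) > MaxAttrNumber)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_COLUMNS),
				 errmsg("number of columns (%d) exceeds limit (%d)",
						list_length(exprList), MaxAttrNumber)));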
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Tom Lane
Date:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> On 2022-Jul-29, PG Bug reporting form wrote:
>> When executing the following query:
>> (echo "SELECT row("; for ((i=1;i<100001;i++)); do echo "'$i',$i,"; done;
>> echo "'0',0);"; ) | psql
>> I got server crash with the following backtrace

> Hah, of course.  I suppose we'd need something like this ... haven't
> looked for other problem spots.

I think the parser should've prevented this.  It's in charge of
rejecting overlength SELECT lists, for example.  Also, the limit
probably needs to be just MaxTupleAttributeNumber.

regards, tom lane
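For reference while reading the rest of the thread, the limits under discussion are distinct constants. The values below are assumed from recent PostgreSQL headers (access/htup_details.h and the AttrNumber typedef) and are worth double-checking against the branch in question:

	/* assumed values, for orientation only */
	#define MaxTupleAttributeNumber 1664	/* 8 * 208; most columns any tuple can carry */
	#define MaxHeapAttributeNumber  1600	/* 8 * 200; most columns a table can have */
	/* AttrNumber is int16, so attribute numbers above 32767 cannot be
	 * represented at all and wrap negative (the crash above). */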
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Tom Lane
Date:
I wrote:
> I think the parser should've prevented this.  It's in charge of
> rejecting overlength SELECT lists, for example.  Also, the limit
> probably needs to be just MaxTupleAttributeNumber.

Concretely, about as attached.

In the existing code, if you just supply 10000 or so columns you
reach this error in heaptuple.c:

	if (numberOfAttributes > MaxTupleAttributeNumber)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_COLUMNS),
				 errmsg("number of columns (%d) exceeds limit (%d)",
						numberOfAttributes, MaxTupleAttributeNumber)));

I borrowed the errcode from that, but the wording from parse_node.c:

	if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
		ereport(ERROR,
				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
				 errmsg("target lists can have at most %d entries",
						MaxTupleAttributeNumber)));

I'm a bit inclined to adjust parse_node.c to also use TOO_MANY_COLUMNS
(54011) instead of the generic PROGRAM_LIMIT_EXCEEDED (54000).

regards, tom lane

diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 9f567f4bf4..059cb7097c 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -2140,6 +2140,14 @@ transformRowExpr(ParseState *pstate, RowExpr *r, bool allowDefault)
 	newr->args = transformExpressionList(pstate, r->args,
 										 pstate->p_expr_kind, allowDefault);
 
+	/* Disallow more columns than will fit in a tuple */
+	if (list_length(newr->args) > MaxTupleAttributeNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_COLUMNS),
+				 errmsg("ROW expressions can have at most %d entries",
+						MaxTupleAttributeNumber),
+				 parser_errposition(pstate, r->location)));
+
 	/* Barring later casting, we consider the type RECORD */
 	newr->row_typeid = RECORDOID;
 	newr->row_format = COERCE_IMPLICIT_CAST;
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Fri, Jul 29, 2022 at 9:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> On 2022-Jul-29, PG Bug reporting form wrote:
>> When executing the following query:
>> (echo "SELECT row("; for ((i=1;i<100001;i++)); do echo "'$i',$i,"; done;
>> echo "'0',0);"; ) | psql
>> I got server crash with the following backtrace
> Hah, of course. I suppose we'd need something like this ... haven't
> looked for other problem spots.
I think the parser should've prevented this. It's in charge of
rejecting overlength SELECT lists, for example. Also, the limit
probably needs to be just MaxTupleAttributeNumber.
At the very least we cannot exceed MaxAttrNumber, so that we can
reference any column with an AttrNumber (int16). But if there are more
than MaxTupleAttributeNumber columns, we would end up erroring out when
constructing the tuple in heap_form_tuple().
Thanks
Richard
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Fri, Jul 29, 2022 at 10:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
> I think the parser should've prevented this. It's in charge of
> rejecting overlength SELECT lists, for example. Also, the limit
> probably needs to be just MaxTupleAttributeNumber.
Concretely, about as attached.
In the existing code, if you just supply 10000 or so columns you
reach this error in heaptuple.c:
if (numberOfAttributes > MaxTupleAttributeNumber)
ereport(ERROR,
(errcode(ERRCODE_TOO_MANY_COLUMNS),
errmsg("number of columns (%d) exceeds limit (%d)",
numberOfAttributes, MaxTupleAttributeNumber)));
I borrowed the errcode from that, but the wording from parse_node.c:
if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("target lists can have at most %d entries",
MaxTupleAttributeNumber)));
I'm a bit inclined to adjust parse_node.c to also use TOO_MANY_COLUMNS
(54011) instead of the generic PROGRAM_LIMIT_EXCEEDED (54000).
The patch looks good to me. Just wondering if there are any other types
of expressions that need to check for MaxTupleAttributeNumber in
parse_expr.c.
Thanks
Richard
Re: BUG #17561: Server crashes on executing row() with very long argument list
From
Tom Lane
Date:
Richard Guo <guofenglinux@gmail.com> writes:
> The patch looks good to me. Just wondering if there are any other types
> of expressions that need to check for MaxTupleAttributeNumber in
> parse_expr.c.

As far as I can think, sub-SELECTs and ROW constructs are the only
SQL that can produce composites of non-pre-determined types.
For constructs producing named composite types, the limit on the
number of columns in a table should take care of it.

What I'm more troubled by is whether there are any ways to produce
a wide tuple that don't come through either the parser or a table
definition.  Not sure what that could look like, other than C code
randomly constructing a RowExpr or some such.

regards, tom lane
Re[2]: BUG #17561: Server crashes on executing row() with very long argument list
From
Егор Чиндяскин
Date:
Thank you, Tom! The fix works for that case, but there is another one.
I got server crashed while executing the following script:
(echo "SELECT * FROM json_to_record('{\"0\":0 ";for((i=1;i<100001;i++));do echo ",\"$i\":$i";done; echo "}') as x("; echo "\"0\" int";for((i=1;i<100001;i++));do echo ",\"$i\" int";done;echo ")") | psql
Core was generated by `postgres: postgres postgres [local] SELECT '.
Program terminated with signal SIGABRT, Aborted.
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139778293300096) at ./nptl/pthread_kill.c:44
44      ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139778293300096) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=139778293300096) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=139778293300096, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007f20ab893476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007f20ab8797f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x0000561dac149915 in ExceptionalCondition (conditionName=conditionName@entry=0x561dac1ad8e2 "attributeNumber >= 1", errorType=errorType@entry=0x561dac1ab917 "BadArgument", fileName=fileName@entry=0x561dac1ad7fc "tupdesc.c", lineNumber=lineNumber@entry=598)
at assert.c:69
#6 0x0000561dabbfa25f in TupleDescInitEntry (desc=0x7f209d380050, attributeNumber=attributeNumber@entry=-32768, attributeName=attributeName@entry=0x7f20a0579bd8 "32767", oidtypeid=23, typmod=-1, attdim=attdim@entry=0) at tupdesc.c:598
#7 0x0000561dabd59688 in addRangeTableEntryForFunction (pstate=pstate@entry=0x7f209e02a450, funcnames=funcnames@entry=0x7f209e02aec0, funcexprs=funcexprs@entry=0x7f209e02ae68, coldeflists=coldeflists@entry=0x7f209e02af70, rangefunc=rangefunc@entry=0x561dacc7dc70,
lateral=<optimized out>, inFromCl=true) at parse_relation.c:1866
#8 0x0000561dabd3ac10 in transformRangeFunction (pstate=pstate@entry=0x7f209e02a450, r=r@entry=0x561dacc7dc70) at parse_clause.c:669
#9 0x0000561dabd3b488 in transformFromClauseItem (pstate=pstate@entry=0x7f209e02a450, n=0x561dacc7dc70, top_nsitem=top_nsitem@entry=0x7ffc69d1e738, namespace=namespace@entry=0x7ffc69d1e740) at parse_clause.c:1092
#10 0x0000561dabd3c32e in transformFromClause (pstate=pstate@entry=0x7f209e02a450, frmList=0x7f209e02a378) at parse_clause.c:132
#11 0x0000561dabd196a4 in transformSelectStmt (pstate=0x7f209e02a450, stmt=stmt@entry=0x561dacd4f948) at analyze.c:1313
#12 0x0000561dabd1a19e in transformStmt (pstate=pstate@entry=0x7f209e02a450, parseTree=parseTree@entry=0x561dacd4f948) at analyze.c:365
#13 0x0000561dabd1b455 in transformOptionalSelectInto (pstate=pstate@entry=0x7f209e02a450, parseTree=0x561dacd4f948) at analyze.c:305
#14 0x0000561dabd1b48a in transformTopLevelStmt (pstate=pstate@entry=0x7f209e02a450, parseTree=parseTree@entry=0x7f20a0df7fe0) at analyze.c:255
#15 0x0000561dabd1b4f2 in parse_analyze_fixedparams (parseTree=parseTree@entry=0x7f20a0df7fe0,
sourceText=sourceText@entry=0x7f20a15ca050 "SELECT * FROM json_to_record('{\"0\":0 \n,\"1\":1\n,\"2\":2\n,\"3\":3\n,\"4\":4\n,\"5\":5\n,\"6\":6\n,\"7\":7\n,\"8\":8\n,\"9\":9\n,\"10\":10\n,\"11\":11\n,\"12\":12\n,\"13\":13\n,\"14\":14\n,\"15\":15\n,\"16\":16\n,\"17\":17\n,\"18\":18\n,\"19\":19\n,\"20\":20\n"..., paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0, queryEnv=queryEnv@entry=0x0) at analyze.c:123
#16 0x0000561dabffea49 in pg_analyze_and_rewrite_fixedparams (parsetree=parsetree@entry=0x7f20a0df7fe0,
query_string=query_string@entry=0x7f20a15ca050 "SELECT * FROM json_to_record('{\"0\":0 \n,\"1\":1\n,\"2\":2\n,\"3\":3\n,\"4\":4\n,\"5\":5\n,\"6\":6\n,\"7\":7\n,\"8\":8\n,\"9\":9\n,\"10\":10\n,\"11\":11\n,\"12\":12\n,\"13\":13\n,\"14\":14\n,\"15\":15\n,\"16\":16\n,\"17\":17\n,\"18\":18\n,\"19\":19\n,\"20\":20\n"..., paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0, queryEnv=queryEnv@entry=0x0) at postgres.c:650
#17 0x0000561dabfff1a9 in exec_simple_query (
query_string=query_string@entry=0x7f20a15ca050 "SELECT * FROM json_to_record('{\"0\":0 \n,\"1\":1\n,\"2\":2\n,\"3\":3\n,\"4\":4\n,\"5\":5\n,\"6\":6\n,\"7\":7\n,\"8\":8\n,\"9\":9\n,\"10\":10\n,\"11\":11\n,\"12\":12\n,\"13\":13\n,\"14\":14\n,\"15\":15\n,\"16\":16\n,\"17\":17\n,\"18\":18\n,\"19\":19\n,\"20\":20\n"...) at postgres.c:1159
#18 0x0000561dac001138 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4505
#19 0x0000561dabf55610 in BackendRun (port=port@entry=0x561daccab0e0) at postmaster.c:4490
#20 0x0000561dabf5868f in BackendStartup (port=port@entry=0x561daccab0e0) at postmaster.c:4218
#21 0x0000561dabf588c8 in ServerLoop () at postmaster.c:1808
#22 0x0000561dabf59e8e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x561dacc774b0) at postmaster.c:1480
#23 0x0000561dabe7acc1 in main (argc=3, argv=0x561dacc774b0) at main.c:197
Best wishes, Egor Chindyaskin
Friday, 29 July 2022, 23:57 +07:00 from Tom Lane <tgl@sss.pgh.pa.us>:
Richard Guo <guofenglinux@gmail.com> writes:
> The patch looks good to me. Just wondering if there are any other types
> of expressions that need to check for MaxTupleAttributeNumber in
> parse_expr.c.
As far as I can think, sub-SELECTs and ROW constructs are the only
SQL that can produce composites of non-pre-determined types.
For constructs producing named composite types, the limit on the
number of columns in a table should take care of it.
What I'm more troubled by is whether there are any ways to produce
a wide tuple that don't come through either the parser or a table
definition. Not sure what that could look like, other than C code
randomly constructing a RowExpr or some such.
regards, tom lane
Re: Re[2]: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Mon, Aug 1, 2022 at 3:17 PM Егор Чиндяскин <kyzevan23@mail.ru> wrote:
Thank you, Tom! The fix works for that case, but there is another one.
I got server crashed while executing the following script:

(echo "SELECT * FROM json_to_record('{\"0\":0 ";for((i=1;i<100001;i++));do echo ",\"$i\":$i";done; echo "}') as x("; echo "\"0\" int";for((i=1;i<100001;i++));do echo ",\"$i\" int";done;echo ")") | psql
Thanks for the report! This is another place that we construct a tupdesc
with more than MaxAttrNumber attributes, via RangeFunctions this time.
Regarding the fix, how about we check the length of coldeflist against
MaxTupleAttributeNumber in transformRangeFunction()?
Thanks
Richard
Re: Re[2]: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Mon, Aug 1, 2022 at 6:03 PM Richard Guo <guofenglinux@gmail.com> wrote:
On Mon, Aug 1, 2022 at 3:17 PM Егор Чиндяскин <kyzevan23@mail.ru> wrote:
Thank you, Tom! The fix works for that case, but there is another one.
I got server crashed while executing the following script:
(echo "SELECT * FROM json_to_record('{\"0\":0 ";for((i=1;i<100001;i++));do echo ",\"$i\":$i";done; echo "}') as x("; echo "\"0\" int";for((i=1;i<100001;i++));do echo ",\"$i\" int";done;echo ")") | psql

Thanks for the report! This is another place that we construct a tupdesc
with more than MaxAttrNumber attributes, via RangeFunctions this time.
Regarding the fix, how about we check the length of coldeflist against
MaxTupleAttributeNumber in transformRangeFunction()?
I mean something like this:
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 5a18107e79..a74a07667d 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -629,6 +629,15 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
*/
if (r->coldeflist)
{
+ /* Disallow more columns than will fit in a tuple */
+ if (list_length(r->coldeflist) > MaxTupleAttributeNumber)
+ ereport(ERROR,
+ (errcode(ERRCODE_TOO_MANY_COLUMNS),
+ errmsg("Function returning RECORD can have at most %d entries",
+ MaxTupleAttributeNumber),
+ parser_errposition(pstate,
+ exprLocation((Node *) r->coldeflist))));
+
if (list_length(funcexprs) != 1)
{
if (r->is_rowsfrom)
Thanks
Richard
Re: Re[2]: BUG #17561: Server crashes on executing row() with very long argument list
From
Richard Guo
Date:
On Mon, Aug 1, 2022 at 6:33 PM Richard Guo <guofenglinux@gmail.com> wrote:
On Mon, Aug 1, 2022 at 6:03 PM Richard Guo <guofenglinux@gmail.com> wrote:
On Mon, Aug 1, 2022 at 3:17 PM Егор Чиндяскин <kyzevan23@mail.ru> wrote:
Thank you, Tom! The fix works for that case, but there is another one.
I got server crashed while executing the following script:
(echo "SELECT * FROM json_to_record('{\"0\":0 ";for((i=1;i<100001;i++));do echo ",\"$i\":$i";done; echo "}') as x("; echo "\"0\" int";for((i=1;i<100001;i++));do echo ",\"$i\" int";done;echo ")") | psql

Thanks for the report! This is another place that we construct a tupdesc
with more than MaxAttrNumber attributes, via RangeFunctions this time.
Regarding the fix, how about we check the length of coldeflist against
MaxTupleAttributeNumber in transformRangeFunction()?

I mean something like this:

diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 5a18107e79..a74a07667d 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -629,6 +629,15 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
*/
if (r->coldeflist)
{
+ /* Disallow more columns than will fit in a tuple */
+ if (list_length(r->coldeflist) > MaxTupleAttributeNumber)
+ ereport(ERROR,
+ (errcode(ERRCODE_TOO_MANY_COLUMNS),
+ errmsg("Function returning RECORD can have at most %d entries",
+ MaxTupleAttributeNumber),
+ parser_errposition(pstate,
+ exprLocation((Node *) r->coldeflist))));
+
if (list_length(funcexprs) != 1)
{
if (r->is_rowsfrom)
Just noticed that CheckAttributeNamesTypes will check on column count
against MaxHeapAttributeNumber. Maybe we should use this as the limit?
Thanks
Richard
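For readers following along: the check Richard refers to lives in CheckAttributeNamesTypes() in catalog/heap.c. Paraphrased from memory (the exact message and surrounding code may differ), it is roughly:

	/* paraphrased sketch of the existing check in CheckAttributeNamesTypes() */
	if (natts < 0 || natts > MaxHeapAttributeNumber)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_COLUMNS),
				 errmsg("tables can have at most %d columns",
						MaxHeapAttributeNumber)));

which is why an over-long coldeflist that slipped past a parse-time check would end up reporting a limit phrased in terms of tables.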
Re: Re[2]: BUG #17561: Server crashes on executing row() with very long argument list
From
Tom Lane
Date:
Richard Guo <guofenglinux@gmail.com> writes:
> Just noticed that CheckAttributeNamesTypes will check on column count
> against MaxHeapAttributeNumber. Maybe we should use this as the limit?

Yeah, otherwise you'll get a very confusing message about too many
columns in a *table*.  Also, there are two levels we have to check at,
per-function and then the merged tupdesc for the whole RTE.

I should have thought of function RTEs when asserting that there
were no other holes to plug :-(.  Will fix, thanks for the report!

regards, tom lane
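The committed fix is not shown in this excerpt. As an illustrative sketch only (the error wording here is hypothetical), adjusting Richard's proposed transformRangeFunction() check along the lines Tom describes would mean capping each column definition list at MaxHeapAttributeNumber, with an equivalent check applied again once the column lists of a ROWS FROM construct are merged into the RTE's tuple descriptor:

	/* Sketch (assumption), not the committed patch: per-coldeflist check */
	if (list_length(r->coldeflist) > MaxHeapAttributeNumber)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_COLUMNS),
				 errmsg("column definition lists can have at most %d entries",
						MaxHeapAttributeNumber),
				 parser_errposition(pstate,
									exprLocation((Node *) r->coldeflist))));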