Thread: Why no CONSTANT for row variables in plpgsql?
Is there a particular reason why row and record variables can't be
CONSTANT in plpgsql?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
> Is there a particular reason why row and record variables can't be
> CONSTANT in plpgsql?

Well, you can't usefully do anything with such a variable unless
it can be initialized, which isn't currently supported either:

regression=# do $$ declare x int8_tbl := row(1,2); begin end $$;
ERROR:  default value for row or record variable is not supported
LINE 1: do $$ declare x int8_tbl := row(1,2); begin end $$;
                      ^

I have a vague recollection of having looked at this a few years
ago and realizing it wasn't quite as trivial as one could wish.
Don't remember why, though.  In any case, I'm sure it's fixable
if someone wants to put in enough effort.

			regards, tom lane
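For readers following along: the restriction is specific to initialization in the DECLARE section; assigning to a composite-typed variable in the function body works normally. A minimal sketch of the contrast (behavior as of the PostgreSQL versions under discussion):

```sql
create type tt as (a int, b int);

-- Initialization in DECLARE is rejected at compile time:
-- do $$ declare c tt := '(1,2)'::tt; begin end $$;
-- ERROR:  default value for row or record variable is not supported

-- Assignment in the body is fine:
do $$
declare c tt;
begin
  c := '(1,2)'::tt;
  raise notice 'c = %', c;   -- c = (1,2)
end $$;
```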
On 10/18/15 10:16 PM, Tom Lane wrote:
> Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
>> Is there a particular reason why row and record variables can't be
>> CONSTANT in plpgsql?
>
> Well, you can't usefully do anything with such a variable unless
> it can be initialized, which isn't currently supported either:
>
> regression=# do $$ declare x int8_tbl := row(1,2); begin end $$;
> ERROR:  default value for row or record variable is not supported
> LINE 1: do $$ declare x int8_tbl := row(1,2); begin end $$;

Yeah, I assumed the two were related. We also don't allow NOT NULL.
This is all checked in the grammar production in pl_gram.y, but
there's nothing indicating why that is the case. :/

> I have a vague recollection of having looked at this a few years
> ago and realizing it wasn't quite as trivial as one could wish.
> Don't remember why, though. In any case, I'm sure it's fixable
> if someone wants to put in enough effort.

Yeah, was hoping someone knew offhand why this was a problem. Guess
I'll rip the checks out and see what explodes. :)
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
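The NOT NULL restriction has the same shape as the default-value one; a quick sketch of both sides (the composite case's exact error wording is my recollection of that era's pl_gram.y checks, not verified):

```sql
create type tt as (a int, b int);

-- NOT NULL (with a default) on a scalar variable is allowed:
do $$ declare i int not null := 0; begin end $$;

-- The same option on a composite-typed variable is rejected
-- by the checks in pl_gram.y:
do $$ declare c tt not null; begin end $$;
-- ERROR:  row or record variable cannot be NOT NULL
```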
On 10/19/15 5:16 PM, Jim Nasby wrote:
> Yeah, was hoping someone knew offhand why this was a problem. Guess I'll
> rip the checks out and see what explodes.

... and what blows up is exec_eval_datum():

> case PLPGSQL_DTYPE_ROW:
>     {
>         PLpgSQL_row *row = (PLpgSQL_row *) datum;
>         HeapTuple   tup;
>
>         if (!row->rowtupdesc)   /* should not happen */
>             elog(ERROR, "row variable has no tupdesc");
>         /* Make sure we have a valid type/typmod setting */
>         BlessTupleDesc(row->rowtupdesc);
>         oldcontext = MemoryContextSwitchTo(estate->eval_econtext->ecxt_per_tuple_memory);
>         tup = make_tuple_from_row(estate, row, row->rowtupdesc);
>         if (tup == NULL)        /* should not happen */
>             elog(ERROR, "row not compatible with its own tupdesc");

running this:

create type tt as (a int, b int);
do $$
declare c CONSTANT tt := '(1,2)'::tt;
begin
raise notice 'c = %', c;
end $$;

ERROR:  row not compatible with its own tupdesc
CONTEXT:  PL/pgSQL function inline_code_block line 5 at RAISE

row->rowtupdesc looked normal to me. What did seem odd is that while
processing the DECLARE section there were plpgsql datums for tt.a and
tt.b. I would have expected the assignment to produce a row datum of
type tt. When exec_stmt_block is finally called, the initialization
for loop initializes tt.a and tt.b, but does nothing with c.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Jim Nasby <Jim.Nasby@bluetreble.com> writes:
> What did seem odd is that while processing the DECLARE section there
> were plpgsql datums for tt.a and tt.b. I would have expected the
> assignment to produce a row datum of type tt.

Yeah, that's the thing that's weird about plpgsql's ROW datums.

What the row datum mechanism is actually good for IMO is representing
multiple targets for FOR and INTO constructs, ie
	SELECT ... INTO a,b,c;
If you look at the representation of INTO, it only allows one target
datum, and the reason that's OK is it uses a row datum for cases like
this.  The row member datums are just the scalar variables a,b,c,
which can also be accessed directly.

IMO, we ought to get rid of the use of that representation for
composite-type variables and use the RECORD code paths for them,
whether they are declared as type record or as named composite
types.  That would probably make it easier to handle this case,
and it'd definitely make it easier to deal with some other weak
spots like ALTER TYPE changes to composite types.  However, last
time I proposed that, it was shot down on the grounds that it might
hurt performance in some cases.  (Which is likely true, although
that argument ignores the fact that other cases might get better.)

			regards, tom lane
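A minimal sketch of the two code paths Tom describes, from the user's side: a list of scalar INTO targets (internally backed by a single ROW datum whose members are the scalar variables) versus a genuine RECORD target:

```sql
do $$
declare
  a int; b int; c int;  -- scalars; an INTO list over them is
                        -- represented by one internal ROW datum
  r record;             -- a true RECORD datum
begin
  -- multiple scalar targets: the ROW-datum representation
  select 10, 20, 30 into a, b, c;
  raise notice 'a=% b=% c=%', a, b, c;

  -- a single record target: the RECORD code path
  select 10 as x, 20 as y into r;
  raise notice 'r = %', r;
end $$;
```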
On 10/19/15 7:12 PM, Tom Lane wrote:
> Jim Nasby <Jim.Nasby@bluetreble.com> writes:
>> What did seem odd is that while processing the DECLARE section there
>> were plpgsql datums for tt.a and tt.b. I would have expected the
>> assignment to produce a row datum of type tt.
>
> Yeah, that's the thing that's weird about plpgsql's ROW datums.
>
> What the row datum mechanism is actually good for IMO is representing
> multiple targets for FOR and INTO constructs, ie
> 	SELECT ... INTO a,b,c;
> If you look at the representation of INTO, it only allows one target
> datum, and the reason that's OK is it uses a row datum for cases like
> this. The row member datums are just the scalar variables a,b,c,
> which can also be accessed directly.
>
> IMO, we ought to get rid of the use of that representation for
> composite-type variables and use the RECORD code paths for them,
> whether they are declared as type record or as named composite
> types. That would probably make it easier to handle this case,
> and it'd definitely make it easier to deal with some other weak
> spots like ALTER TYPE changes to composite types. However, last
> time I proposed that, it was shot down on the grounds that it might
> hurt performance in some cases. (Which is likely true, although
> that argument ignores the fact that other cases might get better.)

That also means there would only need to be changes to RECORD to allow
CONSTANT, default, etc.

Do you know offhand what the starting point for changing that would be?
build_datatype()?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Jim Nasby <Jim.Nasby@bluetreble.com> writes:
> On 10/19/15 7:12 PM, Tom Lane wrote:
>> IMO, we ought to get rid of the use of that representation for
>> composite-type variables and use the RECORD code paths for them,

> That also means there would only need to be changes to RECORD to allow
> CONSTANT, default, etc.

> Do you know offhand what the starting point for changing that would be?
> build_datatype()?

Well, definitely build_datatype would want to select PLPGSQL_TTYPE_REC
not PLPGSQL_TTYPE_ROW when seeing TYPTYPE_COMPOSITE.  I suspect that's
just a small tip of a large iceberg, though.

			regards, tom lane