Re: Inspection of row types in pl/pgsql and pl/sql - Mailing list pgsql-hackers

From: Florian G. Pflug
Subject: Re: Inspection of row types in pl/pgsql and pl/sql
Date:
Msg-id: 4AFF11BC.8050200@phlo.org
In response to: Re: Inspection of row types in pl/pgsql and pl/sql (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Inspection of row types in pl/pgsql and pl/sql
List: pgsql-hackers
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> Tom Lane wrote:
>>> Perhaps it would help if we looked at some specific use-cases
>>> that people need, rather than debating abstractly.  What do you
>>> need your generic trigger to *do*?
> 
>> I need to build a global index table of all values of a certain
>> type together with a pointer to the row and table that contains
>> them. Since all involved tables have an "id" column, storing that
>> pointer is the easy part. The hard part is collecting all those
>> values in an insert/update/delete trigger so that I can update the
>> global index accordingly.
> 
> So in this case it seems like you don't actually need any
> polymorphism at all; the target columns are always of a known
> datatype.  You just don't want to commit to their names.  I wonder
> though why you're willing to pin down the name of the "id" column but
> not the name of the data column.

There might be more than one column (or none at all) of the type to be
indexed. I need to process all such columns (each of them produces a
separate record in the index table). Plus, the schema is relatively
volatile - new fields are added about once a month or so.
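
To make this a bit more concrete, here is roughly what the setup looks
like (mytype and all table and column names below are made up for
illustration - the real schema differs):

    -- the global index table: one row per indexed value, with a pointer
    -- back to the table and row it came from
    CREATE TABLE global_index (
        value      mytype NOT NULL,
        table_name name   NOT NULL,
        row_id     bigint NOT NULL
    );

    -- one of the indexed tables: it may contain zero, one or several
    -- columns of mytype, and new ones get added every month or so
    CREATE TABLE some_table (
        id     bigint PRIMARY KEY,
        first  mytype,
        second mytype,
        other  text
    );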

>> Currently, a set of plpgsql functions generate a separate trigger 
>> function for each table. Yuck!
> 
> Would you be happy with an approach similar to what Andrew mentioned,
>  ie, you generate CREATE TRIGGER commands that list the names of the 
> target column(s) as TG_ARGV arguments?  The alternative to that seems
>  to be that you iterate at runtime through all the table columns to 
> see which ones are of the desired type.  Which might be less trouble 
> to set up, but the performance penalty of figuring out 
> basically-unchanging information again on every single tuple update 
> seems awful high.

Hmm... I had hoped to get away without having to modify the trigger
definitions whenever the schema changes. But having a function that does
"DROP TRIGGER; CREATE TRIGGER ..." is already a huge improvement over one
that does "CREATE FUNCTION ...".
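
For illustration, such a function might look roughly like this
(global_index_trigger() is the generic trigger function and mytype the
indexed type - both names are just placeholders for this sketch; the
trigger function would then read the column names from TG_ARGV instead
of querying the catalogs):

    CREATE OR REPLACE FUNCTION rebuild_index_trigger(tbl regclass)
    RETURNS void AS $$
    DECLARE
        cols text;
    BEGIN
        -- collect the names of all columns of the indexed type, quoted,
        -- so they can be passed as arguments (TG_ARGV) to the trigger
        SELECT array_to_string(array_agg(quote_literal(attname)), ', ')
          INTO cols
          FROM pg_attribute
         WHERE attrelid = tbl
           AND atttypid = 'mytype'::regtype
           AND attnum > 0
           AND NOT attisdropped;

        EXECUTE 'DROP TRIGGER IF EXISTS global_index_trg ON ' || tbl;

        -- only re-create the trigger if the table has at least one
        -- column of the indexed type
        IF cols IS NOT NULL THEN
            EXECUTE 'CREATE TRIGGER global_index_trg'
                 || ' AFTER INSERT OR UPDATE OR DELETE ON ' || tbl
                 || ' FOR EACH ROW EXECUTE PROCEDURE'
                 || ' global_index_trigger(' || cols || ')';
        END IF;
    END;
    $$ LANGUAGE plpgsql;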

I've now played around with the
    EXECUTE 'SELECT $1.' || quote_ident(fieldname) USING NEW/OLD
trick, and simply look up the matching fields with
    SELECT attname
    FROM pg_attribute
    WHERE attrelid = TG_RELID
      AND atttypid IN (...)
      AND attname NOT IN ('referenced_by', 'self')
      AND attnum > 0
      AND NOT attisdropped
This at least gives me a working proof-of-concept implementation of the
trigger.
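
In case it helps, here is the proof-of-concept spelled out (again,
global_index, mytype, the id column and the excluded column names are
specific to my schema - treat this as a sketch rather than the final
thing):

    CREATE OR REPLACE FUNCTION global_index_trigger()
    RETURNS trigger AS $$
    DECLARE
        fieldname name;
        val       mytype;
    BEGIN
        -- drop any existing index entries for this row
        IF TG_OP IN ('UPDATE', 'DELETE') THEN
            DELETE FROM global_index
             WHERE table_name = TG_TABLE_NAME AND row_id = OLD.id;
        END IF;

        -- re-create entries from the new row version
        IF TG_OP IN ('INSERT', 'UPDATE') THEN
            FOR fieldname IN
                SELECT attname
                  FROM pg_attribute
                 WHERE attrelid = TG_RELID
                   AND atttypid = 'mytype'::regtype
                   AND attname NOT IN ('referenced_by', 'self')
                   AND attnum > 0
                   AND NOT attisdropped
            LOOP
                EXECUTE 'SELECT $1.' || quote_ident(fieldname)
                   INTO val USING NEW;
                IF val IS NOT NULL THEN
                    INSERT INTO global_index(value, table_name, row_id)
                    VALUES (val, TG_TABLE_NAME, NEW.id);
                END IF;
            END LOOP;
        END IF;

        RETURN NULL;   -- AFTER trigger, return value is ignored
    END;
    $$ LANGUAGE plpgsql;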

Still, doing that SELECT seems rather silly since NEW and OLD already
contain the required information. So I still believe that having
something like record_name() and record_types() would be useful. And at
least these functions have less of an issue with the type system...
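
Just to illustrate what I'm imagining, the catalog lookup above could
then be replaced by something along these lines (record_name() and
record_types() are purely hypothetical here - I'm assuming they'd return
the record's field names and field types as parallel arrays):

    DECLARE
        fnames name[]    := record_name(NEW);    -- hypothetical, does not exist
        ftypes regtype[] := record_types(NEW);   -- hypothetical, does not exist
        val    mytype;
    BEGIN
        FOR i IN 1 .. coalesce(array_length(fnames, 1), 0) LOOP
            IF ftypes[i] = 'mytype'::regtype THEN
                EXECUTE 'SELECT $1.' || quote_ident(fnames[i])
                   INTO val USING NEW;
                INSERT INTO global_index(value, table_name, row_id)
                VALUES (val, TG_TABLE_NAME, NEW.id);
            END IF;
        END LOOP;
    END;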

best regards,
Florian Pflug

