Thread: Inspection of row types in pl/pgsql and pl/sql
Hi

I'm currently working on a project where we need to build a global cache
table containing all values of certain types found in any of the other
tables. Currently, a separate insert, update and delete (plpgsql) trigger
function exists for each table in the database, auto-generated by a
(plpgsql) function which queries the system catalogs to find all fields
with a certain type and then generates the appropriate plpgsql function
using EXECUTE '...'.

I'd like to replace this function-generating function with a generic
trigger function that works for all tables. Due to the lack of any way to
inspect the *structure* of a record type, however, I'd have to use a C
language function for that, which induces quite some maintenance
headaches (especially if deployed on windows).

I'm therefore thinking about implementing the following general-purpose
inspection functions for row types:

record_length(record) returns smallint
    Returns the number of fields in the given record.

record_names(record) returns name[]
    Returns the names of the record's fields. The array will contain
    NULLs if one or more fields are unnamed.

record_types(record) returns regtype[]
    Returns the OIDs of the record's field types. The array won't
    contain NULLs.

record_value(record, name, anyelement) returns anyelement
    Returns the value of a certain (named) field. The type of the third
    argument defines the return type (its value is ignored). The field's
    value is cast to that type if possible, otherwise an error is raised.

record_value(record, smallint, anyelement) returns anyelement
    Returns the value of the field at the given position.

record_values(record, regtype, anyelement) returns anyarray
    Returns an array of all values of all fields with the given type or
    whose type is a domain over the given type. No casting is done.

Any comment/critique is appreciated. Would anyone else find these
functions useful?

best regards,
Florian Pflug
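[To make the interface concrete, here is a purely hypothetical usage
sketch - none of these functions exist in PostgreSQL; this only
illustrates how the proposal would be called from a trigger body:]

-- Hypothetical only: record_length(), record_names(), record_types() and
-- record_values() are the proposed functions, not built-ins.
CREATE OR REPLACE FUNCTION demo_inspect() RETURNS trigger AS $body$
DECLARE
    v_names name[];
    v_types regtype[];
    v_ints  int[];
BEGIN
    v_names := record_names(NEW);   -- e.g. {id,price,comment}
    v_types := record_types(NEW);   -- e.g. {integer,numeric,text}
    -- collect all integer fields without knowing their names:
    v_ints  := record_values(NEW, 'int4'::regtype, NULL::int);
    RAISE NOTICE '% fields: % / %', record_length(NEW), v_names, v_types;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;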
"Florian G. Pflug" <fgp@phlo.org> writes: > I'd like to replace this function-generating function by a generic > trigger function that works for all tables. Due to the lack of any way > to inspect the *structure* of a record type, however, I'd have to use a > C language function for that, which induces quite some maintenance > headaches (especially if deployed on windows). Trying to do this in plpgsql is doomed to failure and heartache, because it's fundamentally a strongly typed language. The proposed functions won't fix that and hence will be unusable in practice. I'd suggest either using C, or using one of the less-strongly-typed PLs. regards, tom lane
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> I'd like to replace this function-generating function by a generic
>> trigger function that works for all tables. Due to the lack of any
>> way to inspect the *structure* of a record type, however, I'd have
>> to use a C language function for that, which induces quite some
>> maintenance headaches (especially if deployed on windows).
>
> Trying to do this in plpgsql is doomed to failure and heartache,
> because it's fundamentally a strongly typed language. The proposed
> functions won't fix that and hence will be unusable in practice. I'd
> suggest either using C, or using one of the less-strongly-typed PLs.

Well, the proposed functions at least allow for some more flexibility in
working with row types, given that you know in advance which types you
will be dealing with (but not necessarily the precise ordering and number
of the record's fields). They might feel a bit kludgy because of the
"anyelement" dummy argument that bridges the gap between the statically
typed nature of SQL and the rather dynamic RECORDs, but the kludgy-ness
factor is still within reasonable limits, I think.

Since all the other PLs (except C) are not nearly as integrated with the
postgres type system, using them for this task does not really buy
anything IMHO. AFAIK, all these PLs will convert any SQL type which isn't
specifically mapped to one of the PL's types to a string. *That* I can do
with pl/pgsql too, by simply using record_out() and then parsing the
result...

C of course lets me work around all these problems - but at the cost of a
longer development time and (more importantly) more maintenance headaches
(especially on windows, where a C compiler is not just one
apt-get/yum/whatever call away).

Regarding usability - the proposed functions would, for example, allow
you to implement a wide range of row-to-text conversion functions in pure
pl/pgsql by calling record_value(record, name, anyelement) with
NULL::varchar as the last argument for each field, and then concatenating
the resulting text together any way you like.

best regards,
Florian Pflug
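[To illustrate, such a row-to-text conversion might look roughly like
this - a sketch only, since record_length(), record_names() and
record_value() are the proposed functions and do not actually exist:]

-- A sketch only: record_length(), record_names() and record_value() are
-- the proposed (hypothetical) functions, not built-ins.
CREATE OR REPLACE FUNCTION row_as_csv_trigger() RETURNS trigger AS $body$
DECLARE
    v_names name[];
    v_line  text := '';
    i       int;
BEGIN
    v_names := record_names(NEW);
    FOR i IN 1 .. record_length(NEW) LOOP
        IF i > 1 THEN
            v_line := v_line || ',';
        END IF;
        -- NULL::varchar only selects the return type; its value is ignored
        v_line := v_line
               || coalesce(record_value(NEW, v_names[i], NULL::varchar), '');
    END LOOP;
    RAISE NOTICE 'row: %', v_line;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;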
"Florian G. Pflug" <fgp@phlo.org> writes: > Tom Lane wrote: >> Trying to do this in plpgsql is doomed to failure and heartache, > Well, the proposed functions at least allow for some more flexibility in > working with row types, given that you know in advance which types you > will be dealing with (but not necessarily the precise ordering and > number of the record's fields). They might feel a bit kludgy because of > the "anyelement" dummy argument that bridges the gap between the > statically typed nature of SQL and the rather dynamic RECORDs, but the > kludgy-ness factor is still within reasonable limits I think. It sounds pretty d*mn klugy to me, and I stand by my comment that it isn't going to work anyway. When you try it you are going to run into "parameter type doesn't match that while preparing the plan" errors. regards, tom lane
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> Tom Lane wrote:
>>> Trying to do this in plpgsql is doomed to failure and heartache,
>
>> Well, the proposed functions at least allow for some more
>> flexibility in working with row types, given that you know in
>> advance which types you will be dealing with (but not necessarily
>> the precise ordering and number of the record's fields).
>
> It sounds pretty d*mn klugy to me, and I stand by my comment that it
> isn't going to work anyway. When you try it you are going to run
> into "parameter type doesn't match that while preparing the plan"
> errors.

Ok, I must be missing something. I currently fail to see how my proposed

    record_value(record, name, anyelement) returns anyelement

function differs (from the type system's point of view) from

    value_from_string(text, anyelement) returns anyelement

which simply casts the text value to the given type and can easily be
implemented in plpgsql.

create or replace function value_from_string(v_value text,
                                             v_type_dummy anyelement)
returns anyelement as
$body$
declare
    v_result v_type_dummy%type;
begin
    if v_value is null then
        return null;
    end if;
    v_result := v_value;
    return v_result;
end;
$body$ language plpgsql immutable;

-- Returns 124
select value_from_string('123', NULL::int) + 1;

-- Returns {1,2,3,4}
select value_from_string('{1,2,3}', NULL::int[]) || array[4];

best regards,
Florian Pflug
"Florian G. Pflug" <fgp@phlo.org> writes: > Ok, I must be missing something. I currently fail to see how > my proposed > record_value(record, name, anyelement) returns anyelement > function differs (from the type system's point of view) from > value_from_string(text, anyelement) returns anyelement > which simply casts the text value to the given type and can easily be > implemented in plpgsq. The problem is at the call site --- if you try to call it with different record types on different calls you're going to get a failure. Or so I expect anyway. regards, tom lane
Hello

The new hstore has a very nice interface for record field iteration:

http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html

Regards
Pavel Stehule

2009/11/13 Florian G. Pflug <fgp@phlo.org>:
> I'd like to replace this function-generating function by a generic
> trigger function that works for all tables. Due to the lack of any way
> to inspect the *structure* of a record type, however, I'd have to use a
> C language function for that, which induces quite some maintenance
> headaches (especially if deployed on windows). [...]
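[For reference, the iteration interface Pavel links to might be used like
this - a sketch assuming the new hstore module with its hstore(record)
constructor and each(hstore) function is installed:]

-- Sketch: iterate over NEW's fields as text key/value pairs via hstore.
CREATE OR REPLACE FUNCTION show_fields() RETURNS trigger AS $body$
DECLARE
    kv record;
BEGIN
    FOR kv IN SELECT key, value FROM each(hstore(NEW)) LOOP
        RAISE NOTICE 'field % = %', kv.key, kv.value;
    END LOOP;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;

[Note that each() hands every value back as text, which is exactly the
limitation discussed in the following messages.]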
Pavel Stehule wrote:
> Hello
>
> new hstore has a very nice interface for record field iteration

Yes, and I have used it, but it really would be nicer to have some
introspection facilities built in, especially for use in triggers.

cheers

andrew
2009/11/14 Andrew Dunstan <andrew@dunslane.net>:
> Pavel Stehule wrote:
>> new hstore has a very nice interface for record field iteration
>
> Yes, and I have used it, but it really would be nicer to have some
> introspection facilities built in, especially for use in triggers.

I am not sure. PL/pgSQL is really a bad language for this task; any
procedure developed in plpgsql for it will be pretty slow. Personally I
am happy with the current 8.5. If somebody needs this, he can use hstore
- contrib modules are not a problem on any platform - but he cannot
generate overly general triggers too simply (which is good).

I understand the motivation well. But now I am thinking that general
triggers are a very bad and risky technique, and that it is better to do
things differently. If someone really needs it, then he can write a C
procedure.

Regards
Pavel Stehule
Andrew Dunstan <andrew@dunslane.net> writes:
> Yes, and I have used it, but it really would be nicer to have some
> introspection facilities built in, especially for use in triggers.

Maybe, but the proposal at hand is spectacularly ugly --- in particular
it seems designed around the assumption that a given trigger will only
care about handling a predetermined set of datatypes, which hardly fits
with PG's normal goals for datatype extensibility. If the argument is
that you don't like hstore or other PLs because they'll smash everything
to text, then I think you have to do better than this.

			regards, tom lane
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> Ok, I must be missing something. I currently fail to see how my
>> proposed record_value(record, name, anyelement) returns anyelement
>> function differs (from the type system's point of view) from
>> value_from_string(text, anyelement) returns anyelement which simply
>> casts the text value to the given type and can easily be
>> implemented in plpgsql.
>
> The problem is at the call site --- if you try to call it with
> different record types on different calls you're going to get a
> failure. Or so I expect anyway.

Ah, OK - so it's really the "record" type, and not my anyelement kludge,
that might cause problems.

Actually, I do now realize that "record" is a way more special case than
I'd have initially thought. For example, I could have sworn that it's
possible to pass "record" values to pl/pgsql functions, but just found
out the hard way that it isn't. Seems that the possibility of declaring
"record" variables lulled me into thinking it's a pretty standard type
when it actually isn't.

best regards,
Florian Pflug
Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> Yes, and I have used it, but it really would be nicer to have some
>> introspection facilities built in, especially for use in triggers.
>
> Maybe, but the proposal at hand is spectacularly ugly --- in particular
> it seems designed around the assumption that a given trigger will only
> care about handling a predetermined set of datatypes, which hardly
> fits with PG's normal goals for datatype extensibility. If the argument
> is that you don't like hstore or other PLs because they'll smash
> everything to text, then I think you have to do better than this.

While I agree that handling arbitrary datatypes at runtime would be
nice, I really don't see how that could ever be done from within a
plpgsql procedure, unless plpgsql somehow morphs into a dynamically
typed language. Plus, the set of datatypes an application deals with is
usually much smaller than the set of tables, and less likely to change
over time.

I'd also argue that this restriction does not conflict with PG's goal of
datatype extensibility at all. Datatype extensibility in PG boils down
to being able to create new datatypes without modifying postgres itself
- but it still expects that you do so while designing your application.
Which is also when trigger functions that use record_value() or a
similar function would be written.

Plus, fully generic handling of data of arbitrary type is a somewhat
strange notion anyway, because it leaves you with very few operations
guaranteed to be defined for those values. In the case of PG, you'd be
pretty much limited to casting those values from and to text.

best regards,
Florian Pflug
"Florian G. Pflug" <fgp@phlo.org> writes: > While I agree that handling arbitrary datatypes at runtime would be > nice, I really don't see how that could ever be done from within a > plpgsql procedure, unless plpgsql somehow morphs into a dynamically > typed language. Which is not likely to happen, which is why this is fundamentally a dead end. I don't think it's appropriate to put ugly, hard to use band-aids over the fact that plpgsql isn't designed to do this. One of the principal reasons why we work so hard to support multiple PLs is that they have different strengths. If you need something that's more dynamically typed than plpgsql, you should go use something else. > Plus, fully generic handling of data of arbitrary type is a somewhat > strange notion anyway, because it leaves you with very few operations > guaranteed to be defined for those values. In the case of PG, you'd be > pretty much limited to casting those values from and to text. Well, that's the wrong way to look at it. To me, the right design would involve saying that my trigger needs to do operation X on the data, and therefore it should support all datatypes that can do X. It should not need a hard-wired list of which types those are. Perhaps it would help if we looked at some specific use-cases that people need, rather than debating abstractly. What do you need your generic trigger to *do*? regards, tom lane
Tom Lane wrote:
> Perhaps it would help if we looked at some specific use-cases that
> people need, rather than debating abstractly. What do you need your
> generic trigger to *do*?

The two things I have wanted most badly in the past are

a) to be able to address a field in NEW and OLD by a string name
(usually passed in via a trigger argument), and

b) to be able to discover the names of the fields in NEW and OLD.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> writes:
> Tom Lane wrote:
>> Perhaps it would help if we looked at some specific use-cases that
>> people need, rather than debating abstractly. What do you need your
>> generic trigger to *do*?

> The two things I have wanted most badly in the past are
> a) to be able to address a field in NEW and OLD by a string name
> (usually passed in via a trigger argument), and

But what are you then going to do with that field? Are you just assuming
that it will be of a pre-agreed datatype? Or that you can perform some
specific operation on it? What are you expecting will happen if it isn't
or can't?

> b) to be able to discover the names of the fields in NEW and OLD.

It doesn't seem hard or ugly to provide an API for that, but again I'm
wondering what happens next.

			regards, tom lane
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> While I agree that handling arbitrary datatypes at runtime would be
>> nice, I really don't see how that could ever be done from within a
>> plpgsql procedure, unless plpgsql somehow morphs into a
>> dynamically typed language.
>
> Which is not likely to happen, which is why this is fundamentally a
> dead end. I don't think it's appropriate to put ugly, hard to use
> band-aids over the fact that plpgsql isn't designed to do this. One
> of the principal reasons why we work so hard to support multiple PLs
> is that they have different strengths. If you need something that's
> more dynamically typed than plpgsql, you should go use something
> else.

In principle, I agree. In practice, however, the company I'm doing my
current project for has settled on plpgsql and isn't willing to use
other PLs in their software, because they lack the skills to maintain
code written in other PLs. I'm therefore trying to find an at least
somewhat acceptable solution using plpgsql.

>> Plus, fully generic handling of data of arbitrary type is a
>> somewhat strange notion anyway, because it leaves you with very few
>> operations guaranteed to be defined for those values. In the case
>> of PG, you'd be pretty much limited to casting those values from
>> and to text.
>
> Well, that's the wrong way to look at it. To me, the right design
> would involve saying that my trigger needs to do operation X on the
> data, and therefore it should support all datatypes that can do X. It
> should not need a hard-wired list of which types those are.

True, but that'd require fairly large changes to plpgsql AFAICS.

> Perhaps it would help if we looked at some specific use-cases that
> people need, rather than debating abstractly. What do you need your
> generic trigger to *do*?

I need to build a global index table of all values of a certain type,
together with a pointer to the row and table that contains them. Since
all involved tables have an "id" column, storing that pointer is the
easy part. The hard part is collecting all those values in an
insert/update/delete trigger so that I can update the global index
accordingly.

Currently, a set of plpgsql functions generates a separate trigger
function for each table. Yuck! Instead of this nearly-impossible-to-read
code-generating function, I want to create a generic trigger function
that works for any of the involved tables - preferably in plpgsql,
because of the skill issue mentioned above.

best regards,
Florian Pflug
Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> Tom Lane wrote:
>>> Perhaps it would help if we looked at some specific use-cases that
>>> people need, rather than debating abstractly. What do you need your
>>> generic trigger to *do*?

>> The two things I have wanted most badly in the past are

>> a) to be able to address a field in NEW and OLD by a string name
>> (usually passed in via a trigger argument), and

> But what are you then going to do with that field? Are you just
> assuming that it will be of a pre-agreed datatype? Or that you
> can perform some specific operation on it? What are you expecting
> will happen if it isn't or can't?

Yes, in many cases I'll assume it's of a given datatype. A good example
is an auto-update-timestamp trigger where the name of the timestamp
field is passed in as a trigger argument.

>> b) to be able to discover the names of the fields in NEW and OLD.

> It doesn't seem hard or ugly to provide an API for that, but again
> I'm wondering what happens next.

One case I have is a custom audit package that ignores certain fields
when logging changes. So it would be nice to be able to iterate over the
field names and check if NEW.foo is distinct from OLD.foo, skipping the
field names we don't care about, to decide if the change needs to be
logged.

cheers

andrew
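[The first use case can already be approximated with the new hstore
Pavel mentioned above. A minimal sketch, assuming the module provides
hstore(text, text) and populate_record(record, hstore); the table and
column names below are placeholders:]

-- Sketch: set whichever timestamp column TG_ARGV[0] names, working around
-- plpgsql's inability to assign to a dynamically named NEW field.
CREATE OR REPLACE FUNCTION touch_timestamp() RETURNS trigger AS $body$
BEGIN
    NEW := populate_record(NEW, hstore(TG_ARGV[0], now()::text));
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;

-- mytable and modified_at are placeholder names:
CREATE TRIGGER t_touch BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_timestamp('modified_at');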
"Florian G. Pflug" <fgp@phlo.org> writes: > Tom Lane wrote: >> Perhaps it would help if we looked at some specific use-cases that >> people need, rather than debating abstractly. What do you need your >> generic trigger to *do*? > I need to build a global index table of all values of a certain type > together with a pointer to the row and table that contains them. Since > all involved tables have an "id" column, storing that pointer is the > easy part. The hard part is collecting all those values in an > insert/update/delete trigger so that I can update the global index > accordingly. So in this case it seems like you don't actually need any polymorphism at all; the target columns are always of a known datatype. You just don't want to commit to their names. I wonder though why you're willing to pin down the name of the "id" column but not the name of the data column. > Currently, a set of plpgsql functions generate a seperate trigger > function for each table. Yuck! Would you be happy with an approach similar to what Andrew mentioned, ie, you generate CREATE TRIGGER commands that list the names of the target column(s) as TG_ARGV arguments? The alternative to that seems to be that you iterate at runtime through all the table columns to see which ones are of the desired type. Which might be less trouble to set up, but the performance penalty of figuring out basically-unchanging information again on every single tuple update seems awful high. regards, tom lane
I'm curious if anyone has tried to link postgres authentication with a
product called Likewise.

The Likewise software allows a linux/unix system to authenticate against
a windows domain. I have it working on several flavors of linux and am
working on getting it tied into several samba shares. I've heard there
is a way to make it work with postgres but couldn't find any details.

I'm curious if anyone has tried to do this and would love any tips :D

Thanks!
Tom Lane wrote:
> "Florian G. Pflug" <fgp@phlo.org> writes:
>> I need to build a global index table of all values of a certain
>> type, together with a pointer to the row and table that contains
>> them. Since all involved tables have an "id" column, storing that
>> pointer is the easy part. The hard part is collecting all those
>> values in an insert/update/delete trigger so that I can update the
>> global index accordingly.
>
> So in this case it seems like you don't actually need any
> polymorphism at all; the target columns are always of a known
> datatype. You just don't want to commit to their names. I wonder
> though why you're willing to pin down the name of the "id" column but
> not the name of the data column.

There might be more than one column of the type to be indexed (or none
at all). I need to process all such columns (each of them produces a
separate record in the index table). Plus, this schema is relatively
volatile - new fields are added about once a month or so.

>> Currently, a set of plpgsql functions generates a separate trigger
>> function for each table. Yuck!
>
> Would you be happy with an approach similar to what Andrew mentioned,
> ie, you generate CREATE TRIGGER commands that list the names of the
> target column(s) as TG_ARGV arguments? The alternative to that seems
> to be that you iterate at runtime through all the table columns to
> see which ones are of the desired type. Which might be less trouble
> to set up, but the performance penalty of figuring out
> basically-unchanging information again on every single tuple update
> seems awful high.

Hm.. I had hoped to get away without any need to modify the trigger
definitions if the schema changes. But having a function that does "DROP
TRIGGER; CREATE TRIGGER..." is already a huge improvement over having
one that does "CREATE FUNCTION...".

I've now played around with the

    EXECUTE 'SELECT ($1).' || quote_ident(fieldname) USING NEW/OLD

trick, and simply look up the existing fields with

    SELECT attname
    FROM pg_attribute
    WHERE attrelid = TG_RELID
      AND atttypid IN (...)
      AND attname NOT IN ('referenced_by', 'self')
      AND attnum > 0
      AND NOT attisdropped

This at least gives me a working proof-of-concept implementation of the
trigger. Still, doing that SELECT seems rather silly, since NEW and OLD
already contain the required information. So I still believe that having
something like record_names() and record_types() would be useful. And at
least these functions have less of an issue with the type system...

best regards,
Florian Pflug
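[Assembled into one function, that proof-of-concept might look roughly
like this. A sketch only: the global_index table layout, the indexed
type (uuid here) and the excluded column names are stand-ins, and
EXECUTE ... USING requires 8.4 or later:]

CREATE OR REPLACE FUNCTION global_index_trigger() RETURNS trigger AS $body$
DECLARE
    v_field name;
    v_value uuid;  -- hypothetical indexed type
BEGIN
    -- drop any stale index entries for this row first
    IF TG_OP <> 'INSERT' THEN
        DELETE FROM global_index
        WHERE table_oid = TG_RELID AND row_id = OLD.id;
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    -- find all columns of the indexed type in the triggering table
    FOR v_field IN
        SELECT attname
        FROM pg_attribute
        WHERE attrelid = TG_RELID
          AND atttypid = 'uuid'::regtype
          AND attname NOT IN ('referenced_by', 'self')
          AND attnum > 0
          AND NOT attisdropped
    LOOP
        EXECUTE 'SELECT ($1).' || quote_ident(v_field)
            INTO v_value USING NEW;
        IF v_value IS NOT NULL THEN
            INSERT INTO global_index (value, table_oid, row_id)
            VALUES (v_value, TG_RELID, NEW.id);
        END IF;
    END LOOP;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;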
Andrew Dunstan <andrew@dunslane.net> writes:
> Tom Lane wrote:
>>> Perhaps it would help if we looked at some specific use-cases that
>>> people need, rather than debating abstractly. What do you need your
>>> generic trigger to *do*?

> One case I have is a custom audit package that ignores certain fields
> when logging changes. So it would be nice to be able to iterate over the
> field names and check if NEW.foo is distinct from OLD.foo, skipping the
> field names we don't care about, to decide if the change needs to be
> logged.

So, inventing syntax at will, what you're imagining is something like

    modified := false;
    for name in names(NEW) loop
        -- ignore modified_timestamp
        continue if name = 'modified_timestamp';
        -- check all other columns
        if NEW.{name} is distinct from OLD.{name} then
            modified := true;
            exit;
        end if;
    end loop;
    if modified then ...

While this is perhaps doable, the performance would take your breath
away ... and I don't mean that in a positive sense. The only way we
could implement that in plpgsql as it stands would be that every single
execution of the IF would involve a parse/plan cycle for the "$1 IS
DISTINCT FROM $2" expression. At best we would avoid a replan when
successive executions had the same datatypes for the tested columns (ie,
adjacent columns in the table have the same types). Which would happen
some of the time, but the cost of the replans would still be enough to
sink you.

This might look neat but I don't think it's actually useful for any
production application. We'd need to find some way of expressing it that
allows caching of the expression plans. But really I think the entire
approach is pretty much backwards from an efficiency standpoint. I would
sooner have some sort of primitive "changed_columns(NEW, OLD)" that
spits out a list of the names of changed columns (or maybe the
not-changed ones, not sure). It would not require any fundamental
restructuring and it would run several orders of magnitude faster than
you could ever hope to do it at the plpgsql level.

			regards, tom lane
"Florian G. Pflug" <fgp@phlo.org> writes: > Still, doing that SELECT seems rather silly since NEW and OLD already > contain the required information. So I still believe that having > something like record_name() and record_types() would be useful. And at > least these functions have less of an issue with the type system... Yeah. I don't have any objection in principle to providing such functions; I'm just wondering how far that really goes towards solving real-world problems. regards, tom lane
On Sat, Nov 14, 2009 at 3:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> This might look neat but I don't think it's actually useful for any
> production application. We'd need to find some way of expressing it
> that allows caching of the expression plans. But really I think the
> entire approach is pretty much backwards from an efficiency standpoint.
> I would sooner have some sort of primitive "changed_columns(NEW, OLD)"
> that spits out a list of the names of changed columns (or maybe the
> not-changed ones, not sure). It would not require any fundamental
> restructuring and it would run several orders of magnitude faster
> than you could ever hope to do it at the plpgsql level.

huge +1 to this. This problem comes up all the time... I was in fact at
this exact moment working on something just like Florian's, for table
auditing purposes... comparing new/old but needing to filter out
uninteresting columns. One of those things that should be a SMOP but
isn't ;-). I worked out a plpgsql approach using dynamic SQL...
performance wasn't _that_ bad, but any speedup is definitely welcome.

The way I did it was to pass both new and old to a function as text, and
build an 'IS DISTINCT FROM' expression with the interesting field list,
querying out fields from the expanded composite type... pretty dirty.

merlin
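[For concreteness, the same comparison can be written with EXECUTE ...
USING (8.4+) instead of text-passing. A sketch under stated assumptions:
the interesting-column list, the audit_log table and the id column are
hypothetical:]

CREATE OR REPLACE FUNCTION audit_if_changed() RETURNS trigger AS $body$
DECLARE
    v_cols   name[] := array['name', 'price'];  -- hypothetical column list
    v_col    name;
    v_differ boolean;
BEGIN
    FOR v_col IN SELECT unnest(v_cols) LOOP
        -- plan one comparison per column, with OLD/NEW passed as parameters
        EXECUTE 'SELECT ($1).' || quote_ident(v_col)
             || ' IS DISTINCT FROM ($2).' || quote_ident(v_col)
            INTO v_differ USING OLD, NEW;
        IF v_differ THEN
            INSERT INTO audit_log (changed_at, table_name, row_id)
            VALUES (now(), TG_TABLE_NAME, NEW.id);
            EXIT;
        END IF;
    END LOOP;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;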
2009/11/14 Merlin Moncure <mmoncure@gmail.com>:
> On Sat, Nov 14, 2009 at 3:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I would sooner have some sort of primitive "changed_columns(NEW, OLD)"
>> that spits out a list of the names of changed columns [...]
>
> huge +1 to this. This problem comes up all the time... I worked out a
> plpgsql approach using dynamic SQL... performance wasn't _that_ bad,
> but any speedup is definitely welcome.

A C function is_not_distinct(RECORD, RECORD, [variadic columnnames])
should not be a problem (I think).

Pavel

> The way I did it was to pass both new and old to a function as text,
> and build an 'IS DISTINCT FROM' expression with the interesting field
> list, querying out fields from the expanded composite type... pretty
> dirty.
2009/11/14 Merlin Moncure <mmoncure@gmail.com>:
> On Sat, Nov 14, 2009 at 3:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I would sooner have some sort of primitive "changed_columns(NEW, OLD)"
>> that spits out a list of the names of changed columns [...]
>
> huge +1 to this. This problem comes up all the time... I was in fact at
> this exact moment working on something just like Florian's, for table
> auditing purposes... comparing new/old but needing to filter out
> uninteresting columns.
>
> The way I did it was to pass both new and old to a function as text,
> and build an 'IS DISTINCT FROM' expression with the interesting field
> list, querying out fields from the expanded composite type... pretty
> dirty.

Isn't this a better job for the TRIGGER ... WHEN clause?

Pavel
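[For reference, the WHEN clause Pavel refers to - new in the
then-in-development 8.5/9.0 cycle - makes the comparison happen before
the trigger function is even called; mytable and audit_trigger() are
placeholder names:]

-- Fire the audit trigger only when some column actually changed.
CREATE TRIGGER audit_changes
    BEFORE UPDATE ON mytable
    FOR EACH ROW
    WHEN (OLD.* IS DISTINCT FROM NEW.*)
    EXECUTE PROCEDURE audit_trigger();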
On Sat, Nov 14, 2009 at 21:07, u235sentinel <u235sentinel@gmail.com> wrote:
> I'm curious if anyone has tried to link postgres authentication with a
> product called Likewise.
>
> The Likewise software allows a linux/unix system to authenticate
> against a windows domain. I have it working on several flavors of
> linux and am working on getting it tied into several samba shares.
> I've heard there is a way to make it work with postgres but couldn't
> find any details.
>
> I'm curious if anyone has tried to do this and would love any tips :D

I've never heard of likewise, but PostgreSQL will natively authenticate
to a Windows domain using either LDAP or GSSAPI. (Unless you're using a
pre-Windows-2000 domain, but for your own sake I really hope you
don't...)

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
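[For reference, both options Magnus mentions are configured in
pg_hba.conf. A sketch only - the network range, server name and realm
are placeholders, and the exact option set depends on your server
version (shown here as of 8.4):]

# GSSAPI (Kerberos) against the Active Directory KDC
host  all  all  192.168.0.0/24  gss   include_realm=0 krb_realm=EXAMPLE.COM

# Plain LDAP bind against a domain controller
host  all  all  192.168.0.0/24  ldap  ldapserver=dc.example.com ldapprefix="EXAMPLE\"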