Thread: Missing array support
Some nice advances to SQL standard array support were made, but there are a
few things that don't work yet in the sense of feature S091 "Basic array
support". Joe, do you want to take on some of these? They should be pretty
easy (for you).

* Declaration of multidimensional arrays (see clause 6.1):

create table test2 (a int, b text array[5] array[6]);
ERROR:  syntax error at or near "array" at character 44

* Empty arrays (see clause 6.4):

insert into test values (1, array[]);
ERROR:  syntax error at or near "]" at character 35

* Cardinality function (returns array dimensions, see clause 6.17).

* Using an array as a table source using UNNEST, something like:

select * from unnest(test.b);

(Check the exact spec to be sure; clause 7.6.)

* Some information schema work (doing that now...)

-- Peter Eisentraut   peter_e@gmx.net
Peter Eisentraut wrote:
> Some nice advances to SQL standard array support were made, but there are
> a few things that don't work yet in the sense of feature S091 "Basic array
> support". Joe, do you want to take on some of these? They should be
> pretty easy (for you).
>
> * Declaration of multidimensional arrays (see clause 6.1):
>
> create table test2 (a int, b text array[5] array[6]);
> ERROR:  syntax error at or near "array" at character 44

I don't see anything about multidimensional arrays at all. I take it this is
SQL99 (ISO/IEC 9075-2:1999 (E))? Can you point to a more specific paragraph?

> * Empty arrays (see clause 6.4):
>
> insert into test values (1, array[]);
> ERROR:  syntax error at or near "]" at character 35

I saw this, but interpreted it as a data type specification, not an
expression. Here's what SQL200x says:

<empty specification> ::=
  ARRAY <left bracket or trigraph> <right bracket or trigraph>

Syntax Rules

1) The declared type DT of an <empty specification> ES is ET ARRAY[0], where
   the element type ET is determined by the context in which ES appears. ES
   is effectively replaced by CAST ( ES AS DT ).

   NOTE 69 - In every such context, ES is uniquely associated with some
   expression or site of declared type DT, which thereby becomes the declared
   type of ES.

So array[] should produce '{}' of (an array) type determined by the context?
OK -- seems easy enough.

> * Cardinality function (returns array dimensions, see clause 6.17).

<cardinality expression> ::=
  CARDINALITY <left paren> <collection value expression> <right paren>

6) If <cardinality expression> is specified, then the declared type of the
   result is exact numeric with implementation-defined precision and scale 0
   (zero).

8) The result of <cardinality expression> is the number of elements of the
   result of the <collection value expression>.

Seems easy.

> * Using an array as a table source using UNNEST, something like:
>
> select * from unnest(test.b);
> (Check the exact spec to be sure; clause 7.6.)

Interesting. I already wrote (essentially) this function, but it was rejected
months ago when we were discussing its limitations. I didn't realize there
was a spec compliant way to do it:

<table reference> ::= <table primary>

<table primary> ::=
  <collection derived table> [ AS ] <correlation name>
    [ <left paren> <derived column list> <right paren> ]

<collection derived table> ::=
  UNNEST <left paren> <collection value expression> <right paren>
    [ WITH ORDINALITY ]

1) If a <table reference> TR specifies a <collection derived table> CDT, then
   let C be the <collection value expression> immediately contained in CDT,
   let CN be the <correlation name> immediately contained in TR, and let TEMP
   be an <identifier> that is not equivalent to CN nor to any other
   <identifier> contained in TR.

   a) Case:

      i) If TR specifies a <derived column list> DCL, then Case:

         1) If CDT specifies WITH ORDINALITY, then DCL shall contain 2
            <column name>s. Let N1 and N2 be respectively the first and
            second of those <column name>s.

         2) Otherwise, DCL shall contain 1 (one) <column name>; let N1 be
            that <column name>. Let N2 be a <column name> that is not
            equivalent to N1, CN, TEMP, or any other <identifier> contained
            in TR.

      ii) Otherwise, let N1 and N2 be two <column name>s that are not
          equivalent to one another nor to CN, TEMP, or any other
          <identifier> contained in TR.
   b) Let RECQP be:

      WITH RECURSIVE TEMP(N1, N2) AS
        ( SELECT C[1] AS N1, 1 AS N2
          FROM (VALUES(1)) AS CN
          WHERE 0 < CARDINALITY(C)
          UNION
          SELECT C[N2+1] AS N1, N2+1 AS N2
          FROM TEMP
          WHERE N2 < CARDINALITY(C) )

   c) Case:

      i) If TR specifies a <derived column list> DCL, then let PDCLP be
         ( DCL )

      ii) Otherwise, let PDCLP be a zero-length string.

   d) Case:

      i) If CDT specifies WITH ORDINALITY, then let ELDT be:
         LATERAL ( RECQP SELECT * FROM TEMP AS CN PDCLP )

      ii) Otherwise, let ELDT be:
         LATERAL ( RECQP SELECT N1 FROM TEMP AS CN PDCLP )

   e) CDT is equivalent to the <lateral derived table> ELDT.

14) A <collection derived table> is not updatable.

Whew! Anyone care to help me interpret that! At its most basic level, I think
these are valid:

select * from unnest(array['a','b']);
 ?column?
----------
 a
 b

select * from unnest(array['a','b']) WITH ORDINALITY;
 ?column? | ?column?
----------+----------
        1 | a
        2 | b

select * from unnest(array['a','b']) as t(f1, f2) WITH ORDINALITY;
 f1 | f2
----+----
  1 | a
  2 | b

Does this look correct? Again, shouldn't be too hard as most of the work is
already done. I'd just need to do some grammar modifications.

> * Some information schema work (doing that now...)
>

So I take it I need not worry about that?

None of this is very difficult. I'll try to fit it in between now and Monday
evening, but if not it's very doable for 7.5.

Joe
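For what it's worth, the behavior that clause describes would presumably look
something like this (hypothetical output only -- CARDINALITY doesn't exist
here yet):

select cardinality(array['a','b','c']);
 cardinality
-------------
           3

i.e. just the number of elements in the collection, returned as an exact
numeric with scale 0.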
> > * Some information schema work (doing that now...)
> >
> So I take it I need not worry about that?
>
> None of this is very difficult. I'll try to fit it in between now and
> Monday evening, but if not it's very doable for 7.5.

Joe, you have to get in the swing of things --- beta isn't until July 15, and
even after that, you can fix bugs, so once it is in, you can fiddle with it
for months.  :-)

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian wrote:
>> None of this is very difficult. I'll try to fit it in between now and
>> Monday evening, but if not it's very doable for 7.5.
>
> Joe, you have to get in the swing of things --- beta isn't until July
> 15, and even after that, you can fix bugs, so once it is in, you can
> fiddle with it for months. :-)

:-)

Yeah, but isn't feature freeze July 1?

Joe
Joe Conway wrote:
> Bruce Momjian wrote:
> >> None of this is very difficult. I'll try to fit it in between now and
> >> Monday evening, but if not it's very doable for 7.5.
> >
> > Joe, you have to get in the swing of things --- beta isn't until July
> > 15, and even after that, you can fix bugs, so once it is in, you can
> > fiddle with it for months. :-)
>
> :-)
>
> Yeah, but isn't feature freeze July 1?

Yes, but once the "feature" is in, you can adjust it if it isn't working. It
might not apply to your item, though, because anything that requires system
catalog adjustments is frowned on during beta.

I am just pointing out that beating the system is a popular hacker pastime
during beta.  :-)

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Joe Conway wrote:
>> Yeah, but isn't feature freeze July 1?

> Yes, but once the "feature" is in, you can adjust it if it isn't
> working.

> I am just pointing out that beating the system is a popular hacker
> passtime during beta. :-)

It's just a matter of staking out what's considered an implemented feature.
The ARRAY[] syntax is definitely in, so if it needs a few adjustments around
the edges to make it more spec-compliant, no one will blink at doing that
during beta.

If I were you I'd file this on the to-fix-later list and concentrate on
polymorphic aggregates during the next couple days. If that's not done by
Tuesday it will be a tough sell to put in during beta.

			regards, tom lane
Tom Lane wrote:
> It's just a matter of staking out what's considered an implemented
> feature. The ARRAY[] syntax is definitely in, so if it needs a few
> adjustments around the edges to make it more spec-compliant, no one
> will blink at doing that during beta.

OK, but some of what Peter requested were new features too.

> If I were you I'd file this on the to-fix-later list and concentrate
> on polymorphic aggregates during the next couple days. If that's not
> done by Tuesday it will be a tough sell to put in during beta.

Agreed. I'm planning (in principle at least) to concentrate on this stuff
between now and Monday evening, and I'm taking Monday off work, so hopefully
I can get a fair amount done.

Joe
Joe Conway <mail@joeconway.com> writes:
> So array[] should produce '{}' of (an array) type determined by the
> context? OK -- seems easy enough.

Is it? I think we'd decided that this could only reasonably be handled by
creating a datatype representing array-of-UNKNOWN. I'm afraid to do that
because I think it might allow the parser's type resolution algorithms to
follow paths we will not like. Perhaps it can be made to work, but I think it
will require some careful study.

			regards, tom lane
Tom Lane wrote:
> Joe Conway <mail@joeconway.com> writes:
>> So array[] should produce '{}' of (an array) type determined by the
>> context? OK -- seems easy enough.
>
> Is it? I think we'd decided that this could only reasonably be handled
> by creating a datatype representing array-of-UNKNOWN. I'm afraid to do
> that because I think it might allow the parser's type resolution
> algorithms to follow paths we will not like. Perhaps it can be made to
> work, but I think it will require some careful study.

But see the spec wording:

1) The declared type DT of an <empty specification> ES is ET ARRAY[0], where
   the element type ET is determined by the context in which ES appears. ES
   is effectively replaced by CAST ( ES AS DT ).

   NOTE 69 - In every such context, ES is uniquely associated with some
   expression or site of declared type DT, which thereby becomes the declared
   type of ES.

I took that to mean that this should only work in contexts where the data
type is known. Come to think of it, I guess in most cases of
ARRAY[elem1,elem2,elem3] we derive the data type using the elements in the
array expression, so in practice there may be few places where this would
work. We should be able to come up with a data type for inserts and updates
though, shouldn't we?

Joe
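To make the intended behavior concrete, here is the sort of thing the spec
seems to have in mind (a sketch only -- neither form is accepted today, and
the table is made up for illustration):

create table test (a int, b text[]);

-- element type taken from the target column
insert into test values (1, array[]);

-- element type supplied by an explicit cast
select cast(array[] as integer[]);

In both cases the surrounding context supplies the element type, so array[]
by itself never has to name one.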
Joe Conway writes:
> I don't see anything about multidimensional arrays at all. I take it
> this is SQL99 (ISO/IEC 9075-2:1999 (E))? Can you point to a more
> specific paragraph?

It doesn't say anything specifically about multidimensional arrays, but the
grammar clearly allows declaring arrays of arrays.

<data type> ::=
    <predefined type>
  | <row type>
  | <user-defined type>
  | <reference type>
  | <collection type>

<collection type> ::=
    <data type> <array specification>

<array specification> ::=
    <collection type constructor>
    <left bracket or trigraph> <unsigned integer> <right bracket or trigraph>

<collection type constructor> ::=
    ARRAY

This also has some consequences for the cardinality function. In order to get
the cardinality of the second dimension, you'd need to call cardinality(a[1]).
(I suppose it allows different cardinalities at various positions, so the
array does not need to be an n-dimensional rectangle.)

> > * Using an array as a table source using UNNEST, something like:
> >
> > select * from unnest(test.b);
> > (Check the exact spec to be sure; clause 7.6.)

> Whew! Anyone care to help me interpret that! At its most basic level, I
> think these are valid:
>
> select * from unnest(array['a','b']);
>  ?column?
> ----------
>  a
>  b
>
> select * from unnest(array['a','b']) WITH ORDINALITY;
>  ?column? | ?column?
> ----------+----------
>         1 | a
>         2 | b

Yes.

> select * from unnest(array['a','b']) as t(f1, f2) WITH ORDINALITY;
>  f1 | f2
> ----+----
>   1 | a
>   2 | b

The WITH ORDINALITY goes before the AS clause.

The reason it is defined in terms of the LATERAL clause is that that allows
you to refer to column aliases defined in FROM items to its left. This is the
way variable arguments of function calls as table sources can be resolved.
(At least this is my interpretation. I found some examples on the web a few
months ago about this.)

-- Peter Eisentraut   peter_e@gmx.net
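To spell that out against the declaration from the start of the thread (a
sketch only -- none of this parses today, and exactly which ARRAY bound ends
up as the outer dimension is the kind of detail that needs checking against
the spec):

create table test2 (a int, b text array[5] array[6]);

select cardinality(b)    from test2;   -- elements in the outer dimension
select cardinality(b[1]) from test2;   -- elements in the second dimension
select b[1][2]           from test2;   -- a single text element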
I wrote:
> * Using an array as a table source using UNNEST, something like:
>
> select * from unnest(test.b);

Btw., it would be really nice if some limited form of this could get done, so
I could finish the information schema views pertaining to group privileges.
I'd just need a way to find out what users are in what groups. If unnest()
would work for locally constant arguments, I think it could be done like

SELECT g.groname
FROM pg_user u, pg_group g
WHERE u.usesysid IN (SELECT * FROM UNNEST((SELECT grolist FROM pg_group
                                           WHERE grosysid = g.grosysid)))
  AND u.usename = current_user;

Or is there some other way to do this now?

-- Peter Eisentraut   peter_e@gmx.net
Peter Eisentraut wrote:
> Btw., it would be really nice if some limited form of this could get done,
> so I could finish the information schema views pertaining to group
> privileges. I'd just need a way to find out what users are in what
> groups. If unnest() would work for locally constant arguments, I think it
> could be done like
>
> SELECT g.groname
> FROM pg_user u, pg_group g
> WHERE u.usesysid IN (SELECT * FROM UNNEST((SELECT grolist FROM pg_group
>                                            WHERE grosysid = g.grosysid)))
>   AND u.usename = current_user;
>
> Or is there some other way to do this now?

It isn't in CVS yet, but hopefully before Monday evening you'll be able to do
this:

regression=# create user u1;
CREATE USER
regression=# create user u2;
CREATE USER
regression=# create user u3;
CREATE USER
regression=# create group g1 with user u1,u2;
CREATE GROUP
regression=# create group g2 with user u1,u2,u3;
CREATE GROUP
regression=# \c - u1
You are now connected as new user u1.
regression=> SELECT g.groname FROM pg_group g, pg_user u
             WHERE u.usename = current_user AND u.usesysid = ANY (g.grolist);
 groname
---------
 g1
 g2
(2 rows)

Joe
Peter Eisentraut <peter_e@gmx.net> writes:
> Btw., it would be really nice if some limited form of this could get done,
> so I could finish the information schema views pertaining to group
> privileges. I'd just need a way to find out what users are in what
> groups.

As of a few minutes ago,

SELECT g.groname FROM pg_user u, pg_group g
WHERE u.usesysid = ANY (g.grolist) AND u.usename = current_user;

			regards, tom lane
Tom Lane wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
>> Btw., it would be really nice if some limited form of this could get done,
>> so I could finish the information schema views pertaining to group
>> privileges. I'd just need a way to find out what users are in what
>> groups.
>
> As of a few minutes ago,
>
> SELECT g.groname FROM pg_user u, pg_group g
> WHERE u.usesysid = ANY (g.grolist) AND u.usename = current_user;

Hmmm, I just updated to cvs tip (so I could try this), did `configure`,
`make clean`, and `make all` and I'm getting this failure:

make[2]: Leaving directory `/opt/src/pgsql/src/port'
make -C backend all
make[2]: Entering directory `/opt/src/pgsql/src/backend'
msgfmt -o po/cs.mo po/cs.po
msgfmt -o po/de.mo po/de.po
msgfmt -o po/es.mo po/es.po
make[2]: *** No rule to make target `po/hr.po', needed by `po/hr.mo'.  Stop.
make[2]: Leaving directory `/opt/src/pgsql/src/backend'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/src/pgsql/src'
make: *** [all] Error 2

Any ideas?

Joe
Joe Conway wrote:
> Hmmm, I just updated to cvs tip (so I could try this), did `configure`,
> `make clean`, and `make all` and I'm getting this failure:
>
> make[2]: Leaving directory `/opt/src/pgsql/src/port'
> make -C backend all
> make[2]: Entering directory `/opt/src/pgsql/src/backend'
> msgfmt -o po/cs.mo po/cs.po
> msgfmt -o po/de.mo po/de.po
> msgfmt -o po/es.mo po/es.po
> make[2]: *** No rule to make target `po/hr.po', needed by `po/hr.mo'.
> Stop.

FWIW, I find that if I remove "hr" and "tr" from this line in
/opt/src/pgsql/src/backend/nls.mk, everything goes fine:

AVAIL_LANGUAGES := cs de es hu ru sv zh_CN zh_TW

Do I need to do something to get new language files?

Joe
Joe Conway wrote:
> FWIW, I find that if I remove "hr" and "tr" from this line in
> /opt/src/pgsql/src/backend/nls.mk, everything goes fine:
>
> AVAIL_LANGUAGES := cs de es hu ru sv zh_CN zh_TW
>
> Do I need to do something to get new language files?

Replying to myself again ;-) I was a bit too quick to say "everything goes
fine". I got several more nls related failures. Attached is the patch I used
to back out the ones causing me problems. Did a "cvs add" get missed
somewhere, or am I doing something wrong?

Thanks,

Joe

Index: src/backend/nls.mk
===================================================================
RCS file: /opt/src/cvs/pgsql-server/src/backend/nls.mk,v
retrieving revision 1.4
diff -c -r1.4 nls.mk
*** src/backend/nls.mk	28 Jun 2003 22:30:59 -0000	1.4
--- src/backend/nls.mk	29 Jun 2003 01:09:29 -0000
***************
*** 1,6 ****
  # $Header: /opt/src/cvs/pgsql-server/src/backend/nls.mk,v 1.4 2003/06/28 22:30:59 petere Exp $
  CATALOG_NAME	:= postgres
! AVAIL_LANGUAGES	:= cs de es hr hu ru sv tr zh_CN zh_TW
  GETTEXT_FILES	:= + gettext-files
  GETTEXT_TRIGGERS:= elog:2 postmaster_error yyerror
--- 1,6 ----
  # $Header: /opt/src/cvs/pgsql-server/src/backend/nls.mk,v 1.4 2003/06/28 22:30:59 petere Exp $
  CATALOG_NAME	:= postgres
! AVAIL_LANGUAGES	:= cs de es hu ru sv zh_CN zh_TW
  GETTEXT_FILES	:= + gettext-files
  GETTEXT_TRIGGERS:= elog:2 postmaster_error yyerror

Index: src/bin/pg_controldata/nls.mk
===================================================================
RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_controldata/nls.mk,v
retrieving revision 1.4
diff -c -r1.4 nls.mk
*** src/bin/pg_controldata/nls.mk	28 Jun 2003 22:31:00 -0000	1.4
--- src/bin/pg_controldata/nls.mk	29 Jun 2003 01:19:00 -0000
***************
*** 1,5 ****
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_controldata/nls.mk,v 1.4 2003/06/28 22:31:00 petere Exp $
  CATALOG_NAME	:= pg_controldata
! AVAIL_LANGUAGES	:= de es fr hu pt_BR ru sv zh_CN
  GETTEXT_FILES	:= pg_controldata.c
  GETTEXT_TRIGGERS:= _
--- 1,5 ----
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_controldata/nls.mk,v 1.4 2003/06/28 22:31:00 petere Exp $
  CATALOG_NAME	:= pg_controldata
! AVAIL_LANGUAGES	:= de es pt_BR ru sv zh_CN
  GETTEXT_FILES	:= pg_controldata.c
  GETTEXT_TRIGGERS:= _

Index: src/bin/pg_dump/nls.mk
===================================================================
RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_dump/nls.mk,v
retrieving revision 1.8
diff -c -r1.8 nls.mk
*** src/bin/pg_dump/nls.mk	28 Jun 2003 22:31:01 -0000	1.8
--- src/bin/pg_dump/nls.mk	29 Jun 2003 01:16:32 -0000
***************
*** 1,6 ****
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_dump/nls.mk,v 1.8 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= pg_dump
! AVAIL_LANGUAGES	:= cs de pt_BR ru sv zh_CN zh_TW
  GETTEXT_FILES	:= pg_dump.c common.c pg_backup_archiver.c pg_backup_custom.c \
  	pg_backup_db.c pg_backup_files.c pg_backup_null.c \
  	pg_backup_tar.c pg_restore.c pg_dumpall.c
--- 1,6 ----
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_dump/nls.mk,v 1.8 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= pg_dump
! AVAIL_LANGUAGES	:= cs de ru sv zh_CN zh_TW
  GETTEXT_FILES	:= pg_dump.c common.c pg_backup_archiver.c pg_backup_custom.c \
  	pg_backup_db.c pg_backup_files.c pg_backup_null.c \
  	pg_backup_tar.c pg_restore.c pg_dumpall.c

Index: src/bin/pg_resetxlog/nls.mk
===================================================================
RCS file: /opt/src/cvs/pgsql-server/src/bin/pg_resetxlog/nls.mk,v
retrieving revision 1.5
diff -c -r1.5 nls.mk
*** src/bin/pg_resetxlog/nls.mk	28 Jun 2003 22:31:01 -0000	1.5
--- src/bin/pg_resetxlog/nls.mk	29 Jun 2003 01:19:26 -0000
***************
*** 1,5 ****
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_resetxlog/nls.mk,v 1.5 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= pg_resetxlog
! AVAIL_LANGUAGES	:= de hu pt_BR ru sv zh_CN
  GETTEXT_FILES	:= pg_resetxlog.c
  GETTEXT_TRIGGERS:= _
--- 1,5 ----
  # $Header: /opt/src/cvs/pgsql-server/src/bin/pg_resetxlog/nls.mk,v 1.5 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= pg_resetxlog
! AVAIL_LANGUAGES	:= de pt_BR ru sv zh_CN
  GETTEXT_FILES	:= pg_resetxlog.c
  GETTEXT_TRIGGERS:= _

Index: src/bin/psql/nls.mk
===================================================================
RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/nls.mk,v
retrieving revision 1.8
diff -c -r1.8 nls.mk
*** src/bin/psql/nls.mk	28 Jun 2003 22:31:01 -0000	1.8
--- src/bin/psql/nls.mk	29 Jun 2003 01:17:31 -0000
***************
*** 1,6 ****
  # $Header: /opt/src/cvs/pgsql-server/src/bin/psql/nls.mk,v 1.8 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= psql
! AVAIL_LANGUAGES	:= cs de fr hu ru sv zh_CN zh_TW
  GETTEXT_FILES	:= command.c common.c copy.c help.c input.c large_obj.c \
  	mainloop.c print.c startup.c describe.c
  GETTEXT_TRIGGERS:= _ psql_error simple_prompt
--- 1,6 ----
  # $Header: /opt/src/cvs/pgsql-server/src/bin/psql/nls.mk,v 1.8 2003/06/28 22:31:01 petere Exp $
  CATALOG_NAME	:= psql
! AVAIL_LANGUAGES	:= cs de fr ru sv zh_CN zh_TW
  GETTEXT_FILES	:= command.c common.c copy.c help.c input.c large_obj.c \
  	mainloop.c print.c startup.c describe.c
  GETTEXT_TRIGGERS:= _ psql_error simple_prompt
Peter Eisentraut wrote:
> It doesn't say anything specifically about multidimensional arrays, but
> the grammar clearly allows declaring arrays of arrays.
>
> <data type> ::=
>     <predefined type>
>   | <row type>
>   | <user-defined type>
>   | <reference type>
>   | <collection type>
>
> <collection type> ::=
>     <data type> <array specification>
>
> <array specification> ::=
>     <collection type constructor>
>     <left bracket or trigraph> <unsigned integer> <right bracket or trigraph>
>
> <collection type constructor> ::=
>     ARRAY

Yeah, I noticed that after I replied. So

  <data type> <array specification>

means something like this is valid

  integer ARRAY[3] ARRAY[4] ARRAY[5]

? Is this the same then as our syntax?

  integer [3][4][5]

> This also has some consequences for the cardinality function. In order to
> get the cardinality of the second dimension, you'd need to call
> cardinality(a[1]). (I suppose it allows different cardinalities at
> various positions, so the array does not need to be an n-dimensional
> rectangle.)

Hmmm. So this implies that if arr is a 2D array, we need to treat:

  arr        as a 2D array
  arr[n]     as a 1D array
  arr[n][m]  as a scalar

If that's true, we have a good bit of work left to do to be compliant; e.g.:

regression=# select f from z;
                                         f
-----------------------------------------------------------------------------------
 {{{1,1},{1,1},{1,1}},{{1,1},{1,1},{1,1}},{{1,1},{1,1},{1,1}},{{1,1},{1,1},{1,1}}}
(1 row)

regression=# select f[1][1] from z;
 f
---

(1 row)

regression=# select f[1][1][1] from z;
 f
---
 1
(1 row)

Based on the above, "select f[1][1] from z;" ought to result in "{1,1}"?

>> select * from unnest(array['a','b']) as t(f1, f2) WITH ORDINALITY;
>>  f1 | f2
>> ----+----
>>   1 | a
>>   2 | b
>
> The WITH ORDINALITY goes before the AS clause.

OK

> The reason it is defined in terms of the LATERAL clause is that that
> allows you to refer to column aliases defined in FROM items to its left.
> This is the way variable arguments of function calls as table sources can
> be resolved. (At least this is my interpretation. I found some examples
> on the web a few months ago about this.)

Thanks for explaining that. I've never seen a LATERAL clause, and I was
wondering just what this part meant. So this applies to the discussion we had
a while back about set returning functions in the targetlist?

Joe
On Sat, 28 Jun 2003, Joe Conway wrote:
>> Do I need to do something to get new language files?
>
> causing me problems. Did a "cvs add" get missed somewhere, or am I doing
> something wrong?

Yes, a couple of cvs adds were forgotten. Peter made an update with the
comment "Merge PO file updates from 7.3 branch.". I checked out a new copy
with tag REL7_3_2 and there are the missing files (at least the one I
checked, but probably the rest also).

--
/Dennis
I tried re-adding the files now; it seems it got them this time. Possibly cvs
was confused because those files already existed in the 7.3 branch, so it
found "dead revisions" in the head branch.

Joe Conway writes:
> Hmmm, I just updated to cvs tip (so I could try this), did `configure`,
> `make clean`, and `make all` and I'm getting this failure:
>
> make[2]: Leaving directory `/opt/src/pgsql/src/port'
> make -C backend all
> make[2]: Entering directory `/opt/src/pgsql/src/backend'
> msgfmt -o po/cs.mo po/cs.po
> msgfmt -o po/de.mo po/de.po
> msgfmt -o po/es.mo po/es.po
> make[2]: *** No rule to make target `po/hr.po', needed by `po/hr.mo'.  Stop.
> make[2]: Leaving directory `/opt/src/pgsql/src/backend'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/opt/src/pgsql/src'
> make: *** [all] Error 2
>
> Any ideas?
>
> Joe

-- Peter Eisentraut   peter_e@gmx.net
Peter Eisentraut wrote:
> I tried re-adding the files now; it seems it got them this time. Possibly
> cvs was confused because those files already existed in the 7.3 branch, so
> it found "dead revisions" in the head branch.

Thanks, this fixed it for me.

Joe
Tom Lane wrote:
> Joe Conway <mail@joeconway.com> writes:
>> So array[] should produce '{}' of (an array) type determined by the
>> context? OK -- seems easy enough.
>
> Is it? I think we'd decided that this could only reasonably be handled
> by creating a datatype representing array-of-UNKNOWN. I'm afraid to do
> that because I think it might allow the parser's type resolution
> algorithms to follow paths we will not like. Perhaps it can be made to
> work, but I think it will require some careful study.

I took a closer look -- yeah, without array-of-UNKNOWN I don't think we can
make this work.

I got something working by forcing the element type to UNKNOWN when the
elements list is empty in transformExpr(), but then select_common_type()
turns around and turns UNKNOWN into TEXT, so you wind up with an empty
text[].

I won't bother sending that patch in because I *know* it will get rejected ;-)

I guess we should put array-of-UNKNOWN on the list of things to look at for
7.5.

Joe
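In the meantime, the untyped literal form already gets assignment-coerced to
the target column's array type, so as a workaround an empty array can be
written without the constructor (a sketch; the table is just for
illustration):

create table test (a int, b text[]);
insert into test values (1, '{}');   -- '{}' coerced to text[] by the target column

It's only the ARRAY[] constructor form that has nowhere to get an element
type from.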
Peter Eisentraut wrote:
>>> * Using an array as a table source using UNNEST, something like:
>>>
>>> select * from unnest(test.b);
>>> (Check the exact spec to be sure; clause 7.6.)
>>
>> select * from unnest(array['a','b']);
>>  ?column?
>> ----------
>>  a
>>  b
>>
>> select * from unnest(array['a','b']) WITH ORDINALITY;
>>  ?column? | ?column?
>> ----------+----------
>>         1 | a
>>         2 | b
>>
>> select * from unnest(array['a','b']) as t(f1, f2) WITH ORDINALITY;
>>  f1 | f2
>> ----+----
>>   1 | a
>>   2 | b
>
> The WITH ORDINALITY goes before the AS clause.
>
> The reason it is defined in terms of the LATERAL clause is that that
> allows you to refer to column aliases defined in FROM items to its left.
> This is the way variable arguments of function calls as table sources can
> be resolved. (At least this is my interpretation. I found some examples
> on the web a few months ago about this.)

If I can get this done *without* supporting LATERAL by the end of the evening
(i.e. just implement the examples), would it possibly be accepted? Or should
UNNEST wait until we get LATERAL?

Joe
Added.

---------------------------------------------------------------------------

Joe Conway wrote:
> Bruce Momjian wrote:
> >> I guess we should put array-of-UNKNOWN on the list of things to look at
> >> for 7.5.
>
> Yeah; maybe something like this?
>
> Delay resolution of array expression type as long as possible so that
> assignment coercion can be performed on empty array expressions.
>
> Joe

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian wrote:
>> I guess we should put array-of-UNKNOWN on the list of things to look at
>> for 7.5.

Yeah; maybe something like this?

Delay resolution of array expression type as long as possible so that
assignment coercion can be performed on empty array expressions.

Joe
Joe, do you need a TODO added for this?

---------------------------------------------------------------------------

Joe Conway wrote:
> Tom Lane wrote:
> > Joe Conway <mail@joeconway.com> writes:
> >
> >> So array[] should produce '{}' of (an array) type determined by the
> >> context? OK -- seems easy enough.
> >
> > Is it? I think we'd decided that this could only reasonably be handled
> > by creating a datatype representing array-of-UNKNOWN. I'm afraid to do
> > that because I think it might allow the parser's type resolution
> > algorithms to follow paths we will not like. Perhaps it can be made to
> > work, but I think it will require some careful study.
>
> I took a closer look -- yeah, without array-of-UNKNOWN I don't think we
> can make this work.
>
> I got something working by forcing the element type to UNKNOWN when the
> elements list is empty in transformExpr(), but then select_common_type()
> turns around and turns UNKNOWN into TEXT, so you wind up with an empty
> text[].
>
> I won't bother sending that patch in because I *know* it will get
> rejected ;-)
>
> I guess we should put array-of-UNKNOWN on the list of things to look at
> for 7.5.
>
> Joe
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073