Thread: FOREIGN KEY and shift/reduce
Hi,

I just committed a patch that turns on FOREIGN KEY. Thus, REFERENCES used in CREATE TABLE now automatically creates the appropriate constraint triggers. The implementation also supports omitting the referenced column list, in which case the PRIMARY KEY of the referenced table is used.

I also completed some more of the generic trigger procs. For MATCH FULL, the key existence check in the PK table and these actions are completed:

    ON DELETE RESTRICT
    ON DELETE CASCADE
    ON UPDATE RESTRICT
    ON UPDATE CASCADE

Still missing are the SET NULL and SET DEFAULT actions. The former is easy and will follow soon; the latter looks tricky if the implementation is to support ALTER TABLE ... SET DEFAULT (which IMHO it must, even if that ALTER TABLE isn't implemented yet).

Anyway, I ran into a shift/reduce problem in the main parser. The syntax according to SQL3 says:

    <constraint attributes> ::=
          <constraint check time> [ [ NOT ] DEFERRABLE ]
        | [ NOT ] DEFERRABLE [ <constraint check time> ]

<constraint check time> defines INITIALLY DEFERRED/IMMEDIATE and defaults to IMMEDIATE.

If I allow the <constraint attributes> in column constraints, I get 2 shift/reduce conflicts. The syntax seems to interfere with NOT NULL. For now I have commented that part out, so the complete syntax is available only for table constraints, not at the column level.

Could some yacc guru please take a look at it?

Another interesting question is about inheritance. If a REFERENCES constraint exists for a table, must another table inheriting from it also get all the FK checks applied?

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
I have the following table and view:

    create table users (
        user_id              integer not null primary key,
        first_names          varchar(50) not null,
        last_name            varchar(50) not null,
        password             varchar(30) not null,
        email                varchar(50) not null unique,
        census_rights_p      boolean default 'f',
        locale_rights_p      boolean default 'f',
        admin_rights_p       boolean default 'f',
        -- to suppress email alerts
        on_vacation_until    date,
        -- set when user reappears at site
        last_visit           datetime,
        -- this is what most pages query against (since the above column
        -- will only be a few minutes old for most pages in a session)
        second_to_last_visit datetime,
        registration_date    date,
        registration_ip      varchar(50),
        user_state           varchar(100) check (user_state is null or user_state in
                                 ('need_email_verification_and_admin_approv',
                                  'need_admin_approv',
                                  'need_email_verification',
                                  'rejected', 'authorized', 'banned', 'deleted')),
        deleted_p            boolean default 'f',
        banned_p             boolean default 'f',
        -- who and why this person was banned
        banning_user         integer,
        banning_note         varchar(4000),
        portrait_loaded      boolean default 'f',
        portrait_type        varchar(10) default ''
    );

    -- Create an "alert table" view of just those users who should
    -- be sent e-mail alerts.
    create view users_alertable as
        select * from users
        where (on_vacation_until is null or on_vacation_until < 'now'::date)
          and (deleted_p = 'f');

This has been working for months, just fine. I've been porting over a bunch more stuff from Oracle to this Postgres-based system, and bam! Now any select from the view dies with:

    unknown node tag 600 in apply_RIR_view

I've tried dropping and rebuilding the table and view in a test database and the problem remains. I recall running into problems with other operations many moons ago, where a particular node type wasn't being handled by a particular operator (the ones I'd seen previously were fixed by the excellent 6.5.* versions).

Is this a similar case? I may do a little digging myself tonight, but thought I'd ask to see if this rings a bell with anyone. It's a bit strange because this view's been working great on this table for so long. I added a couple of extra columns to the table recently but the view worked immediately afterwards. The stuff I've been porting creates views willy-nilly and it's almost like there's an interaction taking place, but that doesn't seem right.

It fails in the same manner if I simply declare the view as:

    create view users_alertable as select * from users;

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
Don Baccus wrote:

> This has been working for months, just fine. I've been porting over a bunch
> more stuff from Oracle to this Postgres-based system, and bam! Now any
> select from the view dies with:
>
>     unknown node tag 600 in apply_RIR_view

Node tag 600 is T_Query.

> I've tried dropping and rebuilding the table and view in a test database
> and the problem remains. I recall running into problems with other
> operations many moons ago, where a particular node type wasn't being
> handled by a particular operator (the ones I'd seen previously were
> fixed by the excellent 6.5.* versions).
>
> Is this a similar case? I may do a little digging myself tonight, but
> thought I'd ask to see if this rings a bell with anyone. It's a bit
> strange because this view's been working great on this table for so
> long. I added a couple of extra columns to the table recently but
> the view worked immediately afterwards. The stuff I've been porting
> creates views willy-nilly and it's almost like there's an interaction
> taking place, but that doesn't seem right.
>
> It fails in the same manner if I simply declare the view as:
>
>     create view users_alertable as select * from users;

I assume the column with the IN constraint is either new or changed recently. Seems the system generates some subselect for that and the rewriter is unable to handle this case.

I don't have the time to tackle that, just some hint to push you in the right direction. Be careful if hacking inside the rewriter, it's a very sensitive area!

Jan
On Mon, 6 Dec 1999, Jan Wieck wrote:

> Hi,
>
> Another interesting question is about inheritance. If a
> REFERENCES constraint exists for a table, must another table,
> inheriting this one, also get all the FK checks applied?

Hi,

This inspires a follow-up question: what exactly is inherited in PostgreSQL? IMHO it is a problem to inherit FOREIGN KEY constraints when inheritance of UNIQUE or PRIMARY KEY constraints is not supported. (Or is that in the CVS tree?)

PostgreSQL 6.5.3:

    test=> create table aaa (x int4 UNIQUE);
    NOTICE:  CREATE TABLE/UNIQUE will create implicit index 'aaa_x_key' for table 'aaa'
    CREATE
    test=> create table y () inherits (aaa);
    CREATE
    test=> insert into y values (1);
    INSERT 590807 1
    test=> insert into y values (1);
    INSERT 590808 1

Karel

PS. I'm very much looking forward to FOREIGN KEY; this feature will make life easier and PgSQL will tread more closely on the heels of the non-GNU engines.
> -----Original Message-----
> From: owner-pgsql-hackers@postgreSQL.org
> [mailto:owner-pgsql-hackers@postgreSQL.org] On Behalf Of Jan Wieck
>
> Hi,
>
> I just committed a patch that turns on FOREIGN KEY. Thus,
> REFERENCES used in CREATE TABLE now automatically creates the
> appropriate constraint triggers. The implementation also
> supports omitting the PK column definition, if the
> corresponding columns should be the PRIMARY KEY of the
> referenced table.
>
> Also I completed some more of the generic trigger procs. For
> MATCH FULL, the key existence check in PK table and these
> actions are completed:
>
>     ON DELETE RESTRICT
>     ON DELETE CASCADE
>     ON UPDATE RESTRICT
>     ON UPDATE CASCADE
>

Nice. I tried a little.

< session 1 >

    => create table ri1 (id int4 primary key);
    NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'ri1_pkey' for table 'ri1'
    CREATE
    => insert into ri1 values (1);
    INSERT 92940 1
    => create table ri2 (id int4 references ri1 match full on delete restrict);
    NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
    CREATE
    => begin;
    BEGIN
    => delete from ri1 where id=1;
    DELETE 1

< session 2 >

    => insert into ri2 values (1);
    INSERT 92960 1

< session 1 >

    => commit;
    END
    => select * from ri1;
    id
    --
    (0 rows)
    => select * from ri2;
    id
    --
     1
    (1 row)

Is this a temporary behavior?

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
Hiroshi Inoue wrote:

> Nice.
> I tried a little.
>
> < session 1 >
> => create table ri1 (id int4 primary key);
> NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit
>          index 'ri1_pkey' for table 'ri1'
> CREATE
> => insert into ri1 values (1);
> INSERT 92940 1
> => create table ri2 (id int4 references ri1 match full on delete restrict);
> NOTICE:  CREATE TABLE will create implicit trigger(s) for
>          FOREIGN KEY check(s)
> CREATE
> => begin;
> BEGIN
> => delete from ri1 where id=1;
> DELETE 1
>
> < session 2 >
> => insert into ri2 values (1);
> INSERT 92960 1

Ouch, I see the shared visibility conflict. So the CHECK constraint trigger must get an exclusive lock somehow - I think an internal "FOR UPDATE OF" can do it - will try.

Thanks,

Jan
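For illustration, a minimal sketch (not Jan's actual patch) of the kind of check-and-lock query such an RI trigger procedure could run through SPI. The function name and the already-resolved pktable/pkcol/keyval arguments are hypothetical; real RI triggers take their arguments from pg_trigger and would use a prepared plan with parameters and proper quoting instead of formatting the key value into the query text.

    #include "postgres.h"
    #include "executor/spi.h"

    /*
     * Sketch only: verify that a single-column FK value exists in the PK
     * table and lock that row so a concurrent DELETE/UPDATE must wait
     * until this transaction ends.
     */
    static void
    ri_check_and_lock_pk(const char *pktable, const char *pkcol, const char *keyval)
    {
        char        query[512];

        snprintf(query, sizeof(query),
                 "SELECT 1 FROM %s WHERE %s = '%s' FOR UPDATE OF %s",
                 pktable, pkcol, keyval, pktable);

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "ri_check_and_lock_pk: SPI_connect failed");

        if (SPI_exec(query, 1) < 0)
            elog(ERROR, "ri_check_and_lock_pk: SPI_exec failed");

        if (SPI_processed == 0)
            elog(ERROR, "referential integrity violation - key not found in PK table");

        SPI_finish();
    }

The FOR UPDATE makes the existence check and the concurrent delete serialize on the PK row, which is exactly the shared-visibility gap Hiroshi's test exposed.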
> If I allow the <constraint attributes> in column constraints,
> I get 2 shift/reduce conflicts. Seems the syntax interferes
> with NOT NULL. Actually I commented that part out, so the
> complete syntax is available only for table constraints, not
> on the column level.
>
> Could some yacc-guru please take a look at it?

Guru I'm not, but I should be able to track it down if it isn't fixed when I *finally* get time to finish the join syntax. Maybe a few weeks...

- Thomas

--
Thomas Lockhart   lockhart@alumni.caltech.edu
South Pasadena, California
Hiroshi Inoue wrote:

> Nice.
> I tried a little.
>
> [...]
> => begin;
> BEGIN
> => delete from ri1 where id=1;
> DELETE 1
>
> < session 2 >
> => insert into ri2 values (1);
> INSERT 92960 1
>
> < session 1 >
> => commit;
> END
> => select * from ri1;
> id
> --
> (0 rows)
> => select * from ri2;
> id
> --
>  1
> (1 row)
>
> Is this a temporary behavior ?

Fixed. Session 2 now waits until session 1 ends its transaction.

I'm thinking about another enhancement to the regression test now. Something where at least two sessions can run queries in a predefined order. Otherwise, something like the above cannot be checked during regression.

I'm not sure how that can be done with a standard shell, and that's a must. Maybe something using named pipes and so - will play around a little.

Jan
wieck@debis.com (Jan Wieck) writes:
> I'm thinking about another enhancement to the regression test
> now. Something where at least two sessions can run queries
> in a predefined order. Otherwise, something like the above
> cannot be checked during regression.

Yes, we could really use something like that.

> I'm not sure how that can be done with a standard shell, and
> that's a must. Maybe something using named pipes and so -
> will play around a little.

Would probably be more portable to write a little bit of C code that opens several libpq connections and reads a script file with interleaved commands to send to the different connections. OTOH, by the time you got done, it might have much of psql in it (certainly at least the display code).

Peter, in your slicing and dicing of psql, did you end up with anything that might make this a feasible approach?

        regards, tom lane
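A rough sketch of the sort of driver Tom describes, using only long-standing libpq calls. The connection string and the hard-coded command interleaving are placeholders (a real driver would read a script file and format results like psql), and the second session's command is sent asynchronously so the driver itself doesn't deadlock while that command waits on the first session's lock.

    #include <stdio.h>
    #include <unistd.h>
    #include "libpq-fe.h"

    /* Run one command synchronously and print a one-line status summary. */
    static void
    run(PGconn *conn, const char *tag, const char *query)
    {
        PGresult   *res = PQexec(conn, query);

        printf("(%s) QUERY: %s -> %s\n", tag, query,
               PQresStatus(PQresultStatus(res)));
        PQclear(res);
    }

    int
    main(void)
    {
        PGconn     *a = PQconnectdb("dbname=regression");   /* placeholder conninfo */
        PGconn     *b = PQconnectdb("dbname=regression");
        PGresult   *res;

        if (PQstatus(a) != CONNECTION_OK || PQstatus(b) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed\n");
            return 1;
        }

        /* Interleave commands across the two sessions in a fixed order. */
        run(a, "A", "BEGIN");
        run(a, "A", "DELETE FROM ri1 WHERE id = 1");

        /* Send B's command asynchronously; it is expected to block on A's lock. */
        PQsendQuery(b, "INSERT INTO ri2 VALUES (1)");
        sleep(1);
        PQconsumeInput(b);
        printf("(B) busy after 1s: %s\n", PQisBusy(b) ? "yes (blocked)" : "no");

        run(a, "A", "COMMIT");

        /* With A committed, B's result should now arrive. */
        while ((res = PQgetResult(b)) != NULL)
        {
            printf("(B) INSERT -> %s\n", PQresStatus(PQresultStatus(res)));
            PQclear(res);
        }

        PQfinish(a);
        PQfinish(b);
        return 0;
    }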
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Peter Eisentraut
On Tue, 7 Dec 1999, Tom Lane wrote:

> wieck@debis.com (Jan Wieck) writes:
> > I'm thinking about another enhancement to the regression test
> > now. Something where at least two sessions can run queries
> > in a predefined order. Otherwise, something like the above
> > cannot be checked during regression.
>
> Peter, in your slicing and dicing of psql, did you end up with anything
> that might make this a feasible approach?

Um, you could call another psql from within psql, like so:

    /* psql script */
    create this
    select that
    \! psql -f 'second-script'
    select more

That satisfies the requirement of two separate sessions and a predefined order. I haven't actually tried it, but if it happens to not work as desired, it could certainly be fixed.

--
Peter Eisentraut                  Sernanders vaeg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
Peter Eisentraut <e99re41@DoCS.UU.SE> writes:
>> Peter, in your slicing and dicing of psql, did you end up with anything
>> that might make this a feasible approach?

> Um, you could call another psql from within psql, like so:
>
>     /* psql script */
>     create this
>     select that
>     \! psql -f 'second-script'
>     select more
>
> That satisfies the requirement of two separate sessions and a predefined
> order.

I assume that the \! command won't continue until the subjob exits? If so, this doesn't give us any way to verify that query A will wait for query B to finish ... at least not without locking up the test...

Another possible approach is to accept that a parallel multi-backend test lashup *doesn't* have to run on every single system that Postgres runs on, if we keep it separate from the standard regress tests. For example, it's not much work to build a parallel test driver in Perl (I have done it), and I think that an auxiliary test package that requires Perl would be acceptable.

        regards, tom lane
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Vince Vielhaber
On Tue, 7 Dec 1999, Tom Lane wrote:

> >     /* psql script */
> >     create this
> >     select that
> >     \! psql -f 'second-script'
> >     select more
>
> > That satisfies the requirement of two separate sessions and a predefined
> > order.
>
> I assume that the \! command won't continue until the subjob exits?
> If so, this doesn't give us any way to verify that query A will wait for
> query B to finish ... at least not without locking up the test...

    \! psql -f 'second-script' &

will do it. You may wanna redirect your output.

Vince.

--
==========================================================================
Vince Vielhaber -- KA8CSH   email: vev@michvhf.com   flame-mail: /dev/null
  # include <std/disclaimers.h>        Have you seen http://www.pop4.net?
  Online Campground Directory      http://www.camping-usa.com
  Online Giftshop Superstore       http://www.cloudninegifts.com
==========================================================================
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Peter Eisentraut
On Tue, 7 Dec 1999, Tom Lane wrote:

> > Um, you could call another psql from within psql, like so:
> >
> >     /* psql script */
> >     create this
> >     select that
> >     \! psql -f 'second-script'
> >     select more
> >
> > That satisfies the requirement of two separate sessions and a predefined
> > order.
>
> I assume that the \! command won't continue until the subjob exits?
> If so, this doesn't give us any way to verify that query A will wait for
> query B to finish ... at least not without locking up the test...

I'm kind of losing you here. You want parallel execution *and* a predefined order of execution? In my (limited) book, those contradict each other a little bit. If it helps you can also do this:

    \! psql -f 'second-script' >& output &

and the thus invoked script will happily continue executing even after you quit the first psql.

I guess you could also do some simple synchronization things, like have the second psql wait on a file to spring into existence:

    /* second-script */
    \! while [ ! -f /tmp/lock.file ]; do ;; done
    \! rm /tmp/lock.file

Kind of like a simple semaphore. Isn't that what you are getting at?

In the overall view of things it almost seems like these kinds of tests need a human eye supervising them, because how do you really determine "query A waits for query B to finish" otherwise?

-- Peter Eisentraut
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Peter Eisentraut wrote:

> On Tue, 7 Dec 1999, Tom Lane wrote:
>
> > I assume that the \! command won't continue until the subjob exits?
> > If so, this doesn't give us any way to verify that query A will wait for
> > query B to finish ... at least not without locking up the test...
>
> I'm kind of losing you here. You want parallel execution *and* a
> predefined order of execution? In my (limited) book, those contradict each
> other a little bit. If it helps you can also do this:
>
>     \! psql -f 'second-script' >& output &
>
> and the thus invoked script will happily continue executing even after you
> quit the first psql.

Yes, we mean controlled order of execution across multiple backends. And a script invoked in the background, like the above, doesn't do the trick - and BTW, we already have that kind of testing.

> I guess you could also do some simple synchronization things, like have
> the second psql wait on a file to spring into existence:
>
>     /* second-script */
>     \! while [ ! -f /tmp/lock.file ]; do ;; done
>     \! rm /tmp/lock.file
>
> Kind of like a simple semaphore. Isn't that what you are getting at?

Kind of, but wasting CPU while waiting. OTOH, some sleep(1) inside the loop would slow down to a certain degree.

As usual, Tom and I had similar ideas. A little amount of C code should do it. Only that I would like to use pipes to/from psql instead of dealing with libpq and formatting myself. One problem I noticed so far is that the new psql (in contrast to the v6.5 one) doesn't flush its output when writing to a pipe.

To tell that one backend has finished executing a query, I thought to send a special comment line and wait for it to be echoed back. This would only work if psql flushes.

Peter, could you please add some switch to psql that activates an fflush() whenever psql would like to read more input commands. In the meantime I can start to write the driver using the v6.5 psql.

Jan
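A minimal sketch of the pipe-plus-marker synchronization Jan describes, under two stated assumptions: that psql flushes its output when writing to a pipe (the very problem he mentions), and that a database named "regression" exists. It uses a marker SELECT rather than a comment line, since a plain comment is only echoed back when psql echoes its input.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main(void)
    {
        int         to_psql[2], from_psql[2];
        FILE       *in, *out;
        char        line[1024];
        pid_t       pid;

        if (pipe(to_psql) < 0 || pipe(from_psql) < 0)
            return 1;

        pid = fork();
        if (pid == 0)
        {
            /* child: put stdin/stdout on the pipes, then exec psql */
            dup2(to_psql[0], 0);
            dup2(from_psql[1], 1);
            close(to_psql[1]);
            close(from_psql[0]);
            execlp("psql", "psql", "-q", "regression", (char *) NULL);
            _exit(1);
        }

        close(to_psql[0]);
        close(from_psql[1]);
        in = fdopen(to_psql[1], "w");
        out = fdopen(from_psql[0], "r");

        /* send the real query, then a marker query */
        fprintf(in, "SELECT * FROM t1;\n");
        fprintf(in, "SELECT 'SYNC-1';\n");
        fflush(in);

        /* echo everything seen before the marker; the marker means "done" */
        while (fgets(line, sizeof(line), out) != NULL)
        {
            if (strstr(line, "SYNC-1") != NULL)
                break;
            fputs(line, stdout);
        }

        fprintf(in, "\\q\n");
        fflush(in);
        fclose(in);
        fclose(out);
        return 0;
    }

If the marker does not come back within some timeout, the driver can conclude that the session is blocked on a lock, which is the behavior a multi-session test wants to observe.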
On Tue, 7 Dec 1999, Jan Wieck wrote:

> > I guess you could also do some simple synchronization things, like have
> > the second psql wait on a file to spring into existence:
> >
> >     /* second-script */
> >     \! while [ ! -f /tmp/lock.file ]; do ;; done
> >     \! rm /tmp/lock.file
> >
> > Kind of like a simple semaphore. Isn't that what you are getting at?
>
> Kind of, but wasting CPU while waiting. OTOH, some sleep(1)
> inside the loop would slow down to a certain degree.

Well, we're testing and not benchmarking...

> code should do it. Only that I would like to use pipes
> to/from psql instead of dealing with libpq and formatting
> myself.

If you like, the print routines in psql are completely isolated from the rest of the code and you can just #include and use them.

> Peter, could you please add some switch to psql that
> activates an fflush() whenever psql would like to read more
> input commands. In the meantime I can start to write the
> driver using the v6.5 psql.

I'm going to go on a flush hunt...

-- Peter Eisentraut
Peter Eisentraut <e99re41@DoCS.UU.SE> writes:
>> Peter, could you please add some switch to psql that
>> activates an fflush() whenever psql would like to read more
>> input commands. In the meantime I can start to write the
>> driver using the v6.5 psql.

> I'm going to go on a flush hunt...

Offhand I see no reason why psql shouldn't *always* fflush its output before reading a new command. It'd waste a few microseconds per command when reading from a file, but that'd hardly be noticeable in the great scheme of things. Adding another switch is just offering another way for people to make a mistake...

        regards, tom lane
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Thomas Lockhart
> > ... not sure how that can be done with a standard shell, and
> > that's a must. Maybe something using named pipes and so -
> > will play around a little.
> Would probably be more portable to write a little bit of C code that
> opens several libpq connections and reads a script file with
> interleaved commands to send to the different connections.

There are tools I'd consider using for this such as tcl/expect which help with simulating interactive sessions and with coordinating multiple actions. Forget the "sh only" approach if it makes things difficult; some platforms ship with a much richer toolset, and these tools might be well suited to your problem. If it works and there is a big demand for the "sh tools" then someone can recode it later, and it isn't holding you up now.

... my 2 cents...

- Thomas
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Thomas Lockhart
> I guess you could also do some simple synchronization things, like have
> the second psql wait on a file to spring into existence:
>
>     /* second-script */
>     \! while [ ! -f /tmp/lock.file ]; do ;; done
>     \! rm /tmp/lock.file
>
> Kind of like a simple semaphore. Isn't that what you are getting at?

LISTEN/NOTIFY might be useful for this...

- Thomas
Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
>> Kind of like a simple semaphore. Isn't that what you are getting at?

> LISTEN/NOTIFY might be useful for this...

LISTEN/NOTIFY would be one of the things we were trying to *test*. It's not helpful for checking intra-transaction-block behavior anyway, since notifys are only sent at commit and only heard between transactions.

I like the tcl/expect idea a lot. We'd have to upgrade our tcl interface to support asynchronous queries (ie, send query then do other stuff until answer arrives); AFAIR it can't handle that now. But that'd be well worth doing anyway...

        regards, tom lane
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Tom Lane wrote:

> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
> >> Kind of like a simple semaphore. Isn't that what you are getting at?
>
> > LISTEN/NOTIFY might be useful for this...
>
> LISTEN/NOTIFY would be one of the things we were trying to *test*.
> It's not helpful for checking intra-transaction-block behavior anyway,
> since notifys are only sent at commit and only heard between
> transactions.
>
> I like the tcl/expect idea a lot. We'd have to upgrade our tcl
> interface to support asynchronous queries (ie, send query then do other
> stuff until answer arrives); AFAIR it can't handle that now. But that'd
> be well worth doing anyway...

That would be really nice, especially because Tcl is my preferred tool language. But I wouldn't use libpgtcl; instead it should control a number of psql sessions over command pipelines. These can already handle asynchronous actions across all supported platforms.

I'll build the first version of the tool using standard Tcl, and just keep in mind that the command language must be easily implementable with lex/yacc so someone can convert it later into a C version.

Jan
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Tom Lane wrote:

> I like the tcl/expect idea a lot. We'd have to upgrade our tcl
> interface to support asynchronous queries (ie, send query then do other
> stuff until answer arrives); AFAIR it can't handle that now. But that'd
> be well worth doing anyway...

O.K. The multi session test tool, written in Tcl, is ready. Where should it go and how should it be named? I think it wouldn't be a bad idea to have it in a place where it can be used from any location, instead of putting it into the regression dir. Maybe install it into bin?

Just to show a little:

    # ----
    # Multi session test suite
    # ----

    # ----
    # Session 1 starts a transaction and deletes all rows from t1
    # ----
    / connect A
    begin;
    select * from t1;
    delete from t1;
    / wait A

    # ----
    # Session 2 tries to select all from t1 for update - should block
    # ----
    / connect B
    begin;
    select * from t1 for update of t1;
    commit;
    / wait B

    # ----
    # Now session 1 rolls back ...
    # ----
    / use A
    rollback;
    / wait A

    # ----
    # ... what must release session 2
    # ----
    / wait B

    / close A
    / close B

The above input produces this output:

    # ----
    # Multi session test suite
    # ----

    # ----
    # Session 1 starts a transaction and deletes all rows from t1
    # ----
    (A) QUERY: begin;
    (A) QUERY: select * from t1;
    (A) a1|b1|c1
    (A) --+--+-----
    (A)  2| 2|key 2
    (A)  3| 3|key 3
    (A)  8| 8|key 7
    (A) (3 rows)
    (A)
    (A) QUERY: delete from t1;

    # ----
    # Session 2 tries to select all from t1 for update - should block
    # ----
    (B) QUERY: begin;
    (B) QUERY: select * from t1 for update of t1;
    *** Session of connection B seems locked

    # ----
    # Now session 1 rolls back ...
    # ----
    (A) QUERY: rollback;

    # ----
    # ... what must release session 2
    # ----
    (B) a1|b1|c1
    (B) --+--+-----
    (B)  2| 2|key 2
    (B)  3| 3|key 3
    (B)  8| 8|key 7
    (B) (3 rows)
    (B)
    (B) QUERY: commit;

The default time for the "wait" command is 5 seconds, but can be specified explicitly as "wait A 10". Session names can of course be longer than one character.

The script stopped at session B's SELECT ... FOR UPDATE, and correctly continued with the *** message 5 seconds later. As you might have noticed, it doesn't matter that the COMMIT of session 2 was placed directly after the SELECT. If the session hangs, all queries are internally buffered.

Jan
wieck@debis.com (Jan Wieck) writes:
> The multi session test tool, written in Tcl, is ready.

Looks way cool.

> Where should it go and how should it be named?

Why not throw it in as another src/bin/ subdirectory, or maybe put it in Peter's new "src/bin/scripts/" directory? No great ideas for a name here.

> The default time for the "wait" command is 5 seconds, but can
> be specified explicitly as "wait A 10".

It makes me uncomfortable that there are any explicit times at all. A developer might set up a test script with delays that seem ample on his hardware, yet will fail when someone tries to use the script on a much slower and/or heavily loaded system.

Can we find a way to avoid needing explicit times in the scripts? If not, there should probably be a command-line switch that allows all the times to be scaled by some amount. (Ugly, but could be really handy.)

        regards, tom lane
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Tom Lane wrote:

> wieck@debis.com (Jan Wieck) writes:
> > The multi session test tool, written in Tcl, is ready.
>
> Looks way cool.

Did I ever say that I love Tcl :-) ?

It took me less than 5 hours to create a full-featured tool, and it has already discovered another bug (not yet fixed) in the RI procs, close to the one Hiroshi reported (just do the violation the other way round and it will be accepted).

I know that there are similar powerful languages available (perl for one). It's just that I looked for a good scripting language some years ago and found Tcl (version 7.4 at that time). Today it is such a magic tool for someone familiar with the C language that I think it was one of the best choices I ever made. Tcl was the first language I ever embedded as a PL, and the difficulties reported (and still missing results) on approaches to embed other languages tell the entire story.

> > Where should it go and how should it be named?
>
> Why not throw it in as another src/bin/ subdirectory, or maybe put it
> in Peter's new "src/bin/scripts/" directory? No great ideas for
> a name here.

Exactly what I intended to do. I prefer src/bin, since it isn't a shell script. I think that "pgmultitest" (my internal development name) wouldn't be such a bad decision.

> > The default time for the "wait" command is 5 seconds, but can
> > be specified explicitly as "wait A 10".
>
> It makes me uncomfortable that there are any explicit times at all.
> A developer might set up a test script with delays that seem ample
> on his hardware, yet will fail when someone tries to use the script
> on a much slower and/or heavily loaded system.
>
> Can we find a way to avoid needing explicit times in the scripts?
> If not, there should probably be a command-line switch that allows all
> the times to be scaled by some amount. (Ugly, but could be really
> handy.)

Once again, similar thoughts and feelings. The third parameter should indeed be a value used internally to ESTIMATE the real time required to tell that the session is blocked in a lock operation. And the estimation should be based on some previously analyzed system performance. Load peaks might still confuse the estimation. I don't have any clue right now how to estimate, but anyway, load peaks lead to estimator confusion whenever it comes to estimating the execution time of some operation.

I don't think that a few (1-2) seconds of delay at an expected lock would really hurt. Well, a test stressing the locking mechanism (like the RI test I want to have) might take a while, even if executed on high-end hardware. But I cannot imagine any other way than using multiple sessions and response timeouts to detect from the outside that a query ended in a blocking lock request. Unless we extend the entire FE/BE protocol with information telling "I'm blocked" / "I'm resuming" (plus adding appropriate messages to psql), there is absolutely no way to avoid the above timeouts. And we don't want to add regression test requirements to the FE/BE protocol and psql - no?

The already discussed flushing feature is a requirement, but it might be useful in other situations too and is thus worth doing anyway.

Jan
At 04:57 AM 12/9/99 +0100, Jan Wieck wrote:

> I know that there are similar powerful languages available
> (perl for one). It's just that I looked for a good scripting
> language some years ago and found Tcl (version 7.4 at that
> time). Today it is such a magic tool for someone familiar
> with the C language that I think it was one of the best
> choices I ever made.

And this is the scripting language embedded in the NaviSoft, later AOLserver web server. Your comments pretty much sum things up.

There's an official port of the ArsDigita Community System to Postgres (from Oracle) underway. This is in some sense the wrong forum to mention such things, but in another sense it is the right forum, because mainstream database server companies are pouring everything they've got into "web-ifying" their tools. But, really, with a lightweight threaded server like AOLserver and an excellent Tcl API, middleware and such is really not necessary.

We think the port's going to be excellent, though I'm steadily building up a list of Postgres bugs to report (I'll wait until I have a dozen or so). One symptom of PostgreSQL's stability in 6.5.* is that the bugs are of the reproducible, mostly language-oriented kind, and because of this they're easy to isolate and work around (not always the truth in a multi-threaded environment such as typifies AOLserver).

The enthusiasm for this in-progress port is pretty surprising, given that we only announced it four or five days ago, and even then more or less under duress. PostgreSQL, though, has the potential to be an ideal web db server for small-to-mid-range sites. Given that Oracle wants something like $22,500 for a fully paid up license for a single-processor P500, you can understand the interest in integrating these tools, which have been used to build a number of high-profile sites.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
PostgreSQL front ends (was Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce))
From: Lamar Owen
[cc: list trimmed, and subject changed....]

On Wed, 08 Dec 1999, Don Baccus wrote:
> At 04:57 AM 12/9/99 +0100, Jan Wieck wrote:
>
> > I know that there are similar powerful languages available
> > (perl for one). It's just that I looked for a good scripting
> > language some years ago and found Tcl (version 7.4 at that
> > time). Today it is such a magic tool for someone familiar
> > with the C language that I think it was one of the best
> > choices I ever made.
>
> And this is the scripting language embedded in the NaviSoft, later
> AOLserver web server. Your comments pretty much sum things up.

I replied to Jan privately about that, but, since you brought it up on-list....

> This is in some sense the wrong forum to mention such things, but in
> another sense it is the right forum, because mainstream database server
> companies are pouring everything they've got into "web-ifying" their
> tools.

Vince almost finished the switch, but got tangled up in the high latency of the AOLserver developers for 3.0beta3. His comments about the ease of setup were on the money, but he wanted to use CGI, and CGI is far from AOLserver's strong suit. The tcl API is the preferred way to do AOLserver hacking.

> One symptom of PostgreSQL's stability in 6.5.* is that the bugs are of the
> reproducible, mostly language-oriented kind, and because of this they're
> easy to isolate and work around (not always the truth in a multi-threaded
> environment such as typifies AOLserver).

As opposed to PostgreSQL 6.2.1, the first version that was officially supported by the AOLserver crew. PostgreSQL has come a long way!

> PostgreSQL, though, has the potential to be an ideal web db server for
> small-to-mid-range sites.

The use of AOLserver+Tcl+PostgreSQL has the potential to far surpass the speed of Apache+mod_perl+mySQL -- or Apache+mod_php+mySQL. To quote Philip Greenspun, AOLserver is now handling 28 thousand hits at AOL. Oh, that's 28K hits _per_second_ aggregate over AOL's server farm.

IMO, AOLserver is far and away the best web front end for PostgreSQL. And that's my final answer.

--
Lamar Owen
WGCR Internet Radio
Don Baccus <dhogaza@pacifier.com> writes:
> the port's going to be excellent, though I'm steadily building up a list
> of Postgres bugs to report (I'll wait until I have a dozen or so).

Actually, I think it's easier to keep track of things if you file bug reports as separate messages, rather than one big message that lists a bunch of unrelated problems. So feel free to send 'em in as you find 'em.

        regards, tom lane
As I've mentioned previously, I'm porting over a web toolkit from Oracle to Postgres. One portion of the toolkit creates and alters tables to add user-defined fields into what's being used essentially as a meta-table (used to later define real tables).

One page forgets to check for an empty form before issuing its "alter table" command, and it crashed the backend. I'm correcting its forgetfulness, but am also reporting the problem:

    alter table foo add;

gives a parser error, which is fine.

    alter table foo add();

crashes the backend.

I'd say it's really low priority, but should be fixed.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
Re: PostgreSQL front ends (was Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce))
From: Don Baccus
At 11:47 PM 12/8/99 -0500, Lamar Owen wrote:

> Vince almost finished the switch, but got tangled up in the high latency of
> the AOLserver developers for 3.0beta3. His comments about the ease of setup
> were on the money, but he wanted to use CGI, and CGI is far from AOLserver's
> strong suit. The tcl API is the preferred way to do AOLserver hacking.

The PHP folks have claimed that they'll have an embedded interpreter for AOLserver for their next release. It's not clear that they'll duplicate the rich API the server provides Tcl, though. If they do, this will be an alternative for those who don't like Tcl or AOLserver's ADPs (HTML with embedded Tcl snippets).

- Don Baccus, Portland OR <dhogaza@pacifier.com>
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: Peter Eisentraut
On 1999-12-08, Jan Wieck mentioned:

> The multi session test tool, written in Tcl, is ready. Where
> should it go and how should it be named? I think it wouldn't
> be a bad idea to have it in a place where it can be used from
> any location, instead of putting it into the regression dir.
> Maybe install it into bin?

Please consider not doing that. Bin is for user programs. I don't see why it shouldn't at least go under the test tree, if not under regress.

-- Peter Eisentraut
Don Baccus <dhogaza@pacifier.com> writes:
> alter table foo add();
> crashes the backend.
> I'd say it's really low priority, but should be fixed.

A crash is never a good thing. If you feel like patching your copy, the problem is in backend/parser/gram.y:

***************
*** 759,769 ****
  		}
  	| ADD '(' OptTableElementList ')'
  		{
- 			Node *lp = lfirst($3);
- 
  			if (length($3) != 1)
  				elog(ERROR,"ALTER TABLE/ADD() allows one column only");
! 			$$ = lp;
  		}
  	| DROP opt_column ColId
  		{ elog(ERROR,"ALTER TABLE/DROP COLUMN not yet implemented"); }
--- 759,767 ----
  		}
  	| ADD '(' OptTableElementList ')'
  		{
  			if (length($3) != 1)
  				elog(ERROR,"ALTER TABLE/ADD() allows one column only");
! 			$$ = (Node *) lfirst($3);
  		}
  	| DROP opt_column ColId
  		{ elog(ERROR,"ALTER TABLE/DROP COLUMN not yet implemented"); }
***************

Line numbers certainly not right for 6.5 ...

        regards, tom lane
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Peter Eisentraut wrote:

> On 1999-12-08, Jan Wieck mentioned:
>
> > The multi session test tool, written in Tcl, is ready. Where
> > should it go and how should it be named? I think it wouldn't
> > be a bad idea to have it in a place where it can be used from
> > any location, instead of putting it into the regression dir.
> > Maybe install it into bin?
>
> Please consider not doing that. Bin is for user programs. I don't see why
> it shouldn't at least go under the test tree, if not under regress.

You're right, I'll place it into the regression tree. Those of us hackers who need it handy in a more central place can copy it to wherever they like.

Jan
Re: [HACKERS] Parallel regress tests (was Re: FOREIGN KEY and shift/reduce)
From: wieck@debis.com (Jan Wieck)
Peter Eisentraut wrote:

> On 1999-12-08, Jan Wieck mentioned:
>
> > The multi session test tool, written in Tcl, is ready. Where
> > should it go and how should it be named? I think it wouldn't
> > be a bad idea to have it in a place where it can be used from
> > any location, instead of putting it into the regression dir.
> > Maybe install it into bin?
>
> Please consider not doing that. Bin is for user programs. I don't see why
> it shouldn't at least go under the test tree, if not under regress.

BTW: I added one fflush(stdout) to psql/commands.c where the query buffer is printed for the \p command. After that, my test tool is totally happy with it.

Jan
wieck@debis.com (Jan Wieck) writes:
> If I allow the <constraint attributes> in column constraints,
> I get 2 shift/reduce conflicts. Seems the syntax interferes
> with NOT NULL. Actually I commented that part out, so the
> complete syntax is available only for table constraints, not
> on the column level.

> Could some yacc-guru please take a look at it?

Well, I'm not a guru, but I looked anyway. It's a mess. The problem is that when NOT is the next token, the grammar doesn't know whether the NOT is starting NOT NULL, which would be a new ColConstraintElem, or starting NOT DEFERRABLE, which would be part of the current ColConstraintElem. So it can't decide whether it's time to reduce the current stack contents to a finished ColConstraintElem or not. The only way to do that is to look ahead further than the NOT.

In short, we no longer have an LR(1) grammar. Yipes.

After a few minutes' thought, it seems that the least unclean way to attack this problem is to hack up the lexer so that "NOT<whitespace>NULL" is lexed as a single keyword. Assuming that that's doable (I haven't tried, but I think it's possible), the required changes in the grammar would be small. The shift/reduce problem would go away, since we'd essentially have pushed the required lookahead into the lexer.

It's possible that making this change would even allow us to use full a_expr rather than b_expr in DEFAULT expressions. I'm not sure about it, but that'd be a nice side benefit if so.

Does anyone see a better answer? This'd definitely be a Big Kluge from the lexer's point of view, but I don't see an answer at the grammar level.

BTW --- if we do this, it'd be a simple matter to allow "NOTNULL" with no embedded space, which is something that I think a number of other DBMSes accept. (Which may tell us something about how they solved this problem...) It's not a keyword according to SQL92, so I'm inclined *not* to accept it, but perhaps someone else wants to argue the case.

        regards, tom lane
> wieck@debis.com (Jan Wieck) writes:
> > If I allow the <constraint attributes> in column constraints,
> > I get 2 shift/reduce conflicts. Seems the syntax interferes
> > with NOT NULL. Actually I commented that part out, so the
> > complete syntax is available only for table constraints, not
> > on the column level.
>
> > Could some yacc-guru please take a look at it?
>
> Well, I'm not a guru, but I looked anyway. It's a mess. The problem
> is that when NOT is the next token, the grammar doesn't know whether
> the NOT is starting NOT NULL, which would be a new ColConstraintElem,
> or starting NOT DEFERRABLE, which would be part of the current
> ColConstraintElem. So it can't decide whether it's time to reduce
> the current stack contents to a finished ColConstraintElem or not.
> The only way to do that is to look ahead further than the NOT.

Tom and I talked about moving NOT DEFERED up into the main level with NOT NULL.

In gram.y, line 949 and line, could there be a test that if the last List element of $1 is a constraint, and if $2 is NOT DEFERED, we can set the bit in $1 and just skip adding the defered node? If not, we can throw an error.

Also, ColQualList seems very strange. Why the two actions? I have removed it and made ColQualifier work properly.

--
Bruce Momjian                         |  http://www.op.net/~candle
maillist@candle.pha.pa.us             |  (610) 853-3000
+  If your life is a hard drive,      |  830 Blythe Avenue
+  Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026
> > Well, I'm not a guru, but I looked anyway. It's a mess. The problem
> > is that when NOT is the next token, the grammar doesn't know whether
> > the NOT is starting NOT NULL, which would be a new ColConstraintElem,
> > or starting NOT DEFERRABLE, which would be part of the current
> > ColConstraintElem. So it can't decide whether it's time to reduce
> > the current stack contents to a finished ColConstraintElem or not.
> > The only way to do that is to look ahead further than the NOT.
>
> Tom and I talked about moving NOT DEFERED up into the main level with
> NOT NULL.
>
> In gram.y, line 949 and line, could there be a test that if the last
> List element of $1 is a constraint, and if $2 is NOT DEFERED, we can set
> the bit in $1 and just skip adding the defered node? If not, we can
> throw an error.

Also, I am using flex 2.5.4, and am seeing no shift-reduce errors from the current gram.y.

-- Bruce Momjian
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Also, I am using flex 2.5.4, and am seeing no shift-reduce errors from
> the current gram.y.

You wouldn't, because the critical item is still commented out.

    | REFERENCES ColId opt_column_list key_match key_actions
        {
            /* XXX
             * Need ConstraintAttributeSpec as $6 -- Jan
             */

If you add ConstraintAttributeSpec to the end of this production, it fails...

        regards, tom lane
> > If I allow the <constraint attributes> in column constraints,
> > I get 2 shift/reduce conflicts. Seems the syntax interferes
> > with NOT NULL. Actually I commented that part out, so the
> > complete syntax is available only for table constraints, not
> > on the column level.

> Well, I'm not a guru, but I looked anyway. It's a mess. The problem
> is that when NOT is the next token, the grammar doesn't know whether
> the NOT is starting NOT NULL, which would be a new ColConstraintElem,
> or starting NOT DEFERRABLE, which would be part of the current
> ColConstraintElem. So it can't decide whether it's time to reduce
> the current stack contents to a finished ColConstraintElem or not.
> The only way to do that is to look ahead further than the NOT.
>
> In short, we no longer have an LR(1) grammar. Yipes.
>
> After a few minutes' thought, it seems that the least unclean way
> to attack this problem is to hack up the lexer so that
> "NOT<whitespace>NULL" is lexed as a single keyword. Assuming that
> that's doable (I haven't tried, but I think it's possible), the
> required changes in the grammar would be small. The shift/reduce
> problem would go away, since we'd essentially have pushed the
> required lookahead into the lexer.
>
> It's possible that making this change would even allow us to use
> full a_expr rather than b_expr in DEFAULT expressions. I'm not
> sure about it, but that'd be a nice side benefit if so.
>
> Does anyone see a better answer? This'd definitely be a Big Kluge
> from the lexer's point of view, but I don't see an answer at the
> grammar level.

I'd like a chance to fix it at the grammar level. It involves mixing NOT DEFERRABLE and NOT NULL into the same clauses, but if I can work it out I'd rather isolate the Big Kluges in gram.y, which seems to collect that kind of stuff. scan.l is still fairly clean...

- Thomas
Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
>> Does anyone see a better answer? This'd definitely be a Big Kluge
>> from the lexer's point of view, but I don't see an answer at the
>> grammar level.

> I'd like a chance to fix it at the grammar level. It involves mixing
> NOT DEFERRABLE and NOT NULL into the same clauses, but if I can work
> it out I'd rather isolate the Big Kluges in gram.y, which seems to
> collect that kind of stuff. scan.l is still fairly clean...

Bruce and I were talking about that last night. I think it could be fixed by having the grammar treat

    NOT DEFERRABLE
    DEFERRABLE
    INITIALLY IMMEDIATE
    INITIALLY DEFERRED

as independent ColConstraintElem clauses, and then have a post-pass in analyze.c that folds the flags into the preceding REFERENCES clause (and complains if they don't appear right after a REFERENCES clause). Pretty grotty, especially since you probably wouldn't want to do the same thing for the other uses of these clauses in TableConstraint and CreateTrigStmt ... but possibly cleaner than a lexer hack.

Another possible approach is to leave scan.l untouched and put a filter subroutine between gram.y and scan.l. The filter would normally just call the scanner and return the token as-is; but when it gets a NOT token, it would call the scanner again to see if it gets a NULL token. If so, it returns a single "NOTNULL" token to the grammar; otherwise it stashes away the lookahead token to return on the next call.

This last approach probably involves the least amount of dirt, but it does require being able to get in between yacc and lex. I'm not sure whether we'd have portability problems doing that; never tried it.

        regards, tom lane
> Bruce and I were talking about that last night. I think it could be
> fixed by having the grammar treat
>     NOT DEFERRABLE
>     DEFERRABLE
>     INITIALLY IMMEDIATE
>     INITIALLY DEFERRED
> as independent ColConstraintElem clauses, and then have a post-pass in
> analyze.c that folds the flags into the preceding REFERENCES clause
> (and complains if they don't appear right after a REFERENCES clause).
> Pretty grotty, especially since you probably wouldn't want to do the
> same thing for the other uses of these clauses in TableConstraint
> and CreateTrigStmt ... but possibly cleaner than a lexer hack.

analyze.c already does a grotty scan of the constraint clauses to push them out into the place Vadim's implementation expects them. We could identify the FK clauses and push them somewhere else, no problem (well, at least in principle ;).

- Thomas
Thomas Lockhart wrote:

> > Bruce and I were talking about that last night. I think it could be
> > fixed by having the grammar treat
> >     NOT DEFERRABLE
> >     DEFERRABLE
> >     INITIALLY IMMEDIATE
> >     INITIALLY DEFERRED
> > as independent ColConstraintElem clauses, and then have a post-pass in
> > analyze.c that folds the flags into the preceding REFERENCES clause
> > (and complains if they don't appear right after a REFERENCES clause).
> > Pretty grotty, especially since you probably wouldn't want to do the
> > same thing for the other uses of these clauses in TableConstraint
> > and CreateTrigStmt ... but possibly cleaner than a lexer hack.
>
> analyze.c already does a grotty scan of the constraint clauses to push
> them out into the place Vadim's implementation expects them. We could
> identify the FK clauses and push them somewhere else, no problem
> (well, at least in principle ;).

I already added my own list of constraint clauses, where the foreign key ones are pushed out of the way until the index stuff is done. Then the list is processed to add the trigger statements to extras_after. There is enough crippled code there already IMHO.

I like the other approach of wrapping around yylex() better. We definitely insist on bison, and ship a prepared gram.c. And a little test here showed that having

    static int kludge_yylex_wrapper(void);
    #define yylex() kludge_yylex_wrapper()

in the top declarations section, and defining

    #undef yylex

    static int
    kludge_yylex_wrapper(void)
    {
        return yylex();
    }

at the very end of gram.y, does a fine job, changing totally nothing. So that's a perfect place to do exactly what Tom suggested. I don't see any portability issues with that.

Jan
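To make that concrete, here is a sketch of what the wrapper body could look like, following Tom's filter idea. The token names NOT, NULL_P and the combined NOTNULL_TOK are placeholders for whatever gram.y actually declares, and a complete version would also have to carry along any other scanner state besides yylval.

    /*
     * Sketch of the one-token-lookahead filter (token names are
     * placeholders; yylval handling shown, other state omitted).
     */
    static int      have_lookahead = 0;
    static int      lookahead_token;
    static YYSTYPE  lookahead_yylval;

    static int
    kludge_yylex_wrapper(void)
    {
        int         cur;

        if (have_lookahead)
        {
            /* return the token we peeked at last time */
            have_lookahead = 0;
            yylval = lookahead_yylval;
            return lookahead_token;
        }

        cur = yylex();              /* the real scanner; the macro is #undef'd here */
        if (cur != NOT)
            return cur;

        /* Peek one token ahead: is this NOT NULL or NOT DEFERRABLE? */
        lookahead_token = yylex();
        lookahead_yylval = yylval;
        if (lookahead_token == NULL_P)
            return NOTNULL_TOK;     /* collapse NOT NULL into a single token */

        have_lookahead = 1;         /* hand the peeked token back on the next call */
        return NOT;
    }

With NOT NULL arriving at the grammar as one token, the column-constraint productions no longer need to decide what a bare NOT means, which is exactly the lookahead problem Tom described.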