Thread: Error in new psql
Thread: Error in new psql

Peter,

    I just noticed that the new psql doesn't handle semicolons
    inside of unmatched parentheses correctly any more. This is
    a requirement for defining multi-action rules and was
    properly supported by v6.5.* psql.

    The CURRENT version submits the query buffer as soon as it
    encounters the first semicolon outside of a string literal,
    and that is wrong according to the definition of CREATE RULE.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
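The end-of-statement rule Jan is asking for can be sketched in C roughly as follows. This is an illustrative sketch only, not the actual psql source: it tracks single-quoted string literals and unmatched-parenthesis depth, so a semicolon terminates the query buffer only at depth zero.

```c
/*
 * Illustrative sketch -- NOT psql's real scanner.  A semicolon ends
 * the statement only when it appears outside single-quoted string
 * literals AND at parenthesis depth zero, so a multi-action rule
 * body like  DO ( DELETE ...; DELETE ...; )  stays in one buffer.
 */
int statement_ends_at(const char *buf)
{
    int depth = 0;      /* count of unmatched '(' so far */
    int in_quote = 0;   /* inside a '...' string literal? */
    int i;

    for (i = 0; buf[i] != '\0'; i++)
    {
        char c = buf[i];

        if (in_quote)
        {
            if (c == '\'')
                in_quote = 0;       /* literal closed */
        }
        else if (c == '\'')
            in_quote = 1;           /* literal opened */
        else if (c == '(')
            depth++;
        else if (c == ')')
        {
            if (depth > 0)
                depth--;
        }
        else if (c == ';' && depth == 0)
            return i;   /* submit the buffer up through this position */
    }
    return -1;          /* statement not complete yet; keep reading */
}
```

(A real implementation would also have to handle doubled quotes, double-quoted identifiers, comments, and backslash escaping, which Peter mentions later in the thread.)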
> Peter,
>
>     I just noticed that the new psql doesn't handle semicolons
>     inside of unmatched parentheses correctly any more. This is
>     a requirement for defining multi-action rules and was
>     properly supported by v6.5.* psql.
>
>     The CURRENT version submits the query buffer as soon as it
>     encounters the first semicolon outside of a string literal,
>     and that is wrong according to the definition of CREATE RULE.

I assume you mean:

	test=> select (;)
	ERROR:  parser: parse error at or near ")"

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Bruce Momjian wrote:
> > Peter,
> >
> >     I just noticed that the new psql doesn't handle semicolons
> >     inside of unmatched parentheses correctly any more. This is
> >     a requirement for defining multi-action rules and was
> >     properly supported by v6.5.* psql.
> >
> >     The CURRENT version submits the query buffer as soon as it
> >     encounters the first semicolon outside of a string literal,
> >     and that is wrong according to the definition of CREATE RULE.
>
> I assume you mean:
>
> 	test=> select (;)
> 	ERROR:  parser: parse error at or near ")"

Kinda. Actually I meant

    CREATE RULE myrule AS ON DELETE TO mytable DO (
        DELETE FROM myothertab1 WHERE key = old.key;
        DELETE FROM myothertab2 WHERE key = old.key;
    );
    ERROR:  parser: parse error at or near ""

This is a possible syntax which (IIRC) got released with v6.4
and is the subject of the examples in the rule system
documentation. The parser still accepts it, so breaking it due
to changes in psql is, IMHO, an unacceptable backward
incompatibility.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
> > 	test=> select (;)
> > 	ERROR:  parser: parse error at or near ")"
>
> Kinda. Actually I meant
>
>     CREATE RULE myrule AS ON DELETE TO mytable DO (
>         DELETE FROM myothertab1 WHERE key = old.key;
>         DELETE FROM myothertab2 WHERE key = old.key;
>     );
>     ERROR:  parser: parse error at or near ""
>
> This is a possible syntax which (IIRC) got released with v6.4
> and is the subject of the examples in the rule system
> documentation. The parser still accepts it, so breaking it due
> to changes in psql is, IMHO, an unacceptable backward
> incompatibility.

Yes, certainly this will be fixed.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
> > I assume you mean:
> >
> > 	test=> select (;)
> > 	ERROR:  parser: parse error at or near ")"
>
> Kinda. Actually I meant
>
>     CREATE RULE myrule AS ON DELETE TO mytable DO (
>         DELETE FROM myothertab1 WHERE key = old.key;
>         DELETE FROM myothertab2 WHERE key = old.key;
>     );
>     ERROR:  parser: parse error at or near ""
>
> This is a possible syntax which (IIRC) got released with v6.4
> and is the subject of the examples in the rule system
> documentation. The parser still accepts it, so breaking it due
> to changes in psql is, IMHO, an unacceptable backward
> incompatibility.

OK, I fixed it. Just one additional test in an _if_ statement.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Bruce Momjian wrote:
> > actually I meant
> >
> >     CREATE RULE myrule AS ON DELETE TO mytable DO (
> >         DELETE FROM myothertab1 WHERE key = old.key;
> >         DELETE FROM myothertab2 WHERE key = old.key;
> >     );
> >     ERROR:  parser: parse error at or near ""
>
> OK, I fixed it. Just one additional test in an _if_ statement.

Thank you.

You remember that it's not the first time multiple-action rules
have been broken? The other breakage was due to the
EXCEPT/INTERSECT patch. I added a check to the rules regression
test after that, to ensure it never happens again.
Unfortunately, Peter's insistence on using the old psql for
regression prevented it from showing up.

Don't misunderstand this as some whining about it. It is a very
important issue. It shows that the changes made to psql can
cause backward incompatibilities by themselves.

AFAIK, the proposed procedure to activate the new psql was to
run the regression test with an old psql and, if it's O.K., run
it again with the new one and replace all expected output files.
THIS IS INADEQUATE, according to the results seen in this case.

Don't know if anyone would feel comfortable with it, but at
least the postmaster log must be checked to come out exactly the
same too. The only alternative would be to check every
old/expected against new/results manually (what's really a whole
lot of damned stupid work).

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
On 1999-12-11, Jan Wieck mentioned:

>     I just noticed that the new psql doesn't handle semicolons
>     inside of unmatched parentheses correctly any more. This is
>     a requirement for defining multi-action rules and was
>     properly supported by v6.5.* psql.

Aah, I knew that there must have been a reason for this
parentheses counting. Patch attached. Backslash-escaping
semicolons works as well, by the way.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
On 1999-12-10, Bruce Momjian mentioned:

> I assume you mean:
>
> 	test=> select (;)
> 	ERROR:  parser: parse error at or near ")"

That was actually a different bug, which must have slipped in on
the latest update. Please use the attached patch. This overlaps
with the one sent in a few minutes ago, but I think you'll
easily figure out what's going on. Just a few lines to delete.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
On 1999-12-11, Jan Wieck mentioned:

> I added a check to the rules regression test after that, to
> ensure it never happens again. Unfortunately, Peter's
> insistence on using the old psql for regression prevented it
> from showing up.

To be completely honest, I was just waiting to see what this was
good for. As you have seen (or not), it was more or less
disabled but still there.

Regarding the regression tests: before any more of this stuff
gets thrown around, how do you regenerate the output? Easily?
Do it now.

As far as I'm concerned, psql is finished. Anything else will be
bug-fixing. I'm planning on some sort of beta somewhere around
Feb 1st, with release on Feb 29th (to prove Y2K compliance). If
we don't come up with a name by then, we can always start naming
it after Norse gods.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
> On 1999-12-11, Jan Wieck mentioned:
>
> >     I just noticed that the new psql doesn't handle semicolons
> >     inside of unmatched parentheses correctly any more. This is
> >     a requirement for defining multi-action rules and was
> >     properly supported by v6.5.* psql.
>
> Aah, I knew that there must have been a reason for this
> parentheses counting. Patch attached. Backslash-escaping
> semicolons works as well, by the way.

This is the same as the patch I did. Thanks.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
> Don't know if anyone would feel comfortable with it, but at
> least the postmaster log must be checked to come out exactly
> the same too. The only alternative would be to check every
> old/expected against new/results manually (what's really a
> whole lot of damned stupid work).

I've done a whole lot of dsw before, and will get to it sometime
unless someone does it first...

                    - Thomas

--
Thomas Lockhart                 lockhart@alumni.caltech.edu
South Pasadena, California
Peter Eisentraut wrote:
> On 1999-12-11, Jan Wieck mentioned:
>
> > I added a check to the rules regression test after that, to
> > ensure it never happens again. Unfortunately, Peter's
> > insistence on using the old psql for regression prevented it
> > from showing up.
>
> To be completely honest, I was just waiting to see what this was
> good for. As you have seen (or not), it was more or less
> disabled but still there.

Maybe it sounded like it, but I really did not want to criticize
your work. It was a great job and IMHO a big leap forward in the
user friendliness of psql. I expect all this tab-completion and
help stuff to be highly appreciated and honored. Let me be the
first to explicitly say CONGRATS.

What I just wanted to point out is that such a little, subtle
change in psql's input preprocessing could distort an existing
feature. In this case, it's totally clear to me that it was only
disabled and still there. But I only stumbled over it because I
tried to create a multi-action rule by hand to evaluate some
comment I was writing on a list. Without that, the proposed
procedure (as I outlined) to update the expected output would
have broken the "rules" regression test and stamped the broken
results into expected. So it probably wouldn't have been noticed
until after release.

And who can guarantee that this kind of flaw cannot happen
anywhere else? There are many, very old regression tests. Some
of them go back to the roots, Postgres 4.2, and I'm not sure
anyone has looked at the expected results lately to see if they
are really what SHOULD be expected. The tenk data, for example,
is something where even I don't know where it came from, and I
already joined the Postgres community with release 4.2 back in
1994.

All this, IMHO, isn't really your personal responsibility. The
interface of our interactive shell has needed the polishing that
has now happened for some time. Instead, I wanted the backend
developers to handle this major change in psql, which is a core
utility of the regression suite, not as laxly as past changes to
it might have been handled. That's all.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
> On 1999-12-10, Bruce Momjian mentioned:
>
> > I assume you mean:
> >
> > 	test=> select (;)
> > 	ERROR:  parser: parse error at or near ")"
>
> That was actually a different bug, which must have slipped in on
> the latest update. Please use the attached patch. This overlaps
> with the one sent in a few minutes ago, but I think you'll
> easily figure out what's going on. Just a few lines to delete.

I don't see any patch attached to this message.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
> And who can guarantee that this kind of flaw cannot happen
> anywhere else? There are many, very old regression tests. Some
> of them go back to the roots, Postgres 4.2, and I'm not sure
> anyone has looked at the expected results lately to see if they
> are really what SHOULD be expected. The tenk data, for example,
> is something where even I don't know where it came from, and I
> already joined the Postgres community with release 4.2 back in
> 1994.

Thomas is the regression man, and has checked the output to see
that it was expected in the past. I assume he will regenerate it
soon.

A good point is that he can use the old psql to see any
changes/breakage in the backend code, but can _not_ use the new
psql to check, because the output is different. That is a good
point, and I think the one Jan was making.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Bruce Momjian wrote:
> > And who can guarantee that this kind of flaw cannot happen
> > anywhere else? There are many, very old regression tests. Some
> > of them go back to the roots, Postgres 4.2, and I'm not sure
> > anyone has looked at the expected results lately to see if they
> > are really what SHOULD be expected. The tenk data, for example,
> > is something where even I don't know where it came from, and I
> > already joined the Postgres community with release 4.2 back in
> > 1994.
>
> Thomas is the regression man, and has checked the output to see
> that it was expected in the past. I assume he will regenerate it
> soon.

Oh yeah, I saw his response with great pleasure. I did not know
that there's really someone taking care of breakage->expected
glitches.

> A good point is that he can use the old psql to see any
> changes/breakage in the backend code, but can _not_ use the new
> psql to check, because the output is different. That is a good
> point, and I think the one Jan was making.

Yes. Verifying that the new expected output is correct needs one
or more eyes (and AFAIK Thomas has good ones - he's one of a
fistful who notice mistakes in my statements even if they are
between the lines :-)).

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
Posted this a few days ago on pgsql-general and deja, with no
response, so hoping hackers might help...

Anyone know what this error is or how to prevent it? Seems to
usually show up on large queries...

	"ExecInitIndexScan: both left and right op's are rel-vars"

I've seen it before, but can't recall a solution and couldn't
find one in the archives/deja...

Thanks in advance...

Ed

pgsql 6.5.2, redhat 6.0 (2.2.5-15smp).
Ed Loehr <ELOEHR@austin.rr.com> writes:
> Anyone know what this error is or how to prevent it? Seems to
> usually show up on large queries...
> "ExecInitIndexScan: both left and right op's are rel-vars"

Sounds like you've found a bug. How about a specific example of
a query that causes this?

			regards, tom lane
Tom Lane wrote:
> Ed Loehr <ELOEHR@austin.rr.com> writes:
> > Anyone know what this error is or how to prevent it? Seems to
> > usually show up on large queries...
> > "ExecInitIndexScan: both left and right op's are rel-vars"
>
> Sounds like you've found a bug. How about a specific example of
> a query that causes this?

Unfortunately, this is the simplest example I have to offer. The
following query succeeds numerous times before going into a
continuous failure mode due to the error above. Vacuuming the DB
fixes the problem temporarily "for a while".

	SELECT sum( cet.default_budget_per_unit * cahrn.hr_count )
	FROM contract_activity_hr_need cahrn,
	     contract_expense_type cet,
	     contract_activity_type_expense_type catet,
	     contract_activity_type cat,
	     activity pa
	WHERE   -- lame attempt at making this easy on the eye...
	        cet.contract_id = 1
	    AND catet.contract_id = 1
	    AND cahrn.contract_id = 1
	    AND pa.contract_id = 1
	    AND cat.contract_id = 1
	    AND cet.expense_unit_id = 6
	    AND pa.activity_state_id <> 5
	    AND pa.activity_state_id <> 4
	    AND (pa.billable = 0 OR cahrn.billable = 0)
	    AND catet.expense_type_id = cet.expense_type_id
	    AND catet.activity_type_id = cat.activity_type_id
	    AND cahrn.contract_activity_type_id = cat.id
	    AND pa.activity_type_id = cat.activity_type_id;

Without including the rather lengthy schema definition for the 5
tables involved, let me clarify the data types of the example by
saying that every single column in the query above is of type
INTEGER, except for cet.default_budget_per_unit in the SELECT
clause, which is of type FLOAT8. Note that all columns above
ending in 'XXX_id' are foreign keys referencing the 'id' column
of the 'XXX' table, which is declared as type SERIAL. Note also
that every table has a couple of book-keeping columns
('creation_time' and 'record_status'). For example,
cet.contract_id is an INTEGER value acting as a foreign key to
the 'contract' table:

	CREATE TABLE contract (
	    id            SERIAL,   -- pkey, ref'd as fkey 'contract_id'
	    ...
	    creation_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
	    record_status INTEGER NOT NULL DEFAULT 1
	);

	CREATE TABLE contract_expense_type (
	    id            SERIAL,
	    contract_id   INTEGER NOT NULL,   -- fkey to contract table
	    ...
	    creation_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
	    record_status INTEGER NOT NULL DEFAULT 1
	);

One might suspect the size of my tuples might be a factor. I
believe my **largest** rowsize in any table is 152 bytes, though
I'm not sure how VARCHARs are sized (all my varchar values are
considerably less than 256 bytes, and rarely are there more than
2 of these in a table).

I think the error comes from line 862 of
.../src/backend/executor/nodeIndexscan.c, though it's possible it
may have come at times from line 925 of the same file (a similar
error msg differing only by an apostrophe).

Other current configuration details:

Pgsql configured with: ./configure --prefix=/usr/local/pgsql --with-odbc
PG: PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66
OS: RH6.1 Linux XXX 2.2.12-20smp #1 SMP Mon Sep 27 10:34:45 EDT 1999 i686 unknown
HW: dual P3 600Mhz w/1Gb RAM and 3 UW 9Gb SCSI drives in software RAID.
SW: Apache 1.3.9 with mod_ssl 2.4.9, mod_perl 1.21, DBI 1.13, DBD/Pg 0.92

I've also seen this problem on RH6.0, Pg6.5.2, Linux 2.2.12-15,
512Mb RAM, dual 450Mhz P3, no RAID, mod_ssl 2.4.5...

Any help would be greatly appreciated. I can code around this,
of course, but it'd be nice...

Cheers,
Ed Loehr
Ed Loehr <ELOEHR@austin.rr.com> writes:
>> Sounds like you've found a bug. How about a specific example of
>> a query that causes this?

> Unfortunately, this is the simplest example I have to offer. The
> following query succeeds numerous times before going into a
> continuous failure mode due to the error above. Vacuuming the DB
> fixes the problem temporarily "for a while".

Oh my, *that's* interesting. I have no idea what could be causing
that. The error message you're getting suggests that the planner
is generating an incorrect plan tree for the query, which I'd
believe soon enough, but I don't understand why the behavior
would change over time. A VACUUM could change the planner's
results by altering the stored statistics for the tables --- but
if you're not vacuuming, the plan should be the same every time.

Does the EXPLAIN output showing the query plan change from when
it's working to when it's not? What would really be helpful is to
see the EXPLAIN VERBOSE output in both states (preferably, the
pretty-printed version that gets put in the postmaster log file,
not the compressed version that gets sent to the client).

Also, what indexes do you have on these tables?

			regards, tom lane
Tom Lane wrote:
> Ed Loehr <ELOEHR@austin.rr.com> writes:
> > ... query succeeds numerous times before going into a continuous
> > failure mode due to the error above. Vacuuming the DB fixes the
> > problem "for a while".
>
> Oh my, *that's* interesting. I have no idea what could be causing that.
> The error message you're getting suggests that the planner is generating
> an incorrect plan tree for the query, which I'd believe soon enough,
> but I don't understand why the behavior would change over time.
> A VACUUM could change the planner's results by altering the stored
> statistics for the tables --- but if you're not vacuuming, the plan
> should be the same every time.

No intermediate vacuuming is occurring, AFAIK (though I'm trying
to figure out how to trigger vacuuming on this error).
Speculating: does the genetic algorithm twiddle any of the
planner's stats? I ask because I know some of my other queries
involve 6 or more tables, and I seem to recall that was a trigger
point for genetic algorithms to kick in with default settings. I
am running with defaults.

> Does the EXPLAIN output showing the query plan change from when it's
> working to when it's not? What would really be helpful is to see the
> EXPLAIN VERBOSE output in both states (preferably, the pretty-printed
> version that gets put in the postmaster log file, not the compressed
> version that gets sent to the client).

I will attempt to capture EXPLAIN output for the problem
situation.

> Also, what indexes do you have on these tables?

I have single-column indices on almost every foreign key field
(i.e., contract_id), some unique and some not, and on every
primary key field (i.e., 'id' in the 'contract' table). I have a
few multi-column indices. The only types I use in the entire
database are INTEGER, SERIAL, FLOAT8, DATETIME, and VARCHAR, and
I have indices on all of these types at one point or another.

I also have a few of what I'd call "overlapping" indices, i.e.,

	create table mytable (
	    id      serial,
	    dog_id  integer,
	    cat_id  integer,
	    ...
	);
	create index mytable_dog_idx on mytable(dog_id);
	create index mytable_cat_idx on mytable(cat_id);
	create index mytable_dogcat_idx on mytable(dog_id,cat_id);

...thinking these indices would allow the fastest lookups from 3
different angles (at the cost of slower inserts, of course). Not
sure my intuition here corresponds directly with the technical
reality...

Your question also reminds me of a scenario I'd wondered about:

	create table mytable (
	    id serial,
	    ...
	    primary key (id)
	);
	create unique index mytable_id on mytable(id);

The primary key designation implicitly creates a unique index
('mytable_id_pkey', is it?). What happens if I inadvertently
create another unique index on the same field (other than it
being worthless, redundant, and a needless performance hit)? I
believe I have this situation in some cases as a result of adding
the 'primary key' designation later, and hadn't gotten around to
cleaning it up. Does that smell like a rat?

Any other ideas?

Cheers,
Ed Loehr
Ed Loehr <ELOEHR@austin.rr.com> writes:
> Tom Lane wrote:
>> Oh my, *that's* interesting. I have no idea what could be causing that.

> Speculating: does the genetic algorithm twiddle any of the
> planner's stats?

No, or at least no more than regular planning does. Let's say
it's not *supposed* to. When dealing with a hard-to-characterize
bug, it's wise not to rule anything out...

> I ask because I know some of my other queries involve 6 or
> more tables, and I seem to recall that was a trigger point for
> genetic algorithms to kick in with default settings.

I think the default is 11 tables in 6.5.*. At least I get

	play=> show geqo;
	NOTICE:  GEQO is ON beginning with 11 relations
	SHOW VARIABLE

> create index mytable_dog_idx on mytable(dog_id);
> create index mytable_cat_idx on mytable(cat_id);
> create index mytable_dogcat_idx on mytable(dog_id,cat_id);

> ...thinking these indices would allow the fastest lookups from 3 different
> angles (at the cost of slower inserts, of course). Not sure my intuition
> here corresponds directly with the technical reality...

I doubt the 2-column index earns its keep given that you have
another index on the front column. A multicolumn index is a
pretty specialized beast, so I don't recommend creating one
unless you have a very specific heavily-used query in mind. (Of
course, if you're making a multicol UNIQUE index to enforce
uniqueness of a multicol primary key, that's a different matter
entirely. But if you're just fishing for performance
improvements, you're probably fishing in the wrong place.)

> Your question also reminds me of a scenario I'd wondered about:

> create table mytable (
>     id serial,
>     ...
>     primary key (id)
> );
> create unique index mytable_id on mytable(id);

> The primary key designation implicitly creates a unique index
> ('mytable_id_pkey', is it?).

Yes, I think so.

> What happens if I inadvertently create
> another unique index on the same field (other than it being
> worthless, redundant, and a needless performance hit)?

AFAIK it should work, but as you say it's a useless performance
hit. It's barely conceivable that there's a bug lurking in there,
since it's a very-seldom-exercised case. But having lots of
(nonidentical) indexes on one table is very well exercised, and
it's tough to see why it would matter if two of them happened to
have identical parameters.

			regards, tom lane
Tom Lane wrote:
> Ed Loehr <ELOEHR@austin.rr.com> writes:
> > create index mytable_dog_idx on mytable(dog_id);
> > create index mytable_cat_idx on mytable(cat_id);
> > create index mytable_dogcat_idx on mytable(dog_id,cat_id);
>
> > ...thinking these indices would allow the fastest lookups from 3 different
> > angles (at the cost of slower inserts, of course). Not sure my intuition
> > here corresponds directly with the technical reality...
>
> I doubt the 2-column index earns its keep given that you have another
> index on the front column. A multicolumn index is a pretty specialized
> beast, so I don't recommend creating one unless you have a very specific
> heavily-used query in mind. (Of course, if you're making a multicol
> UNIQUE index to enforce uniqueness of a multicol primary key, that's
> a different matter entirely. But if you're just fishing for performance
> improvements, you're probably fishing in the wrong place.)

Actually, I think that the first index (dog_id) is worthless in
this situation, as (dog_id,cat_id) can be used instead of it. I
vaguely remember that Hiroshi posted a patch some time ago that
fixed the planner to use more than only the first column of a
multi-column index if possible. The first column of a
multi-column index has always been used, AFAIK.

------------------------
Hannu
Tom Lane wrote:
> Does the EXPLAIN output showing the query plan change from when it's
> working to when it's not? What would really be helpful is to see the
> EXPLAIN VERBOSE output in both states (preferably, the pretty-printed
> version that gets put in the postmaster log file, not the compressed
> version that gets sent to the client).

Yes, the query plan changes between working state and non-working
state. Vacuum triggers the change. Other things may also, I'm not
sure yet. Here are the failing and successful query plans,
respectively...

QUERY PLAN: (failed due to ExecInitIndexScan left/right rel op error)

Aggregate  (cost=10.05 rows=1 width=48)
  ->  Nested Loop  (cost=10.05 rows=1 width=48)
        ->  Nested Loop  (cost=8.05 rows=1 width=36)
              ->  Nested Loop  (cost=6.05 rows=1 width=24)
                    ->  Nested Loop  (cost=4.05 rows=1 width=16)
                          ->  Index Scan using activity_cid on activity pa  (cost=2.05 rows=1 width=8)
                          ->  Index Scan using contract_activity_type_pkey on contract_activity_type cat  (cost=2.00 rows=2 width=8)
                    ->  Index Scan using contract_activity_type_exp_pkey on contract_activity_type_expense_ catet  (cost=2.00 rows=2 width=8)
              ->  Index Scan using contract_expense_type_pkey on contract_expense_type cet  (cost=2.00 rows=1 width=12)
        ->  Index Scan using contract_activity_hr_need_pkey on contract_activity_hr_need cahrn  (cost=2.00 rows=2 width=12)

VACUUM

QUERY PLAN: (successful query after vacuuming)

Aggregate  (cost=9.58 rows=1 width=48)
  ->  Nested Loop  (cost=9.58 rows=1 width=48)
        ->  Nested Loop  (cost=7.58 rows=1 width=36)
              ->  Nested Loop  (cost=5.53 rows=1 width=28)
                    ->  Nested Loop  (cost=3.53 rows=1 width=16)
                          ->  Seq Scan on contract_activity_type cat  (cost=1.53 rows=1 width=8)
                          ->  Index Scan using contract_activity_type_exp_pkey on contract_activity_type_expense_ catet  (cost=2.00 rows=2 width=8)
                    ->  Index Scan using contract_expense_type_pkey on contract_expense_type cet  (cost=2.00 rows=1 width=12)
              ->  Index Scan using activity_cid on activity pa  (cost=2.05 rows=1 width=8)
        ->  Index Scan using contract_activity_hr_need_pkey on contract_activity_hr_need cahrn  (cost=2.00 rows=2 width=12)

Other ideas?

Cheers,
Ed Loehr
Ed Loehr <ELOEHR@austin.rr.com> writes:
> Yes, the query plan changes between working state and non-working
> state. Vacuum triggers the change. Other things may also, I'm not
> sure yet. Here are the failing and successful query plans,
> respectively...

Mmmm ... I suspected it had something to do with indexscan on the
inner side of a nestloop (the optimizer has some strange hacks
for that). Looks like I was right. Could I trouble you for the
EXPLAIN VERBOSE output, rather than just EXPLAIN? (Preferably,
the pretty-printed form that gets dumped into the postmaster log,
not the unreadable form that psql shows.)

			regards, tom lane