Thread: key set delete query
I notice to my dismay that key set query optimization only works for
SELECT :( This could be a significant performance problem for my
application, which would like to do a key set based delete.

Any thoughts on the advisability of changing the criteria for
transformation of key set based queries?

I know that cnfify is being reworked for release 7; any comment on when
7 will be available? How stable is the latest beta? About 10 days ago I
tried building from the daily snapshot, but didn't get very far.

Thanks for any comments!

//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\
Michael McCarthy                    TCSI Corporation
michael@tcsi.com                    1080 Marina Village Parkway
(510) 749-8739                      Alameda, CA 94501
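For context, a "key set" query is one whose WHERE clause is an OR of AND-ed equality tests on key columns, the shape the KSQO rewrites for SELECT. A minimal sketch of both forms (table and column names are hypothetical, for illustration only):

```sql
-- The "key set" shape: an OR of AND-ed equality tests on key columns.
-- KSQO can transform the SELECT form into something index-friendly.
SELECT * FROM orders
 WHERE (customer_id = 1 AND item_id = 10)
    OR (customer_id = 2 AND item_id = 20)
    OR (customer_id = 3 AND item_id = 30);

-- The equivalent DELETE is left untransformed, which is the
-- performance problem described above.
DELETE FROM orders
 WHERE (customer_id = 1 AND item_id = 10)
    OR (customer_id = 2 AND item_id = 20)
    OR (customer_id = 3 AND item_id = 30);
```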
Michael McCarthy <michael@tcsi.com> writes:
> I notice to my dismay that key set query optimization only works for
> SELECT :( This could be a significant performance problem for my
> application, which would like to do a key set based delete.
> Any thoughts on the advisability of changing the criteria for
> transformation of key set based queries?

I have no intention of putting any effort into KSQO, other than
removing it as soon as the rest of the optimizer does a reasonable job
with OR queries ;-).  You could try removing the check for a SELECT and
see what happens, though.

> I know that cnfify is being reworked for release 7; any comment on when
> 7 will be available?

Current plan is to go to beta mode on 1 Feb.  Final release, hard to
say.  Marc will probably say "1 March for sure", but I lean more
towards the "when it's ready" approach...

> How stable is the latest beta? About 10 days ago I tried building from
> the daily snapshot, but didn't get very far.

We seem to be in the usual pre-beta flurry of committing half-tested
code :-(.  I think the underlying stability of the system has been
amazingly good throughout this development cycle, but there have been a
lot of minor build and regress test problems in the past couple of
weeks.  If you have the know-how to get past those, things pretty much
work.

			regards, tom lane
I have a table (invoices) that has several fields for various charges;
these fields are of type numeric(9,2).  I have a function and a trigger
to total these fields on every update or insert and place the value in
the total field of the respective invoice.

CREATE FUNCTION invoice_total_trigger() RETURNS OPAQUE AS '
    BEGIN
        NEW.total := NEW.fee_membership + NEW.fee_logins
                   + NEW.fee_convention + NEW.fee_prints_jobs
                   + NEW.fee_prints_apps + NEW.fee_hotlines
                   + NEW.fee_postage + NEW.fee_ups + NEW.fee_late
                   + NEW.fee_other1 + NEW.fee_other2 + NEW.fee_other3
                   + NEW.fee_pastdue - NEW.amount_paid;
        RETURN NEW;
    END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER invoice_total_trigger BEFORE INSERT OR UPDATE ON invoice
    FOR EACH ROW EXECUTE PROCEDURE invoice_total_trigger();

Now, this all worked fine with the fields being type float8, but I
didn't want to have to pad the numbers on display for formatting, so I
decided to use the numeric data type so that postgres would
automatically store numbers like 100 as 100.00.

Anyway, when I insert or update and the trigger calls the function, I
get this error:

Query Failed
ERROR:  overflow on numeric ABS(value) >= 10^2 for field with precision 0 scale 2

Several of the fields are over 100.00.  This isn't a problem if I
remove the trigger/function and insert values way over 100.00 into the
other data tables, so I guess the problem is within the function.  It's
simple addition and subtraction; is this a problem for the numeric
type?  If it's something else small and obvious, feel free to flame me
until I die from it :-)

Thanks!!!!

-Mitch
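One possible workaround sketch while the numeric problem stands (my untested assumption, not a confirmed fix, and it assumes ::float8 casts work inside plpgsql on this release): do the arithmetic in float8 and let the assignment back to the numeric(9,2) column coerce the result. This trades exact decimal arithmetic in the intermediate sum for sidestepping numeric-on-numeric addition, which may be acceptable for two-decimal currency fields:

```sql
-- Hypothetical workaround sketch: sum in float8, then assign back to
-- NEW.total (numeric(9,2)), letting the assignment do the coercion.
-- Column names follow the original function above.
CREATE FUNCTION invoice_total_trigger() RETURNS OPAQUE AS '
    DECLARE
        t float8;
    BEGIN
        t := NEW.fee_membership::float8 + NEW.fee_logins::float8
           + NEW.fee_convention::float8 + NEW.fee_prints_jobs::float8
           + NEW.fee_prints_apps::float8 + NEW.fee_hotlines::float8
           + NEW.fee_postage::float8 + NEW.fee_ups::float8
           + NEW.fee_late::float8 + NEW.fee_other1::float8
           + NEW.fee_other2::float8 + NEW.fee_other3::float8
           + NEW.fee_pastdue::float8 - NEW.amount_paid::float8;
        NEW.total := t;
        RETURN NEW;
    END;
' LANGUAGE 'plpgsql';
```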
"Mitch Vincent" <mitch@venux.net> writes:
> Query Failed
> ERROR: overflow on numeric ABS(value) >= 10^2 for field with precision 0
> scale 2

That's a known bug in the NUMERIC code --- cause not known yet, but
apparently it's somewhat platform-dependent (or maybe there's more than
one bug?).  I've been expecting Jan Wieck to produce a fix, since
NUMERIC is his turf, but he may be too busy with long-tuple support to
want to look at it.

One way or another it will get seen to before 7.0 beta, and hopefully
we can produce a back-patch for 6.5.* as well.

			regards, tom lane
Hmm, ok Tom, thanks a lot for the information.  I tried doing them in
decimal(9,2) too and, well, it added them as numeric anyway -- I guess
that's a dead end for a while?

Thanks again..

-Mitch

----- Original Message -----
From: Tom Lane <tgl@sss.pgh.pa.us>
To: Mitch Vincent <mitch@venux.net>
Cc: <pgsql-sql@postgreSQL.org>
Sent: Friday, January 14, 2000 4:27 PM
Subject: Re: [SQL] numeric question..

> "Mitch Vincent" <mitch@venux.net> writes:
> > Query Failed
> > ERROR: overflow on numeric ABS(value) >= 10^2 for field with precision 0
> > scale 2
>
> That's a known bug in the NUMERIC code --- cause not known yet, but
> apparently it's somewhat platform-dependent (or maybe there's more
> than one bug?).  I've been expecting Jan Wieck to produce a fix, since
> NUMERIC is his turf, but he may be too busy with long-tuple support to
> want to look at it.
>
> One way or another it will get seen to before 7.0 beta, and hopefully
> we can produce a back-patch for 6.5.* as well.
>
> 			regards, tom lane
"Mitch Vincent" <mitch@venux.net> writes:
> I tried doing them in decimal(9,2) too and well, it added them as numeric
> anyway -- I guess that's a dead end for a while?

DECIMAL is just an alias for NUMERIC, as far as I know...

			regards, tom lane
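The aliasing is easy to confirm from psql (an illustrative session; the exact output layout is from memory, not verified against this release):

```sql
-- DECIMAL is accepted in the declaration...
CREATE TABLE alias_check (d decimal(9,2));
-- ...but the column comes back as numeric:
--     \d alias_check
--      Attribute |     Type     | Modifier
--     -----------+--------------+----------
--      d         | numeric(9,2) |
```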
On 2000-01-14, Tom Lane mentioned:

> "Mitch Vincent" <mitch@venux.net> writes:
> > I tried doing them in decimal(9,2) too and well, it added them as numeric
> > anyway -- I guess that's a dead end for a while?
>
> DECIMAL is just an alias for NUMERIC, as far as I know...

The SQL standard says:

    NUMERIC specifies the data type exact numeric, with the decimal
    precision and scale specified by the <precision> and <scale>.

    DECIMAL specifies the data type exact numeric, with the decimal
    scale specified by the <scale> and the implementation-defined
    decimal precision equal to or greater than the value of the
    specified <precision>.

Up to this day I'm wondering what exactly this means ...

-- 
Peter Eisentraut                    Sernanders väg 10:115
peter_e@gmx.net                     75262 Uppsala
http://yi.org/peter-e/              Sweden
Peter Eisentraut <peter_e@gmx.net> writes:
> NUMERIC specifies the data type exact numeric, with the decimal precision
> and scale specified by the <precision> and <scale>.

> DECIMAL specifies the data type exact numeric, with the decimal scale
> specified by the <scale> and the implementation-defined decimal precision
> equal to or greater than the value of the specified <precision>.

> Up to this day I'm wondering what exactly this means ...

Yeah, I read that too.  I think what it means is that implementors are
allowed to increase the precision (field width) of a DECIMAL field to a
larger value if that's more efficient in their implementation, whereas
NUMERIC must have exactly the specified precision.

For example, if we were to reimplement NUMERIC using radix-10000
arithmetic (four decimal digits per 16-bit integer), then the most
efficient field widths would be multiples of 4.  According to the spec,
we'd be allowed to increase the field width of a DECIMAL field to make
it a multiple of 4, but not the width of a NUMERIC field.

Right now, there is no such implementation advantage in Postgres, so
the two field types are exactly equivalent in our code.

			regards, tom lane
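A sketch of the practical consequence of that reading (hypothetical table; behaviour as Tom describes it for Postgres, where the two types are currently equivalent):

```sql
CREATE TABLE widths (
    n numeric(9,2),  -- spec: precision must be exactly 9
    d decimal(9,2)   -- spec: precision may be raised above 9 at the
                     -- implementation's discretion
);
-- In Postgres both columns behave identically: at most 7 digits
-- before the decimal point.
INSERT INTO widths VALUES (1234567.89, 1234567.89);   -- ok
INSERT INTO widths VALUES (12345678.9, 12345678.9);   -- overflow error
```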