Re: Slow UPADTE, compared to INSERT - Mailing list pgsql-performance
From | Mike C. Fletcher
---|---
Subject | Re: Slow UPADTE, compared to INSERT
Date |
Msg-id | 3FD0BAE9.7050801@rogers.com
In response to | Re: Slow UPADTE, compared to INSERT (Ivar Zarans <iff@alcaron.ee>)
List | pgsql-performance
I just spent two days tracking this error down in my own code, actually. What I wound up doing is having the two places where I generate queries (everything in my system goes through those two points, as I'm using a middleware layer) check values used as identifying fields for the presence of a bigint, and if one exists, replace it with a wrapper that produces the coerced-string representation:

    class Wrap:
        def __init__( self, value ):
            self.value = value
        def __str__( self ):
            return "'%s'::bigint" % (self.value,)
        __repr__ = __str__

    value = Wrap(value)

I'm just doing that for the indexing/identifying values at the moment. pyPgSQL falls back to using the object's simple repr (rather than raising an error, as it would if you were using a formatted string), but otherwise treats it as a regular value for quoting and the like, so no other modifications to the code were required.

By no means an elegant fix, but since your post (well, the resulting thread) managed to solve my problem, I figured I should at least tell everyone thanks and how I worked around it. You wouldn't want this kind of hack down at the pyPgSQL level, I would think, as it's DB-version specific. I suppose you could alter the __repr__ of the PgInt8 class/type to always use the string or coerced form, but that seems wrong to me. I'm actually hesitant to include it in our own middleware layer, but oh well, it does seem to be necessary for even somewhat reasonable performance.

BTW, my case was a largish (88,000-record) table with a non-unique bigint key: EXPLAIN on the UPDATE shows a sequential scan, while with the 'int'::bigint form it goes to an index scan. Using pyPgSQL as the interface to 7.3.4 and 7.3.3.

Enjoy,
Mike

Ivar Zarans wrote:

> On Fri, Dec 05, 2003 at 10:08:20AM +0000, Richard Huxton wrote:
> ...
> I am using PyPgSQL for PostgreSQL access and making update queries as this:
> ...
> It seems that PyPgSQL query quoting is not aware of this performance
> problem (to which Christopher referred), and the final query sent to the
> server is correct SQL, but not correct considering PostgreSQL's bugs.
> ...
> Finally - what would be the correct solution to this problem? Upgrading to
> 7.5 CVS is not an option :) One possibility is not to use PyPgSQL
> variable substitution and create every query "by hand" - not a very nice
> solution, since variable substitution and quoting is quite convenient.
>
> Second (and better) possibility is to ask the PyPgSQL developers to take
> care of PostgreSQL's oddities.
>
> Any other suggestions?

_______________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://members.rogers.com/mcfletch/
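For readers following along, here is a minimal sketch of how the wrapper changes the SQL that reaches the server. Only the Wrap class comes from the message above; the build_update() helper, the "orders" table, and the "customer_id" column are hypothetical illustrations, not part of pyPgSQL or the original middleware.

    # Only Wrap is taken from the message above; build_update(), "orders",
    # and "customer_id" are made-up examples, not pyPgSQL API.
    class Wrap:
        """Render an integer as an explicitly casted bigint literal."""
        def __init__(self, value):
            self.value = value
        def __str__(self):
            return "'%s'::bigint" % (self.value,)
        __repr__ = __str__

    def build_update(table, assignments, key_column, key_value):
        # Hypothetical middleware helper: the identifying value is rendered
        # with repr(), mirroring pyPgSQL's fallback for unknown objects.
        return "UPDATE %s SET %s WHERE %s = %s" % (
            table, assignments, key_column, repr(key_value))

    # Plain integer literal: on 7.3/7.4 the planner does not match it
    # against a bigint index, so the UPDATE does a sequential scan.
    print(build_update("orders", "status = 'done'", "customer_id", 12345))
    # UPDATE orders SET status = 'done' WHERE customer_id = 12345

    # Wrapped value: the literal carries an explicit ::bigint cast, and
    # EXPLAIN shows an index scan instead.
    print(build_update("orders", "status = 'done'", "customer_id", Wrap(12345)))
    # UPDATE orders SET status = 'done' WHERE customer_id = '12345'::bigint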