Hi Brice,
I think you are right, the problem is just in the PHP prepare/bindValue,
so it should be avoided...
I guess the reason you like to use bindValue is safety against SQL
injection, which should be handled in some way depending on the concrete
case...
But as far as I am aware, a string as an input parameter of a function is
safe enough...
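
For illustration, a minimal sketch of the difference (the table and
function names here are made up):

CREATE OR REPLACE FUNCTION slow_norm(val text) RETURNS text AS $$
BEGIN
    PERFORM pg_sleep(0.01);  -- stand-in for expensive work
    RETURN lower(val);
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- With a literal argument the planner can fold the call to a constant:
--   SELECT * FROM items WHERE name_norm = slow_norm('Foo');
-- With a bound parameter the value is unknown at planning time, so the
-- call can end up being evaluated for every row the filter examines:
--   PREPARE q(text) AS SELECT * FROM items WHERE name_norm = slow_norm($1);
--   EXECUTE q('Foo');
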
Kind Regards,
Misa
Sent from my Windows Phone
From: Brice Maron
Sent: 20/01/2012 19:28
To: David Johnston
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Immutable function with bind value
On Fri, Jan 20, 2012 at 16:00, David Johnston <polobo@yahoo.com> wrote:
> On Jan 20, 2012, at 6:15, Brice Maron <bmaron@gmail.com> wrote:
>
>> Hi,
>>
>> I've discovered something kind of weird while developing my app...
>> I was trying to fetch some records from a table using an immutable function.
>> In my interface it was really slow, while in psql it was really fast...
>>
>> After some research I've found out that it was caused by the bind
>> parameter "forcing" the immutable function to execute each time my
>> query fetches a record.
>>
>> While I know the value can't be known at planning time, the
>> difference between the bound and unbound versions is quite enormous...
>>
>> I've isolated a test case here:
>>
>> https://gist.github.com/e93792540cb3a68054c9
>>
>> (it happens at least in pg 8.4 and 9.1.2)
>>
>>
>> What do you think about it... bug? feature? How can I avoid it?
>>
>>
>> Thanks
>>
>>
>
> While you prove the behavior exists, your test case is not a realistic
> example of why you would do such a thing.
>
> I would have to say that I'd expect your query to execute the function
> once and cache the result for the remainder of the statement. To that
> end, have you tried defining it as a STABLE function instead of
> immutable?
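
For reference, assuming the function's signature in the gist is
normalize(text), changing its volatility is a one-liner:

    ALTER FUNCTION normalize(text) STABLE;
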
>
> In the real use-case what is it you are trying to accomplish?
>
> You might try using a WITH clause to resolve your function call and
> then use the result in the main query.
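
A rough sketch of that idea, using the taxonomy / name_normalized /
normalize names that come up further down in the thread (the ? is the
bind placeholder, as in the example below):

    WITH n AS (
        SELECT normalize(?) || '%' AS pattern
    )
    SELECT t.*
    FROM taxonomy t, n
    WHERE t.name_normalized LIKE n.pattern;

In 8.4/9.1 the CTE is materialized, so the function call there should
only run once.
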
>
> David J.
Hi,
I know my code is not a real-world thing, but I tried to isolate the
problem...
Here is another piece that looks more realistic and closer to the problem
I have:
https://gist.github.com/d83a9c5436d7cb8cebec
The taxonomy table has 300 000 records, and the difference between
name_normalized like normalize(?) || '%'
and
name_normalized like normalize('my Taxa') || '%'
is really huge!
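
If it helps, a quick way to compare the two plans side by side in psql
(assuming the schema from the gist; the prepared-statement name is
arbitrary):

    PREPARE taxa_search(text) AS
        SELECT * FROM taxonomy
        WHERE name_normalized LIKE normalize($1) || '%';

    EXPLAIN ANALYZE EXECUTE taxa_search('my Taxa');             -- bound parameter
    EXPLAIN ANALYZE
        SELECT * FROM taxonomy
        WHERE name_normalized LIKE normalize('my Taxa') || '%'; -- literal
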
Thanks for the help
Brice
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general