Re: PREPARE / EXECUTE - Mailing list pgsql-hackers

From Hans-Jürgen Schönig
Subject Re: PREPARE / EXECUTE
Date
Msg-id 3DB6C871.4070808@cybertec.at
In response to PREPARE / EXECUTE  (Hans-Jürgen Schönig <postgres@cybertec.at>)
Responses Re: PREPARE / EXECUTE  (Bruno Wolff III <bruno@wolff.to>)
Re: PREPARE / EXECUTE  (Greg Copeland <greg@CopelandConsulting.Net>)
List pgsql-hackers
The idea is not to share it across multiple backends and keep it in sync with 
the tables in the database; that is not the point.
My point is that I have seen many performance-critical applications that send 
just a few complex queries to the server, and many of those queries have a 
very high planner-time/executor-time ratio (e.g. complex joins returning just 
a single value).
These applications stay the same for a long time (maybe even years), so there 
is no need to worry about new tables and so forth - maybe not even about new 
data. In these cases we could speed the database up significantly just by 
avoiding the planner:

An example:
I have a join across 10 tables plus 2 subselects across 4 tables.
On the machine I use for testing: planner 12 seconds, executor 1 second.
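
For reference, one rough way to see that split from psql (the tables and 
columns here are invented, and this assumes a psql with \timing; plain 
EXPLAIN parses, rewrites and plans the statement without executing it, while 
EXPLAIN ANALYZE also runs it):

    \timing

    -- Planner only: EXPLAIN builds the plan but never runs it, so the
    -- elapsed time shown by \timing is dominated by parse + plan.
    EXPLAIN
    SELECT c.name, sum(o.amount)
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
      JOIN items  i ON i.order_id = o.id
     GROUP BY c.name;

    -- Planner + executor: the reported total runtime is essentially the
    -- executor's time, so comparing it with the EXPLAIN timing above
    -- gives the split.
    EXPLAIN ANALYZE
    SELECT c.name, sum(o.amount)
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
      JOIN items  i ON i.order_id = o.id
     GROUP BY c.name;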

The application will stay the same forever.
It could be 10 times faster if there were a way to load the execution plan 
back into the backend.
Persistent connections are not an option (many clients on different machines, 
dynamic IPs, and so on).
And there is no way for an execution plan to become "invalid", because 
nothing in the database changes (no new tables, etc.).
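
For comparison, the PREPARE/EXECUTE being discussed covers only the 
single-session case; roughly (statement and parameter invented for 
illustration):

    -- Planned once; the plan lives only inside this backend/session.
    PREPARE big_join (integer) AS
        SELECT c.name, sum(o.amount)
          FROM customers c
          JOIN orders o ON o.customer_id = c.id
         WHERE c.id = $1
         GROUP BY c.name;

    EXECUTE big_join (42);   -- skips parse/plan, runs the stored plan
    EXECUTE big_join (43);   -- same plan, different parameter

A client that disconnects and reconnects gets a fresh backend, so it has to 
PREPARE (i.e. re-plan) all over again - with many short-lived connections the 
12-second planning cost is paid every time, which is the gap described above.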

Also: if people execute a prepared query and it fails, they will know why - 
queries fail when somebody drops a table, whether they are prepared or not.
A feature like the one we are discussing might be used rarely, but the people 
who do use it will benefit A LOT.

If we had a simple ASCII interface for loading such a plan back into the 
backend, people could save MANY cycles (a purely hypothetical sketch of what 
that could look like follows below).
When talking about tuning it is nice to gain 10% or even 20%, but in many 
cases that does not solve the problem - if a problem can be reduced by 90%, 
that is a REAL gain.
Gaining 10% can be done by tweaking the database a little; gaining 1000% 
cannot, so the feature might be worth thinking about even if only 20% of the 
users out there use it. 20% of all Postgres users is most likely more than 
15,000 people.
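
Purely to make the "simple ASCII interface" idea concrete - the EXPLAIN part 
below exists today, but the save/load syntax is invented and does not exist 
anywhere; it is only a sketch of the kind of interface meant above:

    -- Real today: EXPLAIN prints an ASCII description of the chosen plan.
    EXPLAIN SELECT c.name, sum(o.amount)
              FROM customers c
              JOIN orders o ON o.customer_id = c.id
             GROUP BY c.name;

    -- Hypothetical only (no such commands exist): store that plan as text
    -- and hand it back to a brand-new backend later, so the query can be
    -- executed without ever touching the planner.
    --   SAVE PLAN big_join TO 'big_join.plan';
    --   LOAD PLAN big_join FROM 'big_join.plan';
    --   EXECUTE big_join;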

Again: it is not supposed to be an everyday solution. It is a solution for 
applications that stay the same for a very long time.
   Hans


Tom Lane wrote:

>Hans-Jürgen Schönig <postgres@cybertec.at> writes:
>>I wonder if there is a way to store a parsed/rewritten/planned query in 
>>a table so that it can be loaded again.
>
>The original version of the PREPARE patch used a shared-across-backends
>cache for PREPAREd statements.  We rejected that for a number of
>reasons, one being the increased difficulty of keeping such a cache up
>to date.  I think actually storing the plans on disk would have all the
>same problems, but worse.
>
>            regards, tom lane


-- 
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at



