Thread: Re: [PERFORMANCE] Stored Procedures

Re: [PERFORMANCE] Stored Procedures

From: Rikard Pavelic
>>Hi,
>>
>> Will simple queries such as "SELECT * FROM blah_table WHERE tag='x';" work any
>> faster by putting them into a stored procedure?

>
>IMHO no, why do you think so? You can use PREPARE instead, if you have many
>selects like this.


I thought that creating a stored procedure in the database meant
storing its execution plan (well, actually storing it like a
compiled object). Well, that's what I learned a couple of years
ago in college ;)

What are the advantages of parsing SP functions every time they are called?

My position is that preparing stored procedures for execution solves
more problems than it creates, the most important one being optimized
access to queries from multiple connections (which is one of the most
important reasons for using stored procedures in the first place).

Best regards,
    Rikard


Re: [PERFORMANCE] Stored Procedures

From: "Jim C. Nasby"
On Fri, Jan 20, 2006 at 07:50:23PM +0100, Rikard Pavelic wrote:
> >>Hi,
> >>
> >>Will simple queries such as "SELECT * FROM blah_table WHERE tag='x';" work any
> >>faster by putting them into a stored procedure?
>
> >
> >IMHO no, why do you think so? You can use PREPARE instead, if you have many
> >selects like this.
>
>
> I thought that creating a stored procedure in the database meant
> storing its execution plan (well, actually storing it like a
> compiled object). Well, that's what I learned a couple of years
> ago in college ;)

My college professor said it, it must be true! ;P

My understanding is that in plpgsql, 'bare' queries get prepared and act
like prepared statements. IE:

SELECT INTO variable
    field
    FROM table
    WHERE condition = true
;
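
For example, a function roughly like this (the function name and the id
column are made up for illustration; the table is from the original
question) should act like a prepared statement on every call after the
first one in a session:

CREATE FUNCTION get_id_by_tag(p_tag text) RETURNS integer AS $$
DECLARE
    result integer;
BEGIN
    -- plpgsql prepares this "bare" query the first time the function
    -- runs in a session and caches the plan for later calls
    SELECT INTO result id
        FROM blah_table
        WHERE tag = p_tag;
    RETURN result;
END;
$$ LANGUAGE plpgsql;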

> What are the advantages of parsing SP functions every time they are called?
>
> My position is that preparing stored procedures for execution solves
> more problems than it creates, the most important one being optimized
> access to queries from multiple connections (which is one of the most
> important reasons for using stored procedures in the first place).

Ok, so post some numbers then. It might be interesting to look at the
cost of preparing a statement, although AFAIK that does not store the
query plan anywhere.

In most databases, query planning seems to be a pretty expensive
operation. My experience is that that isn't the case with PostgreSQL.
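
For instance, a crude comparison in psql (with \timing turned on, using
the table from the original question) might look something like this:

-- prepared once; valid for the rest of this session
PREPARE get_by_tag (text) AS
    SELECT * FROM blah_table WHERE tag = $1;

-- reuses the prepared statement; the query text is not re-parsed
EXECUTE get_by_tag('x');

-- parsed and planned on every execution
SELECT * FROM blah_table WHERE tag = 'x';
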
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [PERFORMANCE] Stored Procedures

From: Rikard Pavelic
Jim C. Nasby wrote:
> My college professor said it, it must be true! ;P
>
>
The famous joke ;)
> My understanding is that in plpgsql, 'bare' queries get prepared and act
> like prepared statements. IE:
>
> SELECT INTO variable
>     field
>     FROM table
>     WHERE condition = true
> ;
>
>
Unfortunately I don't know enough about PostgreSQL, but from the
responses I've been reading I've come to the same conclusion.
> Ok, so post some numbers then. It might be interesting to look at the
> cost of preparing a statement, although AFAIK that does not store the
> query plan anywhere.
>
> In most databases, query planning seems to be a pretty expensive
> operation. My experience is that that isn't the case with PostgreSQL.
>

I wasn't thinking about storing query plans anywhere on disk, but
rather about keeping them in a memory pool.
It would be great if we had an option to use a prepared statement for a
stored procedure, so it would prepare itself the first time it's called
and remain prepared until server shutdown or memory pool overflow.

This would solve the problem that PREPARE is per session, so for a
prepared function to be optimal one must use the same connection.
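
Just to illustrate what I mean by "per session" (using the table from
the original question):

-- session 1
PREPARE get_by_tag (text) AS
    SELECT * FROM blah_table WHERE tag = $1;
EXECUTE get_by_tag('x');   -- works

-- session 2, a different connection
EXECUTE get_by_tag('x');   -- ERROR: prepared statement "get_by_tag" does not exist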

Re: [PERFORMANCE] Stored Procedures

From: "Jim C. Nasby"
On Fri, Jan 20, 2006 at 08:38:23PM +0100, Rikard Pavelic wrote:
> This would solve the problem that PREPARE is per session, so for a
> prepared function to be optimal one must use the same connection.

If you're dealing with something that's performance critical you're not
going to be constantly re-connecting anyway, so I don't see what the
issue is.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [PERFORMANCE] Stored Procedures

From: Rikard Pavelic
Jim C. Nasby wrote:
> If you're dealing with something that's performance critical you're not
> going to be constantly re-connecting anyway, so I don't see what the
> issue is.
>

I really missed your point.
In a multi-user environment, where each user has their own connection
for identification purposes, this seems like a reasonable optimization.

I know there is pgpool, but it doesn't run on Windows, and it's not the
best solution for every problem.


Re: [PERFORMANCE] Stored Procedures

From: Rikard Pavelic
Jim C. Nasby wrote:
> If you're dealing with something that's performance critical you're not
> going to be constantly re-connecting anyway, so I don't see what the
> issue is.
>

I didn't include the mailing list in my second reply :( so here it is again.
Someone may find this interesting...

http://archives.postgresql.org/pgsql-general/2004-04/msg00084.php

From Tom Lane:
"EXECUTE means something different in plpgsql than it does in plain SQL,
and you do not need PREPARE at all in plpgsql.  plpgsql's automatic
caching of plans gives you the effect of PREPARE on every statement
without your having to ask for it."
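
In other words, as far as I understand it (a rough sketch, names made up
for illustration):

CREATE FUNCTION count_by_tag(p_tag text) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- a plain query: plpgsql prepares it automatically the first time
    -- this function runs in a session and caches the plan
    SELECT INTO result count(*) FROM blah_table WHERE tag = p_tag;

    -- EXECUTE in plpgsql runs dynamic SQL: the string is parsed and
    -- planned on every call, so no plan is cached for it
    EXECUTE 'SELECT count(*) FROM blah_table WHERE tag = '
            || quote_literal(p_tag) INTO result;

    RETURN result;
END;
$$ LANGUAGE plpgsql;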


Re: [PERFORMANCE] Stored Procedures

From: Marcos
On Fri, 2006-01-20 at 15:34 -0600, Jim C. Nasby wrote:
> On Fri, Jan 20, 2006 at 08:38:23PM +0100, Rikard Pavelic wrote:
> > This would solve the problem that PREPARE is per session, so for a
> > prepared function to be optimal one must use the same connection.
>
> If you're dealing with something that's performance critical you're not
> going to be constantly re-connecting anyway, so I don't see what the
> issue is.

That was my doubt: perhaps this is true for desktop applications, but
in web applications it is not, since re-connecting happens constantly :(.

So PREPARE doesn't offer much of an advantage, because its lifetime is
per session.

Marcos.


Re: [PERFORMANCE] Stored Procedures

From: Markus Schaber
Hi, Marcos,

Marcos wrote:

> That was my doubt: perhaps this is true for desktop applications, but
> in web applications it is not, since re-connecting happens constantly :(.

If this is true, then you have a much bigger performance problem than
query plan preparation.

You really should consider using a connection pool (most web application
servers provide pooling facilities) or some other means of keeping the
connection alive across several HTTP requests.

Worried,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

Re: [PERFORMANCE] Stored Procedures

From: Marcos
Hi Markus

> You really should consider using a connection pool (most web application
> servers provide pooling facilities) or some other means of keeping the
> connection alive across several HTTP requests.

Yes, I'm looking for a connection pool. I found pgpool, but I don't yet
understand how it works; I'm going to read more about it.

Thanks

Marcos


Re: [PERFORMANCE] Stored Procedures

From: "Dave Dutcher"
I don't think pgpool is what you need.  If I understand pgpool
correctly, pgpool lets you pool multiple postgres servers together.  You
are just looking for database connection pooling.

A simple connection pool is basically just an application wide list of
connections.  When a client needs a connection, you just request a
connection from the pool.  If there is an unused connection in the pool,
it is given to the client and removed from the unused pool.  If there is
no unused connection in the pool, then a new connection is opened.  When
the client is done with it, the client releases it back into the pool.

You can google for 'database connection pool' and you should find a
bunch of stuff.  It's probably a good idea to find one already written.
If you write your own, you have to make sure it can deal with things like
dead connections, synchronization, and a maximum number of open
connections.

Dave



Re: [PERFORMANCE] Stored Procedures

From: Frank Wiles
On Mon, 23 Jan 2006 10:23:17 -0600
"Dave Dutcher" <dave@tridecap.com> wrote:

> I don't think pgpool is what you need.  If I understand pgpool
> correctly, pgpool lets you pool multiple postgres servers together.
> You are just looking for database connection pooling.

  While pgpool can let you pool together multiple backend servers,
  it also functions well as just a connection pooling device with
  only one backend.

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------