stored procedure performance - Mailing list pgsql-general

From: Rick Gigger
Subject: stored procedure performance
Msg-id: E8715A31-EC27-43DA-AE4C-B9C0E8C43290@alpinenetworking.com
Responses: Re: stored procedure performance
List: pgsql-general
I have a table that I populate with a stored procedure.  When the
stored procedure runs, it deletes the table's contents and rebuilds the
whole thing from scratch.  Initially it performs terribly, but after I
play around with it for a while (I will describe this in a moment) it
runs very, very fast.
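
For reference, the function is shaped roughly like this (a simplified
sketch; the table, column, and function names are placeholders, not my
real schema):

    CREATE OR REPLACE FUNCTION rebuild_summary() RETURNS void AS $$
    BEGIN
        -- throw away the old contents
        DELETE FROM summary_table;
        -- rebuild everything from scratch out of the source tables
        INSERT INTO summary_table (customer_id, order_count, total)
            SELECT customer_id, count(*), sum(amount)
            FROM orders
            GROUP BY customer_id;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;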

If I understand correctly, when the stored procedure is first created
it builds query plans for each of the queries inside it and caches
them.  I don't know in what order loading the dump does everything,
but I'm guessing that the statistics (or lack of statistics) present
at the time the stored proc is created are causing it to build bad
query plans.

So I have two questions:

1) How can I verify this?  Is there any way to run EXPLAIN on
something executing inside a stored procedure?  (I am using pl/pgsql;
my best attempt so far is sketched below.)
2) What can I do about it?  What actually fixes it is to run it for a
while, make sure there is a bunch of data in the table (even if it has
been deleted and isn't visible to anything), and run VACUUM ANALYZE.
Then I recreate the stored procedure, run it again, and it goes very,
very fast.  So my only option now is to create the database from the
dump and then fix the stored proc manually.  I guess I could run a
script to do this (a rough version is sketched below).  Is there a way
to delay the creation of the query plans until the database has some
actual data available, so that good plans can be created?
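
For question 1, the best I can do at the moment is paste the
statements out of the function and EXPLAIN them standalone, e.g.
(made-up names again):

    EXPLAIN ANALYZE
        SELECT customer_id, count(*), sum(amount)
        FROM orders
        GROUP BY customer_id;

But that plans the query fresh against the current statistics, which
as far as I can tell is exactly not what the plan cached inside the
function saw.  Is there a way to see the plan the function itself is
using?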
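
For question 2, the script I have in mind would just be run through
psql after restoring the dump, something like this (file and object
names are placeholders):

    -- populate the table once so there is real data behind the statistics
    SELECT rebuild_summary();

    -- refresh the statistics on the tables involved
    VACUUM ANALYZE summary_table;
    VACUUM ANALYZE orders;

    -- recreate the function so its queries get planned against the new statistics
    \i rebuild_summary.sql

    -- this is the run that finally goes very, very fast
    SELECT rebuild_summary();

That just automates what I am doing by hand now, though.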
