It's my first time posting to the list, though it's helped me through numerous jams throughout the past year as I've been familiarizing myself with all of the wonderful postgresisms =)
Just yesterday, I made what I thought would be a very minor and routine modification to a stored function. The function aggregates over several tables' worth of network packet data and generates some very simple statistics on packet loss (packets sent vs. packets received). Initially, the function ran over an entire set of such data, but then I wanted to change it to aggregate only data within a specific time window, as needed.
So I added two timestamp parameters to the function (start time and stop time) and used them in an appropriate WHERE clause to limit the dataset being aggregated, and suddenly the function takes much longer to run. For example, running the original un-windowed function over ~2 hours' worth of packet data took a steady ~90ms. Running the new windowed version over the same data now takes ~15s. The result sets are identical.
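For reference, the change was along these lines. This is only a simplified sketch; the table names, column names, and join are made up for illustration, not my real schema:

```sql
-- Hypothetical simplified version of the windowed function.
-- packets_sent / packets_received and their columns are illustrative.
CREATE OR REPLACE FUNCTION packet_loss_stats(p_start timestamptz, p_stop timestamptz)
RETURNS TABLE(sender text, pkts_sent bigint, pkts_received bigint) AS $$
BEGIN
    RETURN QUERY
    SELECT s.sender, count(s.packet_id), count(r.packet_id)
      FROM packets_sent s
      LEFT JOIN packets_received r ON r.packet_id = s.packet_id
     WHERE s.sent_at >= p_start    -- the new window parameters, used here
       AND s.sent_at <  p_stop
     GROUP BY s.sender;
END;
$$ LANGUAGE plpgsql;
```

It's this parameterized version that takes ~15s, while the same SELECT with literal timestamps in place of p_start/p_stop runs in ~90ms.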
So, baffled, I decided to take the slow-running code out of the stored function and run it as a straight SQL query, replacing all parameter variables with static values. To my surprise, it only took ~90ms again. The result sets were all still identical.
Can anyone lend a guess as to what I'm running into here, or do I need to provide more specifics to recreate the issue? It's repeatable, but it's a fair bit of data for me to just post here as-is. I've already discovered a few creative workarounds (e.g. PL/pgSQL's RETURN QUERY EXECUTE ...) that make it run fast again, but at this point I'm really more interested in finding out what would make SQL code run so much slower as a stored function in the first place.
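For the curious, the workaround I mentioned looks roughly like this (again with illustrative names, inside the same function body). As I understand it, PL/pgSQL plans a dynamic EXECUTE query fresh on each call using the actual parameter values, which is presumably why it avoids whatever slow plan the plain RETURN QUERY version gets stuck with:

```sql
-- Hypothetical sketch of the RETURN QUERY EXECUTE workaround.
-- The timestamps are passed via USING as $1 and $2.
RETURN QUERY EXECUTE
    'SELECT s.sender, count(s.packet_id), count(r.packet_id)
       FROM packets_sent s
       LEFT JOIN packets_received r ON r.packet_id = s.packet_id
      WHERE s.sent_at >= $1 AND s.sent_at < $2
      GROUP BY s.sender'
USING p_start, p_stop;
```

With that one change, the function is back to ~90ms on the same data.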