[PERFORM] Dynamic performance issues - Mailing list pgsql-performance

From Arne Roland
Subject [PERFORM] Dynamic performance issues
Msg-id d93e3d0527514efc94975dceaf71442e@index.de
List pgsql-performance

Hello,


I was encouraged to write up a few simplified, reproducible performance cases that occur (in similar form) in our production environment. Attached you find a generic script that sets up the generic tables used for the three different cases.

For all of them I have included the times needed on my machine; they differ only by a small margin on rerun. Every block used was hit in the ring buffer, so there was no need to turn to the kernel buffer in these cases. While that isn’t exactly what happens in the production cases, I doubt it impacts the performance difference much.

Even though I originally developed these examples with different planner settings, the default ones seem to work quite reasonably.


One thing to notice is that there is a lot of dynamic query building going on in our environment, which might help explain why we care about the second and third cases.


The first case is the easiest to work around, but I think it’s a very common one.

While it’s true that this is a special case of the (probably not so easy to solve) cross-table correlation issue, it is somewhat special because one of the tables is accessed via a single unique key. I thought I’d bring it up since I maintain (meta-)functions that build functions which select these values and reinsert them into the query to expose them to the planner. While that solution works fine, this is a very common flavor of the cross-table correlation issue; another big chunk are the cases where rows are referenced by a foreign key. I’m not sure whether it’s a good idea to acquire a lock at planning time or rather to recheck the exact values at execution time, but even just using the exact values of that single row at planning time (similar to a stable function) to get a better estimate seems worth it to me.
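
To make the pattern concrete, here is a minimal sketch (the table and column names are mine, not the ones from the attached script):

    -- s is accessed via its unique key, so exactly one row can match.
    -- In principle the planner could look at that row's values, but it
    -- only sees the join and falls back to generic estimates:
    SELECT b.*
      FROM b
      JOIN s ON s.id = 42
     WHERE b.category = s.category;

    -- The generated workaround functions effectively select s.category
    -- beforehand and reinsert it as a literal, so the planner can use
    -- the statistics of b.category directly, e.g.:
    SELECT b.* FROM b WHERE b.category = 7;  -- 7 being s.category of row 42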


The second case is something that happens a lot in our environment (mainly in dynamically composed queries). I wouldn’t be so pedantic if 30 were the largest occurring list length, but we have far bigger lists, and the issue grows with the list length.

Besides the constraint exclusion with the VALUES approach I showed, there is the much bigger issue that it artificially introduces cross-table correlation, leading to absurd estimates (just try inserting 100000 into the VALUES list to see what I mean) that damage the plan as soon as one joins more tables to it. I didn’t choose that case, even though I think it’s much more frequent, simply because joining more relations makes it harder to grasp.
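
For reference, the two formulations look roughly like this (made-up names, assuming something like create table t (id int); this is not the attached script):

    -- IN list: every element is estimated against the column statistics:
    SELECT * FROM t WHERE t.id IN (1, 2, 3);

    -- VALUES rewrite: planned as a join against a small relation, which
    -- is exactly where the artificial cross-table correlation comes from:
    SELECT t.*
      FROM t
      JOIN (VALUES (1), (2), (3)) AS v(id) ON t.id = v.id;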

At the moment I try to guess the selectivity of such a clause in application code and choose an IN or VALUES clause accordingly. As one would expect, that is not only annoying to maintain, but in a world of dynamic queries it also leads to quite bizarre performance behavior in some cases.

Using a hash table to evaluate the predicate (if the list contains more than x elements) would sound reasonable to me. One might have to consider work_mem, even though just the thought of a query string that rivals the size of work_mem sounds stupid. What do you think?
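
As a rough approximation of what I have in mind, one can already get hashed evaluation today by turning the list into a relation (made-up names again, same table t as above):

    -- = ANY walks the array linearly for every row of t:
    SELECT * FROM t WHERE t.id = ANY (ARRAY[1, 2, 3]);

    -- A join against unnest() can be executed as a hash join once the
    -- list is large enough for the planner to consider that worthwhile:
    SELECT t.*
      FROM t
      JOIN unnest(ARRAY[1, 2, 3]) AS v(id) ON t.id = v.id;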


The third case is something a little more sophisticated. Sadly it isn’t tied just to the obvious case where one can simply create an index (create unique index on u (expensive_func(id)); would solve this case), but appears mainly when there are more than three tables with a lot of different predicates of varying expense and selectivity. Even though it isn’t incredibly frequent, maintaining the corresponding application code (see case two) is still quite painful.
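
To illustrate the shape of the problem (again with made-up names; only u and expensive_func appear above, and cheap_and_selective stands for a hypothetical cheap, selective predicate):

    -- The obvious case, solved by the expression index:
    -- create unique index on u (expensive_func(id));
    SELECT * FROM u WHERE expensive_func(id) = 0;

    -- The painful variant: the planner has to decide which predicates
    -- to evaluate first and at which join level:
    SELECT c.*
      FROM a
      JOIN b ON b.a_id = a.id
      JOIN c ON c.b_id = b.id
     WHERE cheap_and_selective(a.x)
       AND expensive_func(c.y) = 0;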


Best regards

Arne

