Hey all,
I can't seem to find it in the docs, so I'll ask here. Is there a
way in postgres to impose a limit on query costs? I'd noticed some
horribly disfigured queries hitting my poor database from one of our
developers, and corrected him. But many of our users are not so
easily contacted. What I want to know is whether there's a configuration
parameter or source patch that would let me refuse to execute a query
whose pre-execution cost estimate is too high.
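For now the closest thing I've come up with is faking it on the client
side: run EXPLAIN first, scrape the planner's estimated total cost off
the top line of the plan, and refuse to run the query if it's over some
threshold. Here's a rough sketch of the idea in Python -- the DSN, the
table, the cutoff, and the run_if_cheap() helper are all my own
inventions, not anything built into postgres:

    import re
    import psycopg2

    COST_LIMIT = 1000000.0          # arbitrary cutoff; tune to taste

    def run_if_cheap(conn, query, params=None):
        # Ask the planner for its estimate before running anything.
        cur = conn.cursor()
        cur.execute("EXPLAIN " + query, params)
        # The top plan node reads something like:
        #   Nested Loop  (cost=0.00..22.50 rows=1000 width=4)
        top = cur.fetchone()[0]
        m = re.search(r"cost=[0-9.]+\.\.([0-9.]+)", top)
        if m and float(m.group(1)) > COST_LIMIT:
            raise RuntimeError("refused: estimated cost %s" % m.group(1))
        cur.execute(query, params)
        return cur

    conn = psycopg2.connect("dbname=mydb")      # made-up DSN
    cur = run_if_cheap(conn, "SELECT * FROM articles WHERE id = %s", (1,))

Of course that's a client-side band-aid: anyone with psql bypasses it
completely, which is exactly why I'd rather have the check enforced in
the backend.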
The reason I'm asking? Our developer accidentally created a cross
product that resulted in an explain-plan cost of approximately
1.3 trillion. Before we tracked the problem down, the application
issuing the query would hang indefinitely waiting for an answer. Even
after stopping the application, postgres would grind on. The load on the
machine hovered around 20, with five postgres backends sharing
the blame. I'd hate for any user to have the power to cripple
our database server by simply writing a bad query.
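The only blunt instrument I've found so far is a per-statement timeout,
which cuts a query off on wall-clock time rather than estimated cost,
so the bad query still starts; it just can't grind forever. If your
postgres is new enough to have statement_timeout, it's as simple as
(the 60-second figure is arbitrary, same made-up DSN as above):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    # Cancel any statement that runs longer than 60 seconds.
    # Value is in milliseconds; 0 disables the limit entirely.
    cur.execute("SET statement_timeout = 60000")

That protects the machine, but it's no substitute for refusing the
query up front based on the planner's estimate.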
Should I contact the developers list about this and request a patch?
Could someone direct me to the correct piece of source code to...
er... enhance?
Thanks in advance.
--
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas                               INN Database Programmer |
| Phone: (309) 743-0812                         Fax : (309) 743-0830    |
| Email: sthomas@townnews.com                   AIM : trifthen          |
| Web  : hamster.lee.net                                                |
|                                                                       |
| "Most of our lives are about proving something, either to             |
|  ourselves or to someone else."                                       |
|                                                          -- Anonymous |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+