On 5/16/14, 8:15 AM, Hans-Jürgen Schönig wrote:
> On 20 Feb 2014, at 01:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I am really dubious that letting DBAs manage buffers is going to be
>> an improvement over automatic management.
>
> the reason for a feature like that is to define an area of the application which needs more predictable runtime behaviour.
> not all tables are created equal in terms of importance.
>
> example: user authentication should always be supersonic fast while some reporting tables might gladly be forgotten even if they happened to be in use recently.
>
> i am not saying that we should have this feature.
> however, there are definitely use cases which would justify some more control here.
> otherwise people will fall back and use dirty tricks such as “SELECT count(*)” or so to emulate what we got here.
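(For anyone who hasn't seen it, the trick being alluded to is roughly this: a cron'd scan that keeps the table's pages resident in shared_buffers / the OS cache. Table name and schedule here are made up purely for illustration:

    -- hypothetical: run every minute from cron so the auth table stays warm
    SELECT count(*) FROM user_accounts;

It "works" only by fighting the cache rather than cooperating with it.)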
Which is really just an extension of a larger problem: many applications do not care one iota about ideal performance;
they care about *always* having some minimum level of performance. This frequently comes up with the issue of a query
plan that is marginally faster 99% of the time but sucks horribly for the remaining 1%. Frequently it's far better to
choose a less optimal plan that doesn't have a degenerate case.
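Today the only way to pin down the safer plan is with the blunt instruments we already have, something along these lines (table names and the disabled node are illustrative only, not a recommendation):

    BEGIN;
    -- hypothetical: refuse the nestloop plan that blows up when the row estimate is off
    SET LOCAL enable_nestloop = off;
    SELECT o.order_id, sum(l.amount)
      FROM orders o JOIN order_lines l USING (order_id)
     WHERE o.customer_id = 42
     GROUP BY o.order_id;
    COMMIT;

Whatever plan it falls back to may be a bit slower in the common case, but it doesn't fall off a cliff.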
--
Jim C. Nasby, Data Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net