Re: TB-sized databases - Mailing list pgsql-performance

From: Trevor Talbot
Subject: Re: TB-sized databases
Date: 2007-11-30
Msg-id: 90bce5730711300215x3ff6c68ekf1c55ce799e48578@mail.gmail.com
In response to: Re: TB-sized databases (Gregory Stark <stark@enterprisedb.com>)
Responses: Re: TB-sized databases (Csaba Nagy <nagy@ecircle-ag.com>)
List: pgsql-performance
On 11/29/07, Gregory Stark <stark@enterprisedb.com> wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:

> >> In fact an even more useful option would be to ask the planner to throw
> >> error if the expected cost exceeds a certain threshold...
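(No such setting exists; purely as an illustration, a hypothetical interface
for Csaba's idea might look like the following, with the GUC name and table
names invented for the example.)

    -- hypothetical GUC, not implemented: make the planner raise an error
    -- instead of executing any plan whose estimated total cost exceeds
    -- the configured limit
    SET statement_cost_limit = 100000;
    SELECT * FROM orders o CROSS JOIN order_lines l;
    -- would fail at plan time, before any I/O is done, rather than
    -- running for hours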

> > Tom's previous concerns were along the lines of "How would you know what
> > to set it to?", given that the planner costs are mostly arbitrary numbers.

> Hm, that's only kind of true.

> Obviously few people know how long such a page read takes, but surely you
> would just run a few sequential reads of large tables and set the limit to
> some multiple of whatever you find.
>
> This isn't going to be precise to the level of being able to avoid executing
> any query which will take over 1000ms. But it is going to be able to catch
> unconstrained cross joins or large sequential scans and the like.
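
For concreteness, one way to run the calibration Greg describes (table name
hypothetical): EXPLAIN ANALYZE reports the planner's estimated cost, in its
arbitrary units, next to the measured runtime of the same scan, so a rough
units-per-millisecond factor falls out directly.

    -- run the query and report both the estimated cost (arbitrary planner
    -- units) and the actual elapsed time in milliseconds
    EXPLAIN ANALYZE SELECT count(*) FROM big_table;
    -- estimated total cost / actual total time ~= cost units per ms;
    -- a threshold could then be set at some multiple of that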

Isn't that what statement_timeout is for? Since this is entirely based
on estimates, using arbitrary fuzzy numbers for this seems fine to me;
precision isn't really the goal.
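
A minimal sketch of that runtime approach, with an illustrative timeout and
hypothetical table names:

    -- cancel any statement that actually runs longer than 1000 ms,
    -- regardless of what the planner estimated
    SET statement_timeout = 1000;
    SELECT * FROM big_table CROSS JOIN other_table;  -- cancelled after 1s
    RESET statement_timeout;

The difference in kind: statement_timeout cuts a query off only after the
time has already been spent, while a plan-cost threshold would reject it
before execution starts.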
