Re: TB-sized databases - Mailing list pgsql-performance

From Tom Lane
Subject Re: TB-sized databases
Date
Msg-id 13851.1196351131@sss.pgh.pa.us
In response to Re: TB-sized databases  (Gregory Stark <stark@enterprisedb.com>)
Responses Re: TB-sized databases
Re: TB-sized databases
List pgsql-performance
Gregory Stark <stark@enterprisedb.com> writes:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
>> Tom's previous concerns were along the lines of "How would you know
>> what to set it to?", given that the planner costs are mostly arbitrary numbers.

> Hm, that's only kind of true.

The units are not the problem.  The problem is that you are staking
non-failure of your application on the planner's estimates being
pretty well in line with reality.  Not merely in line enough that
it picks a reasonably cheap plan, but in line enough that if it
thinks plan A is 10x more expensive than plan B, then the actual
ratio is indeed somewhere near 10.
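The distinction can be sketched numerically. Picking the cheaper plan only needs the estimates to be *ordered* correctly; an abort-if-too-expensive cutoff needs their *magnitudes* to be calibrated against reality. A minimal illustration, with made-up cost numbers (not from any real PostgreSQL run, and no such cutoff setting actually exists):

```python
TOLERATE = 5_000.0  # the actual execution cost the user is willing to pay

# Planner estimates can be off by a large multiplicative factor in either
# direction; these figures are invented purely for illustration.
queries = {
    "underestimated": {"est": 1_000.0, "actual": 20_000.0},
    "overestimated":  {"est": 20_000.0, "actual": 1_000.0},
}

for name, q in queries.items():
    aborted = q["est"] > TOLERATE           # cutoff is applied to the estimate
    too_expensive = q["actual"] > TOLERATE  # what the user actually cared about
    print(f"{name}: aborted={aborted}, actually too expensive={too_expensive}")
# underestimated: not aborted, yet actually blows the budget
# overestimated:  aborted, yet would have run fine
```

Both failure modes come from the same place: the cutoff trusts the estimate's magnitude, which is exactly the quantity the planner does not guarantee.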

Given that this list spends all day every day discussing cases where the
planner is wrong, I'd have to think that that's a bet I wouldn't take.

You could probably avoid this risk by setting the cutoff at something
like 100 or 1000 times what you really want to tolerate, but how
useful is it then?
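The safety-factor workaround can be sketched the same way (again with invented numbers, and again assuming a hypothetical estimated-cost cutoff rather than any real setting): inflating the cutoff stops misestimates from killing acceptable queries, but it then only rejects queries whose estimates are already wildly beyond the budget.

```python
TOLERATE = 5_000.0        # actual cost the user wants to bound
SAFETY = 100              # hedge against up-to-100x misestimation
cutoff = SAFETY * TOLERATE  # 500,000 estimated-cost units

# A query estimated at 20x the real budget still sails through:
print(100_000.0 > cutoff)  # False: not aborted
# Only estimates more than 100x over budget are caught:
print(600_000.0 > cutoff)  # True: aborted
```

So the cutoff no longer enforces the tolerance the user asked for; it only catches the most extreme outliers, which is the "how useful is it then?" point above.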

            regards, tom lane
