On 7/12/05, Yves Vindevogel <yves.vindevogel@implements.be> wrote:
> Hi,
>
> We have a couple of databases that are identical (one for each customer).
> They are all relatively small, ranging from 100k records to 1m records.
> There's only one main table with some smaller tables, a lot of indexes
> and some functions.
>
> I would like to make an estimation of the performance, the diskspace
> and other related things,
> when we have a database of, for instance, 10 million or 100 million
> records.
>
> Is there any math to be done on that ?
It's pretty easy to make a database run fast with only a few thousand
records, or even a million; however, things start to slow down
non-linearly once the database grows too big to fit in RAM.
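To the "is there any math" question, a rough back-of-envelope sketch is about
the best you can do without measuring. The numbers below (current row count,
current on-disk size, available RAM) are made-up placeholders, not anything
from your setup; the point is that table and index space grow roughly linearly
with row count, index depth grows roughly logarithmically, and the big cliff
comes when the working set no longer fits in RAM.

    # Back-of-envelope extrapolation sketch; all inputs are hypothetical,
    # plug in sizes measured on your own system.
    import math

    rows_now = 1_000_000           # current row count (hypothetical)
    disk_bytes_now = 350 * 2**20   # current table + index size (hypothetical)
    ram_bytes = 2 * 2**30          # RAM available for caching (hypothetical)

    bytes_per_row = disk_bytes_now / rows_now

    for target_rows in (10_000_000, 100_000_000):
        # Heap and B-tree index space grow roughly linearly with rows.
        est_bytes = target_rows * bytes_per_row
        # Ratio of estimated B-tree depths: extra page reads per index
        # lookup grow only logarithmically with row count.
        depth_factor = math.log(target_rows) / math.log(rows_now)
        fits = est_bytes <= ram_bytes
        print(f"{target_rows:>11,} rows: ~{est_bytes / 2**30:.1f} GiB, "
              f"index-depth factor ~{depth_factor:.2f}, "
              f"{'fits in RAM' if fits else 'exceeds RAM -- expect the non-linear slowdown'}")

Treat the output as an order-of-magnitude estimate only; real growth also
depends on dead tuples, fill factors, and how your indexes are defined.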
I'm not a guru, but my attempts to do this have not been very accurate.
Maybe (just maybe) you could get an idea by disabling the OS cache on
the file system(s) holding the database and then somehow fragmenting
the drive severely (maybe by putting each table in its own disk
partition?!?) and measuring performance.
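If defeating the OS cache is impractical, a cruder substitute (not the same
thing as the suggestion above) is to compare the first, cold-ish run of a
query against the warm repeats. The run_query() below is a hypothetical
placeholder; swap in a real call through your database driver.

    # Generic cold-vs-warm timing harness; run_query() is a stand-in.
    import statistics
    import time

    def run_query():
        # Hypothetical placeholder; replace with a real query + fetch.
        time.sleep(0.02)

    def time_query(runs=10):
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            run_query()
            timings.append(time.perf_counter() - start)
        # The first run is closest to a cold-cache number, but only if
        # the OS and database caches really were cold when you started.
        return timings[0], statistics.median(timings[1:])

    cold, warm = time_query()
    print(f"first run: {cold * 1000:.1f} ms, "
          f"median warm run: {warm * 1000:.1f} ms")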
On the positive side, there are a lot of wise people on this list with
plenty of experience optimizing slow queries on big databases. So
queries that run in 20 ms now but slow down to 7 seconds when your
tables grow will likely benefit from optimization.
--
Matthew Nuzum
www.bearfruit.org