Thread: Projecting currentdb to more users
Hi,

We have a couple of databases that are identical (one for each customer). They are all relatively small, ranging from 100k records to 1M records. There's only one main table with some smaller tables, a lot of indexes, and some functions.

I would like to make an estimate of the performance, the disk space, and other related things when we have databases of, for instance, 10 million records or 100 million records.

Is there any math to be done on that?

Kind regards,
Yves Vindevogel
Implements

Mail: yves.vindevogel@implements.be - Mobile: +32 (478) 80 82 91
Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76
Web: http://www.implements.be

"First they ignore you. Then they laugh at you. Then they fight you. Then you win." (Mahatma Gandhi)
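[As a starting point for the disk-space half of the question, here is a minimal back-of-envelope sketch. It assumes, hypothetically, that heap size grows linearly with row count and that B-tree index depth grows logarithmically in the number of keys; the 1M-row / 500 MB baseline and the 200-keys-per-page figure are illustrative guesses, not measurements from any real database.]

```python
# Back-of-envelope projection: linear heap growth, logarithmic index depth.
# All concrete numbers below are illustrative assumptions, not measurements.
import math

def project_size(current_rows, current_bytes, target_rows):
    """Linearly extrapolate on-disk size from a measured baseline."""
    return current_bytes * target_rows / current_rows

def btree_height(rows, keys_per_page=200):
    """Approximate B-tree depth: one extra level per factor of keys_per_page."""
    return max(1, math.ceil(math.log(rows, keys_per_page)))

# Hypothetical baseline: a 1M-row database measured at 500 MB on disk.
baseline_rows, baseline_bytes = 1_000_000, 500 * 1024**2
for target in (10_000_000, 100_000_000):
    gb = project_size(baseline_rows, baseline_bytes, target) / 1024**3
    print(f"{target:>11,} rows -> ~{gb:.1f} GB, index depth ~{btree_height(target)}")
```

[The point of the sketch is that raw space projects roughly linearly, while index lookups only add a page access per order-of-magnitude-ish growth; what does not project linearly is performance once the working set no longer fits in RAM, as noted in the reply below.]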
On 7/12/05, Yves Vindevogel <yves.vindevogel@implements.be> wrote:
> Hi,
>
> We have a couple of databases that are identical (one for each customer).
> They are all relatively small, ranging from 100k records to 1M records.
> There's only one main table with some smaller tables, a lot of indexes
> and some functions.
>
> I would like to make an estimate of the performance, the disk space
> and other related things when we have databases of, for instance,
> 10 million records or 100 million records.
>
> Is there any math to be done on that?

It's pretty easy to make a database run fast with only a few thousand records, or even a million records; however, things start to slow down non-linearly when the database grows too big to fit in RAM. I'm not a guru, and my attempts to do this have not been very accurate. Maybe (just maybe) you could get an idea by disabling the OS cache on the file system(s) holding the database and then somehow fragmenting the drive severely (maybe by putting each table in its own disk partition?!?) and measuring performance.

On the positive side, there are a lot of wise people on this list who have plenty of experience optimizing slow queries on big databases. So queries that run in 20 ms now but slow down to 7 seconds when your tables grow will likely benefit from optimizing.

--
Matthew Nuzum
www.bearfruit.org
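[The non-linear slowdown described above can be illustrated with a toy model: average access cost stays nearly flat while the database fits in RAM, then degrades sharply once it doesn't. The latencies and the uniform-random-access assumption here are illustrative guesses, not benchmarks.]

```python
# Toy model of cache behavior: expected cost of one random page access,
# assuming uniform access and illustrative RAM/disk latencies.
def avg_access_ms(db_gb, ram_gb, ram_ms=0.0001, disk_ms=8.0):
    """Blend cached and uncached latency by the fraction of pages in RAM."""
    hit_rate = min(1.0, ram_gb / db_gb)
    return hit_rate * ram_ms + (1 - hit_rate) * disk_ms

# Hypothetical machine with 8 GB of RAM:
for size in (1, 4, 8, 16, 64):
    print(f"{size:>3} GB database -> {avg_access_ms(size, 8):.4f} ms/access")
```

[A 16 GB database on an 8 GB machine is already thousands of times slower per random access than one that fits in memory, which is why extrapolating from a small, fully cached database tends to be so inaccurate.]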
From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues. "...125. Intel has designed its compiler purposely to degrade performance when a program is run on an AMD platform. To achieve this, Intel designed the compiler to compile code along several alternate code paths. Some paths are executed when the program runs on an Intel platform and others are executed when the program is operated on a computer with an AMD microprocessor. (The choice of code path is determined when the program is started, using a feature known as "CPUID" which identifies the computer's microprocessor.) By design, the code paths were not created equally. If the program detects a "Genuine Intel" microprocessor, it executes a fully optimized code path and operates with the maximum efficiency. However, if the program detects an "Authentic AMD" microprocessor, it executes a different code path that will degrade the program's performance or cause it to crash..."
2005/7/12, Mohan, Ross <RMohan@arbinet.com>:
> From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues.

Postgres is compiled with the GNU compiler, isn't it? I don't know how much Postgres can benefit from an optimized Intel compiler.

--
Jean-Max Reymond
CKR Solutions Open Source
Nice France
http://www.ckr-solutions.com