> I need to increase the overall performance by a factor of 10, while at
> the same time the DB size increases by a factor of 50. e.g. 3000
> inserts/updates or 25,000 selects per second, over a 25GB database with
> most used tables of 5,000,000 and 1,000,000 rows.
Ok.. I would be surprised if you needed much more actual CPU power. I
suspect the CPUs are mostly sitting idle waiting on data -- especially
with a quad Xeon (shared memory bus, is it not?).
I'd be looking to get your hands on a large pSeries machine from IBM,
or perhaps an 8-way Opteron (not that hard to come by today; it should
be easy in the near future). The key is low-latency RAM tied to each
chip rather than a centralized bus -- a SunFire 3800 would do too ;).
Write performance won't matter very much. 3000 inserts/second isn't high
-- some additional battery-backed write cache may be useful, but it's not
overly important with enough RAM to hold the complete dataset. I suspect
those inserts are slow due to things like foreign key checks -- which, of
course, are selects.
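To illustrate (the table and column names here are made up): with a
schema like the one below, every insert into transactions forces an
index lookup on clients to enforce the constraint, so the insert can
only go as fast as that hidden select.

    -- hypothetical schema: each INSERT into transactions triggers an
    -- index lookup (a select) on clients to verify the foreign key
    CREATE TABLE clients (
        client_id  integer PRIMARY KEY,
        name       text
    );

    CREATE TABLE transactions (
        txn_id     serial PRIMARY KEY,
        client_id  integer NOT NULL REFERENCES clients,
        amount     numeric(12,2),
        created_at timestamp DEFAULT now()
    );

With the referenced table fully in RAM, those lookups are cheap.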
> Notably, the data is very time-sensitive, so the active dataset at any
> hour is almost certainly going to be more on the order of 5GB than 25GB
> (plus I'll want all the indexes in RAM of course).
Very good. Find yourself 8GB to 12GB of RAM and you should be fine. In
this case, the additional RAM will keep the system from hitting the disk
for writes as well.
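As a rough sketch for a ~12GB box (these numbers are a starting point,
not gospel, and note that 7.4 measures both of the first two settings
in 8KB pages):

    # postgresql.conf -- illustrative starting point for ~12GB of RAM
    shared_buffers = 65536          # ~512MB; let the OS cache the rest
    effective_cache_size = 1000000  # ~8GB of expected OS + PG caching
    sort_mem = 8192                 # KB per sort; watch concurrent sorts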
You may want to play around with checkpoints. Preventing a checkpoint
during the busy hour will help avoid I/O peaks. Be warned, though: the
WAL will grow very large, and recovery time, should a crash occur, could
be painful.
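Something along these lines in postgresql.conf would stretch checkpoints
out past the busy hour (values are illustrative; WAL disk usage runs
roughly (2 * checkpoint_segments + 1) * 16MB, so budget for it):

    checkpoint_segments = 64    # default is 3; each segment is 16MB
    checkpoint_timeout = 3600   # seconds; default is 300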
You say the data is very time-sensitive -- how time-sensitive? Are the
selects all based on this week's data? A copy of the database on a second
machine (say, your quad Xeon) for static per-client data would be very
useful for reducing needless load. I assume the application servers have
already cached any static global data by this point.
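Refreshing the static tables on the second box can be as crude as a
pg_dump pipe (table, database, and host names are made up here):

    pg_dump -t clients mydb | psql -h quadxeon mydb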
Finally, upgrade to 7.4. Do use prepared statements. Do limit the number
of connections any given application server is allowed (especially for
short transactions). Three PostgreSQL processes per CPU (where the box
is not disk-limited) seems to be about right -- your OS may vary.
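A prepared statement is planned once per connection and then executed
many times, which is where the win comes from on short transactions. A
minimal sketch, against the made-up clients table from above:

    -- plan once per connection...
    PREPARE get_client (integer) AS
        SELECT * FROM clients WHERE client_id = $1;

    -- ...then execute cheaply on every call
    EXECUTE get_client(42);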
Pre-calculate anything you can. Are the $ amounts for a transaction
generally the same? Do you tend to have repeat clients? Great -- create
your current clients' transactions a day in advance. Now you have a
pair of selects and one update (mark it with the time the client actually
approved it). If the client doesn't approve the pre-calculated
transaction, throw it away at some later time.
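A sketch of the idea, reusing the made-up schema from above plus
invented status, approved_at, usual_amount, and repeat_customer columns:

    -- the day before: pre-build the expected transactions
    INSERT INTO transactions (client_id, amount, status)
        SELECT client_id, usual_amount, 'pending'
          FROM clients
         WHERE repeat_customer;

    -- at approval time: one cheap update instead of the full insert path
    UPDATE transactions
       SET status = 'approved', approved_at = now()
     WHERE txn_id = 42;

    -- in a quiet period: discard whatever was never approved
    DELETE FROM transactions
     WHERE status = 'pending'
       AND created_at < now() - interval '2 days';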