> We do have a lot of INSERT/UPDATE calls, specifically on tables that
> track user sessions, then of course things like comments, etc (where
> we'll see 10-30 INSERT's per second, with TEXT field, and hundreds of
> reads per second). Additionally, our system does use a lot of
> aggregate functions.
> I'll look into materialized views, it sounds like it may be worth
> implementing.
Right. The point is: is your i/o bottleneck on the read side or the
write side? With 10-30 inserts/sec and fsync off, it's definitely on
the read side. What's interesting is that such a low insert load is
causing i/o storm problems at all. How does your app run with fsync on?
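If it helps, a quick way to confirm what the server is actually running
with (the setting itself lives in postgresql.conf and needs a reload or
restart to change):

  SHOW fsync;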
With read-bound i/o problems, you might want to consider upgrading
memory first to get better cache efficiency. An Opteron is worth a look
for > 4GB allocations (yummy!).
The good news is that read problems are usually solvable by being
clever, whereas write problems require hardware.
> One question I do have though - you specifically mentioned NOW() as
> something to watch out for, in that it's mutable. We typically use
This is specifically with regard to materialized views. Mutable
functions like NOW() cause problems because when they are pushed into
the view, they are evaluated at refresh time rather than when the view
is later queried...something to watch out for.
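A rough sketch of what that looks like with a hand-built summary table
(the table and column names here are invented for illustration):

  -- NOW() is evaluated when the summary is (re)built, not when the
  -- application later selects from it
  CREATE TABLE active_session_summary AS
  SELECT user_id,
         count(*) AS session_count,
         NOW()    AS as_of            -- frozen at refresh time
    FROM sessions
   WHERE expires_at > NOW()           -- also frozen: sessions that expire
   GROUP BY user_id;                  -- after the refresh stay counted
                                      -- until the next rebuild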
The trick with MVs is to increase your filesystem cache efficiency.
The big picture is to keep frequently read data in a single place to
make better use of the cache. Aggregates naturally read multiple rows
to return a single row's worth of data, so you want to target them
first. This all comes at the cost of update I/O time and some
application complexity.
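For the comment counts, for example, it could look something like this
(table, column, and function names are all invented; the trigger is the
update-time cost and complexity mentioned above, and a periodic rebuild
is the lazier alternative):

  CREATE TABLE comment_counts (
      thread_id  integer PRIMARY KEY,
      n_comments integer NOT NULL DEFAULT 0
  );

  -- initial (or periodic) build from the base table
  INSERT INTO comment_counts (thread_id, n_comments)
  SELECT thread_id, count(*) FROM comments GROUP BY thread_id;

  -- keep it current on every insert (assumes a row already exists
  -- for the thread)
  CREATE FUNCTION bump_comment_count() RETURNS trigger AS $$
  BEGIN
      UPDATE comment_counts
         SET n_comments = n_comments + 1
       WHERE thread_id = NEW.thread_id;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER comments_count_trg
      AFTER INSERT ON comments
      FOR EACH ROW EXECUTE PROCEDURE bump_comment_count();

Hundreds of reads/sec then hit a handful of hot pages in comment_counts
instead of re-aggregating the comments table each time.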
> as a subselect to retrieve the number of associated rows to the
> current query. Additionally, we use NOW a lot, primarily to detect
> the status of a date, i.e.:
You might want to check if your application middleware (php?) exposes
PQntuples()...this is a zero-cost way to get the same information.
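If the subselect is only counting the same rows the query already
returns, a hypothetical before/after (with invented table and column
names) looks like:

  -- before: the count is computed server-side alongside the rows
  SELECT c.*,
         (SELECT count(*)
            FROM comments
           WHERE thread_id = c.thread_id) AS n_comments
    FROM comments c
   WHERE c.thread_id = 42;

  -- after: fetch only the rows; the client library already knows how
  -- many came back (PQntuples in libpq, pg_num_rows in php's pgsql
  -- functions) at zero extra cost
  SELECT c.*
    FROM comments c
   WHERE c.thread_id = 42;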
> Based on feedback, I'm looking at a minor upgrade of our RAID
> controller to a 3ware 9000 series (SATA with cache, battery backup
> optional), and re-configuring it for RAID 10. It's a damn cheap
> upgrade at around $350 and an hour of downtime, so I figure that it's
> worth it for us to give it a shot.
p.s. you can also increase cache efficiency by reducing database size,
for example by using int2/int4 instead of numerics.
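As a made-up illustration (column names are assumptions), narrowing
numeric columns to fixed-width integer types shrinks every row, so more
of the hot tables fit in the same amount of cache:

  CREATE TABLE session_stats (
      user_id    int4 NOT NULL,   -- was numeric(10,0): variable-width, larger
      page_views int4 NOT NULL,   -- int4 is a fixed 4 bytes
      status     int2 NOT NULL    -- int2 is a fixed 2 bytes
  );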
Good luck!