> > Note that even though the processor is 99% in wait state, the drive is
> > only handling about 3 MB/s. That translates into a seek time of 2.2ms,
> > which is actually pretty fast... But note that if this were a RAID
> > array, Postgres wouldn't be getting any better results. A RAID array
> > wouldn't improve I/O latency at all, and since it's already 99% waiting
> > for I/O, Postgres is not going to be able to issue any more requests.
>
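As an aside, the seek-time figure quoted above can be sanity-checked with a back-of-envelope calculation. The result depends on the transfer size assumed per random read; 8 KB Postgres pages are assumed here, which gives a number close to, but slightly above, the quoted 2.2ms:

```python
# Back-of-envelope: if a drive doing purely random reads sustains only
# ~3 MB/s, each read costs roughly one seek.  8 KB per read is an
# assumption (the default Postgres page size); a larger transfer size
# per read would yield the lower 2.2ms figure quoted above.
throughput = 3 * 1024 * 1024              # bytes per second
page_size = 8 * 1024                      # bytes per random read
reads_per_second = throughput / page_size # 384 reads/s
latency_ms = 1000.0 / reads_per_second    # ~2.6 ms per read
print(f"{reads_per_second:.0f} reads/s, {latency_ms:.1f} ms per read")
```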
> If it's a straight, stupid RAID array, sure. But when you introduce a
> good write-caching controller into the mix, it can batch multiple
> writes, take advantage of more elevator sorting, and get more writes
> accomplished per seek. Combine that improvement with having multiple
> drives as well, and the PITR performance situation becomes very
> different; you really can get more than one drive in the array busy at
> a time. It's also true that you won't see everything that's happening
> with vmstat, because the controller is doing the low-level dispatching.
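The elevator sorting mentioned above is easy to illustrate with a toy model: servicing a queue of requests in block order, rather than arrival order, shortens the total head travel. The block numbers and the linear travel-cost model below are invented for illustration only:

```python
# Toy model of elevator sorting: compare total head travel (in blocks)
# when servicing queued requests in arrival order vs. sorted order.
def head_travel(start, requests):
    travel, pos = 0, start
    for block in requests:
        travel += abs(block - pos)  # linear cost model: distance moved
        pos = block
    return travel

queue = [95, 10, 80, 20, 60]              # pending requests (made up)
fcfs = head_travel(0, queue)              # first-come-first-served
elevator = head_travel(0, sorted(queue))  # one sweep in block order
print(fcfs, elevator)                     # elevator order travels far less
```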
I don't follow. The problem is not writes but reads, and if the reads
are random enough, no cache controller will help.

The basic message is that for modern I/O systems you need to make sure
that enough parallel read requests are outstanding. Write requests are
not an issue, because battery-backed controllers can take care of those.
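A minimal sketch of that point: issuing random reads from several threads at once keeps a queue of outstanding requests that a RAID array (or an NCQ-capable disk) can service in parallel, whereas a sequential loop never has more than one request in flight. The file, page size, and thread count below are arbitrary:

```python
import os
import random
import tempfile
from concurrent.futures import ThreadPoolExecutor

PAGE = 8192    # transfer size per read (arbitrary)
NPAGES = 256   # size of the scratch file in pages

# Create a scratch file to read from (contents are arbitrary).
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(PAGE * NPAGES))
os.close(fd)

f = open(path, "rb")

def read_page(pageno):
    # os.pread lets many threads read the same fd at different
    # offsets without fighting over a shared file position.
    return os.pread(f.fileno(), PAGE, pageno * PAGE)

offsets = random.sample(range(NPAGES), 32)   # 32 random pages

# Serial: only one read request in flight at any moment.
serial = [read_page(p) for p in offsets]

# Parallel: up to 8 requests outstanding at once, giving the
# controller something to reorder and several spindles work to do.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(read_page, offsets))

assert serial == parallel   # same data either way; only latency differs
f.close()
os.unlink(path)
```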
Andreas