On 4.9.2013 20:52, Jeison Bedoya Delgado wrote:
> Hi Merlin, thanks for your interest. I'm using version 9.2.2 on a
> machine with 128GB RAM and 32 cores, and my database weighs 400GB.
> When I say slow I mean that queries and pg_dump backups take the same
> amount of time now that I have SSDs in RAID 10 as they did before,
> when I had 10K disks in RAID 10.
>
> Is that behavior normal, or can I improve write and read performance?
SSDs are great at random I/O, but not that great at sequential I/O
(better than spinning drives, but you'll often run into other
bottlenecks first, for example the CPU).
I'd bet this is what you're seeing. pg_dump is a heavily sequential
workload (it reads each table from start to end and writes a huge dump
to disk). A good RAID array of 10k SAS drives can give you very good
sequential performance (I'd say ~500MB/s reads and writes for 6 drives
in RAID10). I don't think pg_dump will produce the data much faster on
SSDs.
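For scale, a quick back-of-envelope calculation (assuming the ~500MB/s
figure above and that pg_dump were purely read-bound, which in practice
it usually isn't):

```shell
# Rough estimate: time to stream a 400 GB database at ~500 MB/s sequential read
DB_MB=$((400 * 1024))            # 400 GB expressed in MB
echo "$((DB_MB / 500)) seconds"  # prints "819 seconds", i.e. under 14 minutes
```

In other words, the spinning array can already stream the whole database
in minutes, so pg_dump is far more likely to be limited by CPU
(compression, COPY formatting) than by the disks.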
Have you done any tests (e.g. using fio) to measure the performance of
the two configurations? There might be some hardware issue, but without
benchmarks it's difficult to judge.
Can you run the fio tests now? The code is here:
http://freecode.com/projects/fio
and there's even a basic example for SSD testing:
http://git.kernel.dk/?p=fio.git;a=blob_plain;f=examples/ssd-test.fio
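Something along these lines should work as a starting point (a sketch
loosely based on that ssd-test example; the filename, size and runtime
are placeholders you'll want to adjust, and the file must live on the
array you want to measure):

```ini
; fio job file: compare sequential vs. random read throughput
[global]
ioengine=libaio
direct=1                     ; bypass the page cache
filename=/path/to/testfile   ; placeholder - put this on the tested array
size=4g
runtime=60

[seq-read]
rw=read
bs=1m
stonewall                    ; finish this job before starting the next

[rand-read]
rw=randread
bs=4k
stonewall
```

Save it as e.g. ssd-test.fio, run `fio ssd-test.fio` on both setups, and
compare the reported bandwidth and IOPS numbers.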
And how exactly are you running pg_dump? Also, collect some basic stats
the next time it's running, for example a few samples from
vmstat 5
iostat -x -k 5
and watch in top how much CPU it's using.
Tomas