First get a baseline for how things work with just pg_xlog on one small set (RAID 1 is often plenty) and RAID-10 on all the rest, with all the data (i.e. the base directory) there. With a fast HW RAID controller this is often just about as fast as any amount of breaking things out will be. But if you do break things out and it's faster, then you'll know by how much. If it's slower, then you know you've got one really busy set and some not-so-busy ones. And so on...
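If you do end up breaking things out, the mechanism is tablespaces; pg_xlog itself is usually relocated with a symlink while the server is stopped, not through SQL. A minimal sketch, assuming a hypothetical mount point /mnt/raid10_busy for the extra array and a made-up table called heavy_writes:

    -- Directory must already exist and be owned by the postgres OS user.
    CREATE TABLESPACE busy_ts LOCATION '/mnt/raid10_busy';

    -- Move the write-heavy table onto the new array.
    ALTER TABLE heavy_writes SET TABLESPACE busy_ts;

Then run the same workload against both layouts and compare.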
(Side note: Google Mail, in their infinite evilness, makes it tricky to reply below the post in their webapp if you're not careful, so beware.)
I might have a table that needs some heavy writes, and while it doesn't necessarily have to be fast TPS-wise, I don't want it to bog down the rest of the database.
Reads are OK, as I'm planning for the DB to fit in the RAM cache, so once a page is read it will stay there, more or less.
It's distributing writes that I care about mostly.
I'll try iostat while running characterisation scenarios. That was my plan anyway.
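Alongside iostat, the server's own statistics views will show which tables are actually taking the writes during those runs; something like this (nothing here is specific to any particular schema):

    -- Top tables by write activity since the statistics were last reset.
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
    LIMIT 10;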
I had no idea that separating indexes from tables might help too. I would have thought they are so interconnected in the code that splitting them up wouldn't help much.
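Mechanically, an index can live on a tablespace of its own, independently of its table, so the two end up on separate spindles. Something like this, I think (index, column, and tablespace names are all made up):

    -- Build a new index on different spindles from its table.
    CREATE INDEX heavy_writes_created_idx
        ON heavy_writes (created_at) TABLESPACE index_ts;

    -- Or move an existing one (takes an exclusive lock while the files are copied).
    ALTER INDEX heavy_writes_pkey SET TABLESPACE index_ts;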
What about table partitioning? For heavy writes, would some sort of partitioning strategy make a difference?
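For reference, on the versions that still call the WAL directory pg_xlog, partitioning means inheritance children plus CHECK constraints and an insert trigger (newer releases have declarative PARTITION BY). A rough sketch with made-up names, reusing the hypothetical busy_ts tablespace from above:

    -- Parent table holds no rows itself.
    CREATE TABLE events (
        created_at timestamptz NOT NULL,
        payload    text
    );

    -- One child per month; the CHECK constraint lets constraint_exclusion
    -- skip irrelevant children at query time, and each child can sit on
    -- its own tablespace to spread the writes.
    CREATE TABLE events_2014_01 (
        CHECK (created_at >= DATE '2014-01-01' AND created_at < DATE '2014-02-01')
    ) INHERITS (events)
      TABLESPACE busy_ts;

    -- Route inserts on the parent to the right child.
    CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.created_at >= DATE '2014-01-01'
           AND NEW.created_at < DATE '2014-02-01' THEN
            INSERT INTO events_2014_01 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for %', NEW.created_at;
        END IF;
        RETURN NULL;  -- row was redirected, don't insert into the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_partition_insert
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();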