On 7/26/13 8:32 AM, Tom Lane wrote:
> What I'd point out is that that is exactly what WAL does for us, ie
> convert a bunch of random writes into sequential writes. But sooner or
> later you have to put the data where it belongs.
Hannu was observing that SSDs often don't do that at all. They can
maintain logical -> physical translation tables that keep track of where
each block was last written, indefinitely. When read seeks are nearly
free, the only pressure to ever reorder blocks is wear leveling.
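To make that concrete, here's a rough, purely illustrative sketch of the
remapping idea in Python (the names and structure are mine, not how any
particular drive's firmware actually works):

    # Illustrative only: a toy flash translation layer. Writes never go
    # back to the same physical block; the table just remembers where
    # each logical block lives now.
    class TranslationTable:
        def __init__(self, total_blocks):
            self.logical_to_physical = {}  # logical block -> physical block
            self.free_blocks = list(range(total_blocks))  # erased, ready to use

        def write(self, logical_block):
            # Each write lands on a fresh physical block; the old copy is
            # forgotten and reclaimed later by garbage collection.
            self.logical_to_physical[logical_block] = self.free_blocks.pop()
            return self.logical_to_physical[logical_block]

        def read(self, logical_block):
            # Reads just follow the map, so logically "random" blocks can
            # sit anywhere on flash with no seek penalty.
            return self.logical_to_physical[logical_block]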
That doesn't really help with regular drives though, where the low seek
time assumption doesn't hold up. The whole idea of writing things
sequentially and then sorting them out later was all the rage in 2001
for ext3 on Linux, as part of the "data=journal" mount option. You can
go back and see that people were confused but excited about the
performance at
http://www.ibm.com/developerworks/linux/library/l-fs8/index.html
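For anyone who wants to try reproducing that, the relevant part is just
the ext3 mount option; something along these lines (device and mount
point here are made up):

    mount -t ext3 -o data=journal /dev/sdb1 /var/lib/pgsql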
Spoiler: if you use a workload that has checkpoint issues, it doesn't
help PostgreSQL latency. Just like using a large write cache, you gain
some burst performance, but eventually you pay for it with extra latency
somewhere.
--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com