Re: Using Small Size SSDs to improve performance? - Mailing list pgsql-hackers

From Greg Smith
Subject Re: Using Small Size SSDs to improve performance?
Date
Msg-id 4C59FB31.4030402@2ndquadrant.com
In response to Using Small Size SSDs to improve performance?  (Nilson <nilson.brazil@gmail.com>)
Responses Re: Using Small Size SSDs to improve performance?
Re: Using Small Size SSDs to improve performance?
List pgsql-hackers
Nilson wrote:
> 1) usage of an SSD to temporarily store the WAL log files until a 
> daemon process copies them to the regular HD.

The WAL is rarely as much of a bottleneck as people think it is.  
Because it's all sequential writes, so long as you put it onto a 
dedicated disk there's minimal advantage to be had using an SSD for it.  
A stream of small sequential writes is really not the place where SSDs 
shine compared with regular disks.
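
To make that access pattern concrete, here's a rough sketch of the kind 
of I/O the WAL generates (only an illustration with a made-up file name 
and record size, nothing like the real XLogWrite code): small records 
appended strictly in order, each forced out when a commit needs it to be 
durable.  A plain disk whose head is already sitting on that file 
handles this quite well:

/*
 * Toy WAL-style writer: an append-only stream of small records, each
 * followed by an fsync, the way a synchronous commit forces the log
 * out.  No seeking is involved when the file has a disk to itself.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    char    record[512];        /* arbitrary commit-sized record */
    int     fd;
    int     i;

    memset(record, 'x', sizeof(record));

    fd = open("fake_wal_segment", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    for (i = 0; i < 10000; i++)
    {
        if (write(fd, record, sizeof(record)) != sizeof(record))
        {
            perror("write");
            return 1;
        }
        if (fsync(fd) != 0)     /* the expensive part of each commit */
        {
            perror("fsync");
            return 1;
        }
    }

    close(fd);
    return 0;
}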

> 2) usage of an SSD to store instructions to make a checkpoint. 
> Instead of writing the "dirty" pages directly to database files, 
> PostgreSQL could dump to SSD the dirty pages and the instructions on 
> how to update the data files. Later, a daemon process would update the 
> files following these instructions and erase the instruction files 
> after that.

This is essentially what happens with the operating system cache:  it 
buffers writes into memory as the checkpoint does them, and then later 
does the actual I/O to write them to disk--hopefully before the sync 
call that pushes them out comes in.  There are plenty of problems with 
how that's done right now.  But I don't feel there's enough benefit to 
optimize specifically for SSD when a more general improvement could be 
made in that area instead.
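
If it helps to see that behavior in isolation, here's a toy version of 
it (again just a sketch with made-up file names and counts, not the real 
checkpointer code): the scattered 8K page writes return almost 
immediately because they only dirty the OS cache, and whatever the 
kernel hasn't already written back in the meantime gets forced out by 
the fsync at the end.

/*
 * Toy checkpoint-style writer: scattered 8K page writes land in the
 * OS cache almost for free; the real (and possibly random) disk I/O
 * only has to happen by the time the final fsync returns.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLCKSZ 8192

int
main(void)
{
    char    page[BLCKSZ];
    int     fd;
    int     i;

    memset(page, 0, sizeof(page));

    fd = open("fake_relation_file", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* "write out dirty pages": cheap, they just dirty the OS cache */
    for (i = 0; i < 1000; i++)
    {
        off_t   offset = (off_t) (random() % 100000) * BLCKSZ;

        if (pwrite(fd, page, BLCKSZ, offset) != BLCKSZ)
        {
            perror("pwrite");
            return 1;
        }
    }

    /* the sync phase is where any remaining I/O is forced to disk */
    if (fsync(fd) != 0)
    {
        perror("fsync");
        return 1;
    }

    close(fd);
    return 0;
}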

> I guess these ideas could improve the write performance significantly 
> (3x to 50x) in database systems that perform writes with SYNC and 
> have many write bursts or handle large (20MB+) BLOBs (many WAL 
> segments and pages to write on checkpoint).

That's optimistic.  Right now heavy write systems get a battery-backed 
cache in the RAID card that typically absorbs 256MB to 512MB worth of 
activity.  You really need to measure SSD acceleration against that 
baseline.  If you do that, the SSD gains stop looking so big.  
Checkpoint writes right now go:

shared_buffers -> OS cache -> RAID BBWC -> disk

And those two layers in the middle are already providing a significant 
speedup on burst workloads.  Ultimately, though, all the burst stuff 
has to make it onto regular disks if you can't fit the whole thing on 
SSD, and then you're back to solving the non-SSD problem again.  That's 
the problem with these things that keeps them from being magic bullets; 
if you have a database large enough that you can't fit the working set 
in RAM nowadays, you probably can't fit the whole thing on SSD either.
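
If you want to put rough numbers on that comparison yourself, a crude 
timing loop like the one below will do it.  This is just a sketch: the 
file name is whatever you point it at, and treat the numbers as ballpark 
only.  Run it against a file on a BBWC-backed array, an SSD, and a bare 
disk, and the average write+fsync time is roughly the commit latency 
each one can deliver:

/*
 * Crude fsync latency probe: time a loop of small write+fsync pairs
 * on a file sitting on the device under test.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define LOOPS 1000

int
main(int argc, char **argv)
{
    char            buf[8192];
    struct timeval  start, stop;
    double          elapsed;
    int             fd;
    int             i;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <test-file-on-target-device>\n", argv[0]);
        return 1;
    }

    memset(buf, 'x', sizeof(buf));

    fd = open(argv[1], O_WRONLY | O_CREAT, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < LOOPS; i++)
    {
        if (write(fd, buf, sizeof(buf)) != sizeof(buf) || fsync(fd) != 0)
        {
            perror("write/fsync");
            return 1;
        }
    }
    gettimeofday(&stop, NULL);

    elapsed = (stop.tv_sec - start.tv_sec) +
              (stop.tv_usec - start.tv_usec) / 1000000.0;
    printf("%d fsyncs in %.3f s, %.3f ms each\n",
           LOOPS, elapsed, 1000.0 * elapsed / LOOPS);

    close(fd);
    return 0;
}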

-- 
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


