Thread: Synchronous replication + Fusion-io = waste of money OR significant performance boost? (compared to normal SATA-based SSD-disks)?

My company is in the process of migrating to a new pair of servers, running 9.1.

The database processes monetary transactions, so we require
synchronous_commit to be on for all transactions.

Fusion-io is being considered, but will it give any significant
performance gain compared to normal SATA-based SSDs, given that we
must replicate synchronously?
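
For context, this is roughly the replication setup we intend to run on
9.1 -- a minimal sketch only, and the standby name 'standby1' below is
just a placeholder:

    # postgresql.conf on the primary (9.1)
    wal_level = hot_standby
    max_wal_senders = 3                      # example value
    synchronous_standby_names = 'standby1'   # placeholder standby name
    synchronous_commit = on                  # every commit waits for the standby's flush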

To make it more complicated, what about SLC vs MLC (for synchronous
replication)?

Assume optimal conditions: both servers are less than a meter apart,
with the best possible network link between them providing the lowest
latency possible, maxed-out RAM, maxed-out CPUs, etc.

I've already asked this question of one of the core members, but the
answer was basically "you will have to test", so I was hoping someone
in the community already had some test results that could help avoid
wasting money.

Thank you for any advice!

Best regards,

Joel Jacobson
Trustly Group AB (formerly Glue Finance AB)

On Wed, Mar 7, 2012 at 2:54 AM, Joel Jacobson <joel@trustly.com> wrote:
> My company is in the process of migrating to a new pair of servers, running 9.1.
>
> The database processes monetary transactions, so we require
> synchronous_commit to be on for all transactions.
>
> Fusion-io is being considered, but will it give any significant
> performance gain compared to normal SATA-based SSDs, given that we
> must replicate synchronously?
>
> To make it more complicated, what about SLC vs MLC (for synchronous
> replication)?
>
> Assume optimal conditions: both servers are less than a meter apart,
> with the best possible network link between them providing the lowest
> latency possible, maxed-out RAM, maxed-out CPUs, etc.
>
> I've already asked this question of one of the core members, but the
> answer was basically "you will have to test", so I was hoping someone
> in the community already had some test results that could help avoid
> wasting money.
>
> Thank you for any advice!

Flash, just like hard drives, has some odd physical characteristics
that impose performance constraints, especially when writing, and
doubly so when MLC flash is used.  Modern flash drives employ
non-volatile buffers to work around these constraints, and they work
pretty well *most* of the time.  Since MLC is much cheaper,
improvements in flash controller technology are basically pushing SLC
out of the market except in high-end applications.

If you need zero-latency storage all the time and are willing to spend
the extra bucks, then PCIe-based SLC is definitely worth looking at
(you'll have another product to evaluate soon when the Intel 720
"Ramsdale" hits the market).  A decent MLC drive might work for you,
though; I'd suggest testing there first and upgrading to the expensive
proprietary stuff if and only if you really need it.

My experience with flash and Postgres is that even with low- to
mid-range drives like the Intel 320, it's quite a challenge to make
Postgres I/O bound.
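
If you want a rough apples-to-apples number before spending the money,
a commit-heavy pgbench run against each candidate drive is one way to
test; a minimal sketch (scale factor, client count, and duration are
only placeholder values):

    # initialize a test database (scale factor 100 is just an example)
    pgbench -i -s 100 bench

    # commit-bound, write-heavy run; with synchronous_commit = on every
    # transaction waits for the WAL flush (and, with sync rep, the standby)
    pgbench -c 4 -j 2 -T 300 bench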

merlin

I've also looked at the Fusion-IO products.  They are not standard
flash drives.  They don't appear as SATA devices.  They contain an
FPGA that maps the flash directly onto the PCIe bus.  The kernel-mode
driver blits data to/from them via DMA rather than through a SATA or
SAS link (which would limit transfer rates to 6 Gb/s).

But I don't have any in hand to test with yet... :(  The Kool-Aid
looks tasty, though :)
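
When one does arrive, a quick sanity check that the card shows up as a
native PCIe device rather than behind a SATA/SAS controller could look
roughly like this (the "fusion" match pattern is only a guess at the
vendor string):

    # list PCI devices verbosely and look for the card
    lspci -v | grep -i -A 2 fusion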

On Thu, Mar 8, 2012 at 8:52 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Wed, Mar 7, 2012 at 2:54 AM, Joel Jacobson <joel@trustly.com> wrote:
>> My company is in the process of migrating to a new pair of servers, running 9.1.
>>
>> The database processes monetary transactions, so we require
>> synchronous_commit to be on for all transactions.
>>
>> Fusion-io is being considered, but will it give any significant
>> performance gain compared to normal SATA-based SSDs, given that we
>> must replicate synchronously?
>>
>> To make it more complicated, what about SLC vs MLC (for synchronous
>> replication)?
>>
>> Assume optimal conditions: both servers are less than a meter apart,
>> with the best possible network link between them providing the
>> lowest latency possible, maxed-out RAM, maxed-out CPUs, etc.
>>
>> I've already asked this question of one of the core members, but the
>> answer was basically "you will have to test", so I was hoping
>> someone in the community already had some test results that could
>> help avoid wasting money.
>>
>> Thank you for any advice!
>
> Flash, just like hard drives, has some odd physical characteristics
> that impose performance constraints, especially when writing, and
> doubly so when MLC flash is used.  Modern flash drives employ
> non-volatile buffers to work around these constraints, and they work
> pretty well *most* of the time.  Since MLC is much cheaper,
> improvements in flash controller technology are basically pushing SLC
> out of the market except in high-end applications.
>
> If you need zero-latency storage all the time and are willing to
> spend the extra bucks, then PCIe-based SLC is definitely worth
> looking at (you'll have another product to evaluate soon when the
> Intel 720 "Ramsdale" hits the market).  A decent MLC drive might work
> for you, though; I'd suggest testing there first and upgrading to the
> expensive proprietary stuff if and only if you really need it.
>
> My experience with flash and Postgres is that even with low- to
> mid-range drives like the Intel 320, it's quite a challenge to make
> Postgres I/O bound.
>
> merlin

Hi,

On 9 March 2012 02:23, dennis jenkins <dennis.jenkins.75@gmail.com> wrote:
> I've also looked at the Fusion-IO products.  They are not standard
> flash drives.  They don't appear as SATA devices.  They contain an
> FPGA that maps the flash directly onto the PCIe bus.  The kernel-mode
> driver blits data to/from them via DMA rather than through a SATA or
> SAS link (which would limit transfer rates to 6 Gb/s).
>
> But I don't have any in hand to test with yet... :(  The Kool-Aid
> looks tasty, though :)

I think they are a good investment, but we weren't able to use them because:
- the ioDrive was small (1.2TB only) and not very scalable in the long
  term -- not enough PCIe slots
- the ioDrive Duo/Octal needs more power and cooling than we had
You can work around both by upgrading the servers (two of them, for HA),
but you only delay the point at which you can't insert a new card (no
slots or power available).

Performance-wise, we were able to reduce query times:
- from a few seconds to near-instant (500 ms or better)
- every query that had exceeded 300 sec (the Apache timeout) finished
  in under a minute
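
For anyone who wants to collect the same kind of before/after numbers,
psql's built-in timing is the simplest sketch (the query and table name
below are only placeholders):

    -- in psql: enable per-statement timing, then run a representative query
    \timing on
    SELECT count(*) FROM some_large_table;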

--
Ondrej Ivanic
(ondrej.ivanic@gmail.com)