Re: Best filesystem for a high load db - Mailing list pgsql-general

From Joseph Kregloh
Subject Re: Best filesystem for a high load db
Date
Msg-id CAAW2xfdxWU3XtTnuRQTVyuXhN8NH1ArZFwvwN=yeDA0eOE9Oyw@mail.gmail.com
In response to Re: Best filesystem for a high load db  (Andy Colson <andy@squeakycode.net>)
Responses Re: Best filesystem for a high load db  (Vick Khera <vivek@khera.org>)
List pgsql-general
Currently I use FreeBSD 10 with ZFS as the filesystem for our production database. Speed-wise it's fine; I'm sure other filesystems could be faster, although we have never benchmarked it against them. The reason we use ZFS is to take advantage of its data compression and snapshots. It is very easy to generate a new slave just by copying the filesystem to another machine. We can also set different compression for tablespaces that don't get accessed as much, or put tablespaces on faster disks. When doing big data migrations or pushes we are able to roll back if something fails. Also, when upgrading to a newer version of Postgres, we just take a snapshot and upgrade that.
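As a rough sketch of what I mean (the pool and dataset names here are made up, your layout will differ):

    # lighter compression for the busy tablespace, heavier for the rarely-read one
    zfs set compression=lz4 tank/pg/main
    zfs set compression=gzip-9 tank/pg/archive

    # snapshot before a big migration; roll back (with Postgres stopped) if it fails
    zfs snapshot tank/pg/main@pre-migration
    zfs rollback tank/pg/main@pre-migration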

Same with database backups. We issue a pg_start_backup(), take a few snapshots, issue pg_stop_backup(). Then ship the entire filesystem to a different machine and that's your backup.
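Roughly, the sequence looks like this (dataset and host names invented for illustration; the psql calls are the 9.x backup functions):

    -- in psql on the primary: start a base backup
    SELECT pg_start_backup('zfs_backup', true);

    # in the shell: snapshot the dataset(s) holding the cluster
    zfs snapshot -r tank/pg@backup

    -- back in psql: finish the backup
    SELECT pg_stop_backup();

    # ship the filesystem to another machine
    zfs send -R tank/pg@backup | ssh backuphost zfs receive -F backup/pg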

One thing I am pushing to do is use SSDs for the ZIL and L2ARC. This would allow for a pretty nice boost in speed.
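Something along these lines, assuming a couple of spare SSDs (device names are placeholders):

    # mirrored SSDs as a separate log device (ZIL), a single SSD as L2ARC
    zpool add tank log mirror ada4 ada5
    zpool add tank cache ada6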

-Joseph

On Wed, Nov 26, 2014 at 9:50 AM, Andy Colson <andy@squeakycode.net> wrote:
On 11/26/2014 4:16 AM, Maila Fatticcioni wrote:

On 11/25/2014 05:54 PM, Bill Moran wrote:
On Tue, 25 Nov 2014 17:27:18 +0100 Christoph Berg <cb@df7cb.de> wrote:

Re: Bill Moran 2014-11-25 <20141125111630.d05d58a9eb083c7cf80ed9f8@potentialtech.com>
Anything with a journal is a performance problem. PostgreSQL
effectively does its own journaling with the WAL logs. That's
not to say that there's no value for crash recovery in having a
journaling filesystem, but in our experience journaling
filesystems were slower. That rules out ext4, unless you disable
the journal. I seem to remember ext4 with journaling disabled
being one of the faster filesystems, but I could be remembering
wrong.

If you are using a non-journalling FS, you'll be waiting for a
full fsck after a system crash. Not sure that's an improvement.

It's an improvement if: a) You're investing in high-quality
hardware, so the chance of a system crash is very low. b) The
database is replicated, so your plan in the event of a primary
crash is to fail over to the backup anyway.

If both of those are in place (as they were at my previous job)
then the time it takes to fsck isn't an issue, and taking action
that causes the database to run faster when nothing is wrong can be
considered.

Obviously, the OP needs to assess the specific needs of the product
in question. Your point is very valid, and I'm glad you brought it
up (as a lot of people forget about it) but sometimes it's not the
most important factor.


Thank you very much for sharing your experiences with me.
We will have two servers in a cluster with high-quality hardware,
so an fsck after a crash shouldn't be a big problem.
I will analyze the xfs option as well and then decide.

Thank you again,
Maila Fatticcioni


Also, if you do some timings, please share them with us; it'd be nice to have some more data points.

-Andy





