Re: 8K recordsize bad on ZFS? - Mailing list pgsql-performance

From Josh Berkus
Subject Re: 8K recordsize bad on ZFS?
Msg-id 4BE877BE.8000908@agliodbs.com
In response to Re: 8K recordsize bad on ZFS?  (Greg Stark <gsstark@mit.edu>)
List pgsql-performance
> That still is consistent with it being caused by the files being
> discontiguous. Copying them moved all the blocks to be contiguous and
> sequential on disk and might have had the same effect even if you had
> left the settings at 8kB blocks. You described it as "overloading the
> array/drives with commands" which is probably accurate but sounds less
> exotic if you say "the files were fragmented, causing lots of seeks, so
> we saturated the drives' iops capacity". How many iops were
> you doing before and after anyways?

Don't know.  This was a client system, and once we got the target
numbers, they stopped wanting me to run tests on it.  :-(

Note that this was a brand-new system, so there wasn't much time for
fragmentation to occur.
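
For anyone wanting to measure this on their own system, a minimal
sketch (assuming a pool named "tank" with the Postgres data on
"tank/pgdata"; both names are placeholders, not the client's setup):

    # sample per-vdev iops every 5 seconds
    zpool iostat -v tank 5

    # check the recordsize in effect for the dataset
    zfs get recordsize tank/pgdata

    # change it; note recordsize only applies to newly written
    # blocks, so existing files have to be rewritten (e.g. copied)
    # before they pick up the new value
    zfs set recordsize=8k tank/pgdata

That last point is consistent with what we saw: copying the files
rewrote them under the new setting, and contiguously.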

--
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
