On Oct 12, 2010, at 8:39 AM, Dan Harris wrote:
> On 10/11/10 8:02 PM, Scott Carey wrote:
>> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer-lived
>> postgres DB will get extreme file fragmentation over time and sequential scans end up mostly random. On-line file
>> defrag helps tremendously.
>>
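[For reference, a read-ahead like the one mentioned above is usually set with blockdev; a sketch, assuming the data volume is /dev/sdb (substitute your own device):]

```shell
# blockdev --setra takes units of 512-byte sectors,
# so 2048 sectors = 1MB of read-ahead.
blockdev --setra 2048 /dev/sdb

# Confirm the current setting (prints the sector count).
blockdev --getra /dev/sdb
```

[Note the setting does not persist across reboots, so it typically goes in an init script or udev rule.]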
> We just had a corrupt table caused by an XFS online defrag. I'm scared
> to use this again while the db is live. Has anyone else found this to
> be safe? But, I can vouch for the fragmentation issue, it happens very
> quickly in our system.
>
What version? I'm using the latest CentOS extras build.
We've been doing online defrag for a while now on a very busy database with > 8TB of data. Not that that means there
are no bugs...
It is a relatively simple thing in xfs -- it writes a new file to temp in a way that allocates contiguous space if
available, then if the file has not been modified since it was re-written it is essentially moved on top of the other
one. This should be safe provided the journaling and storage is safe, etc.
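[A sketch of the usual workflow, assuming the filesystem is mounted at /data on /dev/sdb1 (substitute your own paths); xfs_db's "frag" report and xfs_fsr are the standard xfsprogs tools for this:]

```shell
# Report overall fragmentation on the filesystem (read-only query).
xfs_db -r -c frag /dev/sdb1

# Check fragmentation of a specific relation file.
xfs_bmap -v /data/pgdata/base/16384/12345

# Defragment everything under the mount point online, verbosely.
# xfs_fsr does the copy-to-temp-then-swap described above.
xfs_fsr -v /data
```

[xfs_fsr skips files that change while it is rewriting them, which is what makes the swap step safe in principle.]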
> -Dan
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance