Re: Fragmentation of WAL files

From: Tom Lane
Subject: Re: Fragmentation of WAL files
Date:
Msg-id: 24354.1177601862@sss.pgh.pa.us
In response to: Re: Fragmentation of WAL files  (Bill Moran)
List: pgsql-performance

> In response to Jim Nasby <>:
>> I was recently running defrag on my windows/parallels VM and noticed
>> a bunch of WAL files that defrag couldn't take care of, presumably
>> because the database was running. What's disturbing to me is that
>> these files all had ~2000 fragments.

It sounds like that filesystem is too stupid to coalesce successive
write() calls into one allocation fragment :-(.  I agree with the
comments that this might not be important, but you could experiment
to see --- try increasing the size of "zbuffer" in XLogFileInit to
maybe 16*XLOG_BLCKSZ, re-initdb, and see if performance improves.

The suggestion to use ftruncate is so full of holes that I won't
bother to point them all out, but certainly we could write more than
just XLOG_BLCKSZ at a time while preparing the file.

            regards, tom lane
