Thread: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic slow createdb)

From: Andres Freund
On Saturday 12 December 2009 21:38:41 Andres Freund wrote:
> On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
> > If ppl think its worth it I'll create a ticket
> Thanks, no need. I will post a patch tomorrow or so.
Well. It was a long day...

Anyway.
In this patch I delay the fsync done in copy_file and simply do a second pass
over the directory in copy_dir, fsyncing everything in that pass.
That includes the directory - which was not done before and actually might be
necessary in some cases.
I added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that the
copied file reaches storage before the fsync. Without it the speed benefits
were quite a bit smaller and essentially random (which seems sensible).

This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on my
laptop.  Still slower than with fsync off (~0.25) but quite a worthy
improvement.

The benefits are obviously bigger if the template database includes anything
added.


Andres

On Monday 28 December 2009 23:54:51 Andres Freund wrote:
> On Saturday 12 December 2009 21:38:41 Andres Freund wrote:
> > On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
> > > If ppl think its worth it I'll create a ticket
> >
> > Thanks, no need. I will post a patch tomorrow or so.
>
> Well. It was a long day...
>
> Anyway.
> In this patch I delay the fsync done in copy_file and simply do a second
>  pass over the directory in copy_dir and fsync everything in that pass.
> Including the directory - which was not done before and actually might be
> necessary in some cases.
> I added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that the
> copied file reaches storage before the fsync. Without the speed benefits
>  were quite a bit smaller and essentially random (which seems sensible).
>
> This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on
>  my laptop.  Still slower than with fsync off (~0.25) but quite a worthy
>  improvement.
>
> The benefits are obviously bigger if the template database includes
>  anything added.
Obviously attaching the patch would be helpful.

Andres

Andres Freund <andres@anarazel.de> writes:
> This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on my
> laptop.  Still slower than with fsync off (~0.25) but quite a worthy
> improvement.

I can't help wondering whether that's real or some kind of
platform-specific artifact.  I get numbers more like 3.5s (fsync off)
vs 4.5s (fsync on) on a machine where I believe the disks aren't lying
about write-complete.  It makes sense that an fsync at the end would be
a little bit faster, because it would give the kernel some additional
freedom in scheduling the required I/O, but it isn't cutting the total
I/O required at all.  So I find it really hard to believe a 10x speedup.

            regards, tom lane

On Tuesday 29 December 2009 00:06:28 Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s
> > on my laptop.  Still slower than with fsync off (~0.25) but quite a
> > worthy improvement.
> I can't help wondering whether that's real or some kind of
> platform-specific artifact.  I get numbers more like 3.5s (fsync off)
> vs 4.5s (fsync on) on a machine where I believe the disks aren't lying
> about write-complete.  It makes sense that an fsync at the end would be
> a little bit faster, because it would give the kernel some additional
> freedom in scheduling the required I/O, but it isn't cutting the total
> I/O required at all.  So I find it really hard to believe a 10x speedup.
Well, a template database is about 5.5MB big here - that shouldn't take too
long when written near-sequentially?
As I said, the real benefit only occurred after adding posix_fadvise(..,
FADV_DONTNEED), which is somewhat plausible, because e.g. the directory entries
don't need to get scheduled for every file and because the kernel can reorder a
whole directory nearly sequentially. Without the advice the kernel doesn't
know in time that it should write that data back, and it won't do it for 5
seconds or so by default on Linux...

I looked at the strace output - it looks sensible timewise to me. If you're
interested I can give you that output.

Andres

On Tuesday 29 December 2009 00:06:28 Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s
> > on my laptop.  Still slower than with fsync off (~0.25) but quite a
> > worthy improvement.
>
> I can't help wondering whether that's real or some kind of
> platform-specific artifact.  I get numbers more like 3.5s (fsync off)
> vs 4.5s (fsync on) on a machine where I believe the disks aren't lying
> about write-complete.  It makes sense that an fsync at the end would be
> a little bit faster, because it would give the kernel some additional
> freedom in scheduling the required I/O, but it isn't cutting the total
> I/O required at all.  So I find it really hard to believe a 10x speedup.
I only comfortably have access to two smaller machines without BBU from here
(being in the Hacker Jeopardy at the CCC congress ;-)) and both show this
behaviour. I guess it's somewhat filesystem dependent.

Andres

On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <andres@anarazel.de> wrote:
> fsync everything in that pass.
> Including the directory - which was not done before and actually might be
> necessary in some cases.

Er. Yes. At least on ext4 this is pretty important. I wish it weren't,
but it doesn't look like we're going to convince the ext4 developers
they're crazy any day soon, and it would really suck for a database
created from a template to have files in it go missing.

--
greg

On Tuesday 29 December 2009 01:27:29 Greg Stark wrote:
> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <andres@anarazel.de> wrote:
> > fsync everything in that pass.
> > Including the directory - which was not done before and actually might be
> > necessary in some cases.
>
> Er. Yes. At least on ext4 this is pretty important. I wish it weren't,
> but it doesn't look like we're going to convince the ext4 developers
> they're crazy any day soon and it would really suck for a database
> created from a template to have files in it go missin.
Actually it was necessary on ext3 as well - the window for hitting the problem
was just much smaller, wasn't it?

Actually that part should possibly get backported.
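
For reference, fsyncing a directory works by opening the directory itself and
fsyncing the resulting descriptor. A minimal sketch of such a helper (a
hypothetical illustration, not the patch's actual code):

-------------
/* Hypothetical sketch: make a directory's entries durable by fsyncing
 * the directory itself (POSIX allows opening a directory read-only). */
#include <fcntl.h>
#include <unistd.h>

static int
fsync_dir(const char *path)
{
    int fd = open(path, O_RDONLY);
    int ret;

    if (fd < 0)
        return -1;
    ret = fsync(fd);    /* flushes the directory's metadata, i.e. its entries */
    close(fd);
    return ret;
}
-------------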


Andres

On Tuesday 29 December 2009 01:30:17 david@lang.hm wrote:
> On Tue, 29 Dec 2009, Greg Stark wrote:
> > On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <andres@anarazel.de>
wrote:
> >> fsync everything in that pass.
> >> Including the directory - which was not done before and actually might
> >> be necessary in some cases.
> >
> > Er. Yes. At least on ext4 this is pretty important. I wish it weren't,
> > but it doesn't look like we're going to convince the ext4 developers
> > they're crazy any day soon and it would really suck for a database
> > created from a template to have files in it go missin.
>
> actually, as I understand it you need to do this on all filesystems except
> ext3, and on ext3 fsync is horribly slow because it writes out
> _everything_ that's pending, not just stuff related to the file you do the
> fsync on.
I don't think it's all filesystems (ext2 should not be affected...), but
generally you're right. At least JFS and XFS are affected as well.

Btw, ext3 is not necessarily even nearly-safe-and-slow (data=writeback).

Andres

On Tue, 29 Dec 2009, Greg Stark wrote:

> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <andres@anarazel.de> wrote:
>> fsync everything in that pass.
>> Including the directory - which was not done before and actually might be
>> necessary in some cases.
>
> Er. Yes. At least on ext4 this is pretty important. I wish it weren't,
> but it doesn't look like we're going to convince the ext4 developers
> they're crazy any day soon and it would really suck for a database
> created from a template to have files in it go missin.

actually, as I understand it you need to do this on all filesystems except
ext3, and on ext3 fsync is horribly slow because it writes out
_everything_ that's pending, not just stuff related to the file you do the
fsync on.

David Lang

On Tue, 29 Dec 2009, Andres Freund wrote:

> On Tuesday 29 December 2009 01:30:17 david@lang.hm wrote:
>> On Tue, 29 Dec 2009, Greg Stark wrote:
>>> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <andres@anarazel.de>
> wrote:
>>>> fsync everything in that pass.
>>>> Including the directory - which was not done before and actually might
>>>> be necessary in some cases.
>>>
>>> Er. Yes. At least on ext4 this is pretty important. I wish it weren't,
>>> but it doesn't look like we're going to convince the ext4 developers
>>> they're crazy any day soon and it would really suck for a database
>>> created from a template to have files in it go missin.
>>
>> actually, as I understand it you need to do this on all filesystems except
>> ext3, and on ext3 fsync is horribly slow because it writes out
>> _everything_ that's pending, not just stuff related to the file you do the
>> fsync on.
> I dont think its all filesystems (ext2 should not be affected...), but generally
> youre right. At least jfs, xfs are affected as well.

ext2 definitely needs the fsync on the directory as well as the file
(well, if file metadata like the size changed)

> Its btw not necessarily nearly-safe and slow on ext3 as well (data=writeback).

no, then it's just unsafe and slow ;-)

David Lang

Andres Freund wrote:
> As I said the real benefit only occurred after adding posix_fadvise(..,
> FADV_DONTNEED) which is somewhat plausible, because i.e. the directory entries
> don't need to get scheduled for every file and because the kernel can reorder a
> whole directory nearly sequentially. Without the advice it the kernel doesn't
> know in time that it should write that data back and it wont do it for 5
> seconds by default on linux or such...
>
I know they just fiddled with the logic in the last release, but for
most of the Linux kernels out there now pdflush wakes up every 5 seconds
by default.  But typically it only worries about writing things that
have been in the queue for 30 seconds or more until you've filled quite
a bit of memory, so that's also an interesting number.  I tried to
document the main tunables here and describe how they fit together at
http://www.westnet.com/~gsmith/content/linux-pdflush.htm

It would be interesting to graph the "Dirty" and "Writeback" figures in
/proc/meminfo over time with and without this patch in place.  That
should make it obvious what the kernel is doing differently in the two
cases.
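
A minimal sketch of such a sampler (a hypothetical standalone helper assuming
Linux's /proc/meminfo format; not part of the patch):

-------------
/* Print the Dirty and Writeback lines from /proc/meminfo once per second. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    for (;;)
    {
        FILE   *f = fopen("/proc/meminfo", "r");
        char    line[128];

        if (f == NULL)
            return 1;
        while (fgets(line, sizeof(line), f) != NULL)
        {
            if (strncmp(line, "Dirty:", 6) == 0 ||
                strncmp(line, "Writeback:", 10) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        fflush(stdout);
        sleep(1);
    }
}
-------------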

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com


On Tuesday 29 December 2009 01:46:21 Greg Smith wrote:
> Andres Freund wrote:
> > As I said the real benefit only occurred after adding posix_fadvise(..,
> > FADV_DONTNEED) which is somewhat plausible, because i.e. the directory
> > entries don't need to get scheduled for every file and because the kernel
> > can reorder a whole directory nearly sequentially. Without the advice it
> > the kernel doesn't know in time that it should write that data back and
> > it wont do it for 5 seconds by default on linux or such...
> It would be interesting to graph the "Dirty" and "Writeback" figures in
> /proc/meminfo over time with and without this patch in place.  That
> should make it obvious what the kernel is doing differently in the two
> cases.
I did some analysis using blktrace (useful tool, btw) and the results show that
the I/O pattern is *significantly* different.

For one thing, with the direct fsyncing nearly no hardware queuing is used, and
for another, nearly no requests are merged on the software side.

Short stats:

OLD:

Total (8,0):
 Reads Queued:           2,        8KiB     Writes Queued:        7854,    29672KiB
 Read Dispatches:        2,        8KiB     Write Dispatches:     1926,    29672KiB
 Reads Requeued:         0         Writes Requeued:         0
 Reads Completed:        2,        8KiB     Writes Completed:     2362,    29672KiB
 Read Merges:            0,        0KiB     Write Merges:         5492,    21968KiB
 PC Reads Queued:        0,        0KiB     PC Writes Queued:        0,        0KiB
 PC Read Disp.:        436,        0KiB     PC Write Disp.:          0,        0KiB
 PC Reads Req.:          0         PC Writes Req.:          0
 PC Reads Compl.:        0         PC Writes Compl.:     2362
 IO unplugs:          2395             Timer unplugs:         557


New:

Total (8,0):
 Reads Queued:           0,        0KiB     Writes Queued:        1716,     5960KiB
 Read Dispatches:        0,        0KiB     Write Dispatches:      324,     5960KiB
 Reads Requeued:         0         Writes Requeued:         0
 Reads Completed:        0,        0KiB     Writes Completed:      550,     5960KiB
 Read Merges:            0,        0KiB     Write Merges:         1166,     4664KiB
 PC Reads Queued:        0,        0KiB     PC Writes Queued:        0,        0KiB
 PC Read Disp.:        226,        0KiB     PC Write Disp.:          0,        0KiB
 PC Reads Req.:          0         PC Writes Req.:          0
 PC Reads Compl.:        0         PC Writes Compl.:      550
 IO unplugs:           503             Timer unplugs:          30


Andres

Andres,
Great job.  Looking through the emails and thinking about why this works, I
think this patch should significantly speed up 8.4 on almost any file system
(obviously some more than others) unless the system has very little memory or
a slow single core. On a Celeron with 256MB of memory I suspect it'll crash
out or just hit swap and become a worse bottleneck.  Anyone have something
like this to test on?
-Michael

On Mon, Dec 28, 2009 at 9:05 PM, Andres Freund <andres@anarazel.de> wrote:
On Tuesday 29 December 2009 01:46:21 Greg Smith wrote:
> Andres Freund wrote:
> > As I said the real benefit only occurred after adding posix_fadvise(..,
> > FADV_DONTNEED) which is somewhat plausible, because i.e. the directory
> > entries don't need to get scheduled for every file and because the kernel
> > can reorder a whole directory nearly sequentially. Without the advice it
> > the kernel doesn't know in time that it should write that data back and
> > it wont do it for 5 seconds by default on linux or such...
> It would be interesting to graph the "Dirty" and "Writeback" figures in
> /proc/meminfo over time with and without this patch in place.  That
> should make it obvious what the kernel is doing differently in the two
> cases.
I did some analysis using blktrace (usefull tool btw) and the results show that
the io pattern is *significantly* different.

For one with the direct fsyncing nearly no hardware queuing is used and for
another nearly no requests are merged on software side.

Short stats:

OLD:

Total (8,0):
 Reads Queued:           2,        8KiB  Writes Queued:        7854,    29672KiB
 Read Dispatches:        2,        8KiB  Write Dispatches:     1926,    29672KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        2,        8KiB  Writes Completed:     2362,    29672KiB
 Read Merges:            0,        0KiB  Write Merges:         5492,    21968KiB
 PC Reads Queued:        0,        0KiB  PC Writes Queued:        0,        0KiB
 PC Read Disp.:        436,        0KiB  PC Write Disp.:          0,        0KiB
 PC Reads Req.:          0               PC Writes Req.:          0
 PC Reads Compl.:        0               PC Writes Compl.:     2362
 IO unplugs:          2395               Timer unplugs:         557


New:

Total (8,0):
 Reads Queued:           0,        0KiB  Writes Queued:        1716,     5960KiB
 Read Dispatches:        0,        0KiB  Write Dispatches:      324,     5960KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        0,        0KiB  Writes Completed:      550,     5960KiB
 Read Merges:            0,        0KiB  Write Merges:         1166,     4664KiB
 PC Reads Queued:        0,        0KiB  PC Writes Queued:        0,        0KiB
 PC Read Disp.:        226,        0KiB  PC Write Disp.:          0,        0KiB
 PC Reads Req.:          0               PC Writes Req.:          0
 PC Reads Compl.:        0               PC Writes Compl.:      550
 IO unplugs:           503               Timer unplugs:          30


Andres

On Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:
> Andres,
> Great job.  Looking through the emails and thinking about why this works I
> think this patch should significantly speedup 8.4 on most any file
> system(obviously some more than others) unless the system has significantly
> reduced memory or a slow single core. On a Celeron with 256 memory I
>  suspect it'll crash out or just hit the swap  and be a worse bottleneck.
>  Anyone have something like this to test on?
Why should it crash? The kernel should just block on writing and write out the
dirty memory before continuing.
Pg is not caching anything here...

Andres

Maybe not crash out, but consider this situation:
N=0
while(N>=0):
    CREATE DATABASE new_db_N;
Since the fsync is the part which takes the memory and time but happens in the
background, won't the fsyncs pile up in the background faster than they can be
run, filling up memory and the stack?
This is very likely a mistake on my part about how postgres/processes actually
work.
-Michael

On Mon, Dec 28, 2009 at 9:55 PM, Andres Freund <andres@anarazel.de> wrote:
On Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:
> Andres,
> Great job.  Looking through the emails and thinking about why this works I
> think this patch should significantly speedup 8.4 on most any file
> system(obviously some more than others) unless the system has significantly
> reduced memory or a slow single core. On a Celeron with 256 memory I
>  suspect it'll crash out or just hit the swap  and be a worse bottleneck.
>  Anyone have something like this to test on?
Why should it crash? The kernel should just block on writing and write out the
dirty memory before continuing?
Pg is not caching anything here...

Andres

On Tuesday 29 December 2009 04:04:06 Michael Clemmons wrote:
> Maybe not crash out but in this situation.
> N=0
> while(N>=0):
>     CREATE DATABASE new_db_N;
> Since the fsync is the part which takes the memory and time but is
>  happening in the background want the fsyncs pile up in the background
>  faster than can be run filling up the memory and stack.
> This is very likely a mistake on my part about how postgres/processes
The difference should not be visible outside the "CREATE DATABASE ..." at all.
Simplified, the process currently works like this:

------------
for file in source directory:
    copy_file(source/file, target/file);
    fsync(target/file);
------------

I changed it to:

-------------
for file in source directory:
    copy_file(source/file, target/file);

    /*please dear kernel, write this out, but dont block*/
    posix_fadvise(target/file, FADV_DONTNEED);

for file in source directory:
    fsync(target/file);
-------------

If at any point in time there is not enough cache available to cache anything,
copy_file() will just have to wait for the kernel to write out the data.
fsync() does not use memory itself.
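
In real C the same scheme could look roughly like this (a simplified sketch
with error handling omitted; copy_file() stands in for the existing copy
routine, and the file-list handling is invented for illustration):

-------------
#include <fcntl.h>
#include <unistd.h>

extern void copy_file(const char *src, const char *dst);

static void
copy_and_sync(char **srcfiles, char **dstfiles, int nfiles)
{
    int     i;

    /* Pass 1: copy each file and hint the kernel to start writeback. */
    for (i = 0; i < nfiles; i++)
    {
        int     fd;

        copy_file(srcfiles[i], dstfiles[i]);

        fd = open(dstfiles[i], O_RDONLY);
        if (fd >= 0)
        {
            /* please dear kernel, write this out, but don't block */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
            close(fd);
        }
    }

    /* Pass 2: most data should already be on its way to disk, so these
     * fsyncs mostly wait for completion instead of initiating I/O. */
    for (i = 0; i < nfiles; i++)
    {
        int     fd = open(dstfiles[i], O_RDONLY);

        if (fd >= 0)
        {
            fsync(fd);
            close(fd);
        }
    }
}
-------------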

Andres

On Tue, Dec 29, 2009 at 2:05 AM, Andres Freund <andres@anarazel.de> wrote:
>  Reads Completed:        2,        8KiB  Writes Completed:     2362,    29672KiB
> New:
>  Reads Completed:        0,        0KiB  Writes Completed:      550,     5960KiB

It looks like the new method is only doing 1/6th as much i/o. Do you
know what's going on there?


--
greg

On Tuesday 29 December 2009 11:48:10 Greg Stark wrote:
> On Tue, Dec 29, 2009 at 2:05 AM, Andres Freund <andres@anarazel.de> wrote:
> >  Reads Completed:        2,        8KiB  Writes Completed:     2362,
> >  29672KiB New:
> >  Reads Completed:        0,        0KiB  Writes Completed:      550,
> > 5960KiB
>
> It looks like the new method is only doing 1/6th as much i/o. Do you
> know what's going on there?
While I was surprised by the size of the difference, I am not surprised at all
that there is a significant one - currently the fsync will write out a whole
bunch of useless stuff every time it's called (all metadata, the directory
structure and so on).

This is reproducible...

6MB sounds sensible for the operation btw - the template database is around
5MB.


Will try to analyze later what exactly causes the additional I/O.


Andres

On Monday 28 December 2009 23:59:43 Andres Freund wrote:
> On Monday 28 December 2009 23:54:51 Andres Freund wrote:
> > On Saturday 12 December 2009 21:38:41 Andres Freund wrote:
> > > On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:
> > > > If ppl think its worth it I'll create a ticket
> > >
> > > Thanks, no need. I will post a patch tomorrow or so.
> >
> > Well. It was a long day...
> >
> > Anyway.
> > In this patch I delay the fsync done in copy_file and simply do a second
> >  pass over the directory in copy_dir and fsync everything in that pass.
> > Including the directory - which was not done before and actually might be
> > necessary in some cases.
> > I added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that
> > the copied file reaches storage before the fsync. Without the speed
> > benefits were quite a bit smaller and essentially random (which seems
> > sensible).
> >
> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s
> > on my laptop.  Still slower than with fsync off (~0.25) but quite a
> > worthy improvement.
> >
> > The benefits are obviously bigger if the template database includes
> >  anything added.
>
> Obviously the patch would be helpfull.
And it would also be helpful not to have annoying oversights in there: a
FreeDir(xldir); is missing at the end of copydir().

Andres

Looking at this patch for the commitfest I have a few questions.

1) You said you added an fsync of the new directory -- where is that? I
don't see it anywhere.

2) Why does the second pass to do the fsyncs read through fromdir to
find all the filenames. I find that odd and counterintuitive. It would
be much more natural to just loop through the files in the new
directory. But I suppose it serves as an added paranoia check that the
files are in fact still there and we're not fsyncing any files we
didn't just copy. I think it should still work, we should have an
exclusive lock on the template database so there really ought to be no
differences between the directory trees.

3) It would be tempting to do the posix_fadvise on each chunk as we
copy it. That way we avoid poisoning the filesystem cache even as far
as a 1G file. This might actually be quite significant if we're built
without the 1G file chunk size. I'm a bit concerned that the code will
be a bit more complex having to depend on a good off_t definition
though. Do we only use >1GB files on systems where off_t is capable of
handling >2^32 without gymnastics?
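
For concreteness, per-chunk advising as described in 3) might look like this
(a hypothetical sketch; the 64k chunk size and names are invented):

-------------
#include <fcntl.h>
#include <unistd.h>

#define COPY_CHUNK  (64 * 1024)

static void
copy_with_chunk_advice(int srcfd, int dstfd)
{
    char    buf[COPY_CHUNK];
    off_t   offset = 0;
    ssize_t nread;

    while ((nread = read(srcfd, buf, sizeof(buf))) > 0)
    {
        write(dstfd, buf, nread);
        /* hint writeback for just this chunk, so copying a large file
         * doesn't push more useful data out of the filesystem cache */
        posix_fadvise(dstfd, offset, nread, POSIX_FADV_DONTNEED);
        offset += nread;
    }
}
-------------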

--
greg

On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <gsstark@mit.edu> wrote:
> Looking at this patch for the commitfest I have a few questions.

So I've touched this patch up a bit:

1) moved the posix_fadvise call to a new fd.c function
pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
waiting on it. Currently it's only implemented with
posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
in the future -- it looks like this call might be good enough for our
checkpoints.

2) advised each 64k chunk as we write it which should avoid poisoning
the cache if you do a large create database on an active system.

3) added the promised but afaict missing fsync of the directory -- i
think we should actually backpatch this.
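
A rough sketch of what the pg_fsync_start() wrapper from 1) could reduce to
on platforms with posix_fadvise (a hypothetical reconstruction; the actual
fd.c code may differ):

-------------
#include <fcntl.h>

/* Initiate writeback of a file range without waiting for it.  On
 * platforms without a suitable kernel call this is a no-op. */
int
pg_fsync_start(int fd, off_t offset, off_t nbytes)
{
#if defined(POSIX_FADV_DONTNEED)
    return posix_fadvise(fd, offset, nbytes, POSIX_FADV_DONTNEED);
#else
    return 0;
#endif
}
-------------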

Barring any objections shall I commit it like this?


--
greg


--
greg

[Attachment]
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <gsstark@mit.edu> wrote:
> Barring any objections shall I commit it like this?

Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same speedup?

--
greg

On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <gsstark@mit.edu> wrote:
> > Looking at this patch for the commitfest I have a few questions.
>
> So I've touched this patch up a bit:
>
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
> waiting on it. Currently it's only implemented with
> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
> in the future -- it looks like this call might be good enough for our
> checkpoints.
>
> 2) advised each 64k chunk as we write it which should avoid poisoning
> the cache if you do a large create database on an active system.
>
> 3) added the promised but afaict missing fsync of the directory -- i
> think we should actually backpatch this.
Yes, that was a bit stupid of me - I added the fsync for directories which
get recursed into (by not checking whether it's a file) but not for the
uppermost level.
So right now all directories get fsynced except the topmost one.

I will review the patch later, when I finally have some time off again...
in ~4h.

Thanks!

Andres

Greg Stark <gsstark@mit.edu> writes:
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
> waiting on it. Currently it's only implemented with
> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
> in the future -- it looks like this call might be good enough for our
> checkpoints.

That function *seriously* needs documentation, in particular the fact
that it's a no-op on machines without the right kernel call.  The name
you've chosen is very bad for those semantics.  I'd pick something
else myself.  Maybe "pg_start_data_flush" or something like that?

Other than that quibble it seems basically sane.

            regards, tom lane

Hi Greg,

On Monday 18 January 2010 17:35:59 Greg Stark wrote:
> 2) Why does the second pass to do the fsyncs read through fromdir to
> find all the filenames. I find that odd and counterintuitive. It would
> be much more natural to just loop through the files in the new
> directory. But I suppose it serves as an added paranoia check that the
> files are in fact still there and we're not fsyncing any files we
> didn't just copy. I think it should still work, we should have an
> exclusive lock on the template database so there really ought to be no
> differences between the directory trees.
If it weren't safe, we would already have a big problem...

Andres

Hi Greg,

On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <gsstark@mit.edu> wrote:
> > Looking at this patch for the commitfest I have a few questions.
>
> So I've touched this patch up a bit:
>
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
> waiting on it. Currently it's only implemented with
> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
> in the future -- it looks like this call might be good enough for our
> checkpoints.
Why exactly should that depend on fsync? Sure, that's where most of the pain
comes from now, but avoiding that cache poisoning wouldn't hurt elsewhere
either.

I would rather have it called pg_flush_cache_range or such...

> 2) advised each 64k chunk as we write it which should avoid poisoning
> the cache if you do a large create database on an active system.
>
> 3) added the promised but afaict missing fsync of the directory -- i
> think we should actually backpatch this.
I think so as well. You need it while recursing too (where I had added it),
not only for the final directory.

> Barring any objections shall I commit it like this?
Other than the two things above it looks fine to me.

Thanks,

Andres

On Tuesday 19 January 2010 15:57:14 Greg Stark wrote:
> On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <gsstark@mit.edu> wrote:
> > Barring any objections shall I commit it like this?
>
> Actually before we get there could someone who demonstrated the
> speedup verify that this patch still gets that same speedup?
At least on the three machines I tested last time the result is still in the
same ballpark.

Andres

Greg Stark wrote:
> On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <gsstark@mit.edu> wrote:
>> Barring any objections shall I commit it like this?
> Actually before we get there could someone who demonstrated the
> speedup verify that this patch still gets that same speedup?

I think the final version of this patch could use at least one more
performance-checking report showing that it does something useful.  We got a
lot of data from Andres, but do we know that the improvements here hold for
others too?  I can take a look at it later this week; I have some interest in
this area.

-- 
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com

Greg Stark wrote:
> Actually before we get there could someone who demonstrated the
> speedup verify that this patch still gets that same speedup?
>

Let's step back a second and get to the bottom of why some people are
seeing this and others aren't.  The original report here suggested this
was an ext4 issue.  As I pointed out recently on the performance list,
the reason for that is likely that the working write-barrier support for
ext4 means it's passing through the fsync to "lying" hard drives via a
proper cache flush, which didn't happen on your typical ext3 install.
Given that, I'd expect I could see the same issue with ext3 given a
drive with its write cache turned off - so that's the theory I started
trying to prove before seeing the patch operate.

What I did was create a little test program that created 5 databases and
then dropped them:

\timing
create database a;
create database b;
create database c;
create database d;
create database e;
drop database a;
drop database b;
drop database c;
drop database d;
drop database e;

(All of the drop times were very close by the way; around 100ms, nothing
particularly interesting there)

If I have my system's boot drive (attached to the motherboard, not on
the caching controller) in its regular, lying mode with write cache on,
the creates take the following times:

Time: 713.982 ms  Time: 659.890 ms  Time: 590.842 ms  Time: 675.506 ms
Time: 645.521 ms

A second run gives similar results; seems quite repeatable for every
test I ran so I'll just show one run of each.

If I then turn off the write-cache on the drive:

$ sudo hdparm -W 0 /dev/sdb

And repeat, these times show up instead:

Time: 6781.205 ms  Time: 6805.271 ms  Time: 6947.037 ms  Time: 6938.644
ms  Time: 7346.838 ms

So there's the problem case reproduced, right on regular old ext3 and
Ubuntu Jaunty:  around 7 seconds to create a database, not real impressive.

Applying the last patch you attached, with the cache on, I see this:

Time: 396.105 ms  Time: 389.984 ms  Time: 469.800 ms  Time: 386.043 ms
Time: 441.269 ms

And if I then turn the write cache off, back to slow times, but much better:

Time: 2162.687 ms  Time: 2174.057 ms  Time: 2215.785 ms  Time: 2174.100
ms  Time: 2190.811 ms

That makes the average times I'm seeing on my server:

HEAD     Cached: 657 ms   Uncached: 6964 ms
Patched  Cached: 417 ms   Uncached: 2183 ms

Modest speedup even with a caching drive, and a huge speedup in the case
when you have one with slow fsync.  Looks to me that if you address
Tom's concern about documentation and function naming, committing this
patch will certainly deliver as promised on the performance side.  Maybe
2 seconds is still too long for some people, but it's at least a whole
lot better.

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com


On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> That function *seriously* needs documentation, in particular the fact
> that it's a no-op on machines without the right kernel call.  The name
> you've chosen is very bad for those semantics.  I'd pick something
> else myself.  Maybe "pg_start_data_flush" or something like that?
>

I would like to make one token argument in favour of the name I
picked. If it doesn't convince you I'll change it, since we can always
revisit the API down the road.

I envision having two function calls, pg_fsync_start() and
pg_fsync_finish(). The latter will wait until the data synced in the
first call is actually synced. The fall-back if there's no
implementation of this would be for fsync_start() to be a noop (or
something unreliable like posix_fadvise) and fsync_finish() to just be
a regular fsync.

I think we can accomplish this with sync_file_range() but I need to
read up on how it actually works a bit more. In this case it doesn't
make a difference since when we call fsync_finish() it's going to be
for the entire file and nothing else will have been writing to these
files. But for wal writing and checkpointing it might have very
different performance characteristics.

The big objection to this is that then we don't really have an api for
FADV_DONT_NEED which is more about cache policy than about syncing to
disk. So for example a sequential scan might want to indicate that it
isn't planning on reading the buffers it's churning through but
doesn't want to force them to be written sooner than otherwise and is
never going to call fsync_finish().
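
On Linux, a sketch of that two-stage pattern with sync_file_range() might look
like the following (hypothetical, assuming the flag semantics documented in
the man page):

-------------
#define _GNU_SOURCE
#include <fcntl.h>

/* Start writeback of the range, but do not wait for it. */
static int
fsync_start(int fd, off_t offset, off_t nbytes)
{
    return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
}

/* Wait until the previously started writeback has completed.  Note that
 * this does not flush file metadata or the drive's write cache the way
 * fsync() does, which is why the fallback must remain a plain fsync(). */
static int
fsync_finish(int fd, off_t offset, off_t nbytes)
{
    return sync_file_range(fd, offset, nbytes,
                           SYNC_FILE_RANGE_WAIT_BEFORE |
                           SYNC_FILE_RANGE_WRITE |
                           SYNC_FILE_RANGE_WAIT_AFTER);
}
-------------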



--
greg

Andres Freund <andres@anarazel.de> writes:
> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
>> I took a look at this patch today and I agree with Tom that
>> pg_fsync_start() is a very confusing name.  I don't know what the
>> right name is, but this doesn't fsync so I don't think it shuld have
>> fsync in the name.  Maybe something like pg_advise_abandon() or
>> pg_abandon_cache().  The current name is really wishful thinking:
>> you're hoping that it will make the kernel start the fsync, but it
>> might not.  I think pg_start_data_flush() is similarly optimistic.

> What about: pg_fsync_prepare().

prepare_for_fsync()?

            regards, tom lane

On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
> On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark <gsstark@mit.edu> wrote:
> > On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> That function *seriously* needs documentation, in particular the fact
> >> that it's a no-op on machines without the right kernel call.  The name
> >> you've chosen is very bad for those semantics.  I'd pick something
> >> else myself.  Maybe "pg_start_data_flush" or something like that?
> >
> > I would like to make one token argument in favour of the name I
> > picked. If it doesn't convince I'll change it since we can always
> > revisit the API down the road.
> >
> > I envision having two function calls, pg_fsync_start() and
> > pg_fsync_finish(). The latter will wait until the data synced in the
> > first call is actually synced. The fall-back if there's no
> > implementation of this would be for fsync_start() to be a noop (or
> > something unreliable like posix_fadvise) and fsync_finish() to just be
> > a regular fsync.
> >
> > I think we can accomplish this with sync_file_range() but I need to
> > read up on how it actually works a bit more. In this case it doesn't
> > make a difference since when we call fsync_finish() it's going to be
> > for the entire file and nothing else will have been writing to these
> > files. But for wal writing and checkpointing it might have very
> > different performance characteristics.
> >
> > The big objection to this is that then we don't really have an api for
> > FADV_DONT_NEED which is more about cache policy than about syncing to
> > disk. So for example a sequential scan might want to indicate that it
> > isn't planning on reading the buffers it's churning through but
> > doesn't want to force them to be written sooner than otherwise and is
> > never going to call fsync_finish().
>
> I took a look at this patch today and I agree with Tom that
> pg_fsync_start() is a very confusing name.  I don't know what the
> right name is, but this doesn't fsync so I don't think it shuld have
> fsync in the name.  Maybe something like pg_advise_abandon() or
> pg_abandon_cache().  The current name is really wishful thinking:
> you're hoping that it will make the kernel start the fsync, but it
> might not.  I think pg_start_data_flush() is similarly optimistic.
What about pg_fsync_prepare()? That gives the reason why we're doing it and
doesn't promise that it is actually doing an fsync.
I really dislike having "cache" in the name, because the primary aim is not to
discard the cache...

Andres

On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Andres Freund <andres@anarazel.de> writes:
>> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
>>> I took a look at this patch today and I agree with Tom that
>>> pg_fsync_start() is a very confusing name.  I don't know what the
>>> right name is, but this doesn't fsync so I don't think it shuld have
>>> fsync in the name.  Maybe something like pg_advise_abandon() or
>>> pg_abandon_cache().  The current name is really wishful thinking:
>>> you're hoping that it will make the kernel start the fsync, but it
>>> might not.  I think pg_start_data_flush() is similarly optimistic.
>
>> What about: pg_fsync_prepare().
>
> prepare_for_fsync()?

It still seems mis-descriptive to me.  Couldn't the same routine be
used simply to abandon undirtied data that we no longer care about
caching?

...Robert

On Tuesday 02 February 2010 19:14:40 Robert Haas wrote:
> On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Andres Freund <andres@anarazel.de> writes:
> >> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
> >>> I took a look at this patch today and I agree with Tom that
> >>> pg_fsync_start() is a very confusing name.  I don't know what the
> >>> right name is, but this doesn't fsync so I don't think it shuld have
> >>> fsync in the name.  Maybe something like pg_advise_abandon() or
> >>> pg_abandon_cache().  The current name is really wishful thinking:
> >>> you're hoping that it will make the kernel start the fsync, but it
> >>> might not.  I think pg_start_data_flush() is similarly optimistic.
> >>
> >> What about: pg_fsync_prepare().
> >
> > prepare_for_fsync()?
>
> It still seems mis-descriptive to me.  Couldn't the same routine be
> used simply to abandon undirtied data that we no longer care about
> caching?
For now it could - but it very well might be converted to sync_file_range or
similar, which would have different "side effects".

As the potential code duplication is rather small, I would prefer to describe
the primary effect, not the side effects...

Andres

On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund <andres@anarazel.de> wrote:
> For now it could - but it very well might be converted to sync_file_range or
> similar, which would have different "sideeffects".
>
> As the potential code duplication is rather small I would prefer to describe
> the prime effect not the sideeffects...

Hmm, in that case, I think the problem is that this function has no
comment explaining its intended charter.

...Robert

On Tuesday 02 February 2010 20:06:32 Robert Haas wrote:
> On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund <andres@anarazel.de> wrote:
> > For now it could - but it very well might be converted to sync_file_range
> > or similar, which would have different "sideeffects".
> >
> > As the potential code duplication is rather small I would prefer to
> > describe the prime effect not the sideeffects...
>
> Hmm, in that case, I think the problem is that this function has no
> comment explaining its intended charter.
I agree there. Greg, do you want to update the patch with some comments or
shall I?

Andres

On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> That function *seriously* needs documentation, in particular the fact
>> that it's a no-op on machines without the right kernel call.  The name
>> you've chosen is very bad for those semantics.  I'd pick something
>> else myself.  Maybe "pg_start_data_flush" or something like that?
>>
>
> I would like to make one token argument in favour of the name I
> picked. If it doesn't convince I'll change it since we can always
> revisit the API down the road.
>
> I envision having two function calls, pg_fsync_start() and
> pg_fsync_finish(). The latter will wait until the data synced in the
> first call is actually synced. The fall-back if there's no
> implementation of this would be for fsync_start() to be a noop (or
> something unreliable like posix_fadvise) and fsync_finish() to just be
> a regular fsync.
>
> I think we can accomplish this with sync_file_range() but I need to
> read up on how it actually works a bit more. In this case it doesn't
> make a difference since when we call fsync_finish() it's going to be
> for the entire file and nothing else will have been writing to these
> files. But for wal writing and checkpointing it might have very
> different performance characteristics.
>
> The big objection to this is that then we don't really have an api for
> FADV_DONT_NEED which is more about cache policy than about syncing to
> disk. So for example a sequential scan might want to indicate that it
> isn't planning on reading the buffers it's churning through but
> doesn't want to force them to be written sooner than otherwise and is
> never going to call fsync_finish().

I took a look at this patch today and I agree with Tom that
pg_fsync_start() is a very confusing name.  I don't know what the
right name is, but this doesn't fsync, so I don't think it should have
fsync in the name.  Maybe something like pg_advise_abandon() or
pg_abandon_cache().  The current name is really wishful thinking:
you're hoping that it will make the kernel start the fsync, but it
might not.  I think pg_start_data_flush() is similarly optimistic.

...Robert

Robert Haas <robertmhaas@gmail.com> writes:
> Hmm, in that case, I think the problem is that this function has no
> comment explaining its intended charter.

That's certainly a big problem, but a comment won't fix the fact that
the name is misleading.  We need both a comment and a name change.

            regards, tom lane

On Tue, Feb 2, 2010 at 2:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Hmm, in that case, I think the problem is that this function has no
>> comment explaining its intended charter.
>
> That's certainly a big problem, but a comment won't fix the fact that
> the name is misleading.  We need both a comment and a name change.

I think you're probably right, but it's not clear what the new name
should be until we have a comment explaining what the function is
responsible for.

...Robert

On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> I think you're probably right, but it's not clear what the new name
> should be until we have a comment explaining what the function is
> responsible for.

So I wrote some comments but wasn't going to repost the patch with the
unchanged name without explanation... But I think you're right, though
I was looking at it the other way around. I want to have an API for a
two-stage sync and of course if I do that I'll comment it to explain
that clearly.

The gist of the comments was that the function is preparing to fsync
by initiating the I/O early, allowing the later fsync to be fast -- but
also at the same time have the beneficial side-effect of avoiding
cache poisoning. It's not clear that the two are necessarily linked
though. Perhaps we need two separate apis, though it'll be hard to
keep them separate on all platforms.

--
greg

On 02/03/10 12:53, Greg Stark wrote:
> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas<robertmhaas@gmail.com>  wrote:
>> I think you're probably right, but it's not clear what the new name
>> should be until we have a comment explaining what the function is
>> responsible for.
>
> So I wrote some comments but wasn't going to repost the patch with the
> unchanged name without explanation... But I think you're right though
> I was looking at it the other way around. I want to have an API for a
> two-stage sync and of course if I do that I'll comment it to explain
> that clearly.
>
> The gist of the comments was that the function is preparing to fsync
> to initiate the i/o early and allow the later fsync to fast -- but
> also at the same time have the beneficial side-effect of avoiding
> cache poisoning. It's not clear that the two are necessarily linked
> though. Perhaps we need two separate apis, though it'll be hard to
> keep them separate on all platforms.
I vote for two separate APIs - sure, there will be some unfortunate
overlap for most unixoid platforms, but it is surely better to allow
adding more platforms later at a centralized place than to have to
analyze every place where the API is used.

Andres

On Wed, Feb 3, 2010 at 6:53 AM, Greg Stark <gsstark@mit.edu> wrote:
> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I think you're probably right, but it's not clear what the new name
>> should be until we have a comment explaining what the function is
>> responsible for.
>
> So I wrote some comments but wasn't going to repost the patch with the
> unchanged name without explanation... But I think you're right though
> I was looking at it the other way around. I want to have an API for a
> two-stage sync and of course if I do that I'll comment it to explain
> that clearly.
>
> The gist of the comments was that the function is preparing to fsync
> to initiate the i/o early and allow the later fsync to fast -- but
> also at the same time have the beneficial side-effect of avoiding
> cache poisoning. It's not clear that the two are necessarily linked
> though. Perhaps we need two separate apis, though it'll be hard to
> keep them separate on all platforms.

Well, maybe we should start with a discussion of what kernel calls
you're aware of on different platforms and then we could try to put an
API around it.  I mean, right now all you've got is
POSIX_FADV_DONTNEED, so given just that I feel like the API could
simply be pg_dontneed() or something.  It's hard to design a general
framework based on one example.

...Robert

On 02/03/10 14:42, Robert Haas wrote:
> On Wed, Feb 3, 2010 at 6:53 AM, Greg Stark<gsstark@mit.edu>  wrote:
>> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas<robertmhaas@gmail.com>  wrote:
>>> I think you're probably right, but it's not clear what the new name
>>> should be until we have a comment explaining what the function is
>>> responsible for.
>>
>> So I wrote some comments but wasn't going to repost the patch with the
>> unchanged name without explanation... But I think you're right though
>> I was looking at it the other way around. I want to have an API for a
>> two-stage sync and of course if I do that I'll comment it to explain
>> that clearly.
>>
>> The gist of the comments was that the function is preparing to fsync
>> to initiate the i/o early and allow the later fsync to fast -- but
>> also at the same time have the beneficial side-effect of avoiding
>> cache poisoning. It's not clear that the two are necessarily linked
>> though. Perhaps we need two separate apis, though it'll be hard to
>> keep them separate on all platforms.
>
> Well, maybe we should start with a discussion of what kernel calls
> you're aware of on different platforms and then we could try to put an
> API around it.
On Linux there is sync_file_range().  On newer POSIXish systems one can
emulate that with mmap() and msync() (in batches, obviously).

No idea about windows.
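
A sketch of that emulation (hypothetical; msync(MS_ASYNC) schedules writeback
of the mapped range without waiting, though how eagerly it does so varies by
kernel):

-------------
#include <sys/mman.h>

/* Ask the kernel to start writing back one file range, without blocking.
 * offset must be page-aligned for mmap(). */
static int
flush_range_async(int fd, off_t offset, size_t nbytes)
{
    void   *p = mmap(NULL, nbytes, PROT_READ, MAP_SHARED, fd, offset);

    if (p == MAP_FAILED)
        return -1;
    msync(p, nbytes, MS_ASYNC);
    munmap(p, nbytes);
    return 0;
}
-------------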

Andres

Andres Freund wrote:
> On 02/03/10 14:42, Robert Haas wrote:
>> Well, maybe we should start with a discussion of what kernel calls
>> you're aware of on different platforms and then we could try to put an
>> API around it.
> In linux there is sync_file_range. On newer Posixish systems one can
> emulate that with mmap() and msync() (in batches obviously).
>
> No idea about windows.

There's a series of parameters you can pass into CreateFile:
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx

A lot of these are already mapped inside of src/port/open.c in a pretty
straightforward way from the POSIX-oriented interface:

O_RDWR,O_WRONLY -> GENERIC_WRITE, GENERIC_READ
O_RANDOM -> FILE_FLAG_RANDOM_ACCESS
O_SEQUENTIAL -> FILE_FLAG_SEQUENTIAL_SCAN
O_SHORT_LIVED -> FILE_ATTRIBUTE_TEMPORARY
O_TEMPORARY -> FILE_FLAG_DELETE_ON_CLOSE
O_DIRECT -> FILE_FLAG_NO_BUFFERING
O_DSYNC -> FILE_FLAG_WRITE_THROUGH

You have to read the whole "Caching Behavior" section to see exactly how
all of those interact, and even then notes like
http://support.microsoft.com/kb/99794 are needed to follow the fine
points of things like FILE_FLAG_NO_BUFFERING vs. FILE_FLAG_WRITE_THROUGH.

So anything that's setting those POSIX open flags better than before is
getting the benefit of that improvement on Windows, too.  But that's not
quite the same as the changes using fadvise to provide better targeted
cache control hints.

I'm getting the impression that doing much better on Windows might fall
into the same sort of category as Solaris, where the primary interface
for this sort of thing is to use an AIO implementation instead:
http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx

The effective_io_concurrency feature had proof of concept test programs
that worked using AIO, but actually following through on that
implementation would require a major restructuring of how the database
interacts with the OS in terms of reads and writes of blocks.  It looks
to me like doing something similar to sync_file_range on Windows would
be similarly difficult.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


On Saturday 06 February 2010 06:03:30 Greg Smith wrote:
> Andres Freund wrote:
> > On 02/03/10 14:42, Robert Haas wrote:
> >> Well, maybe we should start with a discussion of what kernel calls
> >> you're aware of on different platforms and then we could try to put an
> >> API around it.
> >
> > In linux there is sync_file_range. On newer Posixish systems one can
> > emulate that with mmap() and msync() (in batches obviously).
> >
> > No idea about windows.
> The effective_io_concurrency feature had proof of concept test programs
> that worked using AIO, but actually following through on that
> implementation would require a major restructuring of how the database
> interacts with the OS in terms of reads and writes of blocks.  It looks
> to me like doing something similar to sync_file_range on Windows would
> be similarly difficult.
Looking around a bit, it seems one could achieve something approximately
similar to pg_prepare_fsync() by using
CreateFileMapping && MapViewOfFile && FlushViewOfFile.

If I understand it correctly that will flush, but not wait. Unfortunately you
can't even make it wait, so it's not possible to implement sync_file_range or
similar fully.
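
A sketch of that call sequence (hypothetical and untested, based only on the
documented Win32 APIs):

-------------
#include <windows.h>

/* Ask Windows to start flushing a file's dirty pages via a mapping,
 * without waiting for the disk. */
static BOOL
start_flush(HANDLE file)
{
    HANDLE  map = CreateFileMapping(file, NULL, PAGE_READONLY, 0, 0, NULL);
    LPVOID  view;
    BOOL    ok = FALSE;

    if (map == NULL)
        return FALSE;
    view = MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0);
    if (view != NULL)
    {
        /* initiates writing of dirty pages; does not wait for them */
        ok = FlushViewOfFile(view, 0);
        UnmapViewOfFile(view);
    }
    CloseHandle(map);
    return ok;
}
-------------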

Andres

On Sat, Feb 6, 2010 at 7:03 AM, Andres Freund <andres@anarazel.de> wrote:
> On Saturday 06 February 2010 06:03:30 Greg Smith wrote:
>> Andres Freund wrote:
>> > On 02/03/10 14:42, Robert Haas wrote:
>> >> Well, maybe we should start with a discussion of what kernel calls
>> >> you're aware of on different platforms and then we could try to put an
>> >> API around it.
>> >
>> > In linux there is sync_file_range. On newer Posixish systems one can
>> > emulate that with mmap() and msync() (in batches obviously).
>> >
>> > No idea about windows.
>> The effective_io_concurrency feature had proof of concept test programs
>> that worked using AIO, but actually following through on that
>> implementation would require a major restructuring of how the database
>> interacts with the OS in terms of reads and writes of blocks.  It looks
>> to me like doing something similar to sync_file_range on Windows would
>> be similarly difficult.
> Looking a bit arround it seems one could achieve something approximediately
> similar to pg_prepare_fsync() by using
> CreateFileMapping && MapViewOfFile && FlushViewOfFile
>
> If I understand it correctly that will flush, but not wait. Unfortunately you
> cant event make it wait, so its not possible to implement sync_file_range or
> similar fully.

Well it seems that what we're trying to implement is more like
it_would_be_nice_if_you_would_start_syncing_this_file_range_but_its_ok_if_you_dont(),
so maybe that would work.

Anyway, is there something that we can agree on and get committed here
for 9.0, or should we postpone this to 9.1?  It seems simple enough
that we ought to be able to get it done, but we're running out of time
and we don't seem to have a clear vision here yet...

...Robert

Robert Haas wrote:
> Well it seems that what we're trying to implement is more like
> it_would_be_nice_if_you_would_start_syncing_this_file_range_but_its_ok_if_you_dont(),
> so maybe that would work.
>
> Anyway, is there something that we can agree on and get committed here
> for 9.0, or should we postpone this to 9.1?  It seems simple enough
> that we ought to be able to get it done, but we're running out of time
> and we don't seem to have a clear vision here yet...
>

This is turning into yet another one of those situations where something
simple and useful is being killed by trying to generalize it way more
than it needs to be, given its current goals and its lack of external
interfaces.  There's no catversion bump or API breakage to hinder future
refactoring if this isn't optimally designed internally from day one.

The feature is valuable and there seems at least one spot where it may
be resolving the possibility of a subtle OS interaction bug by being
more thorough in the way that it writes and syncs.  The main contention
seems to be over naming and completely optional additional abstraction.
I consider the whole "let's make this cover every type of complicated
sync on every platform" goal interesting and worthwhile, but it's
completely optional for this release.  The stuff being fretted over now
is ultimately an internal interface that can be refactored at will in
later releases with no user impact.

If the goal here could be shifted back to finding the minimal level of
abstraction that doesn't seem completely wrong, then updating the
function names and comments to match that more closely, this could
return to committable.  That's all I thought was left to do when I moved
it to "ready for committer", and as far as I've seen this expanded scope
of discussion has just moved backwards from that point.

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com


Greg Smith <greg@2ndquadrant.com> writes:
> This is turning into yet another one of those situations where something
> simple and useful is being killed by trying to generalize it way more
> than it needs to be, given its current goals and its lack of external
> interfaces.  There's no catversion bump or API breakage to hinder future
> refactoring if this isn't optimally designed internally from day one.

I agree that it's too late in the cycle for any major redesign of the
patch.  But is it too much to ask to use a less confusing name for the
function?

            regards, tom lane

On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Greg Smith <greg@2ndquadrant.com> writes:
>> This is turning into yet another one of those situations where something
>> simple and useful is being killed by trying to generalize it way more
>> than it needs to be, given its current goals and its lack of external
>> interfaces.  There's no catversion bump or API breakage to hinder future
>> refactoring if this isn't optimally designed internally from day one.
>
> I agree that it's too late in the cycle for any major redesign of the
> patch.  But is it too much to ask to use a less confusing name for the
> function?

+1.  Let's just rename the thing, add some comments, and call it good.

...Robert

On Sunday 07 February 2010 19:23:10 Robert Haas wrote:
> On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Greg Smith <greg@2ndquadrant.com> writes:
> >> This is turning into yet another one of those situations where something
> >> simple and useful is being killed by trying to generalize it way more
> >> than it needs to be, given its current goals and its lack of external
> >> interfaces.  There's no catversion bump or API breakage to hinder future
> >> refactoring if this isn't optimally designed internally from day one.
> >
> > I agree that it's too late in the cycle for any major redesign of the
> > patch.  But is it too much to ask to use a less confusing name for the
> > function?
>
> +1.  Let's just rename the thing, add some comments, and call it good.
Will post an updated patch in the next few hours unless somebody beats me to it.

Andres

On Sunday 07 February 2010 19:27:02 Andres Freund wrote:
> On Sunday 07 February 2010 19:23:10 Robert Haas wrote:
> > On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > Greg Smith <greg@2ndquadrant.com> writes:
> > >> This is turning into yet another one of those situations where
> > >> something simple and useful is being killed by trying to generalize
> > >> it way more than it needs to be, given its current goals and its lack
> > >> of external interfaces.  There's no catversion bump or API breakage
> > >> to hinder future refactoring if this isn't optimally designed
> > >> internally from day one.
> > >
> > > I agree that it's too late in the cycle for any major redesign of the
> > > patch.  But is it too much to ask to use a less confusing name for the
> > > function?
> >
> > +1.  Let's just rename the thing, add some comments, and call it good.
>
> Will post an updated patch in the next few hours unless somebody beats me
> to it.
Here we go.

I left the name at my suggestion, pg_fsync_prepare, instead of Tom's
prepare_for_fsync, because it seemed more consistent with the naming in the
rest of the file. Obviously feel free to adjust.

I personally think the fsync on the directory should be added to the stable
branches - other opinions?
If wanted I can prepare patches for that.

Andres
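
To make the two-pass scheme concrete, here is a minimal sketch in C of the
technique described above: copy everything first (hinting the kernel to
begin writeback), then fsync each file and finally the directory itself.
All names are illustrative and error handling is omitted; this is the
shape of the approach, not the actual patch.

#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

void
copy_one_file(const char *src, const char *dst)
{
    char    buf[65536];
    ssize_t nread;
    int     srcfd = open(src, O_RDONLY);
    int     dstfd = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);

    while ((nread = read(srcfd, buf, sizeof(buf))) > 0)
        write(dstfd, buf, nread);

    /*
     * Hint that we won't reuse these pages, nudging the kernel to start
     * writeback now so the fsync in the second pass finds little left to do.
     */
    posix_fadvise(dstfd, 0, 0, POSIX_FADV_DONTNEED);

    close(srcfd);
    close(dstfd);
}

void
copy_dir_delayed_fsync(const char *srcdir, const char *dstdir)
{
    DIR           *dir;
    struct dirent *de;
    char           from[1024];
    char           to[1024];
    int            fd;

    mkdir(dstdir, 0700);

    /* First pass: copy every file, deferring all fsyncs. */
    dir = opendir(srcdir);
    while ((de = readdir(dir)) != NULL)
    {
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(from, sizeof(from), "%s/%s", srcdir, de->d_name);
        snprintf(to, sizeof(to), "%s/%s", dstdir, de->d_name);
        copy_one_file(from, to);
    }
    closedir(dir);

    /* Second pass: fsync every copied file... */
    dir = opendir(dstdir);
    while ((de = readdir(dir)) != NULL)
    {
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(to, sizeof(to), "%s/%s", dstdir, de->d_name);
        fd = open(to, O_RDONLY);
        fsync(fd);
        close(fd);
    }
    closedir(dir);

    /* ...and the directory itself, so the new entries are durable too. */
    fd = open(dstdir, O_RDONLY);
    fsync(fd);
    close(fd);
}

int
main(int argc, char **argv)
{
    if (argc == 3)
        copy_dir_delayed_fsync(argv[1], argv[2]);
    return 0;
}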

Andres Freund wrote:

> I personally think the fsync on the directory should be added to the stable
> branches - other opinions?
> If wanted I can prepare patches for that.

Yeah, it seems there are two patches here -- one is the addition of
fsync_fname() and the other is the fsync_prepare stuff.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
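
For reference, a helper along the lines of the fsync_fname() mentioned
here can be quite small. A minimal sketch, assuming the obvious
open-then-fsync approach; this is not the committed code:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void
fsync_fname(const char *fname, int isdir)
{
    /* Some platforms disallow opening a directory for writing. */
    int fd = open(fname, isdir ? O_RDONLY : O_RDWR);

    if (fd < 0)
    {
        perror(fname);
        return;
    }
    if (fsync(fd) != 0)
        perror(fname);
    close(fd);
}

Called as fsync_fname(path, 0) for a plain file, or fsync_fname(path, 1)
to make a directory's new entries durable after creating files in it.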

On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Andres Freund wrote:
>> I personally think the fsync on the directory should be added to the stable
>> branches - other opinions?
>> If wanted I can prepare patches for that.
>
> Yeah, it seems there are two patches here -- one is the addition of
> fsync_fname() and the other is the fsync_prepare stuff.

Andres, you want to take a crack at splitting this up?

...Robert

On Monday 08 February 2010 05:53:23 Robert Haas wrote:
> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
>
> <alvherre@commandprompt.com> wrote:
> > Andres Freund wrote:
> >> I personally think the fsync on the directory should be added to the
> >> stable branches - other opinions?
> >> If wanted I can prepare patches for that.
> >
> > Yeah, it seems there are two patches here -- one is the addition of
> > fsync_fname() and the other is the fsync_prepare stuff.
>
> Andres, you want to take a crack at splitting this up?
Will do. Later today or tomorrow morning.

Andres

On Mon, Feb 8, 2010 at 4:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
>> Yeah, it seems there are two patches here -- one is the addition of
>> fsync_fname() and the other is the fsync_prepare stuff.

Sorry, I'm just catching up on my mail from FOSDEM this past weekend.

I had come to the same conclusion as Greg that I might as well just
commit it with Tom's "pg_flush_data()" name and we can decide later if
and when we have pg_fsync_start()/pg_fsync_finish() whether it's worth
keeping two APIs or not.

So I was just going to commit it like that but I discovered last week
that I don't have cvs write access set up yet. I'll commit it as soon
as I generate a new ssh key and Dave installs it, etc. I intentionally
picked a small simple patch that nobody was waiting on because I knew
there was a risk of delays like this and the paperwork. I'm nearly
there.

--
greg
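
The function being named here is essentially a thin portability wrapper
around a writeback hint. A sketch of its likely shape, assuming the
posix_fadvise() approach discussed earlier in the thread; the committed
code may differ:

#include <fcntl.h>
#include <sys/types.h>

/*
 * Hint the OS to start writing out the given file range, without
 * waiting for completion.  Does nothing where no suitable primitive
 * is available.
 */
void
pg_flush_data(int fd, off_t offset, off_t nbytes)
{
#if defined(POSIX_FADV_DONTNEED)
    (void) posix_fadvise(fd, offset, nbytes, POSIX_FADV_DONTNEED);
#else
    (void) fd;
    (void) offset;
    (void) nbytes;
#endif
}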

On Monday 08 February 2010 19:34:01 Greg Stark wrote:
> On Mon, Feb 8, 2010 at 4:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> > On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
> >
> >> Yeah, it seems there are two patches here -- one is the addition of
> >> fsync_fname() and the other is the fsync_prepare stuff.
>
> Sorry, I'm just catching up on my mail from FOSDEM this past weekend.
>
> I had come to the same conclusion as Greg that I might as well just
> commit it with Tom's "pg_flush_data()" name and we can decide later if
> and when we have pg_fsync_start()/pg_fsync_finish() whether it's worth
> keeping two APIs or not.
>
> So I was just going to commit it like that but I discovered last week
> that I don't have cvs write access set up yet. I'll commit it as soon
> as I generate a new ssh key and Dave installs it, etc. I intentionally
> picked a small simple patch that nobody was waiting on because I knew
> there was a risk of delays like this and the paperwork. I'm nearly
> there.
Do you still want me to split the patch in two, or do you want to do it
yourself?
One, in multiple versions, for the directory fsync on the stable branches, and
another one for 9.0?

Andres

On Monday 08 February 2010 05:53:23 Robert Haas wrote:
> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
>
> <alvherre@commandprompt.com> wrote:
> > Andres Freund wrote:
> >> I personally think the fsync on the directory should be added to the
> >> stable branches - other opinions?
> >> If wanted I can prepare patches for that.
> >
> > Yeah, it seems there are two patches here -- one is the addition of
> > fsync_fname() and the other is the fsync_prepare stuff.
>
> Andres, you want to take a crack at splitting this up?
I hope I didn't duplicate Greg's work, but I didn't hear back from him, so...

Everything <8.1 is hopeless because cp is used there... I didn't see it as
worth replacing. The patch applies cleanly from 8.1 to 8.4 and survives the
regression tests.

Given pg's heavy commit model I didn't see a point in splitting the patch for
9.0 as well...

Andres

On Wed, Feb 10, 2010 at 9:27 PM, Andres Freund <andres@anarazel.de> wrote:
> On Monday 08 February 2010 05:53:23 Robert Haas wrote:
>> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera
>>
>> <alvherre@commandprompt.com> wrote:
>> > Andres Freund wrote:
>> >> I personally think the fsync on the directory should be added to the
>> >> stable branches - other opinions?
>> >> If wanted I can prepare patches for that.
>> >
>> > Yeah, it seems there are two patches here -- one is the addition of
>> > fsync_fname() and the other is the fsync_prepare stuff.
>>
>> Andres, you want to take a crack at splitting this up?
> I hope I didn't duplicate Greg's work, but I didn't hear back from him, so...
>
> Everything <8.1 is hopeless because cp is used there... I didn't see it as
> worth replacing. The patch applies cleanly from 8.1 to 8.4 and survives the
> regression tests.
>
> Given pg's heavy commit model I didn't see a point in splitting the patch for
> 9.0 as well...

I'd probably argue for committing this patch to both HEAD and the
back-branches, and doing a second commit with the remaining stuff for
HEAD only, but I don't care very much.

Greg Stark, have you managed to get your access issues sorted out?  If
you like, I can do the actual commit on this one.

...Robert


On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Greg Stark, have you managed to get your access issues sorted out?  If

Yep, will look at this today.


--
greg


On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> Greg Stark, have you managed to get your access issues sorted out?  If
>
> Yep, will look at this today.

So I think we have a bigger problem than just copydir.c. It seems to
me we should be fsyncing the tablespace data directories on every
checkpoint. Otherwise the creation or removal of relations could be
lost even though the data in them was fsynced. I'm thinking I should
add an _mdfd_opentblspc(reln) call which returns a file descriptor for
the tablespace and have mdsync() use that to sync the directory
whenever it fsyncs a relation. It would be nice to remember which
tablespaces have been fsynced and only fsync them once, though; that
would need another hash table just for tablespaces.

We probably also need to fsync the pg_xlog directory every time we
create or rename an xlog segment.

Are there any other places we do directory operations which we need to
be permanent?


--
greg
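
As a rough illustration of the "sync each tablespace directory at most
once per checkpoint" idea sketched above: a real implementation would use
a proper hash table, as Greg notes; a tiny fixed array keeps this
self-contained. All names here are hypothetical.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define MAX_TBLSPC_DIRS 64

const char *synced_dirs[MAX_TBLSPC_DIRS];
int         nsynced;    /* reset to 0 at the start of each checkpoint */

void
fsync_tblspc_dir_once(const char *dirpath)
{
    int i;
    int fd;

    /* Skip directories already synced during this checkpoint. */
    for (i = 0; i < nsynced; i++)
        if (strcmp(synced_dirs[i], dirpath) == 0)
            return;

    fd = open(dirpath, O_RDONLY);
    if (fd >= 0)
    {
        fsync(fd);
        close(fd);
    }
    if (nsynced < MAX_TBLSPC_DIRS)
        synced_dirs[nsynced++] = dirpath;
}

Something like mdsync() could then call this with the containing
tablespace path each time it fsyncs a relation file.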


Greg Stark <gsstark@mit.edu> writes:
> So I think we have a bigger problem than just copydir.c. It seems to
> me we should be fsyncing the table space data directories on every
> checkpoint.

Is there any evidence that anyone anywhere has ever lost data because
of a lack of directory fsyncs?  I sure don't recall any bug reports
that seem to match that theory.

It seems to me that we're talking about a huge hit in both code
complexity and performance to deal with a problem that doesn't actually
occur in the field; and which furthermore is trivially solved on any
modern filesystem by choosing the right filesystem options.  Why don't
we just document those options, instead?
        regards, tom lane


On Sunday 14 February 2010 18:11:39 Tom Lane wrote:
> Greg Stark <gsstark@mit.edu> writes:
> > So I think we have a bigger problem than just copydir.c. It seems to
> > me we should be fsyncing the table space data directories on every
> > checkpoint.
> 
> Is there any evidence that anyone anywhere has ever lost data because
> of a lack of directory fsyncs?  I sure don't recall any bug reports
> that seem to match that theory.
I have actually seen the issue during CREATE DATABASE at least. In
virtualized hardware, though...
~1GB template database, lots and lots of small tables; the crash occurred maybe
a minute after CREATE DATABASE, filesystem was xfs, kernel 2.6.30.y.
> It seems to me that we're talking about a huge hit in both code
> complexity and performance to deal with a problem that doesn't actually
> occur in the field; and which furthermore is trivially solved on any
> modern filesystem by choosing the right filesystem options.  Why don't
> we just document those options, instead?
Which options would those be? I am not aware that there are any for any of the
recent Linux filesystems.
Well, except "sync", that is, but that sure would be more of a performance hit
than fsyncing the directory...

Andres


Andres Freund <andres@anarazel.de> writes:
> On Sunday 14 February 2010 18:11:39 Tom Lane wrote:
>> It seems to me that we're talking about a huge hit in both code
>> complexity and performance to deal with a problem that doesn't actually
>> occur in the field; and which furthermore is trivially solved on any
>> modern filesystem by choosing the right filesystem options.  Why don't
>> we just document those options, instead?

> Which options would those be? I am not aware that there are any for any of the
> recent Linux filesystems.

Shouldn't journaling of metadata be sufficient?
        regards, tom lane


Re: Re: Faster CREATE DATABASE by delaying fsync

From
Florian Weimer
Date:
* Tom Lane:

>> Which options would those be? I am not aware that there are any for any of the
>> recent Linux filesystems.
>
> Shouldn't journaling of metadata be sufficient?

You also need to enforce ordering between the directory update and the
file update.  The file metadata is flushed with fsync(), but the
directory isn't.  On some systems, all directory operations are
synchronous, but not on Linux.


Re: Re: Faster CREATE DATABASE by delaying fsync

From
Mark Mielke
Date:
On 02/14/2010 03:24 PM, Florian Weimer wrote:
> * Tom Lane:
>    
>>> Which options would those be? I am not aware that there are any for any of the
>>> recent Linux filesystems.
>>>        
>> Shouldn't journaling of metadata be sufficient?
>>      
> You also need to enforce ordering between the directory update and the
> file update.  The file metadata is flushed with fsync(), but the
> directory isn't.  On some systems, all directory operations are
> synchronous, but not on Linux.
>    
       dirsync
              All directory updates within the filesystem should be done
              synchronously.  This affects the following system calls:
              creat, link, unlink, symlink, mkdir, rmdir, mknod and
              rename.

The widely reported problems, though, did not tend to be directory
changes written too late, but directory changes written too early. That
is, the directory change is written to disk, but the file content is
not. This is likely because of the "ordered journal" mode widely used
in ext3/ext4, where metadata changes are journalled but file pages are
not. Therefore, it is important for some operations that the file pages
are pushed to disk using fsync(file) before the metadata changes are
journalled.

In theory there is some open hole where directory updates need to be 
synchronized with file updates, as POSIX doesn't enforce this ordering, 
and we can't trust that all file systems implicitly order things 
correctly, but in practice, I don't see this sort of problem happening.

If you are concerned, enable dirsync.

Cheers,
mark

-- 
Mark Mielke<mark@mielke.cc>



Re: Re: Faster CREATE DATABASE by delaying fsync

From
Andres Freund
Date:
On Sunday 14 February 2010 21:41:02 Mark Mielke wrote:
> On 02/14/2010 03:24 PM, Florian Weimer wrote:
> > * Tom Lane:
> >>> Which options would those be? I am not aware that there are any for any
> >>> of the recent Linux filesystems.
> >> 
> >> Shouldn't journaling of metadata be sufficient?
> > 
> > You also need to enforce ordering between the directory update and the
> > file update.  The file metadata is flushed with fsync(), but the
> > directory isn't.  On some systems, all directory operations are
> > synchronous, but not on Linux.
> 
>        dirsync
>               All directory updates within the filesystem should be done
>               synchronously.  This affects the following system calls:
>               creat, link, unlink, symlink, mkdir, rmdir, mknod and
>               rename.
> 
> The widely reported problems, though, did not tend to be a problem with
> directory changes written too late - but directory changes being written
> too early. That is, the directory change is written to disk, but the
> file content is not. This is likely because of the "ordered journal"
> mode widely used in ext3/ext4 where metadata changes are journalled, but
> file pages are not journalled. Therefore, it is important for some
> operations, that the file pages are pushed to disk using fsync(file),
> before the metadata changes are journalled.
Well, but that's not a problem with pg, as it fsyncs the file contents.

> In theory there is some open hole where directory updates need to be
> synchronized with file updates, as POSIX doesn't enforce this ordering,
> and we can't trust that all file systems implicitly order things
> correctly, but in practice, I don't see this sort of problem happening.
I can try to reproduce it if you want...

> If you are concerned, enable dirsync.
If the filesystem already behaves that way, an fsync on it should be fairly
cheap. If it doesn't behave that way, doing it is correct...

Besides, there is no reason to fsync the directory before the checkpoint, so
dirsync would incur a higher cost than doing it correctly.

Andres


On Sun, Feb 14, 2010 at 10:31 AM, Greg Stark <gsstark@mit.edu> wrote:
> On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <gsstark@mit.edu> wrote:
>> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> Greg Stark, have you managed to get your access issues sorted out?  If
>>
>> Yep, will look at this today.
>
> So I think we have a bigger problem than just copydir.c. It seems to
> me we should be fsyncing the tablespace data directories on every
> checkpoint. Otherwise the creation or removal of relations could be
> lost even though the data in them was fsynced. I'm thinking I should
> add an _mdfd_opentblspc(reln) call which returns a file descriptor for
> the tablespace and have mdsync() use that to sync the directory
> whenever it fsyncs a relation. It would be nice to remember which
> tablespaces have been fsynced and only fsync them once, though; that
> would need another hash table just for tablespaces.
>
> We probably also need to fsync the pg_xlog directory every time we
> create or rename an xlog segment.
>
> Are there any other places we do directory operations which we need to
> be permanent?

I agree with Tom that we need to see some actual reproducible test
cases where this is an issue before we go too crazy with it.  In
theory what you're talking about could also happen when extending a
relation, if we extend into a new file; but I think we need to
convince ourselves that it really happens before we make any more
changes.

On a pragmatic note, if this does turn out to be a problem, it's a
bug, and we can and do fix bugs whenever we discover them.  But the
other part of this patch - to speed up createdb - is a feature - and
we are very rapidly running out of time for 9.0 features.  So I'd like
to vote for getting the feature part of this committed (assuming it's
in good shape, of course) and we can continue to investigate the other
issues but without quite as much urgency.

...Robert


On Sunday 14 February 2010 21:57:08 Robert Haas wrote:
> On Sun, Feb 14, 2010 at 10:31 AM, Greg Stark <gsstark@mit.edu> wrote:
> > On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <gsstark@mit.edu> wrote:
> >> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> >>> Greg Stark, have you managed to get your access issues sorted out?  If
> >> 
> >> Yep, will look at this today.
> > 
> > So I think we have a bigger problem than just copydir.c. It seems to
> > me we should be fsyncing the tablespace data directories on every
> > checkpoint. Otherwise the creation or removal of relations could be
> > lost even though the data in them was fsynced. I'm thinking I should
> > add an _mdfd_opentblspc(reln) call which returns a file descriptor for
> > the tablespace and have mdsync() use that to sync the directory
> > whenever it fsyncs a relation. It would be nice to remember which
> > tablespaces have been fsynced and only fsync them once, though; that
> > would need another hash table just for tablespaces.
> > 
> > We probably also need to fsync the pg_xlog directory every time we
> > create or rename an xlog segment.
> > 
> > Are there any other places we do directory operations which we need to
> > be permanent?
> 
> I agree with Tom that we need to see some actual reproducible test
> cases where this is an issue before we go too crazy with it.  In
> theory what you're talking about could also happen when extending a
> relation, if we extend into a new file; but I think we need to
> convince ourselves that it really happens before we make any more
> changes.
Ok, will try to reproduce.

> On a pragmatic note, if this does turn out to be a problem, it's a
> bug, and we can and do fix bugs whenever we discover them.  But the
> other part of this patch - to speed up createdb - is a feature - and
> we are very rapidly running out of time for 9.0 features.  So I'd like
> to vote for getting the feature part of this committed (assuming it's
> in good shape, of course) and we can continue to investigate the other
> issues but without quite as much urgency.
Sounds sensible.

Andres


On Sun, Feb 14, 2010 at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On a pragmatic note, if this does turn out to be a problem, it's a
> bug, and we can and do fix bugs whenever we discover them.  But the
> other part of this patch - to speed up createdb - is a feature - and
> we are very rapidly running out of time for 9.0 features.  So I'd like
> to vote for getting the feature part of this committed (assuming it's
> in good shape, of course) and we can continue to investigate the other
> issues but without quite as much urgency.

No problem, I already committed the part that overlaps so I can commit
the rest now. I just want to take extra care given how much wine I've
already had tonight...

Incidentally, sorry Andres, I forgot to credit you in the first commit.
--
greg


Re: Re: Faster CREATE DATABASE by delaying fsync

From
Mark Mielke
Date:
On 02/14/2010 03:49 PM, Andres Freund wrote:
> On Sunday 14 February 2010 21:41:02 Mark Mielke wrote:
>    
>> The widely reported problems, though, did not tend to be a problem with
>> directory changes written too late - but directory changes being written
>> too early. That is, the directory change is written to disk, but the
>> file content is not. This is likely because of the "ordered journal"
>> mode widely used in ext3/ext4 where metadata changes are journalled, but
>> file pages are not journalled. Therefore, it is important for some
>> operations, that the file pages are pushed to disk using fsync(file),
>> before the metadata changes are journalled.
>>      
> Well, but that's not a problem with pg, as it fsyncs the file contents.
>    

Exactly. Not a problem.

>> If you are concerned, enable dirsync.
>>      
> If the filesystem already behaves that way, an fsync on it should be fairly
> cheap. If it doesn't behave that way, doing it is correct...
>    

Well, I disagree, as the whole point of this thread is that fsync() is 
*not* cheap. :-)

> Besides, there is no reason to fsync the directory before the checkpoint, so
> dirsync would incur a higher cost than doing it correctly.
>    

Using "ordered" metadata journaling has approximately the same effect. 
Provided that the data is fsync()'d before the metadata is required, 
either the metadata is recorded in the journal, in which case the data 
is accessible, or the metadata is NOT recorded in the journal, in which 
case the files will appear missing. The races that theoretically exist
would be in situations where the data of one file references a separate 
file that does not yet exist.

You said you would try and reproduce - are you going to try and 
reproduce on ext3/ext4 with ordered journalling enabled? I think 
reproducing outside of a case such as CREATE DATABASE would be 
difficult. It would have to be something like:
    open(O_CREAT)/write()/fsync()/close() of new data file, where data
    gets written, but directory data is not yet written out to journal

    open()/.../write()/fsync()/close() of existing file to point to new
    data file, but directory data is still not yet written out to journal

    crash

In this case, "dirsync" should be effective at closing this hole.
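
Spelled out in C, that sequence looks roughly like the following. The
file names are hypothetical; the comment marks the window where a crash
loses the new file's directory entry even though its data was fsynced,
assuming the filesystem does not order the directory update against the
fsync'd data:

#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    /* Step 1: create and fsync a brand-new data file. */
    int fd = open("newdata", O_WRONLY | O_CREAT | O_EXCL, 0600);

    write(fd, "payload", 7);
    fsync(fd);                  /* the bytes themselves are durable... */
    close(fd);

    /*
     * ...but the directory entry for "newdata" may exist only in memory.
     * Nothing here forces it out; that would take an fsync of the
     * directory itself (or mounting with dirsync).
     */

    /* Step 2: make an existing file point at the new one and fsync it. */
    fd = open("pointer", O_WRONLY | O_TRUNC);
    write(fd, "newdata", 7);
    fsync(fd);
    close(fd);

    /* Step 3: crash here, and "pointer" may reference a missing file. */
    return 0;
}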

As for cost? Well, most PostgreSQL data is stored within file content, 
not directory metadata. I think "dirsync" might slow down some 
operations like CREATE DATABASE or "rm -fr", but I would not expect it 
to affect day-to-day performance of the database under real load. Many
operating systems enable the equivalent of "dirsync" by default. I 
believe Solaris does this, for example, and other than slowing down "rm 
-fr", I don't recall any real complaints about the cost of "dirsync".

After writing the above, I'm seriously considering adding "dirsync" to 
my /db mounts that hold PostgreSQL and MySQL data.

Cheers,
mark

-- 
Mark Mielke<mark@mielke.cc>