Thread: adding support for posix_fadvise()

adding support for posix_fadvise()

From
Neil Conway
Date:
A couple days ago, Manfred Spraul mentioned the posix_fadvise() API on
-hackers:

http://www.opengroup.org/onlinepubs/007904975/functions/posix_fadvise.html

I'm working on making use of posix_fadvise() where appropriate. I can
think of the following places where this would be useful:

(1) As Manfred originally noted, when we advance to a new XLOG segment,
we can use POSIX_FADV_DONTNEED to let the kernel know we won't be
accessing the old WAL segment anymore. I've attached a quick kludge of a
patch that implements this. I haven't done any benchmarking of it yet,
though (comments or benchmark results are welcome). A rough sketch of
the idea appears just after this list.

(2) ISTM that we can set POSIX_FADV_RANDOM for *all* indexes, since the
vast majority of the accesses to them shouldn't be sequential. Are there
any situations in which this assumption doesn't hold? (Perhaps B+-tree
bulk loading, or CLUSTER?) Should this be done per-index-AM, or
globally?

(3) When doing VACUUM, ANALYZE, or large sequential scans (for some
reasonable definition of "large"), we can use POSIX_FADV_SEQUENTIAL.

(4) Various other components, such as tuplestore, tuplesort, and any
utility commands that need to scan through an entire user relation for
some reason. Once we've got the APIs for doing this worked out, it
should be relatively easy to add other uses of posix_fadvise().

(5) I'm hesitant to make use of POSIX_FADV_DONTNEED in VACUUM, as has
been suggested elsewhere. The problem is that it's all-or-nothing: if
the VACUUM happens to look at hot pages, these will be flushed from the
page cache, so the net result may be a loss.
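
Returning to (1): here is a minimal sketch of the idea (this is *not*
the attached patch -- the helper name is invented and the details are
placeholders, just to show the shape of the call):

    #define _XOPEN_SOURCE 600   /* glibc needs this for posix_fadvise() */
    #include <fcntl.h>          /* posix_fadvise(), POSIX_FADV_* */

    /*
     * Hypothetical helper, called once we have advanced to a new XLOG
     * segment and will not touch the old one again.  "fd" is the
     * still-open descriptor of the old segment file; an offset and
     * length of zero mean "the whole file".
     */
    static void
    DontNeedOldXLogSegment(int fd)
    {
        /* The hint is purely advisory, so any failure is ignored. */
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }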

So what API is desirable for uses 2-4? I'm thinking of adding a new
function to the smgr API, smgradvise(). Given a Relation and an advice,
this would:

(a) propagate the advice for this relation to all the open FDs for the
relation

(b) store the new advice somewhere so that new FDs for the relation can
have this advice set for them: clients should just be able to call
smgradvise() without needing to worry if someone else has already called
smgropen() for the relation in the past. One problem is how to store
this: I don't think it can be a field of RelationData, since that is
transient. Any suggestions?
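
To illustrate the shape of what I have in mind (a sketch only -- every
structure and field name below is invented; the real thing would hook
into the existing smgr/md machinery):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    #define MAX_OPEN_SEGMENTS 32        /* made-up bound, for the sketch */

    typedef struct SketchSMgrRelation
    {
        int fds[MAX_OPEN_SEGMENTS];     /* FDs open in this backend, or -1 */
        int nfds;
        int current_advice;             /* remembered for later opens */
    } SketchSMgrRelation;

    /* Given a relation and a POSIX_FADV_* value ... */
    void
    smgradvise(SketchSMgrRelation *rel, int advice)
    {
        int i;

        /* (a) propagate the advice to every FD already open here */
        for (i = 0; i < rel->nfds; i++)
            if (rel->fds[i] >= 0)
                (void) posix_fadvise(rel->fds[i], 0, 0, advice);

        /* (b) remember it so FDs opened later inherit the same advice */
        rel->current_advice = advice;
    }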

Note that I'm assuming that we don't need to set advice on sub-sections
of a relation, although the posix_fadvise() API allows it -- does anyone
think that would be useful?

One potential issue is that when one process calls posix_fadvise() on a
particular FD, I'd expect that other processes accessing the same file
will be affected. For example, enabling FADV_SEQUENTIAL while we're
vacuuming a relation will mean that another client doing a concurrent
SELECT on the relation will see different readahead behavior. That
doesn't seem like a major problem though.

BTW, posix_fadvise() is currently only supported on Linux 2.6 w/ a
recent version of glibc (BSD hackers, if you're listening,
posix_fadvise() would be a very cool thing to have :P). So we'll need to
do the appropriate configure magic to ensure we only use it where it's
available. Thankfully, it is a POSIX standard, so I would expect that in
the years to come it will be available on more platforms.
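
For what it's worth, the C side of that configure magic should be no
more than a guard like the one below; I'm assuming the usual autoconf
convention where AC_CHECK_FUNCS(posix_fadvise) defines a
HAVE_POSIX_FADVISE symbol:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    /*
     * Only issue the hint on platforms where configure found
     * posix_fadvise(); otherwise this collapses to a no-op.
     */
    static void
    advise_dontneed_if_available(int fd)
    {
    #ifdef HAVE_POSIX_FADVISE
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    #else
        (void) fd;                      /* nothing to do without it */
    #endif
    }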

Any comments would be welcome.

-Neil




Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 01:07, Neil Conway wrote:
> (1) As Manfred originally noted, when we advance to a new XLOG segment,
> we can use POSIX_FADV_DONTNEED to let the kernel know we won't be
> accessing the old WAL segment anymore. I've attached a quick kludge of a
> patch that implements this. I haven't done any benchmarking of it yet,
> though (comments or benchmark results are welcome).

Woops, the patch is attached.

-Neil


Attachment

Re: adding support for posix_fadvise()

From
Hannu Krosing
Date:
Neil Conway wrote on Mon, 03.11.2003 at 08:07:
> A couple days ago, Manfred Spraul mentioned the posix_fadvise() API on
> -hackers:
> 
> http://www.opengroup.org/onlinepubs/007904975/functions/posix_fadvise.html
> 
> I'm working on making use of posix_fadvise() where appropriate. I can
> think of the following places where this would be useful:
> 
> (1) As Manfred originally noted, when we advance to a new XLOG segment,
> we can use POSIX_FADV_DONTNEED to let the kernel know we won't be
> accessing the old WAL segment anymore. I've attached a quick kludge of a
> patch that implements this. I haven't done any benchmarking of it yet,
> though (comments or benchmark results are welcome).
> 
> (2) ISTM that we can set POSIX_FADV_RANDOM for *all* indexes, since the
> vast majority of the accesses to them shouldn't be sequential. Are there
> any situations in which this assumption doesn't hold? (Perhaps B+-tree
> bulk loading, or CLUSTER?) Should this be done per-index-AM, or
> globally?

Perhaps we could do it for all _leaf_ nodes; the root and intermediate
nodes are usually better kept in cache.

> (3) When doing VACUUM, ANALYZE, or large sequential scans (for some
> reasonable definition of "large"), we can use POSIX_FADV_SEQUENTIAL.

Perhaps just sequential scans, without the "large" qualifier?

> (4) Various other components, such as tuplestore, tuplesort, and any
> utility commands that need to scan through an entire user relation for
> some reason. Once we've got the APIs for doing this worked out, it
> should be relatively easy to add other uses of posix_fadvise().
> 
> (5) I'm hesitant to make use of POSIX_FADV_DONTNEED in VACUUM, as has
> been suggested elsewhere. The problem is that it's all-or-nothing: if
> the VACUUM happens to look at hot pages, these will be flushed from the
> page cache, so the net result may be a loss.

True. POSIX_FADV_DONTNEED should only be used if the page was retrieved
by VACUUM.

> So what API is desirable for uses 2-4? I'm thinking of adding a new
> function to the smgr API, smgradvise(). Given a Relation and an advice,
> this would:
> 
> (a) propagate the advice for this relation to all the open FDs for the
> relation
> 
> (b) store the new advice somewhere so that new FDs for the relation can
> have this advice set for them: clients should just be able to call
> smgradvise() without needing to worry if someone else has already called
> smgropen() for the relation in the past. One problem is how to store
> this: I don't think it can be a field of RelationData, since that is
> transient. Any suggestions?

also, you may want to restore old FADV* after you are done - just
running one seqscan should probably not leave the relation in
POSIX_FADV_SEQUENTIAL mode forever.

> Note that I'm assuming that we don't need to set advice on sub-sections
> of a relation, although the posix_fadvise() API allows it -- does anyone
> think that would be useful?
>
> One potential issue is that when one process calls posix_fadvise() on a
> particular FD, I'd expect that other processes accessing the same file
> will be affected. For example, enabling FADV_SEQUENTIAL while we're
> vacuuming a relation will mean that another client doing a concurrent
> SELECT on the relation will see different readahead behavior. That
> doesn't seem like a major problem though.
> 
> BTW, posix_fadvise() is currently only supported on Linux 2.6 w/ a
> recent version of glibc (BSD hackers, if you're listening,
> posix_fadvise() would be a very cool thing to have :P). So we'll need to
> do the appropriate configure magic to ensure we only use it where it's
> available. Thankfully, it is a POSIX standard, so I would expect that in
> the years to come it will be available on more platforms.
> 
> Any comments would be welcome.
> 
> -Neil


Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 04:21, Hannu Krosing wrote:
> Neil Conway wrote on Mon, 03.11.2003 at 08:07:
> > (2) ISTM that we can set POSIX_FADV_RANDOM for *all* indexes, since the
> > vast majority of the accesses to them shouldn't be sequential.
> 
> Perhaps we could do it for all _leaf_ nodes; the root and intermediate
> nodes are usually better kept in cache.

POSIX_FADV_RANDOM doesn't affect the page cache; it just determines how
aggressive the kernel is when doing readahead (at least on Linux, but
I'd expect to see other kernels implement similar behavior). In other
words, using FADV_RANDOM shouldn't decrease the chance that interior
B+-tree nodes are kept in the page cache.

> True. POSIX_FADV_DONTNEED should only be used if the page was retrieved
> by VACUUM.

Right -- we'd like pages touched by VACUUM to be flushed from the page
cache if that page wasn't previously in *either* the PostgreSQL buffer
pool or the kernel's page cache. We can implement the former easily
enough, but I don't see any feasible way to do the latter: on a high-end
machine with gigabytes of RAM but a relatively small shared_buffers
(which is the configuration we recommend), there may be plenty of hot
pages that aren't in the PostgreSQL buffer pool but are in the page
cache.

> also, you may want to restore old FADV* after you are done - just
> running one seqscan should probably not leave the relation in
> POSIX_FADV_SEQUENTIAL mode forever.

Right, I forgot to mention that. The API doesn't provide a means to get
the current advice for an FD. So when we're finished doing whatever
operation we set some advice for, we'll need to just reset the file to
FADV_NORMAL and hope that it doesn't overrule some advice just set by
someone else. Either that, or we can manually keep track of all the
advice we're setting ourselves, but that seems like a hassle.
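
So without our own tracking, the best we can do is a bracket along
these lines (a sketch; the function name is invented, and the calls go
unchecked because they are only advisory):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    /*
     * Set sequential-read advice for the duration of one operation,
     * then fall back to POSIX_FADV_NORMAL.  Because posix_fadvise()
     * offers no way to query the current advice, the reset may clobber
     * advice that some other code path set on the same file.
     */
    static void
    scan_with_sequential_advice(int fd)
    {
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* ... perform the sequential scan on fd here ... */

        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_NORMAL);
    }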

-Neil




Re: adding support for posix_fadvise()

From
Andrew Sullivan
Date:
On Mon, Nov 03, 2003 at 08:50:00AM -0500, Neil Conway wrote:

> pool or the kernel's page cache. We can implement the former easily
> enough, but I don't see any feasible way to do the latter: on a high-end
> machine with gigabytes of RAM but a relatively small shared_buffers
> (which is the configuration we recommend), there may be plenty of hot

I wonder if the limitations on one's ability to evaluate effectively
what is in the OS's filesystem cache are the real reason all those
Other systems (of Databases, Big, too) have stayed with their old
design of managing it all themselves (raw filesystems and all the
buffering handled by the back end).  Maybe that's not just a
historical argument whereby they happen to have the code around.
After all, it can't be cheap to maintain.  Not that I'm advocating
writing such a system -- I sure couldn't do the work, to begin with.

A 


-- 
----
Andrew Sullivan                         204-4141 Yonge Street
Afilias Canada                          Toronto, Ontario Canada
<andrew@libertyrms.info>                M2P 2A8
+1 416 646 3304 x110



Re: adding support for posix_fadvise()

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> So what API is desirable for uses 2-4? I'm thinking of adding a new
> function to the smgr API, smgradvise().

It's a little premature to be inventing APIs when you have no evidence
that this will make any useful performance difference.  I'd recommend a
quick hack to get proof of concept before you bother with nice APIs.

> Given a Relation and an advice, this would:
> (a) propagate the advice for this relation to all the open FDs for the
> relation

"All"?  You cannot affect the FDs being used by other backends.  It's
fairly unclear to me what the posix_fadvise function is really going
to do for files that are being accessed by multiple processes.  For
instance, is there any value in setting POSIX_FADV_DONTNEED on a WAL
file, given that every other backend is going to have that same file
open?  I would expect that rational kernel behavior would be to ignore
this advice unless it's set by the last backend to have the file open
--- but I'm not sure we can synchronize the closing of old WAL segments
well enough to know which backend is the last to close the file.

A related problem is that the smgr uses the same FD to access the same
relation no matter how many scans are in progress.  Think about a
complex query that is doing both a seqscan and an indexscan on the same
relation (a self-join could easily do this).  You'd really need to
change this if you want POSIX_FADV_SEQUENTIAL and POSIX_FADV_RANDOM to
get set usefully.

In short I think you need to do some more thinking about what the scope
of the advice flags is going to be ...

> (b) store the new advice somewhere so that new FDs for the relation can
> have this advice set for them: clients should just be able to call
> smgradvise() without needing to worry if someone else has already called
> smgropen() for the relation in the past. One problem is how to store
> this: I don't think it can be a field of RelationData, since that is
> transient. Any suggestions?

Something Vadim had wanted to do for years is to decouple the smgr and
lower levels from the existing Relation cache, and have a low-level
notion of "open relation" that only requires having the "RelFileNode"
value to open it.  This would allow eliminating the concept of blind
write, which would be a Very Good Thing.  It would make sense to
associate the advice setting with such low-level relations.  One
possible way to handle the multiple-scan issue is to make the desired
advice part of the low-level open() call, so that you actually have
different low-level relations for seq and random access to a relation.
Not sure if this works cleanly when you take into account issues like
smgrunlink, but it's something to think about.
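
Purely to illustrate the shape of that idea (nothing below is settled,
and all of the names are invented; RelFileNode is the existing
identifier struct):

    /*
     * Hypothetical low-level relation handle, opened by RelFileNode
     * rather than through the Relation cache.  If the intended access
     * pattern is part of the open call, a seqscan and an indexscan on
     * the same relation get distinct handles (and distinct FDs), so
     * their advice cannot clash.
     */
    typedef struct LowLevelRel LowLevelRel;     /* opaque */

    extern LowLevelRel *lsmgr_open(RelFileNode rnode, int access_advice);
    extern void lsmgr_close(LowLevelRel *rel);
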
        regards, tom lane


Re: adding support for posix_fadvise()

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> POSIX_FADV_RANDOM doesn't affect the page cache; it just determines how
> aggressive the kernel is when doing readahead (at least on Linux, but
> I'd expect to see other kernels implement similar behavior).

I would expect POSIX_FADV_SEQUENTIAL to reduce the chance that a page
will be kept in buffer cache after it's been used.
        regards, tom lane


Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 10:01, Tom Lane wrote:
> Neil Conway <neilc@samurai.com> writes:
> > POSIX_FADV_RANDOM doesn't affect the page cache; it just determines how
> > aggressive the kernel is when doing readahead (at least on Linux, but
> > I'd expect to see other kernels implement similar behavior).
> 
> I would expect POSIX_FADV_SEQUENTIAL to reduce the chance that a page
> will be kept in buffer cache after it's been used.

I don't think that can be reasonably implied from the POSIX text, which
is merely:

POSIX_FADV_SEQUENTIAL
        Specifies that the application expects to access the specified
        data sequentially from lower offsets to higher offsets.

The present Linux implementation doesn't do this, AFAICS -- all it does
is increase the readahead for this file:
http://lxr.linux.no/source/mm/fadvise.c?v=2.6.0-test7

-Neil




Re: adding support for posix_fadvise()

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> On Mon, 2003-11-03 at 10:01, Tom Lane wrote:
>> I would expect POSIX_FADV_SEQUENTIAL to reduce the chance that a page
>> will be kept in buffer cache after it's been used.

> I don't think that can be reasonably implied from the POSIX text, which
> is merely:

> POSIX_FADV_SEQUENTIAL
>         Specifies that the application expects to access the specified
>         data sequentially from lower offsets to higher offsets.

Why not?  The advice says that you're going to access the data
sequentially in the forward direction.  If you're not going to back up,
there is no point in keeping pages in cache after they've been read.

A reasonable implementation of the POSIX semantics would need to balance
this consideration against the likelihood that some other process would
want to access some of these pages later.  But I would certainly expect
it to reduce the probability of keeping the pages in cache.

> The present Linux implementation doesn't do this, AFAICS -- 

So it only does part of what it could do.  No surprise...
        regards, tom lane


Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 11:11, Tom Lane wrote:
> Why not?  The advice says that you're going to access the data
> sequentially in the forward direction.  If you're not going to back up,
> there is no point in keeping pages in cache after they've been read.

The advice says: "I'm going to read this data sequentially, going
forward." It doesn't say: "I'm only going to read the data once, and
then not access it again" (ISTM that's what FADV_NOREUSE is for). For
example, the following is a perfectly reasonable sequential access
pattern:
a,b,c,a,b,c,a,b,c,a,b,c

(i.e. repeatedly scanning through a large file, say for a data-analysis
app that does multiple passes over the input data). It might not be a
particularly common database reference pattern, but just because an app
is doing a sequential read says little about the temporal locality of
references to the pages in question.

-Neil




Re: adding support for posix_fadvise()

From
Hannu Krosing
Date:
Neil Conway wrote on Mon, 03.11.2003 at 18:59:
> On Mon, 2003-11-03 at 11:11, Tom Lane wrote:
> > Why not?  The advice says that you're going to access the data
> > sequentially in the forward direction.  If you're not going to back up,
> > there is no point in keeping pages in cache after they've been read.
> 
> The advice says: "I'm going to read this data sequentially, going
> forward." It doesn't say: "I'm only going to read the data once, and
> then not access it again" (ISTM that's what FADV_NOREUSE is for).

They seem like independent features. 

Can you use combinations like (FADV_NOREUSE | FADV_SEQUENTIAL)?

(I obviously haven't read the spec.)

----------------
Hannu



Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 12:17, Hannu Krosing wrote:
> Can you use combinations like (FADV_NOREUSE | FADV_SEQUENTIAL)?

You can do an fadvise() for FADV_SEQUENTIAL, and then another fadvise()
for FADV_NOREUSE.
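
i.e., something like this (whether a given kernel actually honours
both hints at once is a separate question):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    /*
     * The advice argument is a single value, not a bitmask, so
     * combining hints means issuing one call per hint on the same fd.
     */
    static void
    advise_sequential_noreuse(int fd)
    {
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);
    }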

-Neil




Re: adding support for posix_fadvise()

From
Manfred Spraul
Date:
Neil Conway wrote:

>The present Linux implementation doesn't do this, AFAICS -- all it does
>is increase the readahead for this file:
>  
>
AFAIK Linux uses a modified LRU that automatically puts pages that were 
touched only once at a lower priority than frequently accessed pages.

Neil: what about calling posix_fadvise for the whole file immediately 
after issue_xlog_fsync() in XLogWrite? According to the comment, it's 
guaranteed that this will happen only once.
Or:  add a posix_fadvise into issue_xlog_fsync(), for the range just
sync'ed.
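
Something like this for the second variant (a sketch only -- the
offset and length would come from whatever XLogWrite already knows
about the range it just flushed):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    /*
     * Hypothetical addition to issue_xlog_fsync(): after syncing, tell
     * the kernel we will not be re-reading the byte range just flushed.
     */
    static void
    fadvise_dontneed_range(int fd, off_t start, off_t len)
    {
        (void) posix_fadvise(fd, start, len, POSIX_FADV_DONTNEED);
    }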

Btw, how much xlog traffic does a busy postgres site generate?

--   Manfred



Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 09:38, Tom Lane wrote:
> Neil Conway <neilc@samurai.com> writes:
> > Given a Relation and an advice, this would:
> > (a) propagate the advice for this relation to all the open FDs for the
> > relation
> 
> "All"?  You cannot affect the FDs being used by other backends.

Sorry, I meant just the FDs opened by this backend.

> It's fairly unclear to me what the posix_fadvise function is really
> going to do for files that are being accessed by multiple processes.

In a thread on lkml[1], Andrew Morton comments:
        Note that it applies to a file descriptor. If
        posix_fadvise(FADV_DONTNEED) is called against a file descriptor,
        and someone else has an fd open against the same file, that
        other user gets their foot shot off. That's OK.

I would imagine that by "getting their foot shot off", Andrew is saying
that FADV_DONTNEED by one process affects any other processes accessing
the same file via a different FD. If I'm misunderstanding what's going
on here, please let me know.

> For instance, is there any value in setting POSIX_FADV_DONTNEED on a
> WAL file, given that every other backend is going to have that same
> file open?

My understanding is that yes, there is value in doing this, for the
reasons mentioned above.

> A related problem is that the smgr uses the same FD to access the same
> relation no matter how many scans are in progress.

Interesting ... I'll have to think some more about this. Thanks for the
suggestions and comments.

-Neil

[1] - http://www.ussg.iu.edu/hypermail/linux/kernel/0203.2/0361.html

The rest of the thread includes an interesting discussion -- I recommend
reading it. The lkml folks actually speculate about what we (OSS DBMS
developers) would find useful in fadvise(), amusingly enough... The
thread starts here:

http://www.ussg.iu.edu/hypermail/linux/kernel/0203.2/0230.html

Finally, Andrew Morton provides some more clarification on what happens
when multiple processes are accessing a file that is fadvise()'d:

http://www.ussg.iu.edu/hypermail/linux/kernel/0203.2/0476.html




Re: adding support for posix_fadvise()

From
Neil Conway
Date:
On Mon, 2003-11-03 at 14:24, Manfred Spraul wrote:
> Neil: what about calling posix_fadvise for the whole file immediately 
> after issue_xlog_fsync() in XLogWrite? According to the comment, it's 
> guaranteed that this will happen only once.
> Or:  add a posix_fadvise into issue_xlog_fsync(), for the range just
> sync'ed.

I'll try those, in case it makes any difference. My guess/hope is that
it won't (as mentioned earlier), but we'll see.

> Btw, how much xlog traffic does a busy postgres site generate?

No idea. Can anyone recommend what kind of benchmark would be
appropriate?

-Neil




Re: adding support for posix_fadvise()

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> On Mon, 2003-11-03 at 11:11, Tom Lane wrote:
>> Why not?  The advice says that you're going to access the data
>> sequentially in the forward direction.  If you're not going to back up,
>> there is no point in keeping pages in cache after they've been read.

> The advice says: "I'm going to read this data sequentially, going
> forward." It doesn't say: "I'm only going to read the data once, and
> then not access it again" (ISTM that's what FADV_NOREUSE is for).

I'd believe that interpretation if the spec specifically allowed for
applying multiple "advice" values to the same fd.  However, given the
way the API is written, it sure looks like the intention is that only
the most recent advice value is valid for any one (portion of a) file.
If the intention was that you could specify both FADV_SEQUENTIAL and
FADV_NOREUSE, the usual Unix-y way to handle it would have been to
define these constants as bit mask values and specify that the parameter
to the syscall is a bitwise OR of multiple flags.  The way you are
interpreting it, there is no way to cancel an FADV_NOREUSE setting,
since there is no value that is the opposite setting.
        regards, tom lane