Re: 2nd Level Buffer Cache - Mailing list pgsql-hackers

From: Merlin Moncure
Subject: Re: 2nd Level Buffer Cache
Msg-id: AANLkTimCiwB-kxYqvTAy5-hCRwRjY1XQknfMkpMjhmdA@mail.gmail.com
In response to: Re: 2nd Level Buffer Cache (Greg Stark <gsstark@mit.edu>)
List: pgsql-hackers
On Mon, Mar 21, 2011 at 2:08 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Mon, Mar 21, 2011 at 3:54 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> Can't you make just one large mapping and lock it in 8k regions? I
>> thought the problem with mmap was not being able to detect other
>> processes (http://www.mail-archive.com/pgsql-general@postgresql.org/msg122301.html),
>> compatibility issues (possibly obsolete), etc.
>
> I was assuming that locking part of a mapping would force the kernel
> to split the mapping. It has to record the locked state somewhere so
> it needs a data structure that represents the size of the locked
> section and that would, I assume, be the mapping.
>
> It's possible the kernel would not in fact fall over too badly doing
> this. At some point I'll go ahead and do experiments on it. It's a bit
> fraught though, as the performance may depend on the memory
> management features of the chipset.
>
> That said, that's only part of the battle. On 32bit you can't map the
> whole database as your database could easily be larger than your
> address space. I have some ideas on how to tackle that but the
> simplest test would be to just mmap 8kB chunks everywhere.

Even on 64-bit systems you only have a 48-bit address space, which is
a practical rather than purely theoretical limitation.  However, at
least on Linux you can map in and map out pretty quickly (roughly 10
microseconds for a map/unmap pair on my Linux VM), so that's not such
a big deal.  Dealing with rapidly growing files is a problem, though.
That said, you are probably not going to want to reserve multiple
gigabytes in 8kB non-contiguous chunks.
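
For reference, a crude sketch of that sort of measurement might look
like this (the scratch file path and iteration count here are
arbitrary, and results will vary with kernel and hardware):

/*
 * Rough sketch: time repeated map/unmap of an 8kB file-backed chunk.
 * The scratch file path and iteration count are arbitrary.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK 8192
#define ITERS 100000

int
main(void)
{
    struct timeval start, end;
    int         fd;
    double      usec;

    fd = open("/tmp/mmap_scratch", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, CHUNK) != 0)
    {
        perror("setup");
        return 1;
    }

    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERS; i++)
    {
        void       *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

        if (p == MAP_FAILED)
        {
            perror("mmap");
            return 1;
        }
        munmap(p, CHUNK);
    }
    gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1000000.0 +
           (end.tv_usec - start.tv_usec);
    printf("%.2f us per map/unmap pair\n", usec / ITERS);

    close(fd);
    return 0;
}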

> But it's worse than that. Since you're not responsible for flushing
> blocks to disk any longer you need some way to *unlock* a block when
> it's possible to be flushed. That means when you flush the xlog you
> have to somehow find all the blocks that might no longer need to be
> locked and atomically unlock them. That would require new
> infrastructure we don't have, though it might not be too hard.
>
> What would be nice is a mlock_until() where you eventually issue a
> call to tell the kernel what point in time you've reached and it
> unlocks everything older than that time.

I wonder if there is any reason to mlock at all... if you are going to
'do' mmap, can't you just rely on the current lock architecture for
the actual locking and do direct memory access without mlock?
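
For context, the pattern being discussed upthread is pinning
individual 8kB regions of one large mapping, something like the sketch
below (the anonymous mapping, sizes, and block offset are purely
illustrative, and mlock can also fail against RLIMIT_MEMLOCK):

/*
 * Sketch of pinning a single 8kB region inside one large mapping.
 * The anonymous mapping, sizes, and block offset are illustrative only.
 */
#include <stdio.h>
#include <sys/mman.h>

#define BLCKSZ   8192
#define NBLOCKS  1024           /* 8MB mapping, just for the example */

int
main(void)
{
    size_t      len = (size_t) BLCKSZ * NBLOCKS;
    char       *base;

    /* one large mapping standing in for the buffer pool */
    base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }

    /*
     * Pin just one block.  The kernel has to track this sub-range, which
     * is what may force it to split the mapping internally.
     */
    if (mlock(base + 10 * BLCKSZ, BLCKSZ) != 0)
        perror("mlock");

    /* ... later, once the block could safely be flushed, release it ... */
    if (munlock(base + 10 * BLCKSZ, BLCKSZ) != 0)
        perror("munlock");

    munmap(base, len);
    return 0;
}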

merlin

