>
> While I had a spare couple of hours, I was looking through the current
> Postgres code, trying to estimate how memory mapping would fit into
> the current Postgres guts.
>
> Finally, I've found more evidence that memory mapping could do a lot
> for current performance, but I must admit that the current storage
> manager is quite read/write oriented. It would be easier to integrate
> memory mapping into the buffer manager; after all, the buffer
> manager's role is to map parts of files into memory buffers. However,
> it takes a lot of work to get through the several layers underneath
> (smgr and finally md).
>
> I noticed that one of the most important features of mmapping is that
> you can sync a buffer (even just part of it) rather than the whole
> file. So if there were some kind of page-level locking, it would be
> absolutely necessary to make sure that only committed pages are
> synced, so we don't overload the I/O with unfinished work.
We really don't need to worry about that. Our goal is to control
flushing of pg_log to disk. If we control that, we don't care when the
non-pg_log pages go to disk. In a crash, any non-synced pg_log
transactions are rolled back.

We are spoiled because we have just one compact central file to worry
about syncing.
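
For illustration, here is a minimal sketch of that kind of selective
flushing: msync() takes an address and a length, so a single committed
page can be forced to disk without fsync()ing the whole file. The file
name, mapping size, and page index are made up, and error handling is
omitted:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int
flush_one_page(void)
{
    long    page = sysconf(_SC_PAGESIZE);
    int     fd = open("pg_log", O_RDWR);    /* assumed >= 8 pages */
    char   *base = mmap(NULL, 8 * page, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    base[3 * page] = 1;         /* dirty the fourth page only */

    /* mmap() returns a page-aligned address, so base + 3 * page is a
     * legal msync() start; only this one page is forced to disk */
    msync(base + 3 * page, page, MS_SYNC);

    munmap(base, 8 * page);
    close(fd);
    return 0;
}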
>
> Also, I think there is no need to create the buffers in shared
> memory. I have just tested that if you map files with the MAP_SHARED
> flag set, every process works on exactly the same copy of the memory.
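
For reference, a minimal sketch of that test (made-up file name, no
error handling): a MAP_SHARED mapping set up before fork() really is
one physical copy, so the child's store is visible to the parent:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    int     fd = open("testfile", O_RDWR);  /* assumed >= one page */
    char   *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);

    if (fork() == 0)
    {
        p[0] = 'X';             /* child writes through the mapping */
        _exit(0);
    }
    wait(NULL);
    printf("parent sees: %c\n", p[0]);  /* 'X': same physical memory */

    munmap(p, 4096);
    close(fd);
    return 0;
}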
>
> I have also noticed some more interesting things; maybe somebody can
> clarify them, since I'm not very literate with mmapping. The first
> thing I wondered about was how we would deal with open-descriptor
> limits if we used direct buffer-to-file mappings. Currently, since
> buffers are isolated from files, it's possible to close some
> descriptors without throwing away buffers. However, it seems (I tried
> it) that a memory mapping keeps working even after its file
> descriptor is closed. So, is it possible to get past the open-file
> limit by using memory mapping? Or does the descriptor remain open
> until the munmap() call? Or is this just a Linux feature?
Not sure about this, but the open file limit is not a restriction for
us very often. It is a per-backend issue, and I can't imagine cases
where a backend has more than 64 file descriptors open. If one does,
you can usually increase the kernel limits.
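
As to the descriptor question: POSIX says mmap() adds its own
reference to the file, and close() does not remove it, so the mapping
stays valid while the descriptor slot is freed immediately. It is not
Linux-specific. A minimal sketch, again with a made-up file name and
no error handling:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
    int     fd = open("testfile", O_RDONLY);    /* assumed non-empty */
    char   *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);

    close(fd);              /* descriptor slot freed here ... */
    printf("%c\n", p[0]);   /* ... but the mapping is still valid */
    munmap(p, 4096);        /* file reference released only here */
    return 0;
}
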
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)