Re: [PoC] Non-volatile WAL buffer - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: [PoC] Non-volatile WAL buffer
Date
Msg-id f07c8a5f-7166-07d2-0858-34b6dcbfa7c7@enterprisedb.com
In response to [PoC] Non-volatile WAL buffer  (Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>)
Responses Re: [PoC] Non-volatile WAL buffer  (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List pgsql-hackers
Hi,

On 10/30/20 6:57 AM, Takashi Menjo wrote:
> Hi Heikki,
> 
>> I had a new look at this thread today, trying to figure out where 
>> we are. I'm a bit confused.
>> 
>> One thing we have established: mmap()ing WAL files performs worse 
>> than the current method, if pg_wal is not on a persistent memory 
>> device. This is because the kernel faults in existing content of 
>> each page, even though we're overwriting everything.
> 
> Yes. In addition, after a certain page (in the sense of an OS page) 
> is msync()ed, another page fault occurs the next time something is 
> stored into that page.
> 
>> That's unfortunate. I was hoping that mmap() would be a good option
>> even without persistent memory hardware. I wish we could tell the
>> kernel to zero the pages instead of reading them from the file.
>> Maybe clear the file with ftruncate() before mmapping it?
> 
> The area extended by ftruncate() appears as if it were zero-filled 
> [1]. Please note that it merely "appears as if." It might not be 
> actually zero-filled as data blocks on devices, so pre-allocating 
> files should improve transaction performance. At least on Linux 5.7 
> and ext4, it takes more time to store into a mapped file that was 
> just open(O_CREAT)ed and ftruncate()d than into one that was 
> actually pre-filled with data.
> 

Does it really matter that it only appears zero-filled? I think Heikki's
point was that maybe ftruncate() would prevent the kernel from faulting
in the existing page content when we're overwriting it.

Not sure I understand what the benchmark with ext4 was doing, exactly.
How was that measured? Might be interesting to have some simple
benchmarking tool to demonstrate this (I believe a small standalone tool
written in C should do the trick).
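
To make that concrete, I'm thinking of something as trivial as the
sketch below (untested, file names and sizes are arbitrary): extend one
file with ftruncate() only, pre-fill another with write(), then mmap()
each of them and time overwriting all the blocks.

/* mmapbench.c - untested sketch: compare writing through mmap() into a
 * file that was only ftruncate()d vs. one pre-filled with write(). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define FILE_SIZE   (1024L * 1024 * 1024)   /* 1GB, adjust as needed */
#define BLOCK_SIZE  8192

static double
elapsed_ms(struct timeval *start, struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000.0 +
           (end->tv_usec - start->tv_usec) / 1000.0;
}

/* mmap() the file and overwrite it block by block, timing the loop. */
static void
write_via_mmap(const char *path)
{
    int         fd = open(path, O_RDWR);
    char       *p;
    struct timeval start, end;

    if (fd < 0) { perror("open"); exit(1); }

    p = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }

    gettimeofday(&start, NULL);
    for (long off = 0; off < FILE_SIZE; off += BLOCK_SIZE)
        memset(p + off, 'x', BLOCK_SIZE);
    if (msync(p, FILE_SIZE, MS_SYNC) != 0) { perror("msync"); exit(1); }
    gettimeofday(&end, NULL);

    printf("%s: %.1f ms\n", path, elapsed_ms(&start, &end));

    munmap(p, FILE_SIZE);
    close(fd);
}

int
main(void)
{
    int     fd;
    char    block[BLOCK_SIZE];

    /* case 1: file extended by ftruncate() only */
    fd = open("bench_sparse", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0 || ftruncate(fd, FILE_SIZE) != 0) { perror("sparse"); exit(1); }
    close(fd);

    /* case 2: file pre-filled with actual data blocks */
    memset(block, 0, sizeof(block));
    fd = open("bench_prealloc", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("prealloc"); exit(1); }
    for (long off = 0; off < FILE_SIZE; off += BLOCK_SIZE)
        if (write(fd, block, BLOCK_SIZE) != BLOCK_SIZE) { perror("write"); exit(1); }
    if (fsync(fd) != 0) { perror("fsync"); exit(1); }
    close(fd);

    write_via_mmap("bench_sparse");
    write_via_mmap("bench_prealloc");
    return 0;
}

Running that on a couple of filesystem / kernel combinations should
tell us whether we're talking about a ~1% difference or an order of
magnitude.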

>> That should not be problem with a real persistent memory device, 
>> however (or when emulating it with DRAM). With DAX, the storage is 
>> memory-mapped directly and there is no page cache, and no 
>> pre-faulting.
> 
> Yes, with filesystem DAX, there is no page cache for file data. A 
> page fault still occurs, but only once per 2MiB DAX hugepage, so the 
> overhead is lower than with 4KiB page faults. Such a DAX hugepage 
> fault applies only to DAX-mapped files and is different from a 
> general transparent hugepage fault.
> 

I don't follow - if there are page faults even when overwriting all the
data, I'd say it's still an issue even with 2MB DAX pages. How big is
the difference between 4kB and 2MB pages?

Not sure I understand how this is different from a general THP fault?
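
For context, my mental model of the DAX case is roughly the sketch
below (untested; the path is made up, and it assumes Linux >= 4.15 with
an fsdax mount, a glibc that exposes MAP_SHARED_VALIDATE/MAP_SYNC, and
libpmem from PMDK for the cache flush):

/* dax_map.c - untested sketch of a MAP_SYNC mapping on a DAX filesystem.
 * With MAP_SYNC the kernel guarantees the file metadata is durable, so a
 * store only needs a CPU cache flush (pmem_persist), not msync().
 * Build with something like: cc dax_map.c -lpmem */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <libpmem.h>                    /* PMDK, for pmem_persist() */

int
main(void)
{
    size_t  len = 16 * 1024 * 1024;     /* arbitrary; file must be this big */
    int     fd = open("/mnt/pmem/walfile", O_RDWR);     /* made-up path */
    char   *p;

    if (fd < 0) { perror("open"); return 1; }

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
             MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';                 /* store goes straight to the device */
    pmem_persist(p, 1);         /* flush CPU caches; no msync() needed */

    munmap(p, len);
    close(fd);
    return 0;
}

If the 2MB vs. 4kB fault difference matters, this is where it would
show up, so maybe the same kind of benchmark as above could be run
against a DAX mount too.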

>> Because of that, I'm baffled by what the 
>> v4-0002-Non-volatile-WAL-buffer.patch does. If I understand it 
>> correctly, it puts the WAL buffers in a separate file, which is 
>> stored on the NVRAM. Why? I realize that this is just a Proof of 
>> Concept, but I'm very much not interested in anything that requires
>> the DBA to manage a second WAL location. Did you test the mmap()
>> patches with persistent memory hardware? Did you compare that with
>> the pmem patchset, on the same hardware? If there's a meaningful
>> performance difference between the two, what's causing it?

> Yes, this patchset puts the WAL buffers into the file specified by 
> "nvwal_path" in postgresql.conf.
> 
> The reason this patchset puts the buffers into a separate file rather 
> than the existing segment files in PGDATA/pg_wal is that it reduces 
> the overhead of system calls such as open(), mmap(), munmap(), and 
> close(). It open()s and mmap()s the file "nvwal_path" once and keeps 
> that file mapped while running. In contrast, with the patchset that 
> mmap()s the segment files, a backend process has to munmap() and 
> close() the currently mapped file and open() and mmap() the new one 
> each time its insert location crosses a segment boundary. This causes 
> the performance difference between the two.
> 

I kinda agree with Heikki here - having to manage yet another location
for WAL data is rather inconvenient. We should aim not to make the life
of DBAs unnecessarily difficult, IMO.

I wonder how significant the syscall overhead is - can you share some
numbers? I don't see any such results in this thread, so I'm not sure
if it means losing 1% or 10% of throughput.
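
Even a trivial standalone test would give a rough idea of the
per-segment cost - e.g. something like the sketch below (untested; it
assumes pre-created 16MB files seg.0 ... seg.63 on the target
filesystem), timing just the open/mmap/munmap/close cycle that happens
on every segment switch:

/* segswitch.c - untested sketch: measure the open/mmap/munmap/close
 * cycle done on every WAL segment switch in the mmap() patchset. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define SEG_SIZE    (16 * 1024 * 1024)      /* default WAL segment size */
#define NSEGS       64

int
main(void)
{
    struct timeval start, end;
    char    path[64];

    gettimeofday(&start, NULL);
    for (int i = 0; i < NSEGS; i++)
    {
        int     fd;
        char   *p;

        snprintf(path, sizeof(path), "seg.%d", i);

        fd = open(path, O_RDWR);
        if (fd < 0) { perror("open"); exit(1); }

        p = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); exit(1); }

        /* ... WAL inserts into p[] would happen here ... */

        munmap(p, SEG_SIZE);
        close(fd);
    }
    gettimeofday(&end, NULL);

    printf("%d remap cycles: %ld us\n", NSEGS,
           (long) ((end.tv_sec - start.tv_sec) * 1000000L +
                   (end.tv_usec - start.tv_usec)));
    return 0;
}

Comparing that to the time spent actually copying 16MB of WAL into the
mapping would show how much of the total the syscalls really account for.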

Also, maybe there are alternative ways to reduce the overhead? For
example, we could increase the WAL segment size - with 1GB segments
we'd do only 1/64 of the syscalls. Or maybe we could do some of this
asynchronously - request a segment ahead and let another process do the
actual work, so that the inserting process does not have to wait.


Do I understand correctly that the patch removes "regular" WAL buffers
and instead writes the data into the non-volatile PMEM buffer, without
writing that to the WAL segments at all (unless in archiving mode)?

Firstly, I guess many (most?) instances will have to write the WAL
segments anyway because of PITR/backups, so I'm not sure we can save
much here.

But more importantly - doesn't that mean the nvwal_size value is
essentially a hard limit? With max_wal_size, it's a soft limit i.e.
we're allowed to temporarily use more WAL when needed. But with a
pre-allocated file, that's clearly not possible. So what would happen in
those cases?

Also, is it possible to change nvwal_size? I haven't tried, but I wonder
what happens with the current contents of the file.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


