Thread: Proposal for Updating CRC32C with AVX-512 Algorithm.
Hi,

Comparing the current SSE4.2 implementation of the CRC32C algorithm in Postgres to an optimized AVX-512 algorithm [0], we observed significant gains. The result was a ~6.6X average multiplier of increased performance, measured on 3 different Intel products. Details below. The AVX-512 algorithm in C is a port of the ISA-L library [1] assembler code.

Workload call size distribution details (write heavy):
* Average was approximately 1,010 bytes per call
* ~80% of the calls were under 256 bytes
* ~20% of the calls were greater than or equal to 256 bytes, up to the max buffer size of 8192

The 256-byte boundary is important because if the buffer is smaller, it makes sense to fall back to the existing implementation: the AVX-512 algorithm needs a minimum of 256 bytes to operate.

Using the above workload data distribution:
at 0% calls < 256 bytes, an 841% improvement on average for crc32c functionality was observed.
at 50% calls < 256 bytes, a 758% improvement on average for crc32c functionality was observed.
at 90% calls < 256 bytes, a 44% improvement on average for crc32c functionality was observed.
at 97.6% calls < 256 bytes, the workload's crc32c performance breaks even.
at 100% calls < 256 bytes, a 14% regression is seen when using the AVX-512 implementation.

The results above are averages over 3 machines, and were measured on Intel Sapphire Rapids bare metal, and using EC2 on the AWS cloud: Intel Sapphire Rapids (m7i.2xlarge) and Intel Ice Lake (m6i.2xlarge).
Summary Data (Sapphire Rapids bare metal, AWS m7i-2xl, and AWS m6i-2xl):

+---------------------+-------------------+-------------------+-------------------+--------------------+
| Rates in Bytes/us   |    Bare Metal     |    AWS m6i-2xl    |    AWS m7i-2xl    |                    |
| (Larger is Better)  +---------+---------+---------+---------+---------+---------+ Overall Multiplier |
|                     | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 |                    |
+---------------------+---------+---------+---------+---------+---------+---------+--------------------+
| Numbers 256-8192    |  12,046 |  83,196 |   7,471 |  39,965 |  11,867 |  84,589 |        6.62        |
+---------------------+---------+---------+---------+---------+---------+---------+--------------------+
| Numbers 64 - 255    |  16,865 |  15,909 |   9,209 |   7,363 |  12,496 |  10,046 |        0.86        |
+---------------------+---------+---------+---------+---------+---------+---------+--------------------+
| Weighted Multiplier [*]                                                         |        1.44        |
+---------------------------------------------------------------------------------+--------------------+

There was no evidence of AVX-512 frequency throttling in the perf data, which stayed steady during the test.

Feedback on this proposed improvement is appreciated. Some questions:
1) This AVX-512 ISA-L-derived code uses the BSD-3 license [2]. Is this compatible with the PostgreSQL License [3]? They both appear to be very permissive licenses, but I am not an expert on licenses.
2) Is there a preferred benchmark I should run to test this change?

If licensing is a non-issue, I can post the initial patch along with my Postgres benchmark function patch for further review.

Thanks,
Paul

[0] https://www.researchgate.net/publication/263424619_Fast_CRC_computation#full-text
[1] https://github.com/intel/isa-l
[2] https://opensource.org/license/bsd-3-clause
[3] https://opensource.org/license/postgresql

[*] Weights used were 90% of requests less than 256 bytes, 10% greater than or equal to 256 bytes.
Hi, forgive the top-post, but I have not seen any response to this post?

Thanks,
Paul

> -----Original Message-----
> From: Amonson, Paul D
> Sent: Wednesday, May 1, 2024 8:56 AM
> To: pgsql-hackers@lists.postgresql.org
> Cc: Nathan Bossart <nathandbossart@gmail.com>; Shankaran, Akash
> <akash.shankaran@intel.com>
> Subject: Proposal for Updating CRC32C with AVX-512 Algorithm.
>
> [...]
> On 17 May 2024, at 18:21, Amonson, Paul D <paul.d.amonson@intel.com> wrote:
> Hi, forgive the top-post but I have not seen any response to this post?

The project is currently in feature-freeze in preparation for the next major release, so new development and ideas are not the top priority right now. Additionally, there is a large developer meeting shortly which many are busy preparing for. Exercise some patience, and I'm sure there will be follow-ups to this once development of postgres v18 picks up.

--
Daniel Gustafsson
> The project is currently in feature-freeze in preparation for the next major
> release so new development and ideas are not the top priority right now.
> Additionally there is a large developer meeting shortly which many are busy
> preparing for. Exercise some patience, and I'm sure there will be follow-ups
> to this once development of postgres v18 picks up.

Thanks, understood.

I had our internal OSS team, who are experts in OSS licensing, review possible conflicts between the PostgreSQL license and the BSD 3-Clause-like license for the CRC32C AVX-512 code, and they found no issues. Therefore, including the new license in the PostgreSQL codebase should be acceptable.

I am attaching the first official patches. The second patch is a simple test function in PostgreSQL SQL, which I used for testing and benchmarking. It is not meant to be merged.

Code structure question: while working on this code, I noticed overlaps with the runtime CPU checks done in the previously merged POPCNT code. I was considering that these checks should perhaps be formalized and consolidated into a single source/header file pair. If this is desirable, where should I place these files? Should it be in "src/port" where they are used, or in "src/common" where they are available to all (not just the "src/port" tree)?

Thanks,
Paul
Attachment
"Amonson, Paul D" <paul.d.amonson@intel.com> writes: > I had our OSS internal team, who are experts in OSS licensing, review possible conflicts between the PostgreSQL licenseand the BSD-Clause 3-like license for the CRC32C AVX-512 code, and they found no issues. Therefore, including thenew license into the PostgreSQL codebase should be acceptable. Maybe you should get some actual lawyers to answer this type of question. The Chromium license this code cites is 3-clause-BSD style, which is NOT compatible: the "advertising" clause is significant. In any case, writing copyright notices that are pointers to external web pages is not how it's done around here. We generally operate on the assumption that the Postgres source code will outlive any specific web site. Dead links to incidental material might be okay, but legally relevant stuff not so much. regards, tom lane
On Wed, Jun 12, 2024 at 02:08:02PM -0400, Tom Lane wrote:
> "Amonson, Paul D" <paul.d.amonson@intel.com> writes:
> > I had our internal OSS team, who are experts in OSS licensing, review
> > possible conflicts between the PostgreSQL license and the BSD 3-Clause-like
> > license for the CRC32C AVX-512 code, and they found no issues. Therefore,
> > including the new license in the PostgreSQL codebase should be acceptable.
>
> Maybe you should get some actual lawyers to answer this type of
> question. The Chromium license this code cites is 3-clause-BSD
> style, which is NOT compatible: the "advertising" clause is
> significant.
>
> In any case, writing copyright notices that are pointers to
> external web pages is not how it's done around here. We generally
> operate on the assumption that the Postgres source code will
> outlive any specific web site. Dead links to incidental material
> might be okay, but legally relevant stuff not so much.

Agreed. The licenses are compatible in the sense that they can be combined to create a unified work, but they cannot be combined without modifying the license of the combined work. You would need to combine the Postgres and Chrome licenses for this, and I highly doubt we are going to be modifying the Postgres license for this.

--
Bruce Momjian  <bruce@momjian.us>        https://momjian.us
EDB                                      https://enterprisedb.com

Only you can decide what is important to you.
Hi,

I wonder if this isn't going in the wrong direction. We're using CRCs for something they're not well suited for, in my understanding - and are paying a reasonably high price for it, given that even hardware-accelerated CRCs aren't blazingly fast.

CRCs are used for things like ethernet and iSCSI because they are good at detecting the kinds of errors encountered there, namely short bursts of bitflips. And the covered data is limited to a fairly small size.

Which imo makes CRCs a bad choice for WAL. For one, we don't actually expect a short burst of bitflips; the most likely case is all bits after some point changing (because only one part of the record made it to disk). For another, WAL records are *not* limited to a small size, and if anything, error detection becomes more important with longer records (they're likely to span more pages / segments).

It's hard to understand, but a nonetheless helpful page is https://users.ece.cmu.edu/~koopman/crc/crc32.html which lists properties for crc32c: https://users.ece.cmu.edu/~koopman/crc/c32/0x8f6e37a0_len.txt which lists

(0x8f6e37a0; 0x11edc6f41) <=> (0x82f63b78; 0x105ec76f1)
{2147483615,2147483615,5243,5243,177,177,47,47,20,20,8,8,6,6,1,1} | gold | (*op) iSCSI; CRC-32C; CRC-32/4

This cryptic notation, AFAIU, indicates that for our polynomial we can detect 2-bit errors up to a length of 2147483615 bytes, 3-bit errors up to 2147483615, 3- and 4-bit errors up to 5243, 5- and 6-bit errors up to 177, and 7/8-bit errors up to 47.

IMO for our purposes just about all errors are going to be at least at sector boundaries, i.e. 512 bytes, and thus are at least 8 bits large. At that point we are only guaranteed to find a single-byte error (it'll be common to have much more) up to a length of 47 bits. Which isn't a useful guarantee.

With that I have perhaps established that CRC guarantees aren't useful for us.
But not yet why we should use something else: given that we already aren't relying on hard guarantees, we could instead just use a fast hash like xxh3 (https://github.com/Cyan4973/xxHash), which is fast both for large and small amounts of data.

Greetings,

Andres Freund
Hi,

On 2024-05-01 15:56:08 +0000, Amonson, Paul D wrote:
> Comparing the current SSE4.2 implementation of the CRC32C algorithm in
> Postgres, to an optimized AVX-512 algorithm [0] we observed significant
> gains. The result was a ~6.6X average multiplier of increased performance
> measured on 3 different Intel products. Details below. The AVX-512 algorithm
> in C is a port of the ISA-L library [1] assembler code.
>
> Workload call size distribution details (write heavy):
> * Average was approximately around 1,010 bytes per call
> * ~80% of the calls were under 256 bytes
> * ~20% of the calls were greater than or equal to 256 bytes up to the max buffer size of 8192

This is extremely workload dependent; it's not hard to find workloads with lots of very small records and very few big ones... What you observed might have "just" been warmup behaviour, where more full-page writes have to be written.

There is a very frequent call computing COMP_CRC32C over just 20 bytes, while holding a crucial lock. If we were to introduce something like this AVX-512 algorithm, it would probably be worth dispatching differently in the case of compile-time-known small lengths.

How does the latency of the AVX-512 algorithm compare to just using the CRC32C instruction?

FWIW, I tried the v2 patch on my Xeon Gold 5215 workstation, and it dies early on with SIGILL:

Program terminated with signal SIGILL, Illegal instruction.
#0  0x0000000000d5946c in _mm512_clmulepi64_epi128 (__A=..., __B=..., __C=0) at /home/andres/build/gcc/master/install/lib/gcc/x86_64-pc-linux-gnu/15/include/vpclmulqdqintrin.h:42
42        return (__m512i) __builtin_ia32_vpclmulqdq_v8di ((__v8di)__A,
(gdb) bt
#0  0x0000000000d5946c in _mm512_clmulepi64_epi128 (__A=..., __B=..., __C=0) at /home/andres/build/gcc/master/install/lib/gcc/x86_64-pc-linux-gnu/15/include/vpclmulqdqintrin.h:42
#1  pg_comp_crc32c_avx512 (crc=<optimized out>, data=<optimized out>, length=<optimized out>) at ../../../../../home/andres/src/postgresql/src/port/pg_crc32c_avx512.c:163
#2  0x0000000000819343 in ReadControlFile () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:4375
#3  0x000000000081c4ac in LocalProcessControlFile (reset=<optimized out>) at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:4817
#4  0x0000000000a8131d in PostmasterMain (argc=argc@entry=85, argv=argv@entry=0x341b08f0) at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:902
#5  0x00000000009b53fe in main (argc=85, argv=0x341b08f0) at ../../../../../home/andres/src/postgresql/src/backend/main/main.c:197

Cascade Lake doesn't have vpclmulqdq, so we shouldn't be getting here... This is on an optimized build with meson, with -march=native included in c_flags.

Relevant configure output:

Checking if "XSAVE intrinsics without -mxsave" : links: NO (cached)
Checking if "XSAVE intrinsics with -mxsave" : links: YES (cached)
Checking if "AVX-512 popcount without -mavx512vpopcntdq -mavx512bw" : links: NO (cached)
Checking if "AVX-512 popcount with -mavx512vpopcntdq -mavx512bw" : links: YES (cached)
Checking if "_mm512_clmulepi64_epi128 ... with -msse4.2 -mavx512vl -mvpclmulqdq" : links: YES
Checking if "x86_64: popcntq instruction" compiles: YES (cached)

Greetings,

Andres Freund
> -----Original Message-----
> From: Andres Freund <andres@anarazel.de>
> Sent: Wednesday, June 12, 2024 1:12 PM
> To: Amonson, Paul D <paul.d.amonson@intel.com>
>
> FWIW, I tried the v2 patch on my Xeon Gold 5215 workstation, and it dies
> early on with SIGILL:

Nice catch!!! I was testing the bit for vpclmulqdq in EBX instead of the correct ECX register. New patch attached. I added defines to make those types of bugs easier to see, rather than using a simple index number. I double-checked the others as well.

Paul
Attachment
> This is extremely workload dependent; it's not hard to find workloads with
> lots of very small records and very few big ones... What you observed might
> have "just" been warmup behaviour, where more full-page writes have to
> be written.

Can you tell me how to avoid capturing this "warm-up" so that the numbers are more accurate?

> There is a very frequent call computing COMP_CRC32C over just 20 bytes, while
> holding a crucial lock. If we were to introduce something like this
> AVX-512 algorithm, it would probably be worth dispatching differently in the
> case of compile-time-known small lengths.

So are you suggesting that we be able to directly call into the 64/32-bit based algorithm from these known small-byte cases in the code? I think we can do that with a separate API being exposed.

> How does the latency of the AVX-512 algorithm compare to just using the
> CRC32C instruction?

I think I need more information on this one, as I am not sure I understand the use case. The same function-pointer indirect methods are used with or without the AVX-512 algorithm?

Paul
On 2024-Jun-12, Amonson, Paul D wrote: > +/*------------------------------------------------------------------------- > + * > + * pg_crc32c_avx512.c > + * Compute CRC-32C checksum using Intel AVX-512 instructions. > + * > + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group > + * Portions Copyright (c) 1994, Regents of the University of California > + * Portions Copyright (c) 2024, Intel(r) Corporation > + * > + * IDENTIFICATION > + * src/port/pg_crc32c_avx512.c > + * > + *------------------------------------------------------------------------- > + */ Hmm, I wonder if the "(c) 2024 Intel" line is going to bring us trouble. (I bet it's not really necessary anyway.) > +/******************************************************************* > + * pg_crc32c_avx512(): compute the crc32c of the buffer, where the > + * buffer length must be at least 256, and a multiple of 64. Based > + * on: > + * > + * "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ > + * Instruction" > + * V. Gopal, E. Ozturk, et al., 2009, > + * https://www.researchgate.net/publication/263424619_Fast_CRC_computation#full-text > + * > + * This Function: > + * Copyright 2017 The Chromium Authors > + * Copyright (c) 2024, Intel(r) Corporation > + * > + * Use of this source code is governed by a BSD-style license that can be > + * found in the Chromium source repository LICENSE file. > + * https://chromium.googlesource.com/chromium/src/+/refs/heads/main/LICENSE > + */ And this bit doesn't look good. The LICENSE file says: > // Redistribution and use in source and binary forms, with or without > // modification, are permitted provided that the following conditions are > // met: > // > // * Redistributions of source code must retain the above copyright > // notice, this list of conditions and the following disclaimer. 
> //  * Redistributions in binary form must reproduce the above
> // copyright notice, this list of conditions and the following disclaimer
> // in the documentation and/or other materials provided with the
> // distribution.
> //  * Neither the name of Google LLC nor the names of its
> // contributors may be used to endorse or promote products derived from
> // this software without specific prior written permission.

The second clause essentially says we would have to add a page to our "documentation and/or other materials" with the contents of the license file.

There are good reasons for UCB to have stopped using the old BSD license, but apparently Google (or more precisely the Chromium authors) didn't get the memo.

Our fork distributors spent a lot of time scouring our source cleaning up copyrights, a decade or two ago. I bet they won't be happy to see this sort of thing crop up now.

--
Álvaro Herrera               Breisgau, Deutschland — https://www.EnterpriseDB.com/
"No nos atrevemos a muchas cosas porque son difíciles,
pero son difíciles porque no nos atrevemos a hacerlas" (Séneca)
> Hmm, I wonder if the "(c) 2024 Intel" line is going to bring us trouble.
> (I bet it's not really necessary anyway.)

Our lawyer agrees: copyright is covered by the "PostgreSQL Global Development Group" copyright line, as a contributor.

> And this bit doesn't look good. The LICENSE file says:
> ...
> The second clause essentially says we would have to add a page to our
> "documentation and/or other materials" with the contents of the license file.

According to one of Intel's lawyers, 55 instances of this clause were found when they searched the PostgreSQL repository. Therefore, I assume that this obligation has either been satisfied or determined not to apply, given that the second BSD clause already appears in the PostgreSQL source tree. I might have misunderstood the concern, but the lawyer believes this is a non-issue. Could you please provide more clarifying details about the concern?

Thanks,
Paul
Attachment
On Tue, Jun 18, 2024 at 05:14:08PM +0000, Amonson, Paul D wrote:
> > And this bit doesn't look good. The LICENSE file says:
> > ...
> > The second clause essentially says we would have to add a page to our
> > "documentation and/or other materials" with the contents of the license file.
>
> According to one of Intel's lawyers, 55 instances of this clause were found
> when they searched the PostgreSQL repository. Therefore, I assume that this
> obligation has either been satisfied or determined not to apply, given that
> the second BSD clause already appears in the PostgreSQL source tree. I might
> have misunderstood the concern, but the lawyer believes this is a non-issue.
> Could you please provide more clarifying details about the concern?

Yes, I can confirm that:

	grep -Rl 'Redistributions in binary form must reproduce' . | wc -l

reports 54; file list attached.

--
Bruce Momjian  <bruce@momjian.us>        https://momjian.us
EDB                                      https://enterprisedb.com

Only you can decide what is important to you.
Attachment
On Tue, Jun 18, 2024 at 01:20:50PM -0400, Bruce Momjian wrote:
> On Tue, Jun 18, 2024 at 05:14:08PM +0000, Amonson, Paul D wrote:
> > According to one of Intel's lawyers, 55 instances of this clause were
> > found when they searched the PostgreSQL repository. [...]
>
> Yes, I can confirm that:
>
> 	grep -Rl 'Redistributions in binary form must reproduce' . | wc -l
>
> reports 54; file list attached.

I am somewhat embarrassed by this, since we made the Intel lawyers find something that was in our own source code.

First, the "advertising clause" in the 4-clause license:

	3. All advertising materials mentioning features or use of this
	   software must display the following acknowledgement: This product
	   includes software developed by the University of California,
	   Berkeley and its contributors.

was disavowed by Berkeley on July 22nd, 1999:

	https://elrc-share.eu/static/metashare/licences/BSD-3-Clause.pdf

While the license we are concerned about does not have this clause, it does have:

	2. Redistributions in binary form must reproduce the above
	   copyright notice, this list of conditions and the following
	   disclaimer in the documentation and/or other materials provided
	   with the distribution.

I assume that must also include the name of the copyright holder.

I think that means we need to mention The Regents of the University of California in our copyright notice, which we do. However, several non-Regents copyright holder licenses exist in our source tree, and accepting this AVX-512 patch would add another one. Specifically, I see existing entries for:

	Aaron D. Gifford
	Board of Trustees of the University of Illinois
	David Burren
	Eric P. Allman
	Jens Schweikhardt
	Marko Kreen
	Sun Microsystems, Inc.
	WIDE Project

Now, some of these are these names plus Berkeley, and some are just the names above.

--
Bruce Momjian  <bruce@momjian.us>        https://momjian.us
EDB                                      https://enterprisedb.com

Only you can decide what is important to you.
On Tue, Jun 18, 2024 at 02:00:34PM -0400, Bruce Momjian wrote: > While the license we are concerned about does not have this clause, it > does have: > > 2. Redistributions in binary form must reproduce the above > copyright notice, this list of conditions and the following > disclaimer in the documentation and/or other materials provided > with the distribution. > > I assume that must also include the name of the copyright holder. > > I think that means we need to mention The Regents of the University of > California in our copyright notice, which we do. However several > non-Regents of the University of California copyright holder licenses > exist in our source tree, and accepting this AVX-512 patch would add > another one. Specifically, I see existing entries for: > > Aaron D. Gifford > Board of Trustees of the University of Illinois > David Burren > Eric P. Allman > Jens Schweikhardt > Marko Kreen > Sun Microsystems, Inc. > WIDE Project > > Now, some of these are these names plus Berkeley, and some are just the > names above. In summary, either we are doing something wrong in how we list copyrights in our documentation, or we don't need to make any changes for this Intel patch. Our license is at: https://www.postgresql.org/about/licence/ The Intel copyright in the source code is: * Copyright 2017 The Chromium Authors * Copyright (c) 2024, Intel(r) Corporation * * Use of this source code is governed by a BSD-style license that can be * found in the Chromium source repository LICENSE file. * https://chromium.googlesource.com/chromium/src/+/refs/heads/main/LICENSE and the URL contents are: // Copyright 2015 The Chromium Authors // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google LLC nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Google LLC is added to clause three, and I assume Intel is also covered by this because it is considered "the names of its contributors", maybe? It would be good to know exactly what, if any, changes the Intel lawyers want us to make to our license if we accept this patch. There are also different versions of clause three in our source tree. The Postgres license only lists the University of California in our equivalent of clause three, meaning that there are three-clause BSD licenses in our source tree that reference entities that we don't reference in the Postgres license. Oddly, the Postgres license doesn't even disclaim warranties for the PostgreSQL Global Development Group, only for Berkeley. 
An even bigger issue is that we are distributing 3-clause BSD licensed software under the Postgres license, which is not the 3-clause BSD license. I think we were functioning under the assumption that the licenses are compatible, so can be combined, which is true, but I don't think we can assume the individual licenses can be covered by our one license, can we?

--
Bruce Momjian  <bruce@momjian.us>        https://momjian.us
EDB                                      https://enterprisedb.com

Only you can decide what is important to you.
> It would be good to know exactly what, if any, changes the Intel lawyers want
> us to make to our license if we accept this patch.

I asked about this, and there is nothing Intel requires here license-wise. They believe that there is nothing wrong with including BSD 3-Clause-like licenses under the PostgreSQL license. They only specified that, for the source file, the applicable license needs to be present, either as a link (which was previously discouraged in this thread) or as the full text. Please note that I checked, and for this specific Chromium license there is no SPDX identifier, so the entire text is required.

Thanks,
Paul
On Tue, Jun 25, 2024 at 05:41:12PM +0000, Amonson, Paul D wrote: > > It would be good to know exactly what, if any, changes the Intel > > lawyers want us to make to our license if we accept this patch. > > I asked about this and there is nothing Intel requires here license > wise. They believe that there is nothing wrong with including Clause-3 > BSD like licenses under the PostgreSQL license. They only specified > that for the source file, the applying license need to be present > either as a link (which was previously discouraged in this thread) > or the full text. Please note that I checked and for this specific > Chromium license there is not SPDX codename so the entire text is > required. Okay, that is very interesting. Yes, we will have no problem reproducing the exact license text in the source code. I think we can remove the license issue as a blocker for this patch. -- Bruce Momjian <bruce@momjian.us> https://momjian.us EDB https://enterprisedb.com Only you can decide what is important to you.
> Okay, that is very interesting. Yes, we will have no problem reproducing the > exact license text in the source code. I think we can remove the license issue > as a blocker for this patch. Hi, I was wondering if I can get a review, please. I am interested in the refactor question for the HW capability tests as well as an actual implementation review. I created a commitfest entry for this thread. Thanks, Paul
On Wed, Jun 12, 2024 at 12:37:46PM -0700, Andres Freund wrote: > I'm wonder if this isn't going in the wrong direction. We're using CRCs for > something they're not well suited for in my understanding - and are paying a > reasonably high price for it, given that even hardware accelerated CRCs aren't > blazingly fast. I tend to agree, especially that we should be more concerned about all bytes after a certain point being garbage than bit flips. (I think we should also care about bit flips, but I hope those are much less common than half-written WAL records.) > With that I perhaps have established that CRC guarantees aren't useful for us. > But not yet why we should use something else: Given that we already aren't > relying on hard guarantees, we could instead just use a fast hash like xxh3. > https://github.com/Cyan4973/xxHash which is fast both for large and small > amounts of data. Would it be out of the question to reuse the page checksum code (i.e., an FNV-1a derivative)? The chart in your link claims that xxh3 is substantially faster than "FNV64", but I wonder if the latter was vectorized. I don't know how our CRC-32C implementations (and proposed implementations) compare, either. -- nathan
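[Editor's note: as a point of reference for the FNV-1a derivative mentioned above, here is a textbook scalar 32-bit FNV-1a. This is only an illustration of the hash family: the actual Postgres page checksum uses different constants and a block layout chosen for auto-vectorization, so this sketch is not the Postgres algorithm.]

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Textbook 32-bit FNV-1a.  The Postgres page checksum is a *derivative*
 * of this idea with modified constants and a vectorization-friendly
 * block structure; this scalar loop only illustrates the family.
 */
static uint32_t
fnv1a_32(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    uint32_t h = 2166136261u;   /* FNV offset basis */

    while (len--)
    {
        h ^= *p++;              /* xor-then-multiply: the "1a" ordering */
        h *= 16777619u;         /* 32-bit FNV prime */
    }
    return h;
}
```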
Thanks for the new patches. On Thu, Aug 22, 2024 at 03:14:32PM +0000, Amonson, Paul D wrote: > I reran all the basic tests again to make sure that the performance > numbers were within the margin of error when compared to my original > finding. This step showed similar numbers (see origin post) around 1.45X > on average. I also made sure that if compiled with the AVX-512 features > and ran on HW without these features the Postgres server still worked > without throwing illegal instruction exceptions. Upthread [0], Andres suggested dispatching to a different implementation for compile-time-known small lengths. Have you looked into that? In your original post, you noted a 14% regression for records smaller than 256 bytes, which is not an uncommon case for Postgres. IMO we should try to mitigate that as much as possible. [0] https://postgr.es/m/20240612201135.kk77tiqcux77lgev%40awork3.anarazel.de -- nathan
> Upthread [0], Andres suggested dispatching to a different implementation for > compile-time-known small lengths. Have you looked into that? In your > original post, you noted a 14% regression for records smaller than 256 bytes, > which is not an uncommon case for Postgres. IMO we should try to mitigate > that as much as possible. So, without adding even more conditional tests (causing more latency), I can expose a new macro called COMP_CRC32C_SMALL that can be called from known locations where the size is known to be 20 bytes or less (or any fixed size less than 256). Other than that, there is no method I know of to pre-decide calling a function based on input size. Is there any concrete thought on this? Paul
On Mon, Aug 26, 2024 at 05:09:35PM +0000, Amonson, Paul D wrote: > Ok I added a patch that exposed a new macro CRC32C_COMP_SMALL for > targeted fixed size < 256 use cases in Postgres. As for mitigating the > regression in general, I have not been able to work up a fallback (i.e. > <256 bytes) that doesn't involve runtime checks which cause latency. I > also attempted to change the AVX512 fallback from the current algorithm > in the avx512 implementation to the SSE original implementation, but I am > not seeing any real difference for this use case in performance. I'm curious about where exactly the regression is coming from. Is it possible that your build for the SSE 4.2 tests was using it unconditionally, i.e., optimizing away the function pointer? -- nathan
> I'm curious about where exactly the regression is coming from. Is it possible > that your build for the SSE 4.2 tests was using it unconditionally, i.e., > optimizing away the function pointer? I am calling the SSE 4.2 implementation directly; I am not even building the pg_sse42_*_choose.c file with the AVX512 choice. As best I can tell there is one extra function call and one extra int64 conditional test when bytes are <256 and of course a JMP instruction to skip the AVX512 implementation. Paul
On Mon, Aug 26, 2024 at 06:44:55PM +0000, Amonson, Paul D wrote: >> I'm curious about where exactly the regression is coming from. Is it possible >> that your build for the SSE 4.2 tests was using it unconditionally, i.e., >> optimizing away the function pointer? > > I am calling the SSE 4.2 implementation directly; I am not even building > the pg_sse42_*_choose.c file with the AVX512 choice. As best I can tell > there is one extra function call and one extra int64 conditional test > when bytes are <256 and of course a JMP instruction to skip the AVX512 > implementation. And this still shows the ~14% regression in your original post? -- nathan
> And this still shows the ~14% regression in your original post? At the small buffer sizes the margin of error or "noise" is larger, 7-11%. My average could be just bad luck. It will take me a while to re-setup for full data collection runs but I can try it again if you like. Paul
On Mon, Aug 26, 2024 at 06:54:58PM +0000, Amonson, Paul D wrote: >> And this still shows the ~14% regression in your original post? > > At the small buffer sizes the margin of error or "noise" is larger, > 7-11%. My average could be just bad luck. It will take me a while to > re-setup for full data collection runs but I can try it again if you > like. IMHO that would be useful to establish the current state of the patch set from a performance standpoint, especially since you've added code intended to mitigate the regression. +#define COMP_CRC32C_SMALL(crc, data, len) \ + ((crc) = pg_comp_crc32c_sse42((crc), (data), (len))) My interpretation of Andres's upthread suggestion is that we'd add the length check within the macro instead of introducing a separate one. We'd expect the compiler to optimize out comparisons for small lengths known at compile time and always call the existing implementation (which may still involve a function pointer in most cases). -- nathan
> IMHO that would be useful to establish the current state of the patch set from > a performance standpoint, especially since you've added code intended to > mitigate the regression. Ok. > +#define COMP_CRC32C_SMALL(crc, data, len) \ > + ((crc) = pg_comp_crc32c_sse42((crc), (data), (len))) > > My interpretation of Andres's upthread suggestion is that we'd add the length > check within the macro instead of introducing a separate one. We'd expect > the compiler to optimize out comparisons for small lengths known at compile > time and always call the existing implementation (which may still involve a > function pointer in most cases). How does the m4/compiler know the difference between a const "len" and a dynamic "len"? I already went through the code and changed constant sizes (structure sizes) to the new macro. Can you give an example of how this could work? Paul
On Mon, Aug 26, 2024 at 07:15:47PM +0000, Amonson, Paul D wrote: >> +#define COMP_CRC32C_SMALL(crc, data, len) \ >> + ((crc) = pg_comp_crc32c_sse42((crc), (data), (len))) >> >> My interpretation of Andres's upthread suggestion is that we'd add the length >> check within the macro instead of introducing a separate one. We'd expect >> the compiler to optimize out comparisons for small lengths known at compile >> time and always call the existing implementation (which may still involve a >> function pointer in most cases). > > How does the m4/compiler know the difference between a const "len" and a > dynamic "len"? I already went through the code and changed constant sizes > (structure sizes) to the new macro. Can you give an example of how this > could work? Things like sizeof() and offsetof() are known at compile time, so the compiler will recognize when a condition is always true or false and optimize it out accordingly. In cases where the value cannot be known at compile time, checking the length in the macro and dispatching to a different implementation may still be advantageous, especially when the different implementation doesn't involve function pointers. -- nathan
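[Editor's note: a minimal sketch of the suggestion, with hypothetical names and a portable bitwise CRC-32C standing in for both the SSE 4.2 and AVX-512 implementations. The point is that the length test lives inside the macro, so when "len" is a compile-time constant such as sizeof(struct foo), the compiler folds the branch away and emits a direct call to exactly one implementation.]

```c
#include <stddef.h>
#include <stdint.h>

/* Portable bitwise CRC-32C; stand-in for the real implementations. */
static uint32_t
pg_comp_crc32c_small(uint32_t crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    crc = ~crc;
    while (len--)
    {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

static uint32_t
pg_comp_crc32c_large(uint32_t crc, const void *data, size_t len)
{
    /* would be the AVX-512 path; same stand-in here */
    return pg_comp_crc32c_small(crc, data, len);
}

/*
 * Length check inside the macro: for a constant len the dead branch
 * disappears at any optimization level; for a runtime len the single
 * comparison is the only added overhead.
 */
#define COMP_CRC32C(crc, data, len) \
    ((crc) = ((len) < 256 \
              ? pg_comp_crc32c_small((crc), (data), (len)) \
              : pg_comp_crc32c_large((crc), (data), (len))))
```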
Hi all, I will be retiring from Intel at the end of this week. I wanted to introduce the engineer who will be taking over the CRC32c proposal and commit fest entry. Devulapalli, Raghuveer <raghuveer.devulapalli@intel.com> I have brought him up to speed and he will be the go-to for technical review comments and questions. Please welcome him into the community. Thanks, Paul
Thank you for the introduction, Paul. Hi all, I'm currently in the process of reviewing and analyzing Paul's patch. In the meantime, I'm open to addressing any questions or feedback you may have. > Hi all, > > I will be retiring from Intel at the end of this week. I wanted to introduce the > engineer who will be taking over the CRC32c proposal and commit fest entry. > > Devulapalli, Raghuveer <raghuveer.devulapalli@intel.com> > > I have brought him up to speed and he will be the go-to for technical review > comments and questions. Please welcome him into the community. > > Thanks, > Paul
On Tue, Oct 08, 2024 at 08:19:27PM +0000, Devulapalli, Raghuveer wrote: > Hi all, I'm currently in the process of reviewing and analyzing Paul's > patch. In the meantime, I'm open to addressing any questions or feedback > you may have. I've proposed a patch to move the existing AVX-512 code in Postgres to use __attribute__((target("..."))) instead of per-translation-unit compiler flags [0]. We should likely do something similar for this one. [0] https://postgr.es/m/ZxAqRG1-8fJLMRUY%40nathan -- nathan
> I've proposed a patch to move the existing AVX-512 code in Postgres to use > __attribute__((target("..."))) instead of per-translation-unit compiler flags [0]. We > should likely do something similar for this one. > > [0] https://postgr.es/m/ZxAqRG1-8fJLMRUY%40nathan I assume this will be committed separately and then I can rebase? > > -- > nathan
On Tue, Oct 29, 2024 at 09:00:17PM +0000, Devulapalli, Raghuveer wrote: > (1) The SSE42 and AVX-512 CRC32C also use function attributes to build > with ISA specific flag.. Would you mind moving the function attribute change for the existing SSE 4.2 code to its own patch? I think that is pretty straightforward, and IMHO it'd be nice to take care of it first so that we can focus on the new stuff. -- nathan
Hi, On 2024-10-30 21:03:20 +0000, Devulapalli, Raghuveer wrote: > v6: Fixing build failure on Windows/MSVC. > > Raghuveer > From b601e7b4ee9f25fd32e9d8d056bb20a03d755a8a Mon Sep 17 00:00:00 2001 > From: Paul Amonson <paul.d.amonson@intel.com> > Date: Mon, 6 May 2024 08:34:17 -0700 > Subject: [PATCH v6 1/6] Add a Postgres SQL function for crc32c testing. > > Signed-off-by: Paul Amonson <paul.d.amonson@intel.com> > Signed-off-by: Raghuveer Devulapalli <raghuveer.devulapalli@intel.com> > --- > src/test/modules/test_crc32c/Makefile | 20 +++++++++ > .../modules/test_crc32c/test_crc32c--1.0.sql | 1 + > src/test/modules/test_crc32c/test_crc32c.c | 41 +++++++++++++++++++ > .../modules/test_crc32c/test_crc32c.control | 4 ++ > 4 files changed, 66 insertions(+) > create mode 100644 src/test/modules/test_crc32c/Makefile > create mode 100644 src/test/modules/test_crc32c/test_crc32c--1.0.sql > create mode 100644 src/test/modules/test_crc32c/test_crc32c.c > create mode 100644 src/test/modules/test_crc32c/test_crc32c.control Needs to be integrated with the meson based build as well. > +/* > + * drive_crc32c(count: int, num: int) returns bigint > + * > + * count is the nuimber of loops to perform > + * > + * num is the number byte in the buffer to calculate > + * crc32c over. > + */ > +PG_FUNCTION_INFO_V1(drive_crc32c); > +Datum > +drive_crc32c(PG_FUNCTION_ARGS) > +{ > + int64 count = PG_GETARG_INT64(0); > + int64 num = PG_GETARG_INT64(1); > + pg_crc32c crc = 0xFFFFFFFF; > + const char* data = malloc((size_t)num); This is computing a crc of uninitialized data. That's a) undefined behaviour b) means the return value is basically random c) often will just CRC a lot of zeroes > From da26645ec8515e0e6d91e2311a83c3bb6649017e Mon Sep 17 00:00:00 2001 > From: Paul Amonson <paul.d.amonson@intel.com> > Date: Tue, 23 Jul 2024 11:23:23 -0700 > Subject: [PATCH v6 2/6] Move all HW checks to common file. Would be good to actually include a justification here. 
> --- /dev/null > +++ b/src/port/pg_hw_feat_check.c > @@ -0,0 +1,159 @@ > +/*------------------------------------------------------------------------- > + * > + * pg_hw_feat_check.c > + * Test for hardware features at runtime on x86_64 platforms. > + * > + * Copyright (c) 2024, PostgreSQL Global Development Group > + * > + * IDENTIFICATION > + * src/port/pg_hw_feat_check.c > + * > + *------------------------------------------------------------------------- > + */ > +#include "c.h" > + > +#if defined(HAVE__GET_CPUID) || defined(HAVE__GET_CPUID_COUNT) > +#include <cpuid.h> > +#endif > + > +#include <immintrin.h> > + > +#if defined(HAVE__CPUID) || defined(HAVE__CPUIDEX) > +#include <intrin.h> > +#endif > + > +#include "port/pg_hw_feat_check.h" > + > +/* Define names for EXX registers to avoid hard to see bugs in code below. */ > +typedef unsigned int exx_t; > +typedef enum > +{ > + EAX = 0, > + EBX = 1, > + ECX = 2, > + EDX = 3 > +} reg_name; Shouldn't this be in some x86-specific ifdef? > +# PGAC_AVX512_CRC32_INTRINSICS > +# --------------------------- > +# Check if the compiler supports the x86 CRC instructions added in AVX-512, > +# using the intrinsic functions: > + > +# (We don't test the 8-byte variant, _mm_crc32_u64, but it is assumed to > +# be present if the other ones are, on x86-64 platforms) > +# > +# An optional compiler flag can be passed as arguments (e.g. -msse4.2 > +# -mavx512vl -mvpclmulqdq). If the intrinsics are supported, sets > +# pgac_avx512_crc32_intrinsics, and CFLAGS_CRC. > +AC_DEFUN([PGAC_AVX512_CRC32_INTRINSICS], > +[define([Ac_cachevar], [AS_TR_SH([pgac_cv_avx512_crc32_intrinsics_$1])])dnl > +AC_CACHE_CHECK([for _mm512_clmulepi64_epi128, _mm512_clmulepi64_epi128...
with CFLAGS=$1], [Ac_cachevar], > +[pgac_save_CFLAGS=$CFLAGS > +CFLAGS="$pgac_save_CFLAGS $1" > +AC_LINK_IFELSE([AC_LANG_PROGRAM([#include <immintrin.h>], > + [const unsigned long k1k2[[8]] = { > + 0xdcb17aa4, 0xb9e02b86, 0xdcb17aa4, 0xb9e02b86, > + 0xdcb17aa4, 0xb9e02b86, 0xdcb17aa4, 0xb9e02b86}; > + unsigned char buffer[[512]]; > + unsigned char *aligned = (unsigned char*)(((size_t)buffer + 64L) & 0xffffffffffc0L); > + unsigned long val; > + __m512i x0, x1, x2, x3, x4, x5, x6, x7, x8, y5, y6, y7, y8; > + __m128i a1, a2; > + unsigned int crc = 0xffffffff; > + y8 = _mm512_load_si512((__m512i *)aligned); > + x0 = _mm512_loadu_si512((__m512i *)k1k2); > + x1 = _mm512_loadu_si512((__m512i *)(buffer + 0x00)); > + x1 = _mm512_xor_si512(x1, _mm512_castsi128_si512(_mm_cvtsi32_si128(crc))); > + x5 = _mm512_clmulepi64_epi128(x1, x0, 0x00); > + x1 = _mm512_ternarylogic_epi64(x1, x5, y5, 0x96); > + a1 = _mm512_extracti32x4_epi32(x1, 3); > + a1 = _mm_xor_epi64(a1, _mm512_castsi512_si128(x0)); > + x0 = _mm512_shuffle_i64x2(x1, x1, 0x4E); > + val = _mm_crc32_u64(0, _mm_extract_epi64(a1, 0)); > + crc = (unsigned int)_mm_crc32_u64(val, _mm_extract_epi64(a1, 1)); > + return crc != 0;])], > + [Ac_cachevar=yes], > + [Ac_cachevar=no]) > +CFLAGS="$pgac_save_CFLAGS"]) > +if test x"$Ac_cachevar" = x"yes"; then > + CFLAGS_CRC="$1" > + pgac_avx512_crc32_intrinsics=yes > +fi > +undefine([Ac_cachevar])dnl > +])# PGAC_AVX512_CRC32_INTRINSICS > + Why is all this stuff needed inside a configure check? We don't need to check entire algorithms to check if we can build and link specific instructions, no? > From a495124ee42cb8f9f206f719b9f2235aff715963 Mon Sep 17 00:00:00 2001 > From: Nathan Bossart <nathan@postgresql.org> > Date: Wed, 16 Oct 2024 15:57:55 -0500 > Subject: [PATCH v6 5/6] use __attribute__((target(...))) for AVX-512 stuff Huh, so now we're undoing a bunch of stuff done earlier. Makes this series pretty hard to review. Greetings, Andres Freund
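[Editor's note: on the uninitialized-buffer issue flagged above, a sketch of the kind of fix implied: fill the buffer with a deterministic pattern before checksumming, so the result is well defined, repeatable, and not just a CRC over zeroes. The bitwise CRC and the function name are stand-ins, not the patch's code.]

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the real CRC-32C implementation; portable bitwise loop. */
static uint32_t
crc32c_bitwise(uint32_t crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    crc = ~crc;
    while (len--)
    {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/*
 * Defined-behaviour version of the benchmark body: the buffer gets a
 * deterministic, non-constant byte pattern before the CRC loop, so
 * every run checksums the same known data.
 */
static uint32_t
drive_crc32c_once(size_t num)
{
    unsigned char *data = malloc(num);
    uint32_t crc;

    if (data == NULL)
        return 0;
    for (size_t i = 0; i < num; i++)
        data[i] = (unsigned char) (i & 0xff);
    crc = crc32c_bitwise(0, data, num);
    free(data);
    return crc;
}
```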
On Thu, Nov 07, 2024 at 11:05:14AM -0500, Andres Freund wrote: > On 2024-10-30 21:03:20 +0000, Devulapalli, Raghuveer wrote: >> From a495124ee42cb8f9f206f719b9f2235aff715963 Mon Sep 17 00:00:00 2001 >> From: Nathan Bossart <nathan@postgresql.org> >> Date: Wed, 16 Oct 2024 15:57:55 -0500 >> Subject: [PATCH v6 5/6] use __attribute__((target(...))) for AVX-512 stuff > > Huh, so now we're undoing a bunch of stuff done earlier. Makes this series > pretty hard to review. I'm planning to commit this one very soon (it's being tracked in a separate thread [0]), so this patch series will need rebasing, anyway. I think we should use __attribute__((target(...))) right away for $SUBJECT instead of undoing stuff in later patches. [0] https://postgr.es/m/ZywlZzPcPnlqKvt5%40nathan -- nathan
> Would you mind moving the function attribute change for the existing SSE > 4.2 code to its own patch? I think that is pretty straightforward, and IMHO it'd be > nice to take care of it first so that we can focus on the new stuff. Just submitted a separate patch for this. Will update the CRC32C patch once this is committed. Raghuveer
On Mon, Nov 25, 2024 at 08:54:48PM +0000, Devulapalli, Raghuveer wrote: > As Nathan suggested, we moved this to a separate thread. The latest set > of patches here need to applied on top of patches in that thread. Raghuveer, would you mind rebasing this patch set now that the SSE4.2 patch is committed? -- nathan
On Tue, Dec 03, 2024 at 03:46:16PM +0000, Devulapalli, Raghuveer wrote: >> Raghuveer, would you mind rebasing this patch set now that the SSE4.2 patch is >> committed? > > Rebased to master branch. Thanks! cfbot is showing a couple of errors [0] [1] [2]. 32-bit Linux is failing to compile with the 64-bit intrinsics. I think it'd be fine to limit this optimization to 64-bit builds unless the code can be easily fixed to work for both. The macOS build seems to be trying to include the x86 headers, which is producing many errors. We'll need to make sure that none of this code is being compiled on ARM machines. The Windows build seems to be unable to resolve the pg_comp_crc32c symbol, but it is not immediately obvious to me why. [0] https://cirrus-ci.com/task/6023394207989760 [1] https://cirrus-ci.com/task/5460444254568448 [2] https://cirrus-ci.com/task/6586344161411072 -- nathan
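[Editor's note: a sketch of the guard this suggests for the 32-bit and non-x86 failures: gate all of the AVX-512 CRC code on 64-bit x86 predefined macros so other targets never see the intrinsics or the x86-only headers. The macro name is hypothetical.]

```c
/*
 * Hypothetical compile guard for the AVX-512 CRC path: only consider it
 * on 64-bit x86 targets (GCC/Clang define __x86_64__, MSVC defines
 * _M_AMD64), so 32-bit and ARM builds never compile the intrinsics.
 */
#if defined(__x86_64__) || defined(_M_AMD64)
#define PG_TRY_AVX512_CRC32C 1
#else
#define PG_TRY_AVX512_CRC32C 0
#endif

static int
avx512_crc32c_considered(void)
{
    return PG_TRY_AVX512_CRC32C;
}
```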
> Thanks! cfbot is showing a couple of errors [0] [1] [2]. Oh yikes, the CI had passed with an earlier version. Wonder if I made a mess of the rebase. I will take a look and fix them. Raghuveer
On Thu, Jun 13, 2024 at 3:11 AM Andres Freund <andres@anarazel.de> wrote: > > On 2024-05-01 15:56:08 +0000, Amonson, Paul D wrote: > > Workload call size distribution details (write heavy): > > * Average was approximately around 1,010 bytes per call > > * ~80% of the calls were under 256 bytes > > * ~20% of the calls were greater than or equal to 256 bytes up to the max buffer size of 8192 > > This is extremely workload dependent, it's not hard to find workloads with > lots of very small record and very few big ones... What you observed might > have "just" been the warmup behaviour where more full page writes have to be > written. Sorry for going back so far, but this thread was pointed out to me, and this aspect of the design could use some more discussion: + * pg_crc32c_avx512(): compute the crc32c of the buffer, where the + * buffer length must be at least 256, and a multiple of 64. Based There is another technique that computes CRC on 3 separate chunks and combines them at the end, so about 3x faster on large-enough chunks. That's the way used for the Arm proposal [0], coincidentally also citing a white paper from Intel, but as Dimitry pointed out in that thread, its link has apparently disappeared. Raghuveer, do you know about this, and is there another link available? http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf The cut off point in one implementation is only 144 bytes [1], which is maybe not as small as we'd like, but is quite a bit smaller than 256. That seems better suited to our workloads, and more portable. I have a *brand-new* laptop with an Intel chip, and IIUC it doesn't support AVX-512 because it uses a big-little architecture. I also understand that Sierra Forest (a server product line) will be all little cores with no AVX-512 support, so I'm not sure why the proposal here requires AVX-512. 
> There a very frequent call computing COMP_CRC32C over just 20 bytes, while > holding a crucial lock. If we were to do introduce something like this > AVX-512 algorithm, it'd probably be worth to dispatch differently in case of > compile-time known small lengths. I know you've read an earlier version of the patch and realized that it wouldn't help here, but we could probably dispatch differently regardless, although it may only be worth it if we can inline the instructions. Since we technically only need to wait for xl_prev, I believe we could push the computation of the other 12 bytes to before acquiring the lock, then only execute a single instruction on xl_prev to complete the CRC computation. Is there any reason why we couldn't do that, assuming we have a clean way to make that portable? That would mean that the CRCs between major versions would be different, but I think we don't guarantee that anyway. [0] https://commitfest.postgresql.org/50/4620/ [1] https://github.com/komrad36/CRC/blob/master/CRC/golden_intel.cpp#L138C27-L138C42 -- John Naylor Amazon Web Services
> Sorry for going back so far, but this thread was pointed out to me, and this aspect > of the design could use some more discussion: > > + * pg_crc32c_avx512(): compute the crc32c of the buffer, where the > + * buffer length must be at least 256, and a multiple of 64. Based > > There is another technique that computes CRC on 3 separate chunks and > combines them at the end, so about 3x faster on large-enough chunks. > That's the way used for the Arm proposal [0], coincidentally also citing a white > paper from Intel, but as Dimitry pointed out in that thread, its link has apparently > disappeared. Raghuveer, do you know about this, and is there another link > available? > > http://www.intel.com/content/dam/www/public/us/en/documents/white- > papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf I am not aware of this paper. Let me poke a few people internally and get back to you on this. > The cut off point in one implementation is only 144 bytes [1], which is maybe not > as small as we'd like, but is quite a bit smaller than 256. That seems better suited > to our workloads, and more portable. I have a *brand-new* laptop with an Intel > chip, and IIUC it doesn't support AVX-512 because it uses a big-little architecture. > I also understand that Sierra Forest (a server product line) will be all little cores > with no AVX-512 support, so I'm not sure why the proposal here requires AVX- > 512. AVX-512 is present in all of Intel's main P-core based Xeons and AMD's Zen4 and Zen5. Sierra Forest contains the SSE and AVX/AVX2 family ISA but AFAIK AVX/AVX2 does not contain any CRC32C specific instructions. See: 1) https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=pclmul&ig_expand=754&techs=AVX_ALL 2) https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#ig_expand=754&techs=AVX_ALL&text=crc32 > > > There a very frequent call computing COMP_CRC32C over just 20 bytes, > > while holding a crucial lock. 
If we were to do introduce something > > like this > > AVX-512 algorithm, it'd probably be worth to dispatch differently in > > case of compile-time known small lengths. > > I know you've read an earlier version of the patch and realized that it wouldn't > help here, but we could probably dispatch differently regardless, although it may > only be worth it if we can inline the instructions. Since we technically only need to > wait for xl_prev, I believe we could push the computation of the other 12 bytes to > before acquiring the lock, then only execute a single instruction on xl_prev to > complete the CRC computation. Is there any reason why we couldn't do that, > assuming we have a clean way to make that portable? That would mean that the > CRCs between major versions would be different, but I think we don't guarantee > that anyway. Not sure about that. This is not my expertise and I might need a little time to figure this out. Unfortunately, I am on travel with limited internet connection for the next 6 weeks. I will only be able to address this when I get back. Is this a blocker for the patch or is this something we can address as a revision? Raghuveer
On Sat, Dec 7, 2024 at 10:16 PM Devulapalli, Raghuveer <raghuveer.devulapalli@intel.com> wrote: > > There is another technique that computes CRC on 3 separate chunks and > > combines them at the end, so about 3x faster on large-enough chunks. > > That's the way used for the Arm proposal [0], coincidentally also citing a white > > paper from Intel, but as Dimitry pointed out in that thread, its link has apparently > > disappeared. Raghuveer, do you know about this, and is there another link > > available? > > > > http://www.intel.com/content/dam/www/public/us/en/documents/white- > > papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf > > I am not aware of this paper. Let me poke a few people internally and get back to you on this. Thanks! I have a portable PoC of how this works, but I'll save that for another thread, since it's not Intel (or Arm) specific. > > The cut off point in one implementation is only 144 bytes [1] , which is maybe not > > as small as we'd like, but is quite a bit smaller than 256. That seems better suited > > to our workloads, and more portable. I have a *brand-new* laptop with an Intel > > chip, and IIUC it doesn't support AVX-512 because it uses a big-little architecture. > > I also understand that Sierra Forrest (a server product line) will be all little cores > > with no AVX-512 support, so I'm not sure why the proposal here requires AVX- > > 512. > > AVX-512 is present all of Intel main P-core based Xeon and AMD's Zen4 and Zen5. Sierra Forest contains the SSE and AVX/AVX2family ISA but AFAIK AVX/AVX2 does not contain any CRC32C specific instructions. See: CRC32C was added in SSE 4.2, so it's quite old. The AVX-512 intrinsics used in the patch are not CRC-specific, if I understand correctly. My point was, it seems Intel still considers AVX-512 as optional, so we can't count on it being present even in future chips. That's why I'm interested in alternatives, at least as a first step. 
If we can get 3x throughput, the calculation might end up low enough in the profile that going to 6x might not be noticeable (not sure). > > > There a very frequent call computing COMP_CRC32C over just 20 bytes, > > > while holding a crucial lock. If we were to do introduce something > > > like this > > > AVX-512 algorithm, it'd probably be worth to dispatch differently in > > > case of compile-time known small lengths. > > > > I know you've read an earlier version of the patch and realized that it wouldn't > > help here, but we could probably dispatch differently regardless, although it may > > only be worth it if we can inline the instructions. Since we technically only need to > > wait for xl_prev, I believe we could push the computation of the other 12 bytes to > > before acquiring the lock, then only execute a single instruction on xl_prev to > > complete the CRC computation. Is there any reason why we couldn't do that, > > assuming we have a clean way to make that portable? That would mean that the > > CRCs between major versions would be different, but I think we don't guarantee > > that anyway. > > Not sure about that. This is not my expertise and I might need a little time to figure this out. Unfortunately, I am on travel with limited internet connection for the next 6 weeks. I will only be able to address this when I get back. Is this a blocker for the patch or is this something we can address as a revision? This is orthogonal and is not related to the patch, since it doesn't affect 8 and 20-byte paths, only 256 and greater. -- John Naylor Amazon Web Services
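[Editor's note: to illustrate why the xl_prev split works at all: CRC updates chain, so folding in the 12 non-xl_prev header bytes before taking the lock and finishing with the 8 bytes of xl_prev under the lock gives exactly the one-shot result, provided the fields are ordered so xl_prev is CRC'd last, which is the open question above. The function and its layout are illustrative, not WAL code.]

```c
#include <stddef.h>
#include <stdint.h>

/* Raw (unconditioned) bitwise CRC-32C update; the caller applies the
 * ~initial/~final conditioning.  Stand-in for COMP_CRC32C(). */
static uint32_t
crc32c_update(uint32_t crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    while (len--)
    {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
    }
    return crc;
}

/*
 * Hypothetical split of a 20-byte header CRC: the 12 bytes that do not
 * depend on the insert position are folded in before the lock; only
 * xl_prev (8 bytes) is processed while holding it.  This assumes the
 * record layout puts xl_prev last in CRC order.
 */
static uint32_t
crc_header_split(const unsigned char *header12, const unsigned char *xl_prev)
{
    uint32_t crc = 0xffffffffu;

    /* ... done before acquiring the WAL insert lock ... */
    crc = crc32c_update(crc, header12, 12);

    /* ... under the lock, only 8 bytes remain ... */
    crc = crc32c_update(crc, xl_prev, 8);
    return ~crc;
}
```

Because the chaining is exact, the only behavioural change would come from reordering which bytes the CRC covers, which is why CRCs could differ across major versions.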
+ * For This Function: + * Copyright 2015 The Chromium Authors I went and looked at the Chromium source, and found the following snippet that uses the same technique, but only requires 128-bit CLMUL and has a minimum input size of 64 bytes, rather than 256. This seems like it might be better suited for shorter inputs. Also seems much easier than trying to get the AVX-512 hippo to dance. It uses the IEEE polynomial, so would need new constants calculated for ours, but that had to be done for the shared patch, too. https://github.com/chromium/chromium/blob/main/third_party/zlib/crc32_simd.c#L215 -- John Naylor Amazon Web Services
Hi, On 2024-12-12 18:32:20 +0700, John Naylor wrote: > I went and looked at the Chromium source, and found the following > snippet that uses the same technique, but only requires 128-bit CLMUL > and has a minimum input size of 64 bytes, rather than 256. This seems > like it might be better suited for shorter inputs. Also seems much > easier than trying to get the AVX-512 hippo to dance. It uses the IEEE > polynomial, so would need new constants calculated for ours, but that > had to be done for the shared patch, too. Frankly, we should just move away from using CRCs. They're good for cases where short runs of bit flips are much more likely than other kinds of errors and where the amount of data covered by them has a low upper bound. That's not at all the case for WAL records. It'd not matter too much if CRCs were cheap to compute - but they aren't. We should instead move to some more generic hashing algorithm, decent ones are much faster. Greetings, Andres
On Thu, Dec 12, 2024 at 10:45:29AM -0500, Andres Freund wrote: > Frankly, we should just move away from using CRCs. They're good for cases > where short runs of bit flips are much more likely than other kinds of errors > and where the amount of data covered by them has a low upper bound. That's not > at all the case for WAL records. It'd not matter too much if CRCs were cheap > to compute - but they aren't. We should instead move to some more generic > hashing algorithm, decent ones are much faster. Upthread [0], I wondered aloud about trying to reuse the page checksum code for this. IIRC there was a lot of focus on performance when that was added, and IME it catches problems decently well. [0] https://postgr.es/m/ZrUcX2kq-0doNBea%40nathan -- nathan
On Fri, 13 Dec 2024 at 00:14, Nathan Bossart <nathandbossart@gmail.com> wrote: > > On Thu, Dec 12, 2024 at 10:45:29AM -0500, Andres Freund wrote: > > Frankly, we should just move away from using CRCs. They're good for cases > > where short runs of bit flips are much more likely than other kinds of errors > > and where the amount of data covered by them has a low upper bound. That's not > > at all the case for WAL records. It'd not matter too much if CRCs were cheap > > to compute - but they aren't. We should instead move to some more generic > > hashing algorithm, decent ones are much faster. > > Upthread [0], I wondered aloud about trying to reuse the page checksum code > for this. IIRC there was a lot of focus on performance when that was > added, and IME it catches problems decently well. > > [0] https://postgr.es/m/ZrUcX2kq-0doNBea%40nathan It was carefully built to allow compiler auto-vectorization for power of 2 block sizes to run fast on any CPU that has fast vectorized 32 bit multiplication instructions. Performance is great, if compiled with -march=native it gets 15.8 bytes/cycle on Zen 3. Compared to 19.5 for t1ha0_aes_avx2, 7.9 for aes-ni hash, and 2.15 for fasthash32. However, it isn't particularly good for small (<1K) blocks both for hash quality and performance reasons. One idea would be to use fasthash for short lengths and an extended version of the page checksum for larger values. But before committing to that approach, I think revisiting the quality of the page checksum algorithm is due. Quality and robustness were not the highest priorities when developing it. -- Ants Aasma Lead Database Consultant www.cybertec-postgresql.com
On Thu, Jun 13, 2024 at 2:37 AM Andres Freund <andres@anarazel.de> wrote:
>
> It's hard to understand, but a nonetheless helpful page is
> https://users.ece.cmu.edu/~koopman/crc/crc32.html which lists properties for
> crc32c:
> https://users.ece.cmu.edu/~koopman/crc/c32/0x8f6e37a0_len.txt
> which lists
> (0x8f6e37a0; 0x11edc6f41) <=> (0x82f63b78; 0x105ec76f1) {2147483615,2147483615,5243,5243,177,177,47,47,20,20,8,8,6,6,1,1}| gold | (*op) iSCSI; CRC-32C; CRC-32/4
>
> This cryptic notion AFAIU indicates that for our polynomial we can detect 2bit
> errors up to a length of 2147483615 bytes, 3 bit errors up to 2147483615, 3
> and 4 bit errors up to 5243, 5 and 6 bit errors up to 177, 7/8 bit errors up
> to 47.

One aspect of that cryptic notation that you seem to have missed is "(*op)", explained as:

*p - primitive polynomial. This has optimal length for HD=3, and good HD=2 performance above that length.
*o - odd bit errors detected. This has a factor of (x+1) and detects all odd bit errors (implying that an even number of bit errors has an elevated undetected error rate).
*op - odd bit errors detected plus primitive. This is a primitive polynomial times (x+1). It has optimal length for HD=4, and detects all odd bit errors.

This means it's not really a 32-bit checksum -- it's a 1-bit checksum plus a 31-bit checksum. The 1-bit checksum can detect any odd number of bit-flips. Do we really want to throw that property away? Sure, for an even number of bitflips beyond a small number, we're left with the luck of ordinary collisions, where CRC is not particularly great; but for two messages of the same length, I'm also not sure it's all that bad, either.

--
John Naylor
Amazon Web Services
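The parity property John describes can be verified mechanically. CRC is linear, so for two equal-length messages the XOR of their CRCs depends only on the XOR of the messages, and because the Castagnoli polynomial has (x+1) as a factor, that XOR always has the same popcount parity as the set of flipped bits. A table-free (and deliberately slow) bitwise CRC-32C sketch to check this; the function and test inputs are mine, not from the thread:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78,
    # with the standard init/final xor of 0xFFFFFFFF.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C over "123456789".
assert crc32c(b"123456789") == 0xE3069283

msg = bytes(bytearray(b"A" * 512))

# Odd number of bit flips: the XOR of the two CRCs has odd popcount,
# so the checksum is guaranteed to change.
flipped = bytes([msg[0] ^ 0b00000111]) + msg[1:]      # 3 bits flipped
diff = crc32c(msg) ^ crc32c(flipped)
assert bin(diff).count("1") % 2 == 1

# Even number of bit flips: the 1-bit part is blind, and only the
# remaining 31 bits stand between us and a collision.
flipped2 = bytes([msg[0] ^ 0b00000011]) + msg[1:]     # 2 bits flipped
diff2 = crc32c(msg) ^ crc32c(flipped2)
assert bin(diff2).count("1") % 2 == 0
```

The same check works for any equal-length pair, since the init/final xor constants cancel in the CRC difference.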
Hi,

On 2024-12-14 12:08:57 +0700, John Naylor wrote:
> On Thu, Jun 13, 2024 at 2:37 AM Andres Freund <andres@anarazel.de> wrote:
> >
> > It's hard to understand, but a nonetheless helpful page is
> > https://users.ece.cmu.edu/~koopman/crc/crc32.html which lists properties for
> > crc32c:
> > https://users.ece.cmu.edu/~koopman/crc/c32/0x8f6e37a0_len.txt
> > which lists
> > (0x8f6e37a0; 0x11edc6f41) <=> (0x82f63b78; 0x105ec76f1) {2147483615,2147483615,5243,5243,177,177,47,47,20,20,8,8,6,6,1,1}| gold | (*op) iSCSI; CRC-32C; CRC-32/4
> >
> > This cryptic notion AFAIU indicates that for our polynomial we can detect 2bit
> > errors up to a length of 2147483615 bytes, 3 bit errors up to 2147483615, 3
> > and 4 bit errors up to 5243, 5 and 6 bit errors up to 177, 7/8 bit errors up
> > to 47.
>
> One aspect of that cryptic notation that you seemed to have missed is
> "(*op)" -- explained as:
>
> *p - primitive polynomial. This has optimal length for HD=3, and good
> HD=2 performance above that length.
> *o - odd bit errors detected. This has a factor of (x+1) and detects
> all odd bit errors (implying that even number of bit errors have an
> elevated undetected error rate)
> *op - odd bit errors detected plus primitive. This is a primitive
> polynomial times (x+1). It has optimal length for HD=4, and detects
> all odd bit errors.
>
> This means it's not really a 32-bit checksum -- it's a 1-bit checksum
> plus a 31-bit checksum. The 1-bit checksum can detect any odd number
> of bit-flips. Do we really want to throw that property away?

I think it's pretty much irrelevant for our usecase.

What the WAL checksum needs to protect against are cases like a record spanning >1 disk sectors or >1 OS pages and one of those sectors/pages not having made it to disk, while the rest has made it (and thus shows old contents).
That means we have to detect runs of "wrong content" that are *never* in the single-bit range (since sector boundaries never fall within a bit), and *never* within a 4-byte range (because that's what we IIRC align records to, and again, sector boundaries don't fall within aligned 4-byte quantities).

Because the likely causes of failure are parts of the correct record followed by a tail, or an intermittent long chunk (>= 1 sector), of wrong content, detecting a certain number of bit flips just doesn't help.

Bit flips are an important thing to detect and correct when they are something that can happen in isolation, e.g. a bunch of interference in an ethernet cable, or the charge in an individual flash cell being a tiny bit above/below some threshold. But that's just not what we have with WAL.

It's also worth noting that just about *all* permanent storage already applies sector-level checksums, protecting against (and correcting) bit flips at that level.

> Sure, for an even number bitflips beyond a small number, we're left
> with the luck ordinary collisions, and CRC is not particularly great,

I.e. just about *all* failure scenarios for WAL.

> but for two messages of the same length, I'm also not sure it's all
> that bad, either

Our records rarely have the same length, no?

Greetings,

Andres Freund
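To make the failure model concrete, here is a toy torn-write simulation using a slow bitwise CRC-32C reference implementation. The record layout, sector size, and contents are invented for illustration; the stale sector is deliberately chosen to differ in an odd number of bits so the assertion is provable, though a real stale sector is just as likely to differ in an even number.

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

SECTOR = 512

# A hypothetical WAL record spanning two sectors.
new_record = b"\xa5" * SECTOR + b"\x5a" * SECTOR
stored_crc = crc32c(new_record)                 # CRC written with the record

# Torn write: the first sector reached disk, the second still holds old
# contents. The difference is hundreds of bytes, not a few bit flips --
# this is the scenario the WAL checksum has to catch.
old_second_sector = b"\x07" + b"\x00" * (SECTOR - 1)
torn = new_record[:SECTOR] + old_second_sector

# Here the stale sector differs from the new one in 2049 bits (odd), so
# the (x+1) factor guarantees a mismatch. For an even-sized difference,
# detection falls back on the other 31 bits (~1 - 2**-32), which is
# Andres's point: parity rarely helps for sector-sized corruption.
assert crc32c(torn) != stored_crc
```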
On Sat, Dec 14, 2024 at 10:24 PM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2024-12-14 12:08:57 +0700, John Naylor wrote:
> > On Thu, Jun 13, 2024 at 2:37 AM Andres Freund <andres@anarazel.de> wrote:
> > >
> > > It's hard to understand, but a nonetheless helpful page is
> > > https://users.ece.cmu.edu/~koopman/crc/crc32.html which lists properties for
> > > crc32c:
> > > https://users.ece.cmu.edu/~koopman/crc/c32/0x8f6e37a0_len.txt
> > > which lists
> > > (0x8f6e37a0; 0x11edc6f41) <=> (0x82f63b78; 0x105ec76f1) {2147483615,2147483615,5243,5243,177,177,47,47,20,20,8,8,6,6,1,1}| gold | (*op) iSCSI; CRC-32C; CRC-32/4
> > >
> > > This cryptic notion AFAIU indicates that for our polynomial we can detect 2bit
> > > errors up to a length of 2147483615 bytes, 3 bit errors up to 2147483615, 3
> > > and 4 bit errors up to 5243, 5 and 6 bit errors up to 177, 7/8 bit errors up
> > > to 47.
> >
> > One aspect of that cryptic notation that you seemed to have missed is
> > "(*op)" -- explained as:
> >
> > *p - primitive polynomial. This has optimal length for HD=3, and good
> > HD=2 performance above that length.
> > *o - odd bit errors detected. This has a factor of (x+1) and detects
> > all odd bit errors (implying that even number of bit errors have an
> > elevated undetected error rate)
> > *op - odd bit errors detected plus primitive. This is a primitive
> > polynomial times (x+1). It has optimal length for HD=4, and detects
> > all odd bit errors.
> >
> > This means it's not really a 32-bit checksum -- it's a 1-bit checksum
> > plus a 31-bit checksum. The 1-bit checksum can detect any odd number
> > of bit-flips. Do we really want to throw that property away?
>
> I think it's pretty much irrelevant for our usecase.
>
> What the WAL checksum needs to protect against are cases like a record
> spanning >1 disk sectors or >1 OS pages and one of those sectors/pages not
> having made it to disk, while the rest has made it (and thus shows old
> contents).
>
> That means we have to detect runs of "wrong content" that are *never* in the
> single bit range (since sector boundaries never fall within a bit), *never*
> within a 4 byte range (because that's what we IIRC align records to, and
> again, sector boundaries don't fall within aligned 4 byte quantities).
>
> Because the likely causes of failure are parts of the correct record and then
> a tail or an intermittent long chunk (>= 1 sector) of wrong content, detecting
> certain number of bit flips just doesn't help.

Granted, but my point was: if a sector of wrong content is wrong by an odd number of bits, the 1-bit part of the checksum will always catch it. Every bit flip causes the popcount of the result to flip from even to odd (or vice versa), so the odd case can never collide:

--original
select crc32c(repeat('A', 512)::bytea);
   crc32c
------------
 3817965270

select bit_count(b'11100011100100011000011011010110') % 2;
 ?column?
----------
        0

--odd number of bitflips
select crc32c(('A' || repeat('C', 511))::bytea);
  crc32c
-----------
 113262028

select bit_count(b'110110000000011110111001100') % 2;
 ?column?
----------
        1

--even number of bitflips
select crc32c(('A' || repeat('B', 511))::bytea);
   crc32c
------------
 1953030209

select bit_count(b'1110100011010001110000001000001') % 2;
 ?column?
----------
        0

If the number of bitflips is even, then the 1-bit part tells us nothing, and the guarantees of the 31-bit part will not help the WAL case, for the reasons you describe. So as I understand it, the trade-off for WAL error detection is:

CRC
  odd number of bitflips: detected 100% of the time
  even number of bitflips: the collision-avoidance probability of a mediocre hash function

good hash function
  odd number of bitflips: the collision-avoidance probability of a good hash function
  even number of bitflips: the collision-avoidance probability of a good hash function

Stated this way, it's possible we don't have the best solution, but it's also not immediately obvious to me that the second way is so much better that it's worth the effort to change it.
If we did go to a hash function, it'd be ideal to have the collision guarantees of an "almost universal" hash function. For any two messages of length at most n, the claimed probability of collision is at most, for example:

VHASH [1]: n * 2**-61
CLHASH [1]: 2.0004 * 2**-64 (for same-length strings)
umash [2]: ceil(n / 4096) * 2**-55
polymur hash [3]: n * 2**-60.2

...but these are all 64-bit hashes, and some have further traits that make them impractical for us. I'm not aware of any 32-bit universal hashes. If there were, the bound might be

n * 2**-(31 or less?)

...which, for n = 8192 and larger, is starting not to look as good. But for a normal hash function, we only have statistical tests, which are only practical for small lengths.

> It's also worth noting that just about *all* permanent storage already has
> applied sector-level checksums, protecting against (and correcting) bit flips
> at that level.

Sure.

> > but for two messages of the same length, I'm also not sure it's all
> > that bad, either
>
> Our records rarely have the same length, no?

Right, I failed to consider the case where the length is in the garbled part of the message.

[1] https://arxiv.org/pdf/1503.03465
[2] https://github.com/backtrace-labs/umash
[3] https://github.com/orlp/polymur-hash

--
John Naylor
Amazon Web Services
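Plugging a WAL-sized message into the bounds John quotes makes the gap concrete. The arithmetic below just evaluates those formulas for n = 8192 bytes (the maximum buffer size mentioned upthread); the "hypothetical 32-bit" entry evaluates the guessed n * 2**-31 bound and is not a real hash:

```python
import math

n = 8192  # message length in bytes

bounds = {
    "VHASH": n * 2**-61,
    "CLHASH": 2.0004 * 2**-64,
    "umash": math.ceil(n / 4096) * 2**-55,
    "polymur": n * 2**-60.2,
    "hypothetical 32-bit": n * 2**-31,
}

for name, p in bounds.items():
    # Express each bound as a power of two for easy comparison.
    print(f"{name:20s} <= 2**{math.log2(p):6.1f}")
```

The 64-bit bounds land between roughly 2**-63 (CLHASH) and 2**-47 (polymur), while the hypothetical 32-bit bound degrades to about 2**-18 at this length, which is the "starting not to look as good" point above.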