Thread: Extending amcheck to check toast size and compression
Hackers,

During the version 14 development period, a few checks of toasted attributes were written but never committed. For the version 15 development cycle, I'd like to consider extending the checks of toasted attributes. First, no toasted attribute should ever have a rawsize larger than the 1GB varlena limit. Second, no compressed toasted attribute should have an extsize indicating that the toast expanded during toasting. Such an extsize could mean the compression code is malfunctioning, or that the extsize or rawsize fields are corrupt. Third, any compressed attribute should have a valid compression method ID.

These checks are cheap. Actually retrieving the compressed toasted data and checking that it uncompresses correctly would have very different performance implications, but that is not included in this patch.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
+	/* Oversized toasted attributes should never be stored */
+	if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)
+		report_corruption(ctx,
+						  psprintf("toast value %u rawsize %u exceeds limit %u",
+								   toast_pointer.va_valueid,
+								   toast_pointer.va_rawsize,
+								   VARLENA_SIZE_LIMIT));
+

I think the comment sounds wrong since toast is precisely for storage of
"oversized" attributes.

https://www.postgresql.org/docs/current/storage-toast.html
| This section provides an overview of TOAST (The Oversized-Attribute Storage Technique).

--
Justin
> On May 4, 2021, at 9:43 AM, Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> +	/* Oversized toasted attributes should never be stored */
> +	if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)
> +		report_corruption(ctx,
> +						  psprintf("toast value %u rawsize %u exceeds limit %u",
> +								   toast_pointer.va_valueid,
> +								   toast_pointer.va_rawsize,
> +								   VARLENA_SIZE_LIMIT));
> +
>
> I think the comment sounds wrong since toast is precisely for storage of
> "oversized" attributes.
>
> https://www.postgresql.org/docs/current/storage-toast.html
> | This section provides an overview of TOAST (The Oversized-Attribute Storage Technique).

Thanks for reviewing! Changed to:

+	/* Toasted attributes too large to be untoasted should never be stored */

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
The following review has been posted through the commitfest application:

make installcheck-world:  tested, failed
Implements feature:       tested, failed
Spec compliant:           tested, failed
Documentation:            tested, failed

This patch looks good to me. Considering a positive response from another reviewer, status change to "Ready for Committer" seems to be appropriate.

The new status of this patch is: Ready for Committer
Hi hackers,

> make installcheck-world:  tested, failed
> Implements feature:       tested, failed
> Spec compliant:           tested, failed
> Documentation:            tested, failed

Very sorry about these "failed" checkboxes. Didn't use the commitfest webapp for a while. The patch is fine.

> The new status of this patch is: Ready for Committer

--
Best regards,
Aleksander Alekseev
> @@ -30,6 +30,9 @@ PG_FUNCTION_INFO_V1(verify_heapam);
>  /* The number of columns in tuples returned by verify_heapam */
>  #define HEAPCHECK_RELATION_COLS 4
>
> +/* The largest valid toast va_rawsize */
> +#define VARLENA_SIZE_LIMIT 0x3FFFFFFF
> +

Hmm, a toasted datum cannot be larger than MaxAllocSize, because it's reconstituted in a palloc'd datum, right?

- Heikki
> On Jul 14, 2021, at 3:33 AM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>
>> +/* The largest valid toast va_rawsize */
>> +#define VARLENA_SIZE_LIMIT 0x3FFFFFFF
>> +
>
> Hmm, a toasted datum cannot be larger than MaxAllocSize, because it's reconstituted in a palloc'd datum, right?

No datum size exceeds MaxAllocSize, and no datum expands when compressed (because for those that do expand under any particular compression algorithm, we opt to instead store the datum uncompressed), so no valid toast pointer should contain a va_rawsize field greater than MaxAllocSize. Any toast pointers that have larger va_rawsize fields are therefore corrupt.

VARLENA_SIZE_LIMIT is defined here equal to MaxAllocSize:

src/include/utils/memutils.h:#define MaxAllocSize	((Size) 0x3fffffff) /* 1 gigabyte - 1 */

Earlier versions of the patch used MaxAllocSize rather than defining VARLENA_SIZE_LIMIT, but review comments suggested that was less clear.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
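The arithmetic behind this equality: a varlena va_header reserves 2 bits for flags, leaving 30 bits for the size, so the largest representable size is 2^30 - 1 = 0x3FFFFFFF bytes, which is exactly MaxAllocSize. Purely as an illustrative sketch (this is not part of the patch), the two constants could be tied together with PostgreSQL's StaticAssertDecl macro from c.h:

	/* Illustrative only: keep VARLENA_SIZE_LIMIT in sync with the 30-bit field */
	StaticAssertDecl(VARLENA_SIZE_LIMIT == (1U << 30) - 1,
					 "VARLENA_SIZE_LIMIT must equal the largest 30-bit varlena size");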
> On Jul 14, 2021, at 7:57 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:
>
> so no valid toast pointer should contain a va_rawsize field greater than MaxAllocSize

... nor should any valid toast pointer contain a va_extinfo field encoding a va_extsize greater than va_rawsize - VARHDRSZ.

Violations of either of these properties suggest either a bug in the code which wrote the toast pointer, or that the toast pointer has been corrupted since being written, or that the page of data being read is being interpreted incorrectly, perhaps due to catalog corruption, or because the page is just random noise and not part of a valid table, etc. The amcheck code is not focused specifically on whether the toasted value can be detoasted so much as deducing that the data cannot be correct.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
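The va_extsize property stated here translates almost directly into a check in the style of the rawsize check quoted earlier in the thread. The following is only a rough sketch of that idea (the committed wording and placement may differ), assuming PostgreSQL 14's VARATT_EXTERNAL_GET_EXTSIZE macro and verify_heapam's report_corruption helper:

	/* Sketch: the stored external size should never exceed the raw size */
	if (VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) >
		toast_pointer.va_rawsize - VARHDRSZ)
		report_corruption(ctx,
						  psprintf("toast value %u external size %u exceeds maximum expected for rawsize %d",
								   toast_pointer.va_valueid,
								   VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer),
								   toast_pointer.va_rawsize));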
Right so here's a review.

I think the patch is committable as is. It's an improvement and it does the job as promised. I do have some comments but I don't think they're serious issues and would actually be pretty happy committing it as is. Fwiw I didn't realize how short the patch was at first and it probably doesn't need yet another review.

/* Toasted attributes too large to be untoasted should never be stored */
if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)

1) I know this used to say MaxAlloc -- personally I would probably go with that but either is fine. But if you're going to have a separate constant there could be more of a comment explaining why that's the maximum -- probably with a pointer to MaxAlloc and postgres.h's VARSIZE macros.

The switch statement at line 1443 seems a bit ... baroque. Is it clearer than a simple "if (cmid != TOAST_PGLZ_COMPRESSION_ID && cmid != TOAST_LZ4_COMPRESSION_ID)"? I mean, I see this is easier to add more cases to, but I found dealing with a case that falls through and no default a lot of cognitive overhead to understand what is in the end just effectively a simple branch.

Fwiw compilers aren't always the best at optimizing switch statements. It's entirely possible a compiler may end up building a whole lookup table of jumps for this thing. Not that it's performance critical but ... But all that's more words than necessary for a minor style comment.

Fwiw I spent a few minutes thinking about and writing up this suggestion and then only afterwards realized the code in question wasn't from this patch. I'll mention it anyways but it's not relevant to this patch review I guess :)

I found the whole expected_chunk_seq parameter thing a bit confusing and less useful than possible. I would instead suggest:

Allocate an array of the expected number of chunk numbers before calling check_toast_tuple and then just gather the chunk_seq that are returned. When it's finished you can do things like: a) Check if they're all ascending and report index corruption if not. b) Check if any numbers are missing and report which ones are missing and/or how many. c) Check if there are duplicates and report that. These would all be easier for a user to interpret than "index scan returned chunk 5 when expecting chunk 9". (A rough sketch of this idea appears after this message.)

On Tue, 4 May 2021 at 12:20, Mark Dilger <mark.dilger@enterprisedb.com> wrote:
>
> Hackers,
>
> During the version 14 development period, a few checks of toasted attributes were written but never committed. For the version 15 development cycle, I'd like to consider extending the checks of toasted attributes. First, no toasted attribute should ever have a rawsize larger than the 1GB varlena limit. Second, no compressed toasted attribute should have an extsize indicating that the toast expanded during toasting. Such an extsize could mean the compression code is malfunctioning, or that the extsize or rawsize fields are corrupt. Third, any compressed attribute should have a valid compression method ID.
>
> These checks are cheap. Actually retrieving the compressed toasted data and checking that it uncompresses correctly would have very different performance implications, but that is not included in this patch.
>
> —
> Mark Dilger
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

--
greg
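A rough illustration of the gathering idea from the review above (hypothetical names; this is not code from the patch, and, as Mark notes in the reply below, a real implementation would also have to cope with the scan returning more chunks than expected):

	/* Hypothetical sketch: validate gathered chunk_seq values in one pass */
	int		   *seen = palloc(expected_chunks * sizeof(int));
	int			nseen = 0;

	/* ... the toast index scan appends each returned chunk_seq to seen[] ... */

	for (int i = 1; i < nseen; i++)
	{
		if (seen[i] == seen[i - 1])
			report_corruption(ctx,
							  psprintf("toast value has duplicate chunk %d", seen[i]));
		else if (seen[i] < seen[i - 1])
			report_corruption(ctx,
							  psprintf("toast index returned chunk %d after chunk %d",
									   seen[i], seen[i - 1]));
	}
	if (nseen != expected_chunks)
		report_corruption(ctx,
						  psprintf("toast value has %d chunks where %d were expected",
								   nseen, expected_chunks));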
> On Oct 19, 2021, at 1:58 PM, Greg Stark <stark@mit.edu> wrote:
>
> Right so here's a review.
>
> I think the patch is committable as is. It's an improvement and it
> does the job as promised. I do have some comments but I don't think
> they're serious issues and would actually be pretty happy committing
> it as is. Fwiw I didn't realize how short the patch was at first and
> it probably doesn't need yet another review.

Thanks for reviewing!

> /* Toasted attributes too large to be untoasted should never be stored */
> if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)
>
> 1) I know this used to say MaxAlloc -- personally I would probably go
> with that but either is fine. But if you're going to have a separate
> constant there could be more of a comment explaining why that's the
> maximum -- probably with a pointer to MaxAlloc and postgres.h's
> VARSIZE macros.

I find the comment a bit verbose that way, but maybe people like it better? How does this look:

diff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c
index 774a70f63d..988e104d8e 100644
--- a/contrib/amcheck/verify_heapam.c
+++ b/contrib/amcheck/verify_heapam.c
@@ -30,7 +30,11 @@ PG_FUNCTION_INFO_V1(verify_heapam);
 /* The number of columns in tuples returned by verify_heapam */
 #define HEAPCHECK_RELATION_COLS 4
 
-/* The largest valid toast va_rawsize */
+/*
+ * The largest valid toast va_rawsize.  This is the same as the MaxAllocSize
+ * constant from memutils.h, and is the largest size that can fit in a varlena
+ * va_header's 30-bit size field.
+ */
 #define VARLENA_SIZE_LIMIT 0x3FFFFFFF
 
 /*

> The switch statement at line 1443 seems a bit ... baroque. Is it
> clearer than a simple "if (cmid != TOAST_PGLZ_COMPRESSION_ID && cmid !=
> TOAST_LZ4_COMPRESSION_ID)"? I mean, I see this is easier to add more
> cases to, but I found dealing with a case that falls through and no
> default a lot of cognitive overhead to understand what is in the
> end just effectively a simple branch.

The variable cmid (which stands for compression method identifier) is of enum type ToastCompressionId. From toast_compression.h:

typedef enum ToastCompressionId
{
	TOAST_PGLZ_COMPRESSION_ID = 0,
	TOAST_LZ4_COMPRESSION_ID = 1,
	TOAST_INVALID_COMPRESSION_ID = 2
} ToastCompressionId;

There is clearly room for one more compression algorithm in that list without overflowing the 2 bits reserved for such values, and I'd not like to gamble on some future hacker who adds TOAST_MY_FANCY_COMPRESSION_ID = 3 remembering to update contrib/amcheck. I used a switch statement to trigger a compiler warning in such an event.

> Fwiw compilers aren't always the best at optimizing switch statements.
> It's entirely possible a compiler may end up building a whole lookup
> table of jumps for this thing. Not that it's performance critical but
> ...

That may be a fair argument, but I'm a huge fan of using enums and switch statements to elicit the compiler's help in future modifications to the code. This is the first time I've heard a complaint of this sort and I'm unsure how to respond. How common is this optimization problem on modern compilers?

> But all that's more words than necessary for a minor style comment.

Ok.

> Fwiw I spent a few minutes thinking about and writing up this
> suggestion and then only afterwards realized the code in question
> wasn't from this patch. I'll mention it anyways but it's not relevant
> to this patch review I guess :)

Sure, we can discuss it.
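For readers following along, the switch under discussion has roughly the following shape. This is a sketch reconstructed from the discussion and the comment diff later in the thread, not a quote of the patch, and it assumes PostgreSQL 14's VARATT_EXTERNAL_GET_COMPRESS_METHOD macro; the committed code may differ in details:

	ToastCompressionId cmid;
	bool		valid = false;

	cmid = VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer);
	switch (cmid)
	{
			/* List of all valid compression method IDs */
		case TOAST_PGLZ_COMPRESSION_ID:
		case TOAST_LZ4_COMPRESSION_ID:
			valid = true;
			break;

			/* Recognized but invalid compression method ID */
		case TOAST_INVALID_COMPRESSION_ID:
			break;

			/* Intentionally no default here */
	}
	if (!valid)
		report_corruption(ctx,
						  psprintf("toast value %u has invalid compression method id %d",
								   toast_pointer.va_valueid, cmid));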
> I found the whole expected_chunk_seq parameter thing a bit confusing
> and less useful than possible. I would instead suggest:
>
> Allocate an array of the expected number of chunk numbers before
> calling check_toast_tuple and then just gather the chunk_seq that are
> returned.

Of course, you might get more chunks back than you expected, and overflow your array. But assuming you realloc, and assuming the checker avoids going into an infinite loop, that is one option.

> When it's finished you can do things like: a) Check if
> they're all ascending and report index corruption if not. b) Check if
> any numbers are missing and report which ones are missing and/or how
> many. c) Check if there are duplicates and report that. These would
> all be easier for a user to interpret than "index scan returned chunk
> 5 when expecting chunk 9".

This was reworked multiple times. The problem is how to think about the missing or extra chunks. One interpretation is that the chunks themselves are corrupt, but another interpretation is that the toast index is corrupt and causing the index scan over the toast table to visit the same chunk multiple times, or in the wrong order, etc. The index scan itself might bomb out with a segfault, or go into an infinite loop. It's hard to predict such things in the face of corruption, especially when considering that the index scan code might be modified in the future. I'm not claiming there is no room for improvement here -- likely there is -- but it is not simple, and the patch that would result would be larger than the patch actually being reviewed. I'd rather leave such a project for another day.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com> wrote:

> I used a switch statement to trigger a compiler warning in such an event.

Catching better compiler diagnostics is an excellent reason to choose this structure. I guess all I could ask is that the comment saying there's no default branch also say this is the motivation.
> On Oct 20, 2021, at 11:42 AM, Greg Stark <stark@mit.edu> wrote:
>
> On Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com> wrote:
>
>> I used a switch statement to trigger a compiler warning in such an event.
>
> Catching better compiler diagnostics is an excellent reason to choose this structure. I guess all I could ask is that the comment saying there's no default branch also say this is the motivation.

Ok. How about:

diff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c
index 774a70f63d..9500f43bc9 100644
--- a/contrib/amcheck/verify_heapam.c
+++ b/contrib/amcheck/verify_heapam.c
@@ -30,7 +30,11 @@ PG_FUNCTION_INFO_V1(verify_heapam);
 /* The number of columns in tuples returned by verify_heapam */
 #define HEAPCHECK_RELATION_COLS 4
 
-/* The largest valid toast va_rawsize */
+/*
+ * The largest valid toast va_rawsize.  This is the same as the MaxAllocSize
+ * constant from memutils.h, and is the largest size that can fit in a varlena
+ * va_header's 30-bit size field.
+ */
 #define VARLENA_SIZE_LIMIT 0x3FFFFFFF
 
 /*
@@ -1452,7 +1456,11 @@ check_tuple_attribute(HeapCheckContext *ctx)
 		case TOAST_INVALID_COMPRESSION_ID:
 			break;
 
-			/* Intentionally no default here */
+			/*
+			 * Intentionally no default here.  We want the compiler to warn if
+			 * new compression methods are added to the ToastCompressionId enum
+			 * but not handled in our switch.
+			 */
 	}
 	if (!valid)
 		report_corruption(ctx,

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
> On Oct 20, 2021, at 12:06 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:
>
> Ok. How about:

Done that way.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
On Wed, Nov 3, 2021 at 6:56 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:
> Done that way.

I agree with what others have said: this looks fine. But, is it plausible to add test coverage for the new checks, or is that going to be too much of a pain?

--
Robert Haas
EDB: http://www.enterprisedb.com
> On Nov 4, 2021, at 7:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> But, is it plausible to add test coverage for the new checks, or is
> that going to be too much of a pain?

It only takes about 20 additional lines in the regression test to check the code paths for raw sizes which are too large and too small, so I've done that in this next version. Testing corrupt compressed data in a deterministic, cross-platform manner with a compact, easy to maintain regression test has eluded me and is not included here.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
On Thu, Nov 4, 2021 at 6:58 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:
> It only takes about 20 additional lines in the regression test to check the code paths for raw sizes which are too large and too small, so I've done that in this next version. Testing corrupt compressed data in a deterministic, cross-platform manner with a compact, easy to maintain regression test has eluded me and is not included here.

OK, I've committed this version.

--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
> OK, I've committed this version.

Some of the buildfarm is unimpressed with this --- looks like the test output is less stable than you thought.

			regards, tom lane
> On Nov 5, 2021, at 8:56 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Some of the buildfarm is unimpressed with this --- looks like the test
> output is less stable than you thought.

Yes, it does. I had to play with it a bit to be sure the test itself is faulty, and I believe that it is.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company