Re: Vacuumdb Fails: Huge Tuple - Mailing list pgsql-general

From Tom Lane
Subject Re: Vacuumdb Fails: Huge Tuple
Date
Msg-id 14606.1254518615@sss.pgh.pa.us
In response to Re: Vacuumdb Fails: Huge Tuple  (Teodor Sigaev <teodor@sigaev.ru>)
List pgsql-general
Teodor Sigaev <teodor@sigaev.ru> writes:
> ginHeapTupleFastCollect and ginEntryInsert both check the tuple's size
> against TOAST_INDEX_TARGET, but ginHeapTupleFastCollect does the check
> without counting one ItemPointer, whereas ginEntryInsert includes it. So
> ginHeapTupleFastCollect could produce a tuple 6 bytes larger than
> ginEntryInsert allows. ginEntryInsert is called during pending list
> cleanup.

I applied this patch after improving the error reporting a bit --- but
I was unable to get the unpatched code to fail in vacuum the way the OP
reported it failing for him.  It looks to me like the original coding
limits the tuple size to TOAST_INDEX_TARGET (512 bytes) during
collection, but checks only the much larger GinMaxItemSize limit during
final insertion.  So while this is a good cleanup, I am suspicious that
it may not actually explain the trouble report.

I notice that the complaint was about a VACUUM FULL not a plain VACUUM,
which means that the vacuum would have been moving tuples around and
hence inserting brand new index entries.  Is there any possible way that
we could extract a larger index tuple from a moved row than we had
extracted from the original version?

It would be nice to see an actual test case that makes 8.4 fail this way
...

            regards, tom lane
