> APseudoUtopia <apseudoutopia@gmail.com> writes:
>>> Here's what happened:
>>>
>>> $ vacuumdb --all --full --analyze --no-password
>>> vacuumdb: vacuuming database "postgres"
>>> vacuumdb: vacuuming database "web_main"
>>> vacuumdb: vacuuming of database "web_main" failed: ERROR:  huge tuple
>
>> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
>> 4.2.1 20070719 [FreeBSD], 32-bit
Please apply the attached patch. It increases the maximum size from approximately
500 bytes to 2700 bytes, so vacuum will be able to finish.
> This is evidently coming out of ginHeapTupleFastCollect because it's
> formed a GIN tuple that is too large (either too long a word, or too
> many postings, or both). I'd say that this represents a serious
> degradation in usability from pre-8.4 releases: before, you would have
> gotten the error upon attempting to insert the table row that triggers
> the problem. Now, with the "fast insert" stuff, you don't find out
> until VACUUM fails, and you have no idea where the bad data is. Not cool.
>
> Oleg, Teodor, what can we do about this? Can we split an oversize
> tuple into multiple entries? Can we apply suitable size checks
> before instead of after the fast-insert queue?
Both ginHeapTupleFastCollect and ginEntryInsert checked the tuple size against
TOAST_INDEX_TARGET, but ginHeapTupleFastCollect performs the check without
counting one ItemPointer, while ginEntryInsert includes it. So
ginHeapTupleFastCollect could produce a tuple 6 bytes larger than ginEntryInsert
allows, and ginEntryInsert is what gets called during pending list cleanup.
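
Not the actual GIN code, just a minimal standalone sketch (with made-up limit
values) of how checking the size before versus after adding one more 6-byte
ItemPointer lets a tuple pass the collect-time check and then fail at cleanup:

#include <stdio.h>
#include <stdbool.h>

/* Illustrative constants only -- not the real PostgreSQL values. */
#define SIZE_LIMIT        2032   /* stand-in for TOAST_INDEX_TARGET */
#define ITEM_POINTER_SIZE 6      /* an ItemPointerData is 6 bytes on disk */

/* Collect path: checks the size *before* the extra ItemPointer is counted. */
static bool
collect_check(int key_size)
{
    return key_size <= SIZE_LIMIT;
}

/* Insert path (pending-list cleanup): checks size *plus* one ItemPointer. */
static bool
insert_check(int key_size)
{
    return key_size + ITEM_POINTER_SIZE <= SIZE_LIMIT;
}

int
main(void)
{
    int key_size = SIZE_LIMIT - 3;   /* 3 bytes under the limit */

    /* Passes at collect time ... */
    printf("collect: %s\n", collect_check(key_size) ? "ok" : "huge tuple");
    /* ... but is 6 bytes too big at cleanup time: "ERROR: huge tuple". */
    printf("insert:  %s\n", insert_check(key_size) ? "ok" : "huge tuple");
    return 0;
}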
The patch removes the TOAST_INDEX_TARGET check and checks only against
GinMaxItemSize, which is greater than TOAST_INDEX_TARGET. All size checking is
now done in GinFormTuple.
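
Again only a sketch, not the patch itself: the point is that when the check
lives in the one function every caller uses to build the tuple (the analogue of
GinFormTuple here is hypothetical), the fast-collect and insert paths cannot
disagree about what is too big.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative limit only -- stands in for GinMaxItemSize, which depends on
 * the page size in the real code. */
#define MAX_ITEM_SIZE 2700

/*
 * Hypothetical analogue of GinFormTuple: every caller goes through here, so
 * the size check exists in exactly one place and both code paths reject the
 * same tuples.
 */
static bool
form_tuple(int key_size, int n_item_pointers)
{
    int total = key_size + n_item_pointers * 6;   /* 6-byte item pointers */

    if (total > MAX_ITEM_SIZE)
    {
        fprintf(stderr, "ERROR: huge tuple (%d bytes)\n", total);
        return false;
    }
    /* ... build and return the index tuple here ... */
    return true;
}

int
main(void)
{
    form_tuple(500, 1);     /* fits */
    form_tuple(2700, 1);    /* rejected the same way from either caller */
    return 0;
}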
--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/