B-tree index row size limit - Mailing list pgsql-hackers

From: Florian Weimer
Subject: B-tree index row size limit
Date:
Msg-id: 87bmythhte.fsf@mid.deneb.enyo.de
Responses: Re: B-tree index row size limit  (Heikki Linnakangas <hlinnaka@iki.fi>)
List: pgsql-hackers
The index row size limit reared its ugly head again.

My current use of PostgreSQL is to load structured data from sources I
don't have control over, to support a wide range of queries whose
precise nature is not yet known to me.  (Is this called a data
warehouse?)

Anyway, what happens from time to time is that some data which has
been processed successfully in the past suddenly fails to load
because there happens to be a very long string in it.  I know how to
work around this (see the sketch below), but it's still annoying when
it happens, and the workarounds may make it much, much harder to
write efficient queries.
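
For illustration, the kind of workaround I have in mind looks roughly
like this (table and column names are made up): index a fixed-size
digest of the value instead of the value itself, and rewrite equality
lookups to go through the same expression.

    -- Hypothetical table standing in for the real schema.
    CREATE TABLE raw_docs (id serial PRIMARY KEY, payload text);

    -- A plain index on payload fails once a value exceeds the B-tree
    -- index row size limit, so index a digest of it instead.
    CREATE INDEX raw_docs_payload_md5_idx ON raw_docs (md5(payload));

    -- Equality lookups then have to be phrased through the same
    -- expression so the planner can use the index:
    SELECT *
      FROM raw_docs
     WHERE md5(payload) = md5('needle')
       AND payload = 'needle';

That covers equality, but anything that needs ordering or range scans
on the original column is out, which is where writing efficient
queries gets hard.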

What would it take to eliminate the B-tree index row size limit (or
rather, increase it to several hundred megabytes)?  I don't care about
performance of index-based lookups on overlong columns; I just want
to be able to load arbitrary data and index it.


