Re: Reducing tuple overhead - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Reducing tuple overhead
Msg-id: CA+TgmoYDKC0GLr42U0Xrha0ChTua29dCAr_kVXn=X8EeEDQ-0g@mail.gmail.com
In response to: Re: Reducing tuple overhead (Jim Nasby <Jim.Nasby@BlueTreble.com>)
List: pgsql-hackers
On Mon, Apr 27, 2015 at 5:01 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> The problem with just having the value is that if *anything* changes between
> how you evaluated the value when you created the index tuple and when you
> evaluate it a second time you'll corrupt your index. This is actually an
> incredibly easy problem to have; witness how we allowed indexing
> timestamptz::date until very recently. That was clearly broken, but because
> we never attempted to re-run the index expression to do vacuuming at least
> we never corrupted the index itself.
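
The `timestamptz::date` hazard Jim describes comes from the cast's hidden dependence on the session's TimeZone setting: a `timestamptz` stores an absolute instant, but which calendar date that instant falls on varies by time zone, so re-evaluating the expression in a different session can yield a different key than the one originally indexed. A minimal sketch of the underlying effect in Python (not from the thread; `zoneinfo` stands in here for Postgres's per-session TimeZone):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One fixed instant, analogous to a stored timestamptz value.
instant = datetime(2015, 4, 28, 2, 0, tzinfo=timezone.utc)

# The "date" of that instant depends on the zone it is viewed in,
# which is why an index on timestamptz::date was unsafe: two
# sessions could compute different keys for the same row.
print(instant.astimezone(timezone.utc).date())                   # 2015-04-28
print(instant.astimezone(ZoneInfo("America/New_York")).date())   # 2015-04-27
```

This is the sense in which the expression is not immutable: the stored value never changed, yet the computed index key did.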

True.  But I guess what I don't understand is: how big a deal is this,
really?  The "uncorrupted" index can still return wrong answers to
queries.  The fact that you won't end up with index entries pointing
to completely unrelated tuples is nice, but if index scans are missing
tuples that they should see, aren't you still pretty hosed?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


