Re: Bugs/slowness inserting and indexing cubes - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Bugs/slowness inserting and indexing cubes
Date
Msg-id 11015.1329260043@sss.pgh.pa.us
In response to Re: Bugs/slowness inserting and indexing cubes  (Alexander Korotkov <aekorotkov@gmail.com>)
Responses Re: Bugs/slowness inserting and indexing cubes  (Alexander Korotkov <aekorotkov@gmail.com>)
List pgsql-hackers
Alexander Korotkov <aekorotkov@gmail.com> writes:
> ISTM I found the problem. This piece of code is triggering the error. It
> assumes that each corresponding page has an initialized buffer. That should
> be true because we insert index tuples from the top down while splits
> propagate from the bottom up.
> But this assumption becomes false once we turn buffering off at the root page.
> So the root page can produce pages without initialized buffers when it splits.

Hmm ... can we tighten the error check rather than just remove it?  It
feels less than safe to assume that a hash-entry-not-found condition
*must* reflect a corner-case situation like that.  At the very least
I'd like to see it verify that we'd turned off buffering before deciding
this is OK.  Better, would it be practical to make dummy entries in the
hash table even after turning buffers off, so that the logic here
becomes
    if (!found)
        error;
    else if (entry is dummy)
        return without doing anything;
    else
        proceed;
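
For illustration, a minimal sketch of how that three-way check might look in
the buffering-build code, assuming dummy entries are kept in the dynahash
table after buffering has been switched off.  The NodeBufferEntry struct, its
isDummy flag, and the function name are hypothetical placeholders rather than
the actual GiST source; only hash_search() and elog() are real backend APIs.

#include "postgres.h"
#include "storage/block.h"
#include "utils/hsearch.h"

/* Hypothetical hash entry: key plus a flag marking post-buffering dummies. */
typedef struct NodeBufferEntry
{
	BlockNumber blkno;			/* hash key: block this buffer belongs to */
	bool		isDummy;		/* created after buffering was turned off? */
	/* ... real buffer state would live here ... */
} NodeBufferEntry;

static void
relocate_buffers_for_block(HTAB *nodeBuffersTab, BlockNumber blkno)
{
	NodeBufferEntry *entry;
	bool		found;

	entry = (NodeBufferEntry *) hash_search(nodeBuffersTab, &blkno,
											HASH_FIND, &found);

	if (!found)
		elog(ERROR, "node buffer for block %u of page being split does not exist",
			 blkno);
	else if (entry->isDummy)
		return;					/* buffering was off here: nothing to move */

	/* ... proceed with relocating the buffer's tuples ... */
}

The point of keeping dummy entries is that a missing hash entry can then be
treated unconditionally as an error, while the legitimate buffering-off case
is distinguished explicitly instead of being lumped in with corruption.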
        regards, tom lane

