On Fri, 2009-01-16 at 15:39 +0300, Teodor Sigaev wrote:
> > START_CRIT_SECTION();
> > ...
> > l = PageAddItem(...);
> > if (l == InvalidOffsetNumber)
> >     elog(ERROR, "failed to add item to index page in \"%s\"",
> >          RelationGetRelationName(index));
> >
> > It's no use using ERROR, because it will turn into PANIC, which is ...
> I did that the same way as the other GIN/GiST places. BTW, btree directly
> emits a PANIC if PageAddItem fails.
>
I'd still prefer PANIC over an ERROR that will always turn into a PANIC.
I'll leave it as you did, though.
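
Just to illustrate, the direct form I had in mind is simply (a sketch
mirroring the snippet quoted above):

    /*
     * Inside a critical section elog(ERROR) is promoted to PANIC anyway,
     * so we might as well say what we mean:
     */
    l = PageAddItem(...);
    if (l == InvalidOffsetNumber)
        elog(PANIC, "failed to add item to index page in \"%s\"",
             RelationGetRelationName(index));
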
> >
> > 4. Heikki mentioned:
> > http://archives.postgresql.org/pgsql-hackers/2008-11/msg01832.php
> >
> > "To make things worse, a query will fail if all the matching
> > fast-inserted tuples don't fit in the non-lossy tid bitmap."
> >
> > That issue still remains, correct? Is there a resolution to that?
>
> Now gincostestimate can forbid an index scan via disable_cost (see Changes).
> Of course, that doesn't prevent a failure in the case of a large update (for
> example), but it does prevent one in most cases. BTW, because the pending
> list is scanned sequentially, the cost of a scan grows quickly and the index
> scan becomes non-optimal.
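
Just to check my understanding of the mechanism, I assume it is roughly
this (made-up names, not the actual gincostestimate code):

    /*
     * Sketch only: if the pending list looks large enough that scanning
     * it might overflow the non-lossy tid bitmap, price the index path
     * out of consideration.  disable_cost is the planner's existing
     * "avoid this plan" constant from costsize.c.
     */
    if (pendingListPages > max_safe_pending_pages)
        *indexTotalCost += disable_cost;
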
Is this a 100% bulletproof solution, or is it still possible for a query
to fail due to the pending list? It relies on the stats collector, so
perhaps in rare cases it could still fail?
It might be surprising, though, that after an UPDATE and before a VACUUM,
the GIN index just stops being used (if work_mem is too low). For many use
cases, not using the GIN index is just as bad as the query failing, because
the query would be so slow.
Can you explain why the tid bitmap (tbm) must not be lossy?
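
To make sure we are talking about the same failure mode, here is the
scenario as I understand it (made-up names, not the real ginget.c code):

    /*
     * Matches from the pending list are collected into a tid bitmap.
     * Under memory pressure (work_mem) the bitmap degrades pages to
     * lossy entries, which require rechecking every tuple on the page.
     * If pending-list matches cannot be rechecked, failing is the only
     * safe option left:
     */
    if (tbm_has_lossy_pages)            /* made-up condition */
        elog(ERROR, "not enough work_mem for non-lossy tid bitmap");
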
Also, can you clarify why a large update can cause a problem? In the
previous discussion, you suggested that it force normal index inserts
after a threshold based on work_mem:
http://archives.postgresql.org/pgsql-hackers/2008-12/msg00065.php
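
That is, something along these lines (again made-up names, just restating
the idea from that message):

    /*
     * Sketch of the threshold: once the pending list reaches the
     * work_mem budget, bypass it and insert directly into the main
     * tree, so a single large update cannot grow the list without
     * bound.  (work_mem is in kilobytes.)
     */
    if (pending_list_bytes >= work_mem * 1024L)
        gin_insert_normally(index, itup);       /* made-up helper */
    else
        gin_append_to_pending(index, itup);     /* made-up helper */
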
Regards,
	Jeff Davis