Thread: [PATCH] Fix ScalarArrayOpExpr estimation for GIN indexes
Hi list,

Since PostgreSQL 9.1, GIN has new cost estimation code. This code assumes
that the only expression type it's going to see is OpExpr. However,
ScalarArrayOpExpr has also been possible in earlier versions. Estimating
col <op> ANY (<array>) queries segfaults in 9.1 if there's a GIN index on
the column. Case in point:

    create table words (word text);
    create index on words using gin (to_tsvector('english', word));
    explain analyze select * from words
      where to_tsvector('english', word) @@ any ('{foo}');

(It seems that RowCompareExpr and NullTest clauses are impossible for a
GIN index -- at least my efforts to find such cases failed)

Attached is an attempted fix for the issue. I split out the code for the
extract call and now run that for each array element, adding together the
average of (partialEntriesInQuals, exactEntriesInQuals,
searchEntriesInQuals) for each array element. After processing all quals,
I multiply the entries by the number of array_scans (which is the product
of all array lengths) to get the total cost.

This required a fair bit of refactoring, but I tried to follow the
patterns for OpExpr pretty strictly -- discounting scans over NULL
elements, returning 0 cost when none of the array elements can match,
accounting for cache effects when there are multiple scans, etc. But it's
also possible that I have no idea what I'm really doing. :)

I also added regression tests for this to tsearch and pg_trgm.

Regards,
Marti
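For illustration, here is a rough standalone model of the per-element
averaging scheme described above. This is not the actual PostgreSQL code;
all type and function names are invented, and the real patch works on
parse nodes and the GIN extractQuery interface rather than plain structs.
The idea it sketches: skip NULL elements, skip elements that cannot match,
average the entry counts over the remaining elements, and remember how
many scans this array qual contributes.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy stand-in for what the extract call reports per array element. */
    typedef struct
    {
        bool    isnull;         /* NULL element: contributes no index scan */
        bool    unsatisfiable;  /* element can never match any index entry */
        double  partialEntries;
        double  exactEntries;
        double  searchEntries;
    } ElemEstimate;

    /*
     * Average the per-element counts over the elements that actually
     * cause a scan.  Returns false when no element can match, mirroring
     * the "return 0 cost" case mentioned above.  nscans is what this
     * qual contributes to array_scans.
     */
    static bool
    average_array_qual(const ElemEstimate *elems, size_t nelems,
                       double *partial, double *exact, double *search,
                       double *nscans)
    {
        size_t  i;
        size_t  counted = 0;
        double  p = 0.0, e = 0.0, s = 0.0;

        for (i = 0; i < nelems; i++)
        {
            if (elems[i].isnull || elems[i].unsatisfiable)
                continue;       /* discounted: no scan for this element */
            p += elems[i].partialEntries;
            e += elems[i].exactEntries;
            s += elems[i].searchEntries;
            counted++;
        }
        if (counted == 0)
            return false;       /* nothing can match: caller reports 0 cost */

        *partial = p / counted;
        *exact = e / counted;
        *search = s / counted;
        *nscans = (double) counted;
        return true;
    }

    int
    main(void)
    {
        ElemEstimate elems[] = {
            {false, false, 0.0, 2.0, 2.0},
            {true,  false, 0.0, 0.0, 0.0},   /* NULL element is skipped */
            {false, false, 1.0, 1.0, 2.0},
        };
        double  partial, exact, search, nscans;

        if (average_array_qual(elems, 3, &partial, &exact, &search, &nscans))
            printf("avg partial=%.1f exact=%.1f search=%.1f over %.0f scans\n",
                   partial, exact, search, nscans);
        return 0;
    }

In the scheme described above, the caller would add these averages into
the per-index running totals and, after all quals are processed, multiply
by array_scans (the product of all array lengths).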
Marti Raudsepp <marti@juffo.org> writes:
> Since PostgreSQL 9.1, GIN has new cost estimation code. This code
> assumes that the only expression type it's going to see is OpExpr.
> However, ScalarArrayOpExpr has also been possible in earlier versions.
> Estimating col <op> ANY (<array>) queries segfaults in 9.1 if there's
> a GIN index on the column.

Ugh. I think we subconsciously assumed that ScalarArrayOpExpr couldn't
appear here because GIN doesn't set amsearcharray, but of course that's
wrong.

> (It seems that RowCompareExpr and NullTest clauses are impossible for
> a GIN index -- at least my efforts to find such cases failed)

No, those are safe for the moment --- indxpath.c has a hard-wired
assumption that RowCompareExpr is only usable with btree, and NullTest is
only considered to be indexable if amsearchnulls is set. Still, it'd
likely be better if this code ignored unrecognized qual expression types
rather than Assert'ing they're not there. It's not like the cases it
*does* handle are done so perfectly that ignoring an unknown qual could be
said to bollix the estimate unreasonably.

> Attached is an attempted fix for the issue. I split out the code for
> the extract call and now run that for each array element, adding
> together the average of (partialEntriesInQuals, exactEntriesInQuals,
> searchEntriesInQuals) for each array element. After processing all
> quals, I multiply the entries by the number of array_scans (which is
> the product of all array lengths) to get the total cost.

> This required a fair bit of refactoring, but I tried to follow the
> patterns for OpExpr pretty strictly -- discounting scans over NULL
> elements, returning 0 cost when none of the array elements can match,
> accounting for cache effects when there are multiple scans, etc. But
> it's also possible that I have no idea what I'm really doing. :)

Hmm. I am reminded of how utterly unreadable "diff -u" format is for
anything longer than single-line changes :-( ... but I think I don't like
this refactoring much. Will take a closer look tomorrow.

			regards, tom lane
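To make the "ignore rather than Assert" suggestion concrete, here is a
minimal sketch of such a dispatch. The enum and function names are
invented stand-ins; the real estimator dispatches on parse-node types
inside gincostestimate, not on a hand-rolled enum.

    #include <stdio.h>

    /* Invented stand-ins for the clause types the estimator might see. */
    typedef enum
    {
        QUAL_OPEXPR,
        QUAL_SCALARARRAYOPEXPR,
        QUAL_SOMETHING_ELSE
    } QualTag;

    /*
     * Dispatch on the clause type: estimate the kinds we understand, and
     * simply skip anything unrecognized instead of Assert'ing that it
     * cannot occur.  An unknown clause then leaves the estimate
     * untouched rather than crashing the server.
     */
    static void
    examine_indexqual(QualTag tag)
    {
        switch (tag)
        {
            case QUAL_OPEXPR:
                /* ... estimate indexcol <op> constant ... */
                break;
            case QUAL_SCALARARRAYOPEXPR:
                /* ... estimate indexcol <op> ANY (array), per the patch ... */
                break;
            default:
                /* Unrecognized qual type: ignore it. */
                break;
        }
    }

    int
    main(void)
    {
        examine_indexqual(QUAL_SOMETHING_ELSE);   /* harmless no-op */
        printf("unknown qual ignored\n");
        return 0;
    }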
On Tue, Dec 20, 2011 at 07:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> it'd likely be better if this code ignored unrecognized qual expression
> types rather than Assert'ing they're not there.

The patch replaced that Assert with an elog(ERROR).

> Hmm. I am reminded of how utterly unreadable "diff -u" format is for
> anything longer than single-line changes :-( ...

Sorry, the new patch is in proper context (-C) diff format. I also moved
some code around and removed an unused variable that was left over from
the refactoring.

> but I think I don't
> like this refactoring much. Will take a closer look tomorrow.

I was afraid you'd say that, especially for a change that should be
backpatched. But I couldn't think of alternative ways to do it that give
non-bogus estimates.

----

While writing this patch, the largest dilemma was where to account for
the multiple array scans. Given that this code is mostly a heuristic and
I lack a deep understanding of GIN indexes, it's likely that I got this
part wrong. Currently I'm doing this:

    partialEntriesInQuals *= array_scans;
    exactEntriesInQuals *= array_scans;
    searchEntriesInQuals *= array_scans;

This seems to be the right thing as far as random disk accesses are
concerned (successive scans are more likely to hit the cache), and it
also works well with queries that don't touch most of the index. But it
fails spectacularly when multiple full scans are performed, e.g.
LIKE ANY ('{%,%,%}'), because index_pages_fetched() ends up removing all
of the rescan costs.

Another approach is to multiply the total cost by the number of scans.
This overestimates random accesses from rescans, but fixes the above
case:

    *indexTotalCost = (*indexStartupCost +
                       dataPagesFetched * spc_random_page_cost) * array_scans;

Regards,
Marti
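To see the trade-off between the two placements numerically, here is a
toy comparison. pages_fetched_model() is a purely hypothetical concave
stand-in for the cache-aware behaviour of index_pages_fetched(); the real
function uses a Mackert-Lohman-style approximation, not this expression,
and the constants below are made up.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical concave page-fetch estimate: repeated fetches are
     * assumed to find many pages already cached. */
    static double
    pages_fetched_model(double tuples_fetched, double total_pages)
    {
        return total_pages * (1.0 - exp(-tuples_fetched / total_pages));
    }

    int
    main(void)
    {
        double  total_pages = 10000.0;
        double  entries_per_scan = 8000.0;   /* one near-full index scan */
        double  array_scans = 3.0;           /* LIKE ANY ('{%,%,%}') style */
        double  random_page_cost = 4.0;

        /* Placement 1: multiply the entry counts first, then cost once.
         * The concave model soaks up the extra scans, so three full
         * scans barely cost more than one. */
        double  cost1 = pages_fetched_model(entries_per_scan * array_scans,
                                            total_pages) * random_page_cost;

        /* Placement 2: cost one scan, then multiply by the number of
         * scans.  Rescans get no cache credit, but full scans are not
         * collapsed. */
        double  cost2 = pages_fetched_model(entries_per_scan, total_pages)
                        * random_page_cost * array_scans;

        printf("entries scaled before costing: %.0f\n", cost1);
        printf("total cost scaled afterwards:  %.0f\n", cost2);
        return 0;
    }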
Marti Raudsepp <marti@juffo.org> writes:
> On Tue, Dec 20, 2011 at 07:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> but I think I don't
>> like this refactoring much. Will take a closer look tomorrow.

> I was afraid you'd say that, especially for a change that should be
> backpatched. But I couldn't think of alternative ways to do it that
> give non-bogus estimates.

I've applied a revised version of this patch that factors things in a way
I found nicer.

The main concrete thing I didn't like about what you'd done was dropping
the haveFullScan logic. If we have more than one qual triggering that,
we're still going to do one full scan, not multiples of that. It seemed
unreasonably hard to get that exactly right when there are multiple array
quals each doing such a thing, but I didn't want to let it regress in its
handling of multiple plain quals.

Also, while looking at this I realized that we had the costing of
nestloop cases all wrong. The idea is to scale up the number of tuples
(pages) fetched, apply index_pages_fetched(), then scale down again. I
think maybe somebody thought that was redundant, but it's not, because
index_pages_fetched() is nonlinear.

			regards, tom lane
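A minimal sketch of that scale-up/scale-down pattern, again with a
hypothetical concave stand-in for index_pages_fetched() (the real
function uses a Mackert-Lohman-style formula) and invented helper names:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical concave stand-in for index_pages_fetched(). */
    static double
    pages_fetched_model(double tuples_fetched, double total_pages)
    {
        return total_pages * (1.0 - exp(-tuples_fetched / total_pages));
    }

    /*
     * Page fetches charged per index scan inside a nestloop that repeats
     * the scan loop_count times: scale the fetches up, apply the
     * nonlinear cache-aware estimate once, then scale back down.
     * Because the function is nonlinear, this is not a no-op -- later
     * iterations get credited with finding pages already in cache.
     */
    static double
    nestloop_pages_per_scan(double pages_per_scan, double total_pages,
                            double loop_count)
    {
        double  all_fetches = pages_per_scan * loop_count;      /* scale up */
        double  fetched = pages_fetched_model(all_fetches,
                                              total_pages);     /* apply */
        return fetched / loop_count;                            /* scale down */
    }

    int
    main(void)
    {
        /* One scan in isolation vs. the per-scan charge across 10 loops. */
        printf("single scan:       %.0f pages\n",
               pages_fetched_model(500.0, 10000.0));
        printf("per scan of 10:    %.0f pages\n",
               nestloop_pages_per_scan(500.0, 10000.0, 10.0));
        return 0;
    }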
On Wed, Dec 21, 2011 at 03:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I've applied a revised version of this patch that factors things in a
> way I found nicer.

Nice, thanks!

Regards,
Marti