Thread: Avoid full GIN index scan when possible

Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
Hi,

Marc (in Cc) reported to me a problematic query using a GIN index, hit
in production.  The issue is that even if a GIN opclass says that the
index can be used for an operator, it's still possible that some
values aren't really compatible and require a full index scan.

One simple example is with a GIN pg_trgm index (but other opclasses
have similar restrictions): doing a LIKE with wildcards on both sides,
where the pattern is shorter than a trigram, e.g. col LIKE '%a%'.  So,
a where clause of the form:

WHERE col LIKE '%verylongpattern%' AND col LIKE '%a%'

is much more expensive than

WHERE col LIKE '%verylongpattern%'

While there's nothing to do if the unhandled const is the only
predicate, if there are multiple AND-ed predicates and at least one of
them doesn't require a full index scan, we can avoid it.
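
To see why a too-short pattern defeats the index, here is a rough
Python sketch of the idea behind pg_trgm's wildcard trigram extraction
(illustrative only; the real C code in generate_wildcard_trgm also
handles padding and case folding):

```python
def wildcard_trigrams(pattern):
    """Trigrams extractable from a LIKE pattern using % wildcards.
    Each literal chunk between wildcards contributes its 3-character
    windows; chunks shorter than 3 characters contribute nothing,
    leaving the index with nothing to search on."""
    trigrams = set()
    for chunk in pattern.split('%'):
        for i in range(len(chunk) - 2):
            trigrams.add(chunk[i:i + 3])
    return trigrams

long_tgs = wildcard_trigrams('%verylongpattern%')  # 13 trigrams to probe the index with
short_tgs = wildcard_trigrams('%a%')               # empty set -> full index scan needed
```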

Attached patch tries to fix the issue by detecting such cases and
dropping the unhandled quals in the BitmapIndexScan, letting the
recheck in BitmapHeapScan do the proper filtering.  I'm not happy to
call the extractQuery support functions an additional time, but I
didn't find a cleaner way.  This is of course intended for pg13.

Attachment

Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Sun, Mar 24, 2019 at 11:52 AM Julien Rouhaud <rjuju123@gmail.com> wrote:
>
> Marc (in Cc) reported to me a problematic query using a GIN index, hit
> in production.  The issue is that even if a GIN opclass says that the
> index can be used for an operator, it's still possible that some
> values aren't really compatible and require a full index scan.
>
> One simple example is with a GIN pg_trgm index (but other opclasses
> have similar restrictions): doing a LIKE with wildcards on both sides,
> where the pattern is shorter than a trigram, e.g. col LIKE '%a%'.  So,
> a where clause of the form:
>
> WHERE col LIKE '%verylongpattern%' AND col LIKE '%a%'
>
> is much more expensive than
>
> WHERE col LIKE '%verylongpattern%'
>
> While there's nothing to do if the unhandled const is the only
> predicate, if there are multiple AND-ed predicates and at least one of
> them doesn't require a full index scan, we can avoid it.
>
> Attached patch tries to fix the issue by detecting such cases and
> dropping the unhandled quals in the BitmapIndexScan, letting the
> recheck in BitmapHeapScan do the proper filtering.  I'm not happy to
> call the extractQuery support functions an additional time, but I
> didn't find a cleaner way.  This is of course intended for pg13.

Patch doesn't apply anymore (thanks cfbot).  Rebased patch attached.

Attachment

Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
Hi,

I've briefly looked at the patch today. I think the idea is worthwhile,
but I found a couple of issues with the patch:


1) The index_selfuncs.h header is included in the wrong place; it should
be included before lsyscache.h (because 'i' < 'l').


2) I'm not sure it's a good idea to add a dependency on a specific AM type
into indxpath.c. At the moment there are only two places, both referring
to BTREE_AM_OID; do we really want to hard-code another OID here?

I wonder if this could be generalized to another support proc in the
index AM API, with just GIN implementing it.


3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,
as it's only for functions computing selectivity estimates (and funcs
directly related to that). And the new function is not related to that
at all, so why not define it in indxpath.c directly?

Of course, if it gets into the index AM API then this would disappear.


4) The name gin_get_optimizable_quals is quite misleading. Firstly, it's
not very obvious what "optimizable" means in this context, but that's a
minor issue. The bigger issue is that it's a lie - when there are no
"optimizable" clauses (so all clauses would require a full scan) the
function returns the original list, which is by definition completely
non-optimizable.
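
For reference, the behaviour being criticized can be modeled like this
(hypothetical names, not the patch's actual code):

```python
def get_optimizable_quals(quals, requires_full_scan):
    """Model of the patch's qual filtering: drop AND-ed quals whose
    evaluation would force a full GIN scan, relying on the bitmap heap
    scan's recheck to apply them later.  The caveat raised in point 4:
    if *every* qual forces a full scan, the original list comes back
    unchanged, i.e. nothing is "optimizable" despite the name."""
    usable = [q for q in quals if not requires_full_scan(q)]
    return usable if usable else quals

# With pg_trgm-like rules, a pattern shorter than a trigram forces a full scan:
too_short = lambda pattern: len(pattern.strip('%')) < 3
get_optimizable_quals(['%verylongpattern%', '%a%'], too_short)  # ['%verylongpattern%']
get_optimizable_quals(['%a%', '%b%'], too_short)                # original list, unchanged
```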


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
>
> I've briefly looked at the patch today. I think the idea is worthwhile,

Thanks!

> 2) I'm not sure it's a good idea to add dependency on a specific AM type
> into indxpath.c. At the moment there are only two places, both referring
> to BTREE_AM_OID, do we really hard-code another OID here?
>
> I wonder if this could be generalized to another support proc in the
> index AM API, with just GIN implementing it.

Yes, this patch was more a POC than anything, to discuss the approach
before spending too much time on infrastructure.  I considered another
support function, but I'm still unclear on how useful it'd be for
custom AMs (as I don't see any use for it in the vanilla ones, I
think), or whether this should be opclass-specific or not.

> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,
> as it's only for functions computing selectivity estimates (and funcs
> directly related to that). And the new function is not related to that
> at all, so why not to define it in indxpath.c directly?

I kept this function in selfuncs.c as it's using some private
functions (gincost_opexpr and gincost_scalararrayopexpr) used by
gincostestimate.  That seemed the simplest approach at this stage.
BTW there's also an ongoing discussion about moving the (am)estimate
functions into AM-specific files [1], so that'll directly impact this
too.

> 4) The gin_get_optimizable_quals is quite misleading. Firstly, it's not
> very obvious what "optimizable" means in this context, but that's a
> minor issue. The bigger issue is that it's a lie - when there are no
> "optimizable" clauses (so all clauses would require full scan) the
> function returns the original list, which is by definition completely
> non-optimizable.

The comment is hopefully clearer about what this function does, but
definitely this name is terrible.  I'll try to come up with a better
one.

[1] https://www.postgresql.org/message-id/4079.1561661677%40sss.pgh.pa.us



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Julien Rouhaud <rjuju123@gmail.com> writes:
> On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>> 2) I'm not sure it's a good idea to add dependency on a specific AM type
>> into indxpath.c. At the moment there are only two places, both referring
>> to BTREE_AM_OID, do we really hard-code another OID here?
>> 
>> I wonder if this could be generalized to another support proc in the
>> index AM API, with just GIN implementing it.

> Yes, this patch was more a POC than anything, to discuss the approach
> before spending too much time on infrastructure.  I considered another
> support function, but I'm still unclear of how useful it'd be for
> custom AM (as I don't see any use for that for the vanilla one I
> think), or whether if this should be opclass specific or not.

I just spent a lot of sweat to get rid of (most of) indxpath.c's knowledge
about specific AMs' capabilities; I'd be very sad if we started to put any
back.  Aside from being a modularity violation, it's going to fall foul
of the principle that if index AM X wants something, some index AM Y is
going to want it too, eventually.

Also, I'm quite unhappy about including index_selfuncs.h into indxpath.c
at all, never mind whether you got the alphabetical ordering right.
I have doubts still about how we ought to refactor the mess that is
*selfuncs.c, but this isn't going in the right direction.

>> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,
>> as it's only for functions computing selectivity estimates (and funcs
>> directly related to that). And the new function is not related to that
>> at all, so why not to define it in indxpath.c directly?

I not only don't want that function in indxpath.c, I don't even want
it to be known/called from there.  If we need the ability for the index
AM to editorialize on the list of indexable quals (which I'm not very
convinced of yet), let's make an AM interface function to do it.

BTW, I have no idea what you think you're doing here by messing with
outer_relids, but it's almost certainly wrong, and if it isn't wrong
then it needs a comment explaining itself.

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:
>Julien Rouhaud <rjuju123@gmail.com> writes:
>> On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra
>> <tomas.vondra@2ndquadrant.com> wrote:
>>> 2) I'm not sure it's a good idea to add dependency on a specific AM type
>>> into indxpath.c. At the moment there are only two places, both referring
>>> to BTREE_AM_OID, do we really hard-code another OID here?
>>>
>>> I wonder if this could be generalized to another support proc in the
>>> index AM API, with just GIN implementing it.
>
>> Yes, this patch was more a POC than anything, to discuss the approach
>> before spending too much time on infrastructure.  I considered another
>> support function, but I'm still unclear of how useful it'd be for
>> custom AM (as I don't see any use for that for the vanilla one I
>> think), or whether if this should be opclass specific or not.
>
>I just spent a lot of sweat to get rid of (most of) indxpath.c's knowledge
>about specific AMs' capabilities; I'd be very sad if we started to put any
>back.  Aside from being a modularity violation, it's going to fall foul
>of the principle that if index AM X wants something, some index AM Y is
>going to want it too, eventually.
>
>Also, I'm quite unhappy about including index_selfuncs.h into indxpath.c
>at all, never mind whether you got the alphabetical ordering right.
>I have doubts still about how we ought to refactor the mess that is
>*selfuncs.c, but this isn't going in the right direction.
>

Right.

>>> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,
>>> as it's only for functions computing selectivity estimates (and funcs
>>> directly related to that). And the new function is not related to that
>>> at all, so why not to define it in indxpath.c directly?
>
>I not only don't want that function in indxpath.c, I don't even want
>it to be known/called from there.  If we need the ability for the index
>AM to editorialize on the list of indexable quals (which I'm not very
>convinced of yet), let's make an AM interface function to do it.
>

Wouldn't it be better to have a function that inspects a single qual and
says whether it's "optimizable" or not? That could be part of the AM
implementation, and we'd call it and it'd be us messing with the list.

That being said, is this really a binary thing - if you have a value
that matches 99% of rows, that probably is not much better than a full
scan.  It may be more difficult to decide (compared to the 'short
trigram' case), but perhaps we should allow that too? Essentially,
instead of 'optimizable' returning true/false, it might return value
between 0.0 and 1.0, as a measure of 'optimizability'.

But that kinda resembles stuff we already have - selectivity/cost. So
why shouldn't this be considered as part of costing? That is, could
gincostestimate look at the index quals and decide what will be used for
scanning the index? Of course, this would make the logic GIN-specific,
and other index AMs would have to implement their own version ...


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:
>> I not only don't want that function in indxpath.c, I don't even want
>> it to be known/called from there.  If we need the ability for the index
>> AM to editorialize on the list of indexable quals (which I'm not very
>> convinced of yet), let's make an AM interface function to do it.

> Wouldn't it be better to have a function that inspects a single qual and
> says whether it's "optimizable" or not? That could be part of the AM
> implementation, and we'd call it and it'd be us messing with the list.

Uh ... we already determined that the qual is indexable (ie is a member
of the index's opclass), or allowed the index AM to derive an indexable
clause from it, so I'm not sure what you envision would happen
additionally there.  If I understand what Julien is concerned about
--- and I may not --- it's that the set of indexable clauses *as a whole*
may have or lack properties of interest.  So I'm thinking the answer
involves some callback that can do something to the whole list, not
qual-at-a-time.  We've already got facilities for the latter case.

> But that kinda resembles stuff we already have - selectivity/cost. So
> why shouldn't this be considered as part of costing?

Yeah, I'm not entirely convinced that we need anything new here.
The cost estimate function can detect such situations, and so can
the index AM at scan start --- for example, btree checks for
contradictory quals at scan start.  There's a certain amount of
duplicative effort involved there perhaps, but you also have to
keep in mind that we don't know the values of run-time-determined
comparison values until scan start.  So if you want certainty rather
than just a cost estimate, you may have to do these sorts of checks
at scan start.
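
As an aside, the btree behaviour Tom mentions can be illustrated with a
toy version of the contradictory-quals check (a sketch only;
_bt_preprocess_keys operates on ScanKeys, not tuples like these):

```python
def keys_contradictory(keys):
    """keys: list of (op, value) range conditions on one column, with
    op in ('>', '>=', '<', '<=').  Returns True when no value can
    satisfy all of them, i.e. the tightest lower bound exceeds the
    tightest upper bound.  With run-time comparison values only known
    at scan start, this is when such a check can give certainty."""
    lo = max((v for op, v in keys if op in ('>', '>=')), default=None)
    hi = min((v for op, v in keys if op in ('<', '<=')), default=None)
    if lo is None or hi is None:
        return False            # unbounded on one side: satisfiable
    if lo > hi:
        return True             # e.g. x > 5 AND x < 3
    if lo == hi:
        # equal bounds conflict unless both sides are inclusive
        return not all(op in ('>=', '<=') for op, v in keys if v == lo)
    return False
```

A scan whose keys are contradictory can simply return no tuples instead
of reading the index at all.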

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Fri, Jun 28, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> > On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:
> >> I not only don't want that function in indxpath.c, I don't even want
> >> it to be known/called from there.  If we need the ability for the index
> >> AM to editorialize on the list of indexable quals (which I'm not very
> >> convinced of yet), let's make an AM interface function to do it.
>
> > Wouldn't it be better to have a function that inspects a single qual and
> > says whether it's "optimizable" or not? That could be part of the AM
> > implementation, and we'd call it and it'd be us messing with the list.
>
> Uh ... we already determined that the qual is indexable (ie is a member
> of the index's opclass), or allowed the index AM to derive an indexable
> clause from it, so I'm not sure what you envision would happen
> additionally there.  If I understand what Julien is concerned about
> --- and I may not --- it's that the set of indexable clauses *as a whole*
> may have or lack properties of interest.  So I'm thinking the answer
> involves some callback that can do something to the whole list, not
> qual-at-a-time.  We've already got facilities for the latter case.

Yes, the root issue here is that with gin it's entirely possible that
"WHERE sometable.col op value1" is way more efficient than "WHERE
sometable.col op value1 AND sometable.col op value2", where both quals
are determined indexable by the opclass.  The only way to avoid that
is indeed to inspect the whole list, as done in this poor POC.

This is a problem actually hit in production, and as far as I know
there's no easy way from the application POV to prevent unexpected
slowdown.  Maybe Marc will have more details about the actual problem
and how expensive such a case was compared to the normal ones.

> > But that kinda resembles stuff we already have - selectivity/cost. So
> > why shouldn't this be considered as part of costing?
>
> Yeah, I'm not entirely convinced that we need anything new here.
> The cost estimate function can detect such situations, and so can
> the index AM at scan start --- for example, btree checks for
> contradictory quals at scan start.  There's a certain amount of
> duplicative effort involved there perhaps, but you also have to
> keep in mind that we don't know the values of run-time-determined
> comparison values until scan start.  So if you want certainty rather
> than just a cost estimate, you may have to do these sorts of checks
> at scan start.

Ah, I didn't know about _bt_preprocess_keys().  I'm not familiar with
this code, so please bear with me.  IIUC the idea would be to add
additional logic in gingetbitmap() / ginNewScanKey() to drop some
quals at runtime.  But that would mean that additional logic would
also be required in BitmapHeapScan, or that all the returned bitmap
should be artificially marked as lossy to enforce a recheck?



Re: Avoid full GIN index scan when possible

From
Nikita Glukhov
Date:
Hi!

On 29.06.2019 1:23, Julien Rouhaud wrote:
>>> But that kinda resembles stuff we already have - selectivity/cost. So
>>> why shouldn't this be considered as part of costing?
>> Yeah, I'm not entirely convinced that we need anything new here.
>> The cost estimate function can detect such situations, and so can
>> the index AM at scan start --- for example, btree checks for
>> contradictory quals at scan start.  There's a certain amount of
>> duplicative effort involved there perhaps, but you also have to
>> keep in mind that we don't know the values of run-time-determined
>> comparison values until scan start.  So if you want certainty rather
>> than just a cost estimate, you may have to do these sorts of checks
>> at scan start.
> Ah, I didn't know about _bt_preprocess_keys().  I'm not familiar with
> this code, so please bear with me.  IIUC the idea would be to add
> additional logic in gingetbitmap() / ginNewScanKey() to drop some
> quals at runtime.  But that would mean that additional logic would
> also be required in BitmapHeapScan, or that all the returned bitmap
> should be artificially marked as lossy to enforce a recheck?

We have a similar solution for this problem.  The idea is to avoid full index
scan inside GIN itself when we have some GIN entries, and forcibly recheck
all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no
GIN entries.
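
The decision described above can be sketched as follows (an
illustrative Python model of the proposed ginNewScanKey() behaviour,
not the patch's C code):

```python
GIN_FALSE, GIN_MAYBE, GIN_TRUE = 0, 1, 2

def plan_gin_scan(keys, extract_entries, triconsistent):
    """Model of the proposal: keys that extract no GIN entries are
    dropped from the index scan instead of forcing a full scan; if such
    a key's triconsistent() cannot prove a match for unseen rows
    (anything other than GIN_TRUE), every returned tuple must be
    rechecked against the original quals in the bitmap heap scan."""
    scan_keys, forced_recheck = [], False
    for key in keys:
        if extract_entries(key):
            scan_keys.append(key)  # this key drives the index scan
        else:
            # no entries: resolve via consistency check plus recheck;
            # |= keeps the flag sticky across multiple such keys
            forced_recheck |= triconsistent(key) != GIN_TRUE
    return scan_keys, forced_recheck
```

With the thread's example, '%1234%' yields trigram entries and '%1%'
does not, so only the former is scanned and a recheck is forced.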

The attached patch in its current shape contains at least two ugly places:

1. We still need to initialize empty scan key to call triconsistent(), but
   then we have to remove it from the list of scan keys.  Simple refactoring
   of ginFillScanKey() can be helpful here.

2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL
   if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL
   because we need to skip NULLs in GIN_SEARCH_MODE_ALL.  Simplest example here
   is "array @> '{}'": triconsistent() returns GIN_TRUE, recheck is not forced,
   and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked.  Maybe
   it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.



Example:

CREATE TABLE test AS SELECT i::text AS t FROM generate_series(0, 999999) i;

CREATE INDEX ON test USING gin (t gin_trgm_ops);

-- master
EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test  (cost=11777.99..16421.73 rows=7999 width=32) (actual time=65.431..65.857 rows=300 loops=1)
   Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
   Rows Removed by Index Recheck: 2
   Heap Blocks: exact=114
   ->  Bitmap Index Scan on test_t_idx  (cost=0.00..11775.99 rows=7999 width=0) (actual time=65.380..65.380 rows=302 loops=1)
         Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
 Planning Time: 0.151 ms
 Execution Time: 65.900 ms
(8 rows)


-- patched
EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
   Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
   Rows Removed by Index Recheck: 2
   Heap Blocks: exact=114
   ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
         Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
 Planning Time: 0.080 ms
 Execution Time: 0.450 ms
(8 rows)

-- 
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Fri, Jun 28, 2019 at 04:16:23PM -0400, Tom Lane wrote:
>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
>> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:
>>> I not only don't want that function in indxpath.c, I don't even want
>>> it to be known/called from there.  If we need the ability for the index
>>> AM to editorialize on the list of indexable quals (which I'm not very
>>> convinced of yet), let's make an AM interface function to do it.
>
>> Wouldn't it be better to have a function that inspects a single qual and
>> says whether it's "optimizable" or not? That could be part of the AM
>> implementation, and we'd call it and it'd be us messing with the list.
>
>Uh ... we already determined that the qual is indexable (ie is a member
>of the index's opclass), or allowed the index AM to derive an indexable
>clause from it, so I'm not sure what you envision would happen
>additionally there.  If I understand what Julien is concerned about
>--- and I may not --- it's that the set of indexable clauses *as a whole*
>may have or lack properties of interest.  So I'm thinking the answer
>involves some callback that can do something to the whole list, not
>qual-at-a-time.  We've already got facilities for the latter case.
>

I'm not sure I understand the problem either.

I don't think "indexable" is the thing we care about here - in Julien's
original example the qual with '%a%' is indexable. And we probably want
to keep it that way.

The problem is that evaluating some of the quals may be inefficient with
a given index - but only if there are other quals. In Julien's example
it makes sense to just drop the '%a%' qual, but only when there are some
quals that work with the trigram index. But if there are no such 'good'
quals, it may be better to keep at least the bad ones.

So I think you're right we need to look at the list as a whole.

>> But that kinda resembles stuff we already have - selectivity/cost. So
>> why shouldn't this be considered as part of costing?
>
>Yeah, I'm not entirely convinced that we need anything new here.
>The cost estimate function can detect such situations, and so can
>the index AM at scan start --- for example, btree checks for
>contradictory quals at scan start.  There's a certain amount of
>duplicative effort involved there perhaps, but you also have to
>keep in mind that we don't know the values of run-time-determined
>comparison values until scan start.  So if you want certainty rather
>than just a cost estimate, you may have to do these sorts of checks
>at scan start.
>

Right, that's why I suggested doing this as part of costing, but you're
right scan start would be another option. I assume it should affect cost
estimates in some way, so the cost function would be my first choice.

But does the cost function really have enough info to make such a
decision? For example, ignoring quals is valid only if we recheck them
later. For
GIN that's not an issue thanks to the bitmap index scan.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov
<n.gluhov@postgrespro.ru> wrote:
>
> On 29.06.2019 1:23, Julien Rouhaud wrote:
>
> But that kinda resembles stuff we already have - selectivity/cost. So
> why shouldn't this be considered as part of costing?
>
> Yeah, I'm not entirely convinced that we need anything new here.
> The cost estimate function can detect such situations, and so can
> the index AM at scan start --- for example, btree checks for
> contradictory quals at scan start.  There's a certain amount of
> duplicative effort involved there perhaps, but you also have to
> keep in mind that we don't know the values of run-time-determined
> comparison values until scan start.  So if you want certainty rather
> than just a cost estimate, you may have to do these sorts of checks
> at scan start.
>
> Ah, I didn't know about _bt_preprocess_keys().  I'm not familiar with
> this code, so please bear with me.  IIUC the idea would be to add
> additional logic in gingetbitmap() / ginNewScanKey() to drop some
> quals at runtime.  But that would mean that additional logic would
> also be required in BitmapHeapScan, or that all the returned bitmap
> should be artificially marked as lossy to enforce a recheck?
>
> We have a similar solution for this problem.  The idea is to avoid full index
> scan inside GIN itself when we have some GIN entries, and forcibly recheck
> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no
> GIN entries.

Thanks for looking at it.  That's I think a way better approach.

> The attached patch in its current shape contain at least two ugly places:
>
> 1. We still need to initialize empty scan key to call triconsistent(), but
>    then we have to remove it from the list of scan keys.  Simple refactoring
>    of ginFillScanKey() can be helpful here.
>
> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL
>    if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL
>    because we need to skip NULLs in GIN_SEARCH_MODE_ALL.  Simplest example here
>    is "array @> '{}'": triconsistent() returns GIN_TRUE, recheck is not forced,
>    and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked.  Maybe
>    it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.

Also

+       if (searchMode == GIN_SEARCH_MODE_ALL && nQueryValues <= 0)
+       {
+           /*
+            * Don't emit ALL key with no entries, check only whether
+            * unconditional recheck is needed.
+            */
+           GinScanKey  key = &so->keys[--so->nkeys];
+
+           hasSearchAllMode = true;
+           so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;
+       }

Shouldn't you make sure that the forcedRecheck flag can't be reset?
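
To spell out the concern with a toy model (hypothetical values, not the
patch's C code): if two empty-entry keys are processed in that loop,
plain assignment keeps only the last key's verdict, silently losing an
earlier key's recheck requirement, whereas an OR-accumulation does not:

```python
GIN_FALSE, GIN_MAYBE, GIN_TRUE = 0, 1, 2

# First key needs a recheck (GIN_MAYBE), second does not (GIN_TRUE).
results = [GIN_MAYBE, GIN_TRUE]

forced = False
for r in results:
    forced = (r != GIN_TRUE)   # as in the quoted hunk: last key wins
assert forced is False         # the needed recheck was lost

forced = False
for r in results:
    forced |= (r != GIN_TRUE)  # sticky flag: any key can set it
assert forced is True
```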

> -- patched
> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
>                                                       QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------
>  Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
>    Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>    Rows Removed by Index Recheck: 2
>    Heap Blocks: exact=114
>    ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
>          Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>  Planning Time: 0.080 ms
>  Execution Time: 0.450 ms
> (8 rows)

One thing that's bothering me is that the explain implies that the
LIKE '%1%' was part of the index scan, while in reality it wasn't.  One
of the reasons why I tried to modify the qual while generating the path
was to have the explain be clearer about what is really done.



Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:
>On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov
><n.gluhov@postgrespro.ru> wrote:>
>> On 29.06.2019 1:23, Julien Rouhaud wrote:
>>
>> But that kinda resembles stuff we already have - selectivity/cost. So
>> why shouldn't this be considered as part of costing?
>>
>> Yeah, I'm not entirely convinced that we need anything new here.
>> The cost estimate function can detect such situations, and so can
>> the index AM at scan start --- for example, btree checks for
>> contradictory quals at scan start.  There's a certain amount of
>> duplicative effort involved there perhaps, but you also have to
>> keep in mind that we don't know the values of run-time-determined
>> comparison values until scan start.  So if you want certainty rather
>> than just a cost estimate, you may have to do these sorts of checks
>> at scan start.
>>
>> Ah, I didn't know about _bt_preprocess_keys().  I'm not familiar with
>> this code, so please bear with me.  IIUC the idea would be to add
>> additional logic in gingetbitmap() / ginNewScanKey() to drop some
>> quals at runtime.  But that would mean that additional logic would
>> also be required in BitmapHeapScan, or that all the returned bitmap
>> should be artificially marked as lossy to enforce a recheck?
>>
>> We have a similar solution for this problem.  The idea is to avoid full index
>> scan inside GIN itself when we have some GIN entries, and forcibly recheck
>> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no
>> GIN entries.
>
>Thanks for looking at it.  That's I think a way better approach.
>
>> The attached patch in its current shape contain at least two ugly places:
>>
>> 1. We still need to initialize empty scan key to call triconsistent(), but
>>    then we have to remove it from the list of scan keys.  Simple refactoring
>>    of ginFillScanKey() can be helpful here.
>>
>> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL
>>    if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL
>>    because we need to skip NULLs in GIN_SEARCH_MODE_ALL.  Simplest example here
>>    is "array @> '{}'": triconsistent() returns GIN_TRUE, recheck is not forced,
>>    and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked.  Maybe
>>    it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.
>
>Also
>
>+       if (searchMode == GIN_SEARCH_MODE_ALL && nQueryValues <= 0)
>+       {
>+           /*
>+            * Don't emit ALL key with no entries, check only whether
>+            * unconditional recheck is needed.
>+            */
>+           GinScanKey  key = &so->keys[--so->nkeys];
>+
>+           hasSearchAllMode = true;
>+           so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;
>+       }
>
>Shouldn't you make sure that the forcedRecheck flag can't be reset?
>
>> -- patched
>> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
>>                                                       QUERY PLAN
>> -----------------------------------------------------------------------------------------------------------------------
>>  Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
>>    Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>>    Rows Removed by Index Recheck: 2
>>    Heap Blocks: exact=114
>>    ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
>>          Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>>  Planning Time: 0.080 ms
>>  Execution Time: 0.450 ms
>> (8 rows)
>
>One thing that's bothering me is that the explain implies that the
>LIKE '%1%' was part of the index scan, while in reality it wasn't.  One
>of the reasons why I tried to modify the qual while generating the path
>was to have the explain be clearer about what is really done.

Yeah, I think that's a bit annoying - it'd be nice to make it clear
which quals were actually used to scan the index. In some cases it may
not be possible (e.g. in cases when the decision is done at runtime, not
while planning the query), but it'd be nice to show it when possible.

A related issue is that during costing it is too late to modify cardinality
estimates, so the 'Bitmap Index Scan' will be expected to return fewer
rows than it actually returns (after ignoring the full-scan quals).
Ignoring redundant quals (the way btree does it at execution) does not
have such a consequence, of course.

Which may be an issue, because we essentially want to modify the list of
quals to minimize the cost of

   bitmap index scan + recheck during bitmap heap scan

OTOH it's not a huge issue, because it won't affect the rest of the plan
(because that uses the bitmap heap scan estimates, and those are not
affected by this).


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
>
> On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:
> >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov
> >> -- patched
> >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
> >>                                                       QUERY PLAN
> >> -----------------------------------------------------------------------------------------------------------------------
> >>  Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
> >>    Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
> >>    Rows Removed by Index Recheck: 2
> >>    Heap Blocks: exact=114
> >>    ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
> >>          Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
> >>  Planning Time: 0.080 ms
> >>  Execution Time: 0.450 ms
> >> (8 rows)
> >
> >One thing that's bothering me is that the explain implies that the
> >LIKE '%1%' was part of the index scan, while in reality it wasn't.  One
> >of the reason why I tried to modify the qual while generating the path
> >was to have the explain be clearer about what is really done.
>
> Yeah, I think that's a bit annoying - it'd be nice to make it clear
> which quals were actually used to scan the index. In some cases it may
> not be possible (e.g. in cases when the decision is done at runtime, not
> while planning the query), but it'd be nice to show it when possible.

Maybe we could somehow add some runtime information about ignored
quals, similar to the "never executed" information for loops?

> A related issue is that during costing it is too late to modify cardinality
> estimates, so the 'Bitmap Index Scan' will be expected to return fewer
> rows than it actually returns (after ignoring the full-scan quals).
> Ignoring redundant quals (the way btree does it at execution) does not
> have such a consequence, of course.
>
> Which may be an issue, because we essentially want to modify the list of
> quals to minimize the cost of
>
>    bitmap index scan + recheck during bitmap heap scan
>
> OTOH it's not a huge issue, because it won't affect the rest of the plan
> (because that uses the bitmap heap scan estimates, and those are not
> affected by this).

Doesn't this problem already exist, as the quals that we could drop
can't actually reduce the node's results?



Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:
>On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra
><tomas.vondra@2ndquadrant.com> wrote:
>>
>> On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:
>> >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov
>> >> -- patched
>> >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
>> >>                                                       QUERY PLAN
>> >> -----------------------------------------------------------------------------------------------------------------------
>> >>  Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
>> >>    Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>> >>    Rows Removed by Index Recheck: 2
>> >>    Heap Blocks: exact=114
>> >>    ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
>> >>          Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
>> >>  Planning Time: 0.080 ms
>> >>  Execution Time: 0.450 ms
>> >> (8 rows)
>> >
>> >One thing that's bothering me is that the explain implies that the
>> >LIKE '%1%' was part of the index scan, while in reality it wasn't.  One
>> >of the reason why I tried to modify the qual while generating the path
>> >was to have the explain be clearer about what is really done.
>>
>> Yeah, I think that's a bit annoying - it'd be nice to make it clear
>> which quals were actually used to scan the index. In some cases it may
>> not be possible (e.g. in cases when the decision is done at runtime, not
>> while planning the query), but it'd be nice to show it when possible.
>
>Maybe we could somehow add some runtime information about ignored
>quals, similar to the "never executed" information for loops?
>

Maybe. I suppose it depends on when exactly we make the decision about
which quals to ignore.

>> A related issue is that during costing it is too late to modify cardinality
>> estimates, so the 'Bitmap Index Scan' will be expected to return fewer
>> rows than it actually returns (after ignoring the full-scan quals).
>> Ignoring redundant quals (the way btree does it at execution) does not
>> have such a consequence, of course.
>>
>> Which may be an issue, because we essentially want to modify the list of
>> quals to minimize the cost of
>>
>>    bitmap index scan + recheck during bitmap heap scan
>>
>> OTOH it's not a huge issue, because it won't affect the rest of the plan
>> (because that uses the bitmap heap scan estimates, and those are not
>> affected by this).
>
>Doesn't this problem already exist, as the quals that we could drop
>can't actually reduce the node's results?

How could it not reduce the node's results, if you ignore some quals
that are not redundant? My understanding is we have a plan like this:

    Bitmap Heap Scan
      -> Bitmap Index Scan

and by ignoring some quals at the index scan level, we trade the (high)
cost of evaluating the qual there for a plain recheck at the bitmap heap
scan. But it means the index scan may produce more rows, so it's only a
win if the "extra rechecks" are cheaper than the (removed) full scan.

So the full scan might actually reduce the number of rows from the index
scan, but clearly whatever we do the results from the bitmap heap scan
must remain the same.
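The tradeoff above can be sketched with a toy cost model (a hedged illustration only: the numbers and the `bitmap_plan_cost` helper are invented, not the planner's actual costing):

```python
# Toy model of the tradeoff: a bitmap plan pays the index scan cost plus a
# per-row recheck cost in the bitmap heap scan.  Dropping a full-scan qual
# shrinks the index scan cost but may leave more rows for the recheck.
def bitmap_plan_cost(index_scan_cost: float, heap_rows: int,
                     recheck_cost_per_row: float) -> float:
    return index_scan_cost + heap_rows * recheck_cost_per_row

# Hypothetical numbers: a full GIN scan dominates the plan cost, while the
# recheck only has to filter a handful of extra rows.
keep_full_scan = bitmap_plan_cost(28000.0, 9000, 0.005)
drop_full_scan = bitmap_plan_cost(50.0, 9000, 0.005)
print(drop_full_scan < keep_full_scan)  # True: dropping the qual wins here
```

Dropping the qual only loses when the extra rechecks outweigh the saved index work, which is exactly the comparison above.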

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Sat, Jun 29, 2019 at 3:11 PM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
>
> On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:
> >On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra
> >> A related issue is that during costing it is too late to modify cardinality
> >> estimates, so the 'Bitmap Index Scan' will be expected to return fewer
> >> rows than it actually returns (after ignoring the full-scan quals).
> >> Ignoring redundant quals (the way btree does it at execution) does not
> >> have such a consequence, of course.
> >>
> >> Which may be an issue, because we essentially want to modify the list of
> >> quals to minimize the cost of
> >>
> >>    bitmap index scan + recheck during bitmap heap scan
> >>
> >> OTOH it's not a huge issue, because it won't affect the rest of the plan
> >> (because that uses the bitmap heap scan estimates, and those are not
> >> affected by this).
> >
> >Doesn't this problem already exist, as the quals that we could drop
> >can't actually reduce the node's results?
>
> How could it not reduce the node's results, if you ignore some quals
> that are not redundant? My understanding is we have a plan like this:
>
>     Bitmap Heap Scan
>       -> Bitmap Index Scan
>
> and by ignoring some quals at the index scan level, we trade the (high)
> cost of evaluating the qual there for a plain recheck at the bitmap heap
> scan. But it means the index scan may produce more rows, so it's only a
> win if the "extra rechecks" are cheaper than the (removed) full scan.

Sorry, by node I meant the BitmapIndexScan.  AIUI, if you have for
instance WHERE val LIKE '%abcde%' AND val LIKE '%z%' and a trgm index,
the BitmapIndexScan will have to go through the whole index and discard
rows based on the only opclass-optimizable qual (LIKE '%abcde%'),
letting the recheck do the proper filtering for the other qual.  So
whether you have the LIKE '%z%' or not in the index scan, the
BitmapIndexScan will return the same number of rows, the only
difference being that in one case you'll have to scan the whole index,
while in the other case you won't.
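The behaviour described above can be illustrated with a simplified simulation of trigram extraction from a LIKE pattern (this deliberately ignores pg_trgm's exact padding rules and is only meant to show why too-short patterns yield no index-searchable entries):

```python
def wildcard_trigrams(pattern: str) -> set:
    # Split on '%' wildcards and collect trigrams from each fixed fragment.
    # Fragments shorter than three characters contribute no trigrams at all.
    grams = set()
    for fragment in pattern.split('%'):
        for i in range(len(fragment) - 2):
            grams.add(fragment[i:i + 3])
    return grams

print(wildcard_trigrams('%abcde%'))  # three trigrams: abc, bcd, cde
print(wildcard_trigrams('%z%'))      # empty: nothing to search, full scan needed
```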

> clearly whatever we do the results from the bitmap heap scan
> must remain the same.

Of course.



Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Sat, Jun 29, 2019 at 03:28:11PM +0200, Julien Rouhaud wrote:
>On Sat, Jun 29, 2019 at 3:11 PM Tomas Vondra
><tomas.vondra@2ndquadrant.com> wrote:
>>
>> On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:
>> >On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra
>> >> A related issue is that during costing it is too late to modify cardinality
>> >> estimates, so the 'Bitmap Index Scan' will be expected to return fewer
>> >> rows than it actually returns (after ignoring the full-scan quals).
>> >> Ignoring redundant quals (the way btree does it at execution) does not
>> >> have such a consequence, of course.
>> >>
>> >> Which may be an issue, because we essentially want to modify the list of
>> >> quals to minimize the cost of
>> >>
>> >>    bitmap index scan + recheck during bitmap heap scan
>> >>
>> >> OTOH it's not a huge issue, because it won't affect the rest of the plan
>> >> (because that uses the bitmap heap scan estimates, and those are not
>> >> affected by this).
>> >
>> >Doesn't this problem already exist, as the quals that we could drop
>> >can't actually reduce the node's results?
>>
>> How could it not reduce the node's results, if you ignore some quals
>> that are not redundant? My understanding is we have a plan like this:
>>
>>     Bitmap Heap Scan
>>       -> Bitmap Index Scan
>>
>> and by ignoring some quals at the index scan level, we trade the (high)
>> cost of evaluating the qual there for a plain recheck at the bitmap heap
>> scan. But it means the index scan may produce more rows, so it's only a
>> win if the "extra rechecks" are cheaper than the (removed) full scan.
>
>Sorry, by node I meant the BitmapIndexScan.  AIUI, if you have for
>instance WHERE val LIKE '%abcde%' AND val LIKE '%z%' and a trgm index,
>the BitmapIndexScan will have to go through the whole index and discard
>rows based on the only opclass-optimizable qual (LIKE '%abcde%'),
>letting the recheck do the proper filtering for the other qual.  So
>whether you have the LIKE '%z%' or not in the index scan, the
>BitmapIndexScan will return the same number of rows, the only
>difference being that in one case you'll have to scan the whole index,
>while in the other case you won't.
>

Oh! I thought 'full scan' means we have to scan all the keys in the GIN
index, but we can still eliminate some of the keys (for example for the
trigrams we might check if the trigram contains the short string). But
clearly I was mistaken and it does not work like that ...


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
Hi!

On Sat, Jun 29, 2019 at 1:52 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:
> We have a similar solution for this problem.  The idea is to avoid full index
> scan inside GIN itself when we have some GIN entries, and forcibly recheck
> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no
> GIN entries.
>
> The attached patch in its current shape contain at least two ugly places:
>
> 1. We still need to initialize empty scan key to call triconsistent(), but
>    then we have to remove it from the list of scan keys.  Simple refactoring
>    of ginFillScanKey() can be helpful here.
>
> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL
>    if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL
>    because we need to skip NULLs in GIN_SEARCH_MODE_ALL.  Simplest example here
>    is "array @> '{}'": triconsistent() returns GIN_TRUE, recheck is not forced,
>    and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked.  Maybe
>    it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.

Thank you for publishing this!

What would happen when a two-column index has a GIN_SEARCH_MODE_DEFAULT
scan on the first column and GIN_SEARCH_MODE_ALL on the second?  I think
even if triconsistent() for the second column returns GIN_TRUE, we still
need to recheck to verify the second column is not NULL.
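The concern can be modelled with a toy example (the helpers below are hypothetical stand-ins, not GIN code): the ALL-mode key matches every *indexed* entry, but NULLs produce no entries at all, so a GIN_TRUE result for that key cannot prove the column is non-NULL.

```python
rows = [("abc", "xyz"), ("abd", None)]  # (col1, col2) pairs

def index_candidates(rows, col1_pred):
    # The index can only test col1; the ALL key on col2 accepts everything,
    # including rows whose col2 is NULL (absent from the index).
    return [r for r in rows if col1_pred(r[0])]

def heap_recheck(candidates):
    # The heap recheck must still reject NULLs in the ALL-mode column.
    return [r for r in candidates if r[1] is not None]

cands = index_candidates(rows, lambda v: v.startswith("ab"))
print(len(cands))           # 2: both rows look like matches to the index
print(heap_recheck(cands))  # only ('abc', 'xyz') survives the recheck
```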

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Sat, Jun 29, 2019 at 1:25 PM Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
> A related issue is that during costing it is too late to modify cardinality
> estimates, so the 'Bitmap Index Scan' will be expected to return fewer
> rows than it actually returns (after ignoring the full-scan quals).
> Ignoring redundant quals (the way btree does it at execution) does not
> have such a consequence, of course.

Adjusting cardinality estimates should be possible in gincostestimate(),
because we call the extractQuery() method there.  However, it seems to be
quite an independent issue.  The number of rows returned by 'Bitmap Index
Scan' doesn't vary much whether we execute GIN_SEARCH_MODE_ALL or not.
The only difference is that for a multicolumn index, GIN_SEARCH_MODE_ALL
allows excluding NULLs on one column when a normal scan is performed on
another column.  And we can take that into account in gincostestimate().

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Sat, Jun 29, 2019 at 3:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:
> On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
> >
> > On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:
> > >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov
> > >> -- patched
> > >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';
> > >>                                                       QUERY PLAN
> > >> -----------------------------------------------------------------------------------------------------------------------
> > >>  Bitmap Heap Scan on test  (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)
> > >>    Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
> > >>    Rows Removed by Index Recheck: 2
> > >>    Heap Blocks: exact=114
> > >>    ->  Bitmap Index Scan on test_t_idx  (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)
> > >>          Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))
> > >>  Planning Time: 0.080 ms
> > >>  Execution Time: 0.450 ms
> > >> (8 rows)
> > >
> > >One thing that's bothering me is that the explain implies that the
> > >LIKE '%1%' was part of the index scan, while in reality it wasn't.  One
> > >of the reason why I tried to modify the qual while generating the path
> > >was to have the explain be clearer about what is really done.
> >
> > Yeah, I think that's a bit annoying - it'd be nice to make it clear
> > which quals were actually used to scan the index. In some cases it may
> > not be possible (e.g. in cases when the decision is done at runtime, not
> > while planning the query), but it'd be nice to show it when possible.
>
> Maybe we could somehow add some runtime information about ignored
> quals, similar to the "never executed" information for loops?

+1,
This sounds reasonable to me.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Marc Cousin
Date:
On 29/06/2019 00:23, Julien Rouhaud wrote:
> On Fri, Jun 28, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
>>> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:
>>>> I not only don't want that function in indxpath.c, I don't even want
>>>> it to be known/called from there.  If we need the ability for the index
>>>> AM to editorialize on the list of indexable quals (which I'm not very
>>>> convinced of yet), let's make an AM interface function to do it.
>>
>>> Wouldn't it be better to have a function that inspects a single qual and
>>> says whether it's "optimizable" or not? That could be part of the AM
>>> implementation, and we'd call it and it'd be us messing with the list.
>>
>> Uh ... we already determined that the qual is indexable (ie is a member
>> of the index's opclass), or allowed the index AM to derive an indexable
>> clause from it, so I'm not sure what you envision would happen
>> additionally there.  If I understand what Julien is concerned about
>> --- and I may not --- it's that the set of indexable clauses *as a whole*
>> may have or lack properties of interest.  So I'm thinking the answer
>> involves some callback that can do something to the whole list, not
>> qual-at-a-time.  We've already got facilities for the latter case.
>
> Yes, the root issue here is that with gin it's entirely possible that
> "WHERE sometable.col op value1" is way more efficient than "WHERE
> sometable.col op value1 AND sometable.col op value2", where both quals
> are determined indexable by the opclass.  The only way to avoid that
> is indeed to inspect the whole list, as done in this poor POC.
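The whole-list inspection being discussed could be sketched roughly like this (a hypothetical outline with an invented `prune_full_scan_quals` helper, not the POC's actual code):

```python
# Drop quals that would force a full index scan, but only if at least one
# qual that can use index entries remains; otherwise the full scan is
# unavoidable and the qual must stay so the index returns correct results.
def prune_full_scan_quals(quals, needs_full_scan):
    usable = [q for q in quals if not needs_full_scan(q)]
    return usable if usable else quals

quals = ["col LIKE '%verylongpattern%'", "col LIKE '%a%'"]
# Stand-in predicate for "extractQuery produced no searchable entries".
too_short = lambda q: "'%a%'" in q
print(prune_full_scan_quals(quals, too_short))  # only the long pattern remains
```

The dropped qual would then be enforced by the recheck in the bitmap heap scan, as in the proposed patch.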
>
> This is a problem actually hit in production, and as far as I know
> there's no easy way from the application POV to prevent unexpected
> slowdown.  Maybe Marc will have more details about the actual problem
> and how expensive such a case was compared to the normal ones.

Sorry for the delay...

Yes, quite easily, here is what we had (it's just a bit simplified, we have other criteria, but I think it shows the problem):

rh2=> explain analyze select * from account_employee where typeahead like '%albert%';
                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on account_employee  (cost=53.69..136.27 rows=734 width=666) (actual time=15.562..35.044 rows=8957 loops=1)
   Recheck Cond: (typeahead ~~ '%albert%'::text)
   Rows Removed by Index Recheck: 46
   Heap Blocks: exact=8919
   ->  Bitmap Index Scan on account_employee_site_typeahead_gin_idx  (cost=0.00..53.51 rows=734 width=0) (actual time=14.135..14.135 rows=9011 loops=1)
         Index Cond: (typeahead ~~ '%albert%'::text)
 Planning time: 0.224 ms
 Execution time: 35.389 ms
(8 rows)

rh2=> explain analyze select * from account_employee where typeahead like '%albert%' and typeahead like '%lo%';
                                                                           QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on account_employee  (cost=28358.38..28366.09 rows=67 width=666) (actual time=18210.109..18227.134 rows=1172 loops=1)
   Recheck Cond: ((typeahead ~~ '%albert%'::text) AND (typeahead ~~ '%lo%'::text))
   Rows Removed by Index Recheck: 7831
   Heap Blocks: exact=8919
   ->  Bitmap Index Scan on account_employee_site_typeahead_gin_idx  (cost=0.00..28358.37 rows=67 width=0) (actual time=18204.756..18204.756 rows=9011 loops=1)
         Index Cond: ((typeahead ~~ '%albert%'::text) AND (typeahead ~~ '%lo%'::text))
 Planning time: 0.288 ms
 Execution time: 18230.182 ms
(8 rows)


We noticed this because the application timed out for users searching for someone whose name was 2 characters long (it happens :)).

We reject such filters when it's the only criterion, as we know it's going to be slow, but ignoring it as a supplementary filter would be a bit weird.

Of course there is the possibility of filtering in two stages with a CTE, but that's not as great as having PostgreSQL do it itself.


By the way, while preparing this, I noticed that it seems that during this kind of index scan, the interrupt signal is masked
for a very long time. Control-C takes a very long while to cancel the query. But it's an entirely different problem :)

Regards


Attachment

Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Marc Cousin <cousinmarc@gmail.com> writes:
> By the way, while preparing this, I noticed that it seems that during this kind of index scan, the interrupt signal is masked
> for a very long time. Control-C takes a very long while to cancel the query. But it's an entirely different problem :)

Yeah, that seems like an independent problem/patch, but it's not obvious
where to fix it --- can you provide a self-contained test case?

Meanwhile, I looked at the v3 patch, and it seems like it might not be
too far from committable.  I think we should *not* let this get bogged
down in questions of whether EXPLAIN can report which index quals were
used or ignored.  That's a problem that's existed for decades in the
btree code, with more or less zero user complaints.

I do think v3 needs more attention to comments, for instance this
hunk is clearly falsifying the adjacent comment:

@@ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
     uint32        i;

     /* Non-default search modes add one "hidden" entry to each key */
-    if (searchMode != GIN_SEARCH_MODE_DEFAULT)
+    if (searchMode != GIN_SEARCH_MODE_DEFAULT &&
+        (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))
         nQueryValues++;
     key->nentries = nQueryValues;
     key->nuserentries = nUserQueryValues;

Also, I agree with Julien that this

+            so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;

probably needs to be

+            so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;
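The difference between the two assignments can be shown in isolation (a minimal sketch, not the GIN code itself):

```python
# With plain assignment, each scan key overwrites the flag, so a later key
# that needs no recheck silently drops an earlier key's requirement.  With
# OR-assignment the flag is sticky once any key sets it.
def forced_recheck_plain(keys_need_recheck):
    forced = False
    for needs in keys_need_recheck:
        forced = needs        # buggy: the last key wins
    return forced

def forced_recheck_or(keys_need_recheck):
    forced = False
    for needs in keys_need_recheck:
        forced |= needs       # correct: any key suffices
    return forced

print(forced_recheck_plain([True, False]))  # False: requirement lost
print(forced_recheck_or([True, False]))     # True: requirement preserved
```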


            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Thomas Munro
Date:
On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Meanwhile, I looked at the v3 patch, and it seems like it might not be
> too far from committable.  I think we should *not* let this get bogged
> down in questions of whether EXPLAIN can report which index quals were
> used or ignored.  That's a problem that's existed for decades in the
> btree code, with more or less zero user complaints.
>
> I do think v3 needs more attention to comments, for instance this
> hunk is clearly falsifying the adjacent comment:
>
> @@ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
>         uint32          i;
>
>         /* Non-default search modes add one "hidden" entry to each key */
> -       if (searchMode != GIN_SEARCH_MODE_DEFAULT)
> +       if (searchMode != GIN_SEARCH_MODE_DEFAULT &&
> +               (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))
>                 nQueryValues++;
>         key->nentries = nQueryValues;
>         key->nuserentries = nUserQueryValues;
>
> Also, I agree with Julien that this
>
> +                       so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;
>
> probably needs to be
>
> +                       so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;

Ping, Julien?  Based on the above, it looks like if we had a
last-minute patch addressing the above this could go directly to Ready
for Committer?  I will hold off moving this one to CF2 until my
morning.

-- 
Thomas Munro
https://enterprisedb.com



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Thu, Aug 1, 2019 at 8:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:
>
> On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Meanwhile, I looked at the v3 patch, and it seems like it might not be
> > too far from committable.  I think we should *not* let this get bogged
> > down in questions of whether EXPLAIN can report which index quals were
> > used or ignored.  That's a problem that's existed for decades in the
> > btree code, with more or less zero user complaints.
> >
> > I do think v3 needs more attention to comments, for instance this
> > hunk is clearly falsifying the adjacent comment:
> >
> > @@ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
> >         uint32          i;
> >
> >         /* Non-default search modes add one "hidden" entry to each key */
> > -       if (searchMode != GIN_SEARCH_MODE_DEFAULT)
> > +       if (searchMode != GIN_SEARCH_MODE_DEFAULT &&
> > +               (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))
> >                 nQueryValues++;
> >         key->nentries = nQueryValues;
> >         key->nuserentries = nUserQueryValues;
> >
> > Also, I agree with Julien that this
> >
> > +                       so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;
> >
> > probably needs to be
> >
> > +                       so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;
>
> Ping, Julien?  Based on the above, it looks like if we had a
> last-minute patch addressing the above this could go directly to Ready
> for Committer?  I will hold off moving this one to CF2 until my
> morning.

Attached v4 that should address all comments.

Attachment

Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Thu, Aug 1, 2019 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:
>
> On Thu, Aug 1, 2019 at 8:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> >
> > On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > Meanwhile, I looked at the v3 patch, and it seems like it might not be
> > > too far from committable.  I think we should *not* let this get bogged
> > > down in questions of whether EXPLAIN can report which index quals were
> > > used or ignored.  That's a problem that's existed for decades in the
> > > btree code, with more or less zero user complaints.
> > >
> > > I do think v3 needs more attention to comments, for instance this
> > > hunk is clearly falsifying the adjacent comment:
> > >
> > > @@ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
> > >         uint32          i;
> > >
> > >         /* Non-default search modes add one "hidden" entry to each key */
> > > -       if (searchMode != GIN_SEARCH_MODE_DEFAULT)
> > > +       if (searchMode != GIN_SEARCH_MODE_DEFAULT &&
> > > +               (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))
> > >                 nQueryValues++;
> > >         key->nentries = nQueryValues;
> > >         key->nuserentries = nUserQueryValues;
> > >
> > > Also, I agree with Julien that this
> > >
> > > +                       so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;
> > >
> > > probably needs to be
> > >
> > > +                       so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;
> >
> > Ping, Julien?  Based on the above, it looks like if we had a
> > last-minute patch addressing the above this could go directly to Ready
> > for Committer?  I will hold off moving this one to CF2 until my
> > morning.
>
> Attached v4 that should address all comments.

And of course, thanks a lot!  Sorry for the message sent quite
precipitately, I'm also dealing with plumbing issues this morning :(



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Julien Rouhaud <rjuju123@gmail.com> writes:
> Attached v4 that should address all comments.

Eyeing this a bit further ... doesn't scanPendingInsert also need
to honor so->forcedRecheck?  Something along the lines of

-            tbm_add_tuples(tbm, &pos.item, 1, recheck);
+            tbm_add_tuples(tbm, &pos.item, 1, recheck | so->forcedRecheck);

at line 1837?  (Obviously, there's more than one way you could
write that.)

I'm also not exactly satisfied with the new comments --- they aren't
conveying much, and the XXX in one of them is confusing; does that
mean you're unsure that the comment is correct?

The added test case seems a bit unsatisfying as well, in that it
fails to retrieve any rows.  It's not very clear what it's
trying to test.

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Julien Rouhaud <rjuju123@gmail.com> writes:
> > Attached v4 that should address all comments.
>
> Eyeing this a bit further ... doesn't scanPendingInsert also need
> to honor so->forcedRecheck?  Something along the lines of
>
> -                       tbm_add_tuples(tbm, &pos.item, 1, recheck);
> +                       tbm_add_tuples(tbm, &pos.item, 1, recheck | so->forcedRecheck);
>
> at line 1837?  (Obviously, there's more than one way you could
> write that.)

I think so.

> I'm also not exactly satisfied with the new comments --- they aren't
> conveying much, and the XXX in one of them is confusing; does that
> mean you're unsure that the comment is correct?

That's actually not my code, and I'm not familiar enough with GIN code
to do much better :(

For the XXX, IIUC Nikita added this comment as room for future
improvement, as stated in his initial mail:

>> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL
>>    if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL
>>    because we need to skip NULLs in GIN_SEARCH_MODE_ALL.  Simplest example here
>>    is "array @> '{}'": triconsistent() returns GIN_TRUE, recheck is not forced,
>>    and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked.  Maybe
>>    it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.

> The added test case seems a bit unsatisfying as well, in that it
> fails to retrieve any rows.  It's not very clear what it's
> trying to test.

Yes, I used the same tests as before, but with this approach there's
no way to distinguish whether a full index scan was performed, so the
explain is quite useless.  However, testing both cases should still
have value by exercising the newly added code path.



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Julien Rouhaud <rjuju123@gmail.com> writes:
> On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Eyeing this a bit further ... doesn't scanPendingInsert also need
>> to honor so->forcedRecheck?  Something along the lines of

> I think so.

Yeah, it does --- the updated pg_trgm test attached fails if it doesn't.

Also, I found that Alexander's concern upthread:

>> What would happen when a two-column index has a GIN_SEARCH_MODE_DEFAULT
>> scan on the first column and GIN_SEARCH_MODE_ALL on the second?  I think
>> even if triconsistent() for the second column returns GIN_TRUE, we still
>> need to recheck to verify the second column is not NULL.

is entirely on-point.  This patch generates the wrong answer in the
case I added to gin.sql below.  (The expected output was generated
with HEAD and seems correct, but with these code changes, we incorrectly
report the row with NULL as matching.  So I expect the cfbot is going
to complain about the patch in this state.)

While I've not attempted to fix that here, I wonder whether we shouldn't
fix it by just forcing forcedRecheck to true in any case where we discard
an ALL qualifier.  That would get rid of all the ugliness around
ginFillScanKey, which I'd otherwise really want to refactor to avoid
this business of adding and then removing a scan key.  It would also
get rid of the bit about "XXX Need to use ALL mode instead of EVERYTHING
to skip NULLs if ALL mode has been seen", which aside from being ugly
seems to be dead wrong for multi-column-index cases.
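For concreteness, here is a toy simulation of that approach (plain Python with
invented names; not PostgreSQL code): the bitmap index scan evaluates only the
quals the index can handle, and whenever an ALL qual is discarded, the heap
recheck re-evaluates every original qual, which also takes care of NULLs:

```python
def bitmap_scan(rows, indexable, discarded):
    """rows: (tid, value) pairs; value None models a SQL NULL."""
    # Index phase: only quals the index can evaluate; NULLs are not
    # indexed for these operators, so they never come back from here.
    candidates = [(tid, v) for tid, v in rows
                  if v is not None and all(q(v) for q in indexable)]
    if discarded:
        # An ALL-mode qual was dropped from the index scan, so force a
        # recheck that re-evaluates every original qual on the tuple.
        candidates = [(tid, v) for tid, v in candidates
                      if all(q(v) for q in indexable + discarded)]
    return candidates

rows = [(1, 'qwertybar'), (2, 'qwerty99'), (3, 'foo'), (4, None)]
indexable = [lambda v: 'qwerty' in v]   # like '%qwerty%'
discarded = [lambda v: 'a' in v]        # like '%a%': too short to index
assert bitmap_scan(rows, indexable, discarded) == [(1, 'qwertybar')]
```

The point is that correctness never depends on the index checking the ALL
qual; dropping it only changes how many heap tuples get rechecked.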

BTW, it's not particularly the fault of this patch, but: what does it
even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?
Should we decide to treat that as an error?  It doesn't look to me like
any of the in-tree opclasses will return such a case, and I'm not at
all convinced what the GIN scan code would actually do with it, except
that I doubt it matches the documentation.

Setting this back to Waiting on Author.

            regards, tom lane

diff --git a/contrib/pg_trgm/expected/pg_trgm.out b/contrib/pg_trgm/expected/pg_trgm.out
index b3e709f..3e5ba9b 100644
--- a/contrib/pg_trgm/expected/pg_trgm.out
+++ b/contrib/pg_trgm/expected/pg_trgm.out
@@ -3498,6 +3498,68 @@ select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';
   1000
 (1 row)

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+    19
+(1 row)
+
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+                               QUERY PLAN
+-------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+    19
+(1 row)
+
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on t_test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on t_trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+     1
+(1 row)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+     1
+(1 row)
+
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/contrib/pg_trgm/sql/pg_trgm.sql b/contrib/pg_trgm/sql/pg_trgm.sql
index 08459e6..dcfd3c2 100644
--- a/contrib/pg_trgm/sql/pg_trgm.sql
+++ b/contrib/pg_trgm/sql/pg_trgm.sql
@@ -55,6 +55,22 @@ select t,similarity(t,'gwertyu0988') as sml from test_trgm where t % 'gwertyu098
 select t,similarity(t,'gwertyu1988') as sml from test_trgm where t % 'gwertyu1988' order by sml desc, t;
 select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index b18ae2b..65ed8b2 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -1814,7 +1814,7 @@ scanPendingInsert(IndexScanDesc scan, TIDBitmap *tbm, int64 *ntids)
          * consistent functions.
          */
         oldCtx = MemoryContextSwitchTo(so->tempCtx);
-        recheck = false;
+        recheck = so->forcedRecheck;
         match = true;

         for (i = 0; i < so->nkeys; i++)
@@ -1888,9 +1888,14 @@ gingetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
     {
         CHECK_FOR_INTERRUPTS();

+        /* Get next item ... */
         if (!scanGetItem(scan, iptr, &iptr, &recheck))
             break;

+        /* ... apply forced recheck if required ... */
+        recheck |= so->forcedRecheck;
+
+        /* ... and transfer it into bitmap */
         if (ItemPointerIsLossyPage(&iptr))
             tbm_add_page(tbm, ItemPointerGetBlockNumber(&iptr));
         else
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index 74d9821..3d5c7e3 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -140,8 +140,12 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
     uint32        nUserQueryValues = nQueryValues;
     uint32        i;

-    /* Non-default search modes add one "hidden" entry to each key */
-    if (searchMode != GIN_SEARCH_MODE_DEFAULT)
+    /*
+     * Non-default search modes add one "hidden" entry to each key, except
+     * for ALL keys with no entries, which only need to call triConsistentFn
+     */
+    if (searchMode != GIN_SEARCH_MODE_DEFAULT &&
+        !(searchMode == GIN_SEARCH_MODE_ALL && (nQueryValues <= 0)))
         nQueryValues++;
     key->nentries = nQueryValues;
     key->nuserentries = nUserQueryValues;
@@ -265,6 +269,7 @@ ginNewScanKey(IndexScanDesc scan)
     GinScanOpaque so = (GinScanOpaque) scan->opaque;
     int            i;
     bool        hasNullQuery = false;
+    bool        hasSearchAllMode = false;
     MemoryContext oldCtx;

     /*
@@ -286,6 +291,7 @@ ginNewScanKey(IndexScanDesc scan)
         palloc(so->allocentries * sizeof(GinScanEntry));

     so->isVoidRes = false;
+    so->forcedRecheck = false;

     for (i = 0; i < scan->numberOfKeys; i++)
     {
@@ -371,6 +377,18 @@ ginNewScanKey(IndexScanDesc scan)
                        skey->sk_argument, nQueryValues,
                        queryValues, categories,
                        partial_matches, extra_data);
+
+        if (searchMode == GIN_SEARCH_MODE_ALL && nQueryValues <= 0)
+        {
+            /*
+             * Don't emit ALL key with no entries, check only whether
+             * unconditional recheck is needed.
+             */
+            GinScanKey    key = &so->keys[--so->nkeys];
+
+            hasSearchAllMode = true;
+            so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;
+        }
     }

     /*
@@ -384,6 +402,13 @@ ginNewScanKey(IndexScanDesc scan)
                        InvalidStrategy, GIN_SEARCH_MODE_EVERYTHING,
                        (Datum) 0, 0,
                        NULL, NULL, NULL, NULL);
+
+        /*
+         * XXX Need to use ALL mode instead of EVERYTHING to skip NULLs if ALL
+         * mode has been seen.
+         */
+        if (hasSearchAllMode)
+            so->keys[so->nkeys - 1].scanEntry[0]->searchMode = GIN_SEARCH_MODE_ALL;
     }

     /*
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 7eba59e..1a9d76d 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -6326,6 +6326,16 @@ gincost_pattern(IndexOptInfo *index, int indexcol,
         return false;
     }

+    if (nentries <= 0 && searchMode == GIN_SEARCH_MODE_ALL)
+    {
+        /*
+         * GIN does not emit scan entries for empty GIN_SEARCH_MODE_ALL keys,
+         * and it can avoid full index scan if there are entries from other
+         * keys, so we can skip setting of 'haveFullScan' flag.
+         */
+        return true;
+    }
+
     for (i = 0; i < nentries; i++)
     {
         /*
@@ -6709,7 +6719,7 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
         return;
     }

-    if (counts.haveFullScan || indexQuals == NIL)
+    if (counts.haveFullScan || indexQuals == NIL || counts.searchEntries <= 0)
     {
         /*
          * Full index scan will be required.  We treat this as if every key in
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index afb3e15..b0251f7 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -359,6 +359,7 @@ typedef struct GinScanOpaqueData
     MemoryContext keyCtx;        /* used to hold key and entry data */

     bool        isVoidRes;        /* true if query is unsatisfiable */
+    bool        forcedRecheck;    /* must recheck all returned tuples */
 } GinScanOpaqueData;

 typedef GinScanOpaqueData *GinScanOpaque;
diff --git a/src/test/regress/expected/gin.out b/src/test/regress/expected/gin.out
index a3911a6..fb0d29c 100644
--- a/src/test/regress/expected/gin.out
+++ b/src/test/regress/expected/gin.out
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.
 -- Create and populate a test table with a GIN index.
 create table gin_test_tbl(i int4[]) with (autovacuum_enabled = off);
@@ -35,3 +35,31 @@ insert into gin_test_tbl select array[1, 2, g] from generate_series(1, 1000) g;
 insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;
 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl select array[100,g], array[200,g]
+from generate_series(1, 10) g;
+insert into t_gin_test_tbl values(array[0,0], null);
+set enable_seqscan = off;
+explain
+select * from t_gin_test_tbl where array[0] <@ i;
+                                      QUERY PLAN
+--------------------------------------------------------------------------------------
+ Bitmap Heap Scan on t_gin_test_tbl  (cost=12.03..20.49 rows=4 width=64)
+   Recheck Cond: ('{0}'::integer[] <@ i)
+   ->  Bitmap Index Scan on t_gin_test_tbl_i_j_idx  (cost=0.00..12.03 rows=4 width=0)
+         Index Cond: (i @> '{0}'::integer[])
+(4 rows)
+
+select * from t_gin_test_tbl where array[0] <@ i;
+   i   | j
+-------+---
+ {0,0} |
+(1 row)
+
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;
+ i | j
+---+---
+(0 rows)
+
diff --git a/src/test/regress/sql/gin.sql b/src/test/regress/sql/gin.sql
index c566e9b..aaf9c19 100644
--- a/src/test/regress/sql/gin.sql
+++ b/src/test/regress/sql/gin.sql
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.

 -- Create and populate a test table with a GIN index.
@@ -34,3 +34,15 @@ insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;

 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl select array[100,g], array[200,g]
+from generate_series(1, 10) g;
+insert into t_gin_test_tbl values(array[0,0], null);
+set enable_seqscan = off;
+explain
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Julien Rouhaud <rjuju123@gmail.com> writes:
> > On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> Eyeing this a bit further ... doesn't scanPendingInsert also need
> >> to honor so->forcedRecheck?  Something along the lines of
>
> > I think so.
>
> Yeah, it does --- the updated pg_trgm test attached fails if it doesn't.
>
> Also, I found that Alexander's concern upthread:
>
> >> What would happen when two-columns index have GIN_SEARCH_MODE_DEFAULT
> >> scan on first column and GIN_SEARCH_MODE_ALL on second?  I think even
> >> if triconsistent() for second column returns GIN_TRUE, we still need
> >> to recheck to verify second columns is not NULL.
>
> is entirely on-point.  This patch generates the wrong answer in the
> case I added to gin.sql below.  (The expected output was generated
> with HEAD and seems correct, but with these code changes, we incorrectly
> report the row with NULL as matching.  So I expect the cfbot is going
> to complain about the patch in this state.)
>
> While I've not attempted to fix that here, I wonder whether we shouldn't
> fix it by just forcing forcedRecheck to true in any case where we discard
> an ALL qualifier.  That would get rid of all the ugliness around
> ginFillScanKey, which I'd otherwise really want to refactor to avoid
> this business of adding and then removing a scan key.  It would also
> get rid of the bit about "XXX Need to use ALL mode instead of EVERYTHING
> to skip NULLs if ALL mode has been seen", which aside from being ugly
> seems to be dead wrong for multi-column-index cases.

+1 for setting forcedRecheck in any case we discard ALL qualifier.
ISTM, real life number of cases we can skip recheck here is
negligible.  And it doesn't justify complexity.

> BTW, it's not particularly the fault of this patch, but: what does it
> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?

It might mean we would like to see all the results, which don't
contain given key.

> Should we decide to treat that as an error?  It doesn't look to me like
> any of the in-tree opclasses will return such a case, and I'm not at
> all convinced what the GIN scan code would actually do with it, except
> that I doubt it matches the documentation.

I think tsvector_ops behaves so.  See gin_extract_tsquery().

        /*
         * If the query doesn't have any required positive matches (for
         * instance, it's something like '! foo'), we have to do a full index
         * scan.
         */
        if (tsquery_requires_match(item))
            *searchMode = GIN_SEARCH_MODE_DEFAULT;
        else
            *searchMode = GIN_SEARCH_MODE_ALL;
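A minimal sketch of that rule (hypothetical Python over an invented tuple
representation, not the real tsquery node format): DEFAULT mode is usable only
when the query requires at least one positive lexeme match, mirroring what
tsquery_requires_match() decides:

```python
def requires_match(node):
    """Does this boolean query require at least one positive entry match?"""
    kind = node[0]
    if kind == 'val':    # a plain lexeme must be present
        return True
    if kind == 'not':    # '!x' can match items lacking x entirely
        return False
    if kind == 'and':    # one required branch is enough
        return requires_match(node[1]) or requires_match(node[2])
    if kind == 'or':     # both branches must require a match
        return requires_match(node[1]) and requires_match(node[2])
    raise ValueError(kind)

def search_mode(query):
    return 'DEFAULT' if requires_match(query) else 'ALL'

# 'foo & !bar' still requires 'foo', but '!foo' forces a full index scan.
assert search_mode(('and', ('val', 'foo'), ('not', ('val', 'bar')))) == 'DEFAULT'
assert search_mode(('not', ('val', 'foo'))) == 'ALL'
```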

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> While I've not attempted to fix that here, I wonder whether we shouldn't
>> fix it by just forcing forcedRecheck to true in any case where we discard
>> an ALL qualifier.

> +1 for setting forcedRecheck in any case we discard ALL qualifier.
> ISTM, real life number of cases we can skip recheck here is
> negligible.  And it doesn't justify complexity.

Yeah, that was pretty much what I was thinking --- by the time we got
it fully right considering nulls and multicolumn indexes, the cases
where not rechecking could actually do something useful would be
pretty narrow.  And a bitmap heap scan is always going to have to
visit the heap, IIRC, so how much could skipping the recheck really
save?

>> BTW, it's not particularly the fault of this patch, but: what does it
>> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?

> It might mean we would like to see all the results, which don't
> contain given key.

Ah, right, I forgot that the consistent-fn might look at the match
results.

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Nikita Glukhov
Date:
Attached 6th version of the patches.

On 01.08.2019 22:28, Tom Lane wrote:
> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
>> On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> While I've not attempted to fix that here, I wonder whether we shouldn't
>>> fix it by just forcing forcedRecheck to true in any case where we discard
>>> an ALL qualifier.
>
>> +1 for setting forcedRecheck in any case we discard ALL qualifier.
>> ISTM, real life number of cases we can skip recheck here is
>> negligible.  And it doesn't justify complexity.
>
> Yeah, that was pretty much what I was thinking --- by the time we got
> it fully right considering nulls and multicolumn indexes, the cases
> where not rechecking could actually do something useful would be
> pretty narrow.  And a bitmap heap scan is always going to have to
> visit the heap, IIRC, so how much could skipping the recheck really
> save?
I have simplified patch #1 by setting forcedRecheck for all discarded ALL quals.
(This solution is very close to the earliest unpublished version of the patch.)


The more accurate recheck-forcing logic was moved into patch #2 (which also
fixes multicolumn indexes).  That patch additionally contains the
ginFillScanKey() refactoring and a new internal mode, GIN_SEARCH_MODE_NOT_NULL,
which is used only for GinScanKey.xxxConsistentFn initialization and is
transformed into GIN_SEARCH_MODE_ALL before GinScanEntry initialization.


The cost estimation seems to be correct for both patch #1 and patch #2, and
has been left untouched since v05.


>>> BTW, it's not particularly the fault of this patch, but: what does it
>>> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?
>
>> It might mean we would like to see all the results, which don't
>> contain given key.
>
> Ah, right, I forgot that the consistent-fn might look at the match
> results.
I also decided to go further and tried to optimize (in patch #3) the case of
GIN_SEARCH_MODE_ALL with a nonzero number of keys.

A full GIN scan can be avoided in queries like this contrib/intarray query:
"arr @@ '1' AND arr @@ '!2'" (search for arrays containing 1 and not containing 2).

Here we have two keys:
- key '1' with GIN_SEARCH_MODE_DEFAULT
- key '2' with GIN_SEARCH_MODE_ALL

Key '2' requires a full scan, which can be avoided with a forced recheck.

This query is equivalent to the single-qual query "a @@ '1 & !2'", which
emits only one GIN key ('1') with recheck.
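The equivalence can be sketched in a few lines (toy Python, not PostgreSQL
code): the index phase consults only the DEFAULT key '1', and a forced recheck
applies the dropped '!2' condition afterwards, giving the same rows as the
single-qual form:

```python
arrays = [[1], [2], [1, 2], [3], [1, 3]]

# Index phase: only the DEFAULT key '1' drives the scan.
candidates = [a for a in arrays if 1 in a]

# Forced recheck: re-evaluate both quals, including the dropped '!2'.
result = [a for a in candidates if 1 in a and 2 not in a]

assert result == [[1], [1, 3]]
# Same answer as the single-qual form "a @@ '1 & !2'":
assert result == [a for a in arrays if 1 in a and 2 not in a]
```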


Below is example for contrib/intarray operator @@:

=# CREATE EXTENSION intarray;
=# CREATE TABLE t (a int[]);
=# INSERT INTO t SELECT ARRAY[i] FROM generate_series(1, 1000000) i;
=# CREATE INDEX ON t USING gin (a gin__int_ops);
=# SET enable_seqscan = OFF;

-- master
=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1' AND a @@ '!2';
                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on t  (cost=16000095.45..16007168.16 rows=5019 width=24) (actual time=66.955..66.956 rows=1 loops=1)
   Recheck Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))
   Heap Blocks: exact=1
   Buffers: shared hit=6816
   ->  Bitmap Index Scan on t_a_idx  (cost=0.00..16000094.19 rows=5019 width=0) (actual time=66.950..66.950 rows=1 loops=1)
         Index Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))
         Buffers: shared hit=6815
 Planning Time: 0.086 ms
 Execution Time: 67.076 ms
(9 rows)

-- equivalent single-qual query
=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1 & !2';
                                                    QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on t  (cost=78.94..7141.57 rows=5025 width=24) (actual time=0.019..0.019 rows=1 loops=1)
   Recheck Cond: (a @@ '1 & !2'::query_int)
   Heap Blocks: exact=1
   Buffers: shared hit=8
   ->  Bitmap Index Scan on t_a_idx  (cost=0.00..77.68 rows=5025 width=0) (actual time=0.014..0.014 rows=1 loops=1)
         Index Cond: (a @@ '1 & !2'::query_int)
         Buffers: shared hit=7
 Planning Time: 0.056 ms
 Execution Time: 0.039 ms
(9 rows)

-- with patch #3
=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1' AND a @@ '!2';
                                                    QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on t  (cost=75.45..7148.16 rows=5019 width=24) (actual time=0.019..0.020 rows=1 loops=1)
   Recheck Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))
   Heap Blocks: exact=1
   Buffers: shared hit=6
   ->  Bitmap Index Scan on t_a_idx  (cost=0.00..74.19 rows=5019 width=0) (actual time=0.011..0.012 rows=1 loops=1)
         Index Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))
         Buffers: shared hit=5
 Planning Time: 0.059 ms
 Execution Time: 0.040 ms
(9 rows)




Patch #3 again contains a similar ugly solution -- we have to remove
already-initialized GinScanKeys along with their GinScanEntries.  GinScanEntries
can be shared, so reference counting was added.


The cost-estimation changes in patch #3 are also questionable:
GinQualCounts are simply not incremented when the haveFullScan flag is set,
because the counters will be overwritten by the caller anyway.


-- 
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment

Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Nikita Glukhov <n.gluhov@postgrespro.ru> writes:
> Attached 6th version of the patches.

I spent a bit of time looking at these.  Attached is a proposed revision
of the 0001 patch, with some minor changes:

* I didn't adopt your move of the "Non-default modes require the index
to have placeholders" test to after the stanza handling zero-key cases.
I think that move would be correct for 0001 as it stands, but it's far
less clear that it's still safe after 0002/0003 or whatever variant of
those we end up with.  We should leave that code where it is for now,
enforcing the v1-index requirement for all non-default search modes, and
reconsider after the dust settles.  (Or if we never do reconsider, it
won't be a big deal --- I doubt many v0 indexes are still out there.)

* Rearranged the selfuncs.c logic to match ginNewScanKey better.

* Cleaned up my own sloppiness in the new gin.sql test cases.

I think this would be committable as it stands, except that replacing
an ALL scan with an EVERYTHING scan could be a performance regression
if the index contains many null items.  We need to do something about
that before committing.

Unfortunately I'm not sold on either 0002 or 0003 as they stand;
they seem overly complicated, I'm not convinced they're correct,
and you haven't really provided examples showing that all this
extra complexity is worthwhile.

In particular:

* I don't really like the whole business about detecting a constant-true
ALL condition by applying the consistentFn at this stage.  That just
feels wrong to me: the consistentFn should be expecting some data about
the index contents and we don't have any to give.  If it works, it's
accidental, and probably it's fragile.  Moreover, the only gain we'd get
from it is maybe not having to set forcedRecheck, and that doesn't look
to me like it would make all that much difference.

* The code seems to be assuming that a zero-key ALL query is necessarily
precisely equivalent to a NOT NULL condition.  This seems flat out wrong.
At the very least it's possible for such a query to be constant-false,
rather than constant-true-for-non-null-items.  Admittedly, that would
suggest rather stupid coding of the opclass query-extract function, as
it could have reported a constant-false condition in an optimizable way
rather than an unoptimizable one.  But we aren't entitled to assume that
the opclass isn't being dumb; the API says it can do this, so it can.
I think we have to check the scankey regardless, either in the index or
via forcedRecheck.
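A toy example (invented Python, not opclass code) of why the recheck saves us
even against a dumb opclass: suppose extractQuery reports a constant-false
condition as a zero-key ALL search.  The index then returns every non-null
item, but rechecking the original qual still yields the right (empty) answer:

```python
def evaluate(rows, index_mode_all, original_qual):
    """rows: heap values (None models NULL); zero-key ALL returns all non-nulls."""
    candidates = [v for v in rows if v is not None] if index_mode_all else []
    # forcedRecheck: the original operator is re-evaluated on each tuple,
    # so even a constant-false qual mis-reported as ALL stays correct.
    return [v for v in candidates if original_qual(v)]

rows = [[1], [2], None]
constant_false = lambda v: False   # a "stupid" qual reported as zero-key ALL
assert evaluate(rows, True, constant_false) == []
```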

* I really dislike the refcount business in 0003.  It's not clear why we
need that or whether it's correct, and I think it would be unmaintainable
even if it were documented (which it isn't).


ISTM we could get where we need to go in a much simpler way.  A couple
of alternative ideas:

* During ginNewScanKey, separate out ALL-mode queries and don't add them
to the scankey list immediately.  After examining all the keys, if we
found any normal (DEFAULT or INCLUDE_EMPTY) queries, then go ahead and
add in the ALL-mode queries so that we can check them in the index, but
don't cause a full scan.  Otherwise, discard all the ALL-mode queries
and emit a NOT_NULL scan key, setting forcedRecheck so that we apply the
filtering that way.

* Or we could just discard ALL-mode queries on sight, setting
forcedRecheck always, and then emit NOT_NULL if we had any
of those and no normal queries.
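The first alternative can be sketched like this (hypothetical Python; the
function and mode names are invented, not GIN internals):

```python
def assemble_scan_keys(queries):
    """queries: (qual, mode) pairs, mode in {'DEFAULT', 'INCLUDE_EMPTY', 'ALL'}."""
    normal = [q for q in queries if q[1] != 'ALL']
    all_mode = [q for q in queries if q[1] == 'ALL']
    forced_recheck = False
    if normal:
        # Some key constrains the scan anyway, so checking the ALL quals
        # in the index costs little extra: keep them all.
        keys = normal + all_mode
    else:
        # Only ALL quals: discard them, scan NOT_NULL, and filter the
        # discarded conditions via a forced heap recheck.
        keys = [('NOT_NULL', 'NOT_NULL')] if all_mode else []
        forced_recheck = bool(all_mode)
    return keys, forced_recheck

keys, recheck = assemble_scan_keys([("t ~~ '%qwerty%'", 'DEFAULT'),
                                    ("t ~~ '%qw%'", 'ALL')])
assert len(keys) == 2 and recheck is False

keys, recheck = assemble_scan_keys([("t ~~ '%qw%'", 'ALL')])
assert keys == [('NOT_NULL', 'NOT_NULL')] and recheck is True
```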

The thing that seems hard to predict here is whether it is worth tracking
the presence of additional keys in the index to avoid a recheck in the
heap.  It's fairly easy to imagine that for common keys, avoiding the
recheck would be a net loss because it requires more work in the index.
If we had statistics about key counts, which of course we don't, you
could imagine making this decision dynamically depending on whether an
ALL query is asking about common or uncommon keys.

BTW --- any way you slice this, it seems like we'd end up with a situation
where we never execute an ALL query against the index in the way we do
now, meaning that the relevant code in the scanning logic is dead and
could be removed.  Given that, we don't really need a new NOT_NULL search
mode; we could just redefine what ALL mode actually does at execution.
This would be good IMO, because it's not obvious what the difference
between ALL and NOT_NULL modes is anyway.

            regards, tom lane

diff --git a/contrib/pg_trgm/expected/pg_trgm.out b/contrib/pg_trgm/expected/pg_trgm.out
index b3e709f..3e5ba9b 100644
--- a/contrib/pg_trgm/expected/pg_trgm.out
+++ b/contrib/pg_trgm/expected/pg_trgm.out
@@ -3498,6 +3498,68 @@ select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';
   1000
 (1 row)

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+    19
+(1 row)
+
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+                               QUERY PLAN
+-------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+    19
+(1 row)
+
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on t_test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on t_trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+     1
+(1 row)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+     1
+(1 row)
+
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/contrib/pg_trgm/sql/pg_trgm.sql b/contrib/pg_trgm/sql/pg_trgm.sql
index 08459e6..dcfd3c2 100644
--- a/contrib/pg_trgm/sql/pg_trgm.sql
+++ b/contrib/pg_trgm/sql/pg_trgm.sql
@@ -55,6 +55,22 @@ select t,similarity(t,'gwertyu0988') as sml from test_trgm where t % 'gwertyu098
 select t,similarity(t,'gwertyu1988') as sml from test_trgm where t % 'gwertyu1988' order by sml desc, t;
 select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index b18ae2b..65ed8b2 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -1814,7 +1814,7 @@ scanPendingInsert(IndexScanDesc scan, TIDBitmap *tbm, int64 *ntids)
          * consistent functions.
          */
         oldCtx = MemoryContextSwitchTo(so->tempCtx);
-        recheck = false;
+        recheck = so->forcedRecheck;
         match = true;

         for (i = 0; i < so->nkeys; i++)
@@ -1888,9 +1888,14 @@ gingetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
     {
         CHECK_FOR_INTERRUPTS();

+        /* Get next item ... */
         if (!scanGetItem(scan, iptr, &iptr, &recheck))
             break;

+        /* ... apply forced recheck if required ... */
+        recheck |= so->forcedRecheck;
+
+        /* ... and transfer it into bitmap */
         if (ItemPointerIsLossyPage(&iptr))
             tbm_add_page(tbm, ItemPointerGetBlockNumber(&iptr));
         else
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index 74d9821..7b8de10 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -286,6 +286,7 @@ ginNewScanKey(IndexScanDesc scan)
         palloc(so->allocentries * sizeof(GinScanEntry));

     so->isVoidRes = false;
+    so->forcedRecheck = false;

     for (i = 0; i < scan->numberOfKeys; i++)
     {
@@ -333,16 +334,28 @@ ginNewScanKey(IndexScanDesc scan)
         if (searchMode != GIN_SEARCH_MODE_DEFAULT)
             hasNullQuery = true;

-        /*
-         * In default mode, no keys means an unsatisfiable query.
-         */
+        /* Special cases for queries that contain no keys */
         if (queryValues == NULL || nQueryValues <= 0)
         {
             if (searchMode == GIN_SEARCH_MODE_DEFAULT)
             {
+                /* In default mode, no keys means an unsatisfiable query */
                 so->isVoidRes = true;
                 break;
             }
+            else if (searchMode == GIN_SEARCH_MODE_ALL)
+            {
+                /*
+                 * The query probably matches all non-null items, but rather
+                 * than scanning the index in ALL mode, we use forced rechecks
+                 * to verify matches of this scankey.  This wins if there are
+                 * any non-ALL scankeys; otherwise we end up adding an
+                 * EVERYTHING scankey below.
+                 */
+                so->forcedRecheck = true;
+                continue;
+            }
+
             nQueryValues = 0;    /* ensure sane value */
         }

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 7eba59e..5e0a922 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -6320,10 +6320,24 @@ gincost_pattern(IndexOptInfo *index, int indexcol,
                          PointerGetDatum(&nullFlags),
                          PointerGetDatum(&searchMode));

-    if (nentries <= 0 && searchMode == GIN_SEARCH_MODE_DEFAULT)
+    /* Special cases for queries that contain no keys */
+    if (nentries <= 0)
     {
-        /* No match is possible */
-        return false;
+        if (searchMode == GIN_SEARCH_MODE_DEFAULT)
+        {
+            /* In default mode, no keys means an unsatisfiable query */
+            return false;
+        }
+        else if (searchMode == GIN_SEARCH_MODE_ALL)
+        {
+            /*
+             * ginNewScanKey() does not emit scankeys for a key-less ALL
+             * query.  Instead it will emit an EVERYTHING key, but only if
+             * there are no other regular keys.  We don't know that yet, so
+             * postpone setting the haveFullScan flag.
+             */
+            return true;
+        }
     }

     for (i = 0; i < nentries; i++)
@@ -6485,6 +6499,10 @@ gincost_scalararrayopexpr(PlannerInfo *root,
             /* We ignore array elements that are unsatisfiable patterns */
             numPossible++;

+            /* If no regular scan keys, assume an EVERYTHING scan is needed */
+            if (elemcounts.searchEntries == 0)
+                elemcounts.haveFullScan = true;
+
             if (elemcounts.haveFullScan)
             {
                 /*
@@ -6709,6 +6727,10 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
         return;
     }

+    /* If no regular scan keys, assume an EVERYTHING scan is needed */
+    if (counts.searchEntries == 0)
+        counts.haveFullScan = true;
+
     if (counts.haveFullScan || indexQuals == NIL)
     {
         /*
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 78fcd82..9d2ee3a 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -359,6 +359,7 @@ typedef struct GinScanOpaqueData
     MemoryContext keyCtx;        /* used to hold key and entry data */

     bool        isVoidRes;        /* true if query is unsatisfiable */
+    bool        forcedRecheck;    /* must recheck all returned tuples */
 } GinScanOpaqueData;

 typedef GinScanOpaqueData *GinScanOpaque;
diff --git a/src/test/regress/expected/gin.out b/src/test/regress/expected/gin.out
index a3911a6..5ba96fa 100644
--- a/src/test/regress/expected/gin.out
+++ b/src/test/regress/expected/gin.out
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.
 -- Create and populate a test table with a GIN index.
 create table gin_test_tbl(i int4[]) with (autovacuum_enabled = off);
@@ -35,3 +35,32 @@ insert into gin_test_tbl select array[1, 2, g] from generate_series(1, 1000) g;
 insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;
 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl select array[100,g], array[200,g]
+from generate_series(1, 10) g;
+insert into t_gin_test_tbl values(array[0,0], null);
+set enable_seqscan = off;
+explain (costs off)
+select * from t_gin_test_tbl where array[0] <@ i;
+                    QUERY PLAN
+---------------------------------------------------
+ Bitmap Heap Scan on t_gin_test_tbl
+   Recheck Cond: ('{0}'::integer[] <@ i)
+   ->  Bitmap Index Scan on t_gin_test_tbl_i_j_idx
+         Index Cond: (i @> '{0}'::integer[])
+(4 rows)
+
+select * from t_gin_test_tbl where array[0] <@ i;
+   i   | j
+-------+---
+ {0,0} |
+(1 row)
+
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;
+ i | j
+---+---
+(0 rows)
+
+reset enable_seqscan;
diff --git a/src/test/regress/sql/gin.sql b/src/test/regress/sql/gin.sql
index c566e9b..f98fb7e 100644
--- a/src/test/regress/sql/gin.sql
+++ b/src/test/regress/sql/gin.sql
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.

 -- Create and populate a test table with a GIN index.
@@ -34,3 +34,16 @@ insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;

 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl select array[100,g], array[200,g]
+from generate_series(1, 10) g;
+insert into t_gin_test_tbl values(array[0,0], null);
+set enable_seqscan = off;
+explain (costs off)
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;
+reset enable_seqscan;

Re: Avoid full GIN index scan when possible

From
Alvaro Herrera
Date:
On 2019-Aug-07, Tom Lane wrote:

> I think this would be committable as it stands, except that replacing
> an ALL scan with an EVERYTHING scan could be a performance regression
> if the index contains many null items.  We need to do something about
> that before committing.

Nikita, any word on getting this change done?

> Unfortunately I'm not sold on either 0002 or 0003 as they stand;
> they seem overly complicated, I'm not convinced they're correct,
> and you haven't really provided examples showing that all this
> extra complexity is worthwhile.

I suppose we should call ourselves satisfied if we get 0001 done during
this cycle (or at least this commitfest).  Further refinement can be had
in the future, as needed -- even within pg13, if Nikita or anybody else
wants to tackle Tom's suggested approaches (or something completely new,
or just contest Tom's points) quickly enough.  But I don't think we need
that in order to call this CF entry committed.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Avoid full GIN index scan when possible

From
Nikita Glukhov
Date:
Attached 8th version of the patches.


A brief description of the patches and their improvements/overheads:

1. Avoid full scan in the "empty-ALL AND regular" case.  One EVERYTHING entry
   with unconditional recheck is used instead of multiple ALL entries.
   Overhead for rechecking NULLs and "empty-ALL" keys is introduced.
   Overhead of merging ALL-lists for multicolumn indexes is eliminated.

2. Fix the overhead of rechecking NULLs.  This brings back the overhead of
   merging NOT_NULL-lists for multicolumn indexes.

3. Fix the overhead of unnecessarily rechecking "empty-ALL" keys using
   consistentFn().  Performance of the "empty-ALL [AND empty-ALL ...]" case
   is now the same as on master.

4. Avoid full scan in the "non-empty-ALL AND regular" case.  A new variant of
   list-merging logic is added to scanGetItem().

On 07.08.2019 23:32, Tom Lane wrote:
Nikita Glukhov <n.gluhov@postgrespro.ru> writes:
Attached 6th version of the patches.
I spent a bit of time looking at these.  Attached is a proposed revision
of the 0001 patch, with some minor changes:

* I didn't adopt your move of the "Non-default modes require the index
to have placeholders" test to after the stanza handling zero-key cases.
I think that move would be correct for 0001 as it stands, but it's far
less clear that it's still safe after 0002/0003 or whatever variant of
those we end up with.  We should leave that code where it is for now,
enforcing the v1-index requirement for all non-default search modes, and
reconsider after the dust settles.  (Or if we never do reconsider, it
won't be a big deal --- I doubt many v0 indexes are still out there.)
Ok.

* Rearranged the selfuncs.c logic to match ginNewScanKey better.

* Cleaned up my own sloppiness in the new gin.sql test cases.

I think this would be committable as it stands, except that replacing
an ALL scan with an EVERYTHING scan could be a performance regression
if the index contains many null items.  We need to do something about
that before committing.
Yes, such a performance regression does exist (see the test results at the
end).  And it exists not only if there are many NULLs -- the recheck overhead
is significant even in simple cases like "array @> '{}'".  This really
makes patch 0001 non-committable.

And the only thing I can here offer to fix this is the patches 0002 and 0003.

Unfortunately I'm not sold on either 0002 or 0003 as they stand;
they seem overly complicated, I'm not convinced they're correct,
and you haven't really provided examples showing that all this
extra complexity is worthwhile.
Yes, they are complicated, but I tried to simplify 0002 a bit, and even
divided it into two separate patches 0002 and 0003.  For the performance 
improvements, see the test results at the end.

In particular:

* I don't really like the whole business about detecting a constant-true
ALL condition by applying the consistentFn at this stage.  That just
feels wrong to me: the consistentFn should be expecting some data about
the index contents and we don't have any to give.  If it works, it's
accidental, and probably it's fragile.  
If we have no entries, then there is nothing to pass to consistentFn(), and it
should always return the same value for a given scankey.  A similar
technique is used in startScanKey(), where a fake entryRes[] is passed to it.
Moreover, the only gain we'd get from it is maybe not having to set 
forcedRecheck, and that doesn't look to me like it would make all that 
much difference.
The forced recheck has a non-zero cost, so this makes real sense.
* The code seems to be assuming that a zero-key ALL query is necessarily
precisely equivalent to a NOT NULL condition.  This seems flat out wrong.
At the very least it's possible for such a query to be constant-false,
rather than constant-true-for-non-null-items.  Admittedly, that would
suggest rather stupid coding of the opclass query-extract function, as
it could have reported a constant-false condition in an optimizable way
rather than an unoptimizable one.  But we aren't entitled to assume that
the opclass isn't being dumb; the API says it can do this, so it can.
I think we have to check the scankey regardless, either in the index or
via forcedRecheck.
Yes, empty ALL queries are equivalent to NOT NULL, with or without recheck.
Patches 0001 and 0002 use an unconditional forcedRecheck.  Patch 0003 uses a
conditional recheck depending on the result of the triConsistentFn() invocation.
I added the missing check for GIN_FALSE to eliminate constant-false empty
ALL queries.  So, the empty ALL scankey is always checked in the index,
but only once, at the initialization stage.

* I really dislike the refcount business in 0003.  It's not clear why we
need that or whether it's correct, and I think it would be unmaintainable
even if it were documented (which it isn't).


ISTM we could get where we need to go in a much simpler way.  A couple
of alternative ideas:

* During ginNewScanKey, separate out ALL-mode queries and don't add them
to the scankey list immediately.  After examining all the keys, if we
found any normal (DEFAULT or INCLUDE_EMPTY) queries, then go ahead and
add in the ALL-mode queries so that we can check them in the index, but
don't cause a full scan.  Otherwise, discard all the ALL-mode queries
and emit a NOT_NULL scan key, setting forcedRecheck so that we apply the
filtering that way.

* Or we could just discard ALL-mode queries on sight, setting
forcedRecheck always, and then emit NOT_NULL if we had any
of those and no normal queries.
I completely rewrote this logic in patch 0004; the reference counting is no
longer needed.

Non-empty ALL keys are immediately added to the list, but the initialization
of hidden ALL entries in them is postponed, and these full-scan entries are
added only if there are no normal keys.  But if we have normal keys, then for
each ALL key special list-merging logic is enabled in scanGetItem(), so that
items matching the normal keys are emitted to the result even if they have no
entries required for the ALL scankeys.


For example, the following intarray query
 arr @@ '1' AND arr @@ '!2'

produces two 1-entry scankeys:
 DEFAULT('1') ALL('2')       (previously there were 2 entries: '2' and ALL)

When the item lists for the entries '1' and '2' are merged, the following
items are emitted:
 - items having '1' and not having '2', without forced recheck (new logic)
 - items having both '1' and '2', if triConsistentFn(ALL('2')) does not
   return FALSE (ordinary logic: each item must have at least one entry of
   each scankey)


This helps to do as much work as possible in the index, and to avoid an
unnecessary recheck.


I'm not sure the code changes in scanGetItem() are correct (especially due to
the presence of lossy items), and the whole patch 0004 was not carefully
tested, but if the performance results are interesting, I could work further
on this optimization.

The thing that seems hard to predict here is whether it is worth tracking
the presence of additional keys in the index to avoid a recheck in the
heap.  It's fairly easy to imagine that for common keys, avoiding the
recheck would be a net loss because it requires more work in the index.
If we had statistics about key counts, which of course we don't, you
could imagine making this decision dynamically depending on whether an
ALL query is asking about common or uncommon keys.

BTW --- any way you slice this, it seems like we'd end up with a situation
where we never execute an ALL query against the index in the way we do
now, meaning that the relevant code in the scanning logic is dead and
could be removed.  Given that, we don't really need a new NOT_NULL search
mode; we could just redefine what ALL mode actually does at execution.
This would be good IMO, because it's not obvious what the difference
between ALL and NOT_NULL modes is anyway.
The ALL mode is still used now for non-empty ALL queries without normal queries.

Simple performance test:

create table t (a int[], b int[], c int[]);

-- 1M NULLs
insert into t select NULL, NULL, NULL 
from generate_series(0, 999999) i;

-- 1M 1-element arrays
insert into t select array[i], array[i], array[i] 
from generate_series(0, 999999) i;

-- 10k 2-element arrays with common element
insert into t select array[-1,i], array[-1,i], array[-1,i] 
from generate_series(0, 9999) i;

create extension intarray;
create index on t using gin (a gin__int_ops, b gin__int_ops, c gin__int_ops);

                                       |           Query time, ms
            WHERE condition            | master |          patches
                                       |        |  #1  |  #2  |  #3  |  #4
---------------------------------------+--------+------+------+------+------
 a @> '{}'                             |    272 |  473 |  369 |  271 |  261
 a @> '{}' and b @> '{}'               |    374 |  548 |  523 |  368 |  353
 a @> '{}' and b @> '{}' and c @> '{}' |    479 |  602 |  665 |  461 |  446
 a @> '{}' and a @@ '1'                |   52.2 |  0.4 |  0.4 |  0.4 |  0.4
 a @> '{}' and a @@ '-1'               |   56.2 |  4.0 |  4.0 |  2.3 |  2.3
 a @@ '!-1' and a @@ '1'               |   52.8 | 53.0 | 52.7 | 52.9 |  0.3
 a @@ '!1' and a @@ '-1'               |   54.9 | 55.2 | 55.1 | 55.3 |  2.4

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment

Re: Avoid full GIN index scan when possible

From
Michael Paquier
Date:
On Sat, Nov 23, 2019 at 02:35:50AM +0300, Nikita Glukhov wrote:
> Attached 8th version of the patches.

Please be careful here.  The CF entry was still marked as waiting on
author, but you sent a new patch series which has not been reviewed.
I have moved this patch to the next CF instead.
--
Michael

Attachment

Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Michael Paquier <michael@paquier.xyz> writes:
> On Sat, Nov 23, 2019 at 02:35:50AM +0300, Nikita Glukhov wrote:
>> Attached 8th version of the patches.

> Please be careful here.  The CF entry was still marked as waiting on
> author, but you sent a new patch series which has not been reviewed.
> I have moved this patch to the next CF instead.

That seems a bit premature --- the new patch is only a couple of days
old.  The CF entry should've been moved back to "Needs Review",
sure.

(Having said that, the end of the month isn't that far away,
so it may well end up in the next CF anyway.)

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Sat, Nov 23, 2019 at 2:39 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:
> Attached 8th version of the patches.

I've read this thread.  I decided to rewrite the patch in a way that I
find simpler and clearer.  Attached is a draft patch written from scratch,
except for the regression tests, which were copied as is.  It is based on
the discussion in this thread as well as my own ideas.  It works as
follows.

1) A new GinScanKey->excludeOnly flag is introduced.  This flag means
that the scan key might be satisfied even if none of its entries match the
row.  So, such scan keys are useful only for additional checking of
results returned by other keys.  That is, an excludeOnly scan key is
designed for exclusion of already obtained results.
2) Initially no hidden scan entries are appended to
GIN_SEARCH_MODE_ALL scan keys.  They are appended after getting
statistics about search modes applied to particular attributes.
3) We append only one GIN_CAT_EMPTY_QUERY scan entry when all scan
keys are GIN_SEARCH_MODE_ALL.  If there is at least one normal scan key,
no GIN_CAT_EMPTY_QUERY is appended.
4) No hidden entries are appended to GIN_SEARCH_MODE_ALL scan key if
there are normal scan keys for the same column.  Otherwise
GIN_CAT_NULL_KEY hidden entry is appended.
5) GIN_SEARCH_MODE_ALL scan keys, which don't have GIN_CAT_EMPTY_QUERY
hidden entry, are marked with excludeOnly flag.  So, they are used to
filter results of other scan keys.
6) If a GIN_CAT_NULL_KEY hidden entry is found, the scan key doesn't match
regardless of the result of the consistent function call.

Therefore, the attached patch removes unnecessary GIN_CAT_EMPTY_QUERY scan
entries without losing the positive effect of filtering in
GIN_SEARCH_MODE_ALL scan keys.

Patch requires further polishing including comments, minor refactoring
etc.  I'm going to continue work on this.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Wed, Dec 25, 2019 at 8:25 AM Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
Patch requires further polishing including comments, minor refactoring
etc.  I'm going to continue work on this.

I also ran the same performance comparison as Nikita [1] on my laptop.
The results are shown below.  PostgreSQL was built with -O2 and
asserts enabled.

                                       | Query time, ms |
            WHERE condition            | master | patch |
---------------------------------------+--------+-------+
 a @> '{}'                             |    117 |   116 |
 a @> '{}' and b @> '{}'               |    150 |   146 |
 a @> '{}' and b @> '{}' and c @> '{}' |    168 |   167 |
 a @> '{}' and a @@ '1'                |    126 |   0.6 |
 a @> '{}' and a @@ '-1'               |    128 |   3.2 |
 a @@ '!-1' and a @@ '1'               |    127 |   0.7 |
 a @@ '!1' and a @@ '-1'               |    122 |   4.4 |

The performance effect looks similar to patch #4 by Nikita.  I've tried to
add patch #4 to the comparison, but I've caught an assertion failure.

TRAP: FailedAssertion("key->includeNonMatching", File: "ginget.c", Line: 1340)

I'm going to continue polishing my version of patch.

Links
The Russian Postgres Company 

Re: Avoid full GIN index scan when possible

From
Nikita Glukhov
Date:
On 26.12.2019 4:59, Alexander Korotkov wrote:

 I've tried to add patch #4 to the comparison, but I've caught an assertion failure.

TRAP: FailedAssertion("key->includeNonMatching", File: "ginget.c", Line: 1340)
The condition in the assertion simply should be inverted: 
Assert(!key->includeNonMatching);

I have looked at v9 patch, and here is my review:

1. I agree with NULL-flag handling simplifications in ginNewScanKey(),
ginScanKeyAddHiddenEntry() extraction.

2. I also agree that usage of nrequired/nadditional in keyGetItem() is a more
natural solution to implement exclusion keys than my previous attempt of doing
that in scanGetKey().

But there are some questions:

Can we avoid referencing the excludeOnly flag in keyGetItem() by replacing
these references with !nrequired?

Maybe it would be better to move the whole block of keyGetItem() code
starting from the first loop over required keys and ending before the loop over
additional keys inside 'if (key->nrequired) { ... }'?

Can we avoid introducing excludeOnly flag by reusing searchMode and/or by
moving the initialization of nrequired/nadditional into ginNewScanKey()?


3. The following NULL-filtering check, repeated in two places, looks too
complicated and needs to be refactored somehow:

-	res = key->triConsistentFn(key);
+	if (key->excludeOnly &&
+		key->nuserentries < key->nentries &&
+		key->scanEntry[key->nuserentries]->queryCategory == GIN_CAT_NULL_KEY &&
+		key->entryRes[key->nuserentries] == GIN_TRUE)
+		res = GIN_FALSE;
+	else
+		res = key->triConsistentFn(key);

For example, a special consistentFn() could be introduced for such NOT_NULL
scankeys.  Or, instead of adding a hidden entry, a separate hidden one-entry
scankey with a trivial consistentFn() could be added.


4. The forcedRecheck flag that was previously used for discarded empty ALL
scankeys is removed now.  0-entry exclusion keys can appear instead, and their
consistentFn() simply returns a constant value.  Could this lead to tangible
overhead in some cases (in comparison to the forcedRecheck flag)?


5. A hidden GIN_CAT_EMPTY_QUERY entry is added only for the first empty ALL
scankey; NULLs in other columns are filtered out with GIN_CAT_NULL_KEY.  This
looks asymmetric, and it leads to accelerations in some cases and slowdowns in
others (depending on NULL fractions and their correlations across columns).

The following test shows a significant performance regression of v9:

insert into t select array[i], NULL, NULL from generate_series(1, 1000000) i;
                                       |        Query time, ms
            WHERE condition            | master |   v8   |    v9
---------------------------------------+--------+--------+---------
 a @> '{}'                             |    224 |    213 |    212
 a @> '{}' and b @> '{}'               |     52 |     57 |    255
 a @> '{}' and b @> '{}' and c @> '{}' |     51 |     58 |    290


In an older version of the patch I tried to do similar things (initialize
only one NOT_NULL entry for the first column), but abandoned this in v8.
So, to avoid slowdowns relative to master, I can offer simply adding a
GIN_CAT_EMPTY_QUERY entry for each column with empty ALL-keys if there are
no normal keys.


-- 
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: Avoid full GIN index scan when possible

From
Tomas Vondra
Date:
On Fri, Dec 27, 2019 at 04:36:14AM +0300, Nikita Glukhov wrote:
>On 26.12.2019 4:59, Alexander Korotkov wrote:
>>
>> I've tried to add patch #4 to comparison, but I've catch assertion 
>>failure.
>>
>>TRAP: FailedAssertion("key->includeNonMatching", File: "ginget.c", 
>>Line: 1340)
>There simply should be inverted condition in the assertion:
>Assert(!key->includeNonMatching);
>
>I have looked at v9 patch, and here is my review:
>
>1. I agree with NULL-flag handling simplifications in ginNewScanKey(),
>ginScanKeyAddHiddenEntry() extraction.
>
>2. I also agree that usage of nrequired/nadditional in keyGetItem() is a more
>natural solution to implement exclusion keys than my previous attempt of doing
>that in scanGetKey().
>
>But there are some questions:
>
>Can we avoid referencing excludeOnly flag keyGetItem() by replacing these
>references with !nrequired?
>
>Maybe it would be better to move the whole block of keyGetItem() code
>starting from the first loop over required keys and ending before the loop over
>additional keys inside 'if (key->nrequired) { ... }'?
>
>Can we avoid introducing excludeOnly flag by reusing searchMode and/or by
>moving the initialization of nrequired/nadditional into ginNewScanKey()?
>
>
>3. The following two times repeated NULL-filtering check looks too complicated
>and needs to be refactored somehow:
>
>-    res = key->triConsistentFn(key);
>+    if (key->excludeOnly &&
>+        key->nuserentries < key->nentries &&
>+        key->scanEntry[key->nuserentries]->queryCategory == GIN_CAT_NULL_KEY &&
>+        key->entryRes[key->nuserentries] == GIN_TRUE)
>+        res = GIN_FALSE;
>+    else
>+        res = key->triConsistentFn(key);
>
>For example, a special consistentFn() can be introduced for such NOT_NULL
>scankeys.  Or even a hidden separate one-entry scankey with a trivial
>consistentFn() can be added instead of adding hidden entry.
>
>
>4. forcedRecheck flag that was previously used for discarded empty ALL scankeys
>is removed now.  0-entry exclusion keys can appear instead, and their
>consistentFn() simply returns constant value.  Could this lead to tangible
>overhead in some cases (in comparison to forcedRecheck flag)?
>
>
>5. A hidden GIN_CAT_EMPTY_QUERY is added only for the first empty ALL-scankey,
>NULLs in other columns are filtered out with GIN_CAT_NULL_KEY.  This looks like
>asymmetric, and it leads to accelerations is some cases and slowdowns in others
>(depending on NULL fractions and their correlations in columns).
>
>The following test shows a significant performance regression of v9:
>
>insert into t select array[i], NULL, NULL from generate_series(1, 1000000) i;
>
>                                       |        Query time, ms
>            WHERE condition            | master |   v8   |    v9
>---------------------------------------+--------+--------+---------
> a @> '{}'                             |    224 |    213 |    212
> a @> '{}' and b @> '{}'               |     52 |     57 |    255
> a @> '{}' and b @> '{}' and c @> '{}' |     51 |     58 |    290
>
>
>In the older version of the patch I tried to do the similar things (initialize
>only one NOT_NULL entry for the first column), but refused to do this in v8.
>
>So, to avoid slowdowns relative to master, I can offer simply to add
>GIN_CAT_EMPTY_QUERY entry for each column with empty ALL-keys if there are
>no normal keys.
>

Yeah, I can confirm those results, although on my system the timings are
a bit different (I haven't tested v8):

                                        |        Query time, ms
             WHERE condition            | master |    v9
---------------------------------------+--------+---------
  a @> '{}'                             |    610 |    589
  a @> '{}' and b @> '{}'               |    185 |    665
  a @> '{}' and b @> '{}' and c @> '{}' |    185 |    741

So that's something we probably need to address, perhaps by using the
GIN_CAT_EMPTY_QUERY entries as proposed.

I've also tested this on a database storing mailing lists archives with
a trigram index, and in that case the performance with short values gets
much better. The "messages" table has two text fields with a GIN trigram
index - subject and body, and querying them with short/long values works
like this:

                    WHERE                    |  master  |    v9
  --------------------------------------------------------------
  subject LIKE '%aa%' AND body LIKE '%xx%'   |    4943  |  4052
  subject LIKE '%aaa%' AND body LIKE '%xx%'  |      10  |    10
  subject LIKE '%aa%' AND body LIKE '%xxx%'  |     380  |    13
  subject LIKE '%aaa%' AND BODY LIKE '%xxx%' |       2  |     2

which seems fairly nice. I've done tests with individual columns, and
that seems to be working fine too.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
Hi, Tomas!

Thank you for your feedback!

On Mon, Jan 6, 2020 at 6:22 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
Yeah, I can confirm those results, although on my system the timings are
a bit different (I haven't tested v8):

                                        |        Query time, ms
             WHERE condition            | master |    v9
---------------------------------------+--------+---------
  a @> '{}'                             |    610 |    589
  a @> '{}' and b @> '{}'               |    185 |    665
  a @> '{}' and b @> '{}' and c @> '{}' |    185 |    741

So that's something we probably need to address, perhaps by using the
GIN_CAT_EMPTY_QUERY entries as proposed.

Yeah, handling nulls better without regressions in some cases is hard.
For now I see at least 3 different ways of handling nulls, assuming there
is another non-excluding scan key:

1) Collect non-null matches by full scan of all non-null entries.
2) Exclude null matches using a scan of the null entry.
3) Force recheck.

Each method have its own advantages and disadvantages.  We probably
would need some cost-based decision making algorithm based on statistics.
I'm not entirely sure it's OK to do this execution time.  However, it probably
could be classified as "adaptive query processing", which is considered as
cool trend in DBMS.
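For what it's worth, the three options can be sketched on a toy model
(plain Python, not GIN internals; all names below are made up for
illustration).  All three produce the same result set; they differ only
in how much index or heap data must be touched:

```python
# Toy model of the three null-handling strategies for an exclude-only
# GIN_SEARCH_MODE_ALL key.  A NULL column value (modeled as None) must not
# satisfy e.g. "col @> '{}'", so null rows have to be filtered out somehow.

# tid -> column value; None models a NULL
heap = {1: ["x"], 2: None, 3: ["y"], 4: ["x", "y"]}

# candidate tids produced by the other, non-excluding scan key
candidates = {1, 2, 4}

def strategy_full_scan(heap, candidates):
    """1) Collect non-null matches by a full scan of all non-null entries."""
    non_null = {tid for tid, v in heap.items() if v is not None}
    return candidates & non_null

def strategy_null_entry(heap, candidates):
    """2) Exclude null matches using a scan of the null-item entry."""
    nulls = {tid for tid, v in heap.items() if v is None}
    return candidates - nulls

def strategy_recheck(heap, candidates):
    """3) Force recheck: re-evaluate the qual against each heap tuple."""
    return {tid for tid in candidates if heap[tid] is not None}

assert (strategy_full_scan(heap, candidates)
        == strategy_null_entry(heap, candidates)
        == strategy_recheck(heap, candidates)
        == {1, 4})
```

Strategy 1 reads the most index data, strategy 2 only the null entry, and
strategy 3 trades index work for per-tuple heap rechecks, which is why a
cost-based choice between them would need statistics.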

Attached version 10 of the patch doesn't change null handling in
comparison with master.  It eliminates the full index scan only if there
is another scan on the same column, so it never adds a null item to the
scan key.  I've rerun the tests from Nikita [1].

   |                                        | Query time, ms |
 # |             WHERE condition            | master |   v10 |
---+----------------------------------------+--------+-------+
 1 |  a @> '{}'                             |    223 |   218 |
 2 |  a @> '{}' and b @> '{}'               |    302 |   308 |
 3 |  a @> '{}' and b @> '{}' and c @> '{}' |    405 |   404 |
 4 |  a @> '{}' and a @@ '1'                |     59 |   0.3 |
 5 |  a @> '{}' and a @@ '-1'               |     64 |   2.2 |
 6 |  a @@ '!-1' and a @@ '1'               |     63 |   0.3 |
 7 |  a @@ '!1' and a @@ '-1'               |     62 |   3.0 |

It appears that the absolute numbers for master are higher than they were
the previous time [2].  I've rechecked multiple times that the current
numbers are correct, so it might be that I didn't turn off sequential
scans the previous time.

We can see that cases #1, #2 and #3, which have quals over multiple
attributes, have the same execution time as in master.  That's expected
since the scanning strategy is the same.  Cases #4, #5, #6 and #7 show
about the same improvement as in v9.
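Concretely, the per-column rule in v10 can be modeled as follows (a plain
Python sketch of the second pass over scan keys in ginNewScanKey; the
constants and function name are simplified for illustration, not the
actual C code):

```python
# Toy model of v10's per-attribute decision: a GIN_SEARCH_MODE_ALL scan key
# is marked excludeOnly only when some other key on the same column can
# drive the scan; otherwise it must perform the full entry scan itself.
GIN_SEARCH_MODE_DEFAULT = 0
GIN_SEARCH_MODE_ALL = 2

def mark_exclude_only(scan_keys):
    """scan_keys: list of (attnum, search_mode) pairs, one per qual.
    Returns a parallel list of excludeOnly flags."""
    # which columns already have a normal (non-ALL) scan key?
    attr_has_normal = {attnum for attnum, mode in scan_keys
                       if mode != GIN_SEARCH_MODE_ALL}
    exclude_only = []
    for attnum, mode in scan_keys:
        if mode != GIN_SEARCH_MODE_ALL:
            exclude_only.append(False)   # normal key, drives the scan
        elif attnum in attr_has_normal:
            exclude_only.append(True)    # filter via the other key's items
        else:
            exclude_only.append(False)   # no helper key: full scan needed,
            attr_has_normal.add(attnum)  # but at most once per column

    return exclude_only

# col LIKE '%long%' AND col LIKE '%a%': the short pattern becomes excludeOnly
assert mark_exclude_only([(1, GIN_SEARCH_MODE_DEFAULT),
                          (1, GIN_SEARCH_MODE_ALL)]) == [False, True]
# a lone col LIKE '%a%' still requires the full index scan
assert mark_exclude_only([(1, GIN_SEARCH_MODE_ALL)]) == [False]
```

This is why cases #4-#7 above improve (the ALL-mode qual piggybacks on the
selective qual on the same column), while #1-#3 are unchanged.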

I've also rerun the many-nulls test from Nikita [3].

                                       | Query time, ms |
            WHERE condition            | master |   v10 |
---------------------------------------+--------+-------+
 a @> '{}'                             |    190 |   192 |
 a @> '{}' and b @> '{}'               |     55 |    57 |
 a @> '{}' and b @> '{}' and c @> '{}' |     60 |    58 |

The results are the same as in master again.

I've also tested this on a database storing mailing lists archives with
a trigram index, and in that case the performance with short values gets
much better. The "messages" table has two text fields with a GIN trigram
index - subject and body, and querying them with short/long values works
like this:

                    WHERE                    |  master  |    v9
  --------------------------------------------------------------
  subject LIKE '%aa%' AND body LIKE '%xx%'   |    4943  |  4052
  subject LIKE '%aaa%' AND body LIKE '%xx%'  |      10  |    10
  subject LIKE '%aa%' AND body LIKE '%xxx%'  |     380  |    13
  subject LIKE '%aaa%' AND body LIKE '%xxx%' |       2  |     2

which seems fairly nice. I've done tests with individual columns, and
that seems to be working fine too.

Cool, thanks!

So, I think v10 is a version of patch, which can be committed after
some cleanup.  And we can try doing better nulls handling in a separate
patch.

Links

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 
Attachment

Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> So, I think v10 is a version of patch, which can be committed after
> some cleanup.  And we can try doing better nulls handling in a separate
> patch.

The cfbot reports that this doesn't pass regression testing.
I haven't looked into why not.

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> So, I think v10 is a version of patch, which can be committed after
> some cleanup.  And we can try doing better nulls handling in a separate
> patch.

The cfbot reports that this doesn't pass regression testing.
I haven't looked into why not.

Thank you for noticing.  I'll take care of it.

 ------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
>
> On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
>> > So, I think v10 is a version of patch, which can be committed after
>> > some cleanup.  And we can try doing better nulls handling in a separate
>> > patch.
>>
>> The cfbot reports that this doesn't pass regression testing.
>> I haven't looked into why not.
>
>
> Thank you for noticing.  I'll take care of it.


In the v10 I've fixed a bug with nulls handling, but it appears that
test contained wrong expected result.  I've modified this test so that
it directly compares sequential scan results with bitmap indexscan
results.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Julien Rouhaud
Date:
Hi,

On Sat, Jan 11, 2020 at 1:10 AM Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
>
> On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov
> <a.korotkov@postgrespro.ru> wrote:
> >
> > On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >>
> >> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> >> > So, I think v10 is a version of patch, which can be committed after
> >> > some cleanup.  And we can try doing better nulls handling in a separate
> >> > patch.
> >>
> >> The cfbot reports that this doesn't pass regression testing.
> >> I haven't looked into why not.
> >
> >
> > Thank you for noticing.  I'll take care of it.
>
>
> In the v10 I've fixed a bug with nulls handling, but it appears that
> test contained wrong expected result.  I've modified this test so that
> it directly compares sequential scan results with bitmap indexscan
> results.

Thanks a lot for working on that!  I'm not very familiar with gin
internals so additional eyes are definitely needed here but I agree
that this approach is simpler and cleaner.  I didn't find any problem
with the modified logic, the patch applies cleanly, compiles without
warning and all regression tests pass, so it all seems good to me.

Here are a few comments:

- In keyGetItem(), it seems that some comments would need to be
updated wrt. the new excludeOnly flag.  I'm thinking of:

     * Ok, we now know that there are no matches < minItem.
     *
     * If minItem is lossy, it means that there were no exact items on the
     * page among requiredEntries, because lossy pointers sort after exact
     * items. However, there might be exact items for the same page among
     * additionalEntries, so we mustn't advance past them.

and

        /*
         * Normally, none of the items in additionalEntries can have a curItem
         * larger than minItem. But if minItem is a lossy page, then there
         * might be exact items on the same page among additionalEntries.
         */
        if (ginCompareItemPointers(&entry->curItem, &minItem) < 0)
        {
            Assert(ItemPointerIsLossyPage(&minItem) || key->nrequired == 0);
            minItem = entry->curItem;
        }

While at it, IIUC only excludeOnly key can have nrequired == 0 (if
that's the case, this could be explicitly said in startScanKey
relevant comment), so it'd be more consistent to also use excludeOnly
rather than nrequired in this assert?

- the pg_trgm regression tests check for the number of rows returned
with the new "excludeOnly" permutations, but only with an indexscan,
should we make sure that the same results are returned with a seq
scan?



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
Hi!

Thank you for feedback!

On Sat, Jan 11, 2020 at 3:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:
> On Sat, Jan 11, 2020 at 1:10 AM Alexander Korotkov
> <a.korotkov@postgrespro.ru> wrote:
> >
> > On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov
> > <a.korotkov@postgrespro.ru> wrote:
> > >
> > > On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > >>
> > >> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> > >> > So, I think v10 is a version of patch, which can be committed after
> > >> > some cleanup.  And we can try doing better nulls handling in a separate
> > >> > patch.
> > >>
> > >> The cfbot reports that this doesn't pass regression testing.
> > >> I haven't looked into why not.
> > >
> > >
> > > Thank you for noticing.  I'll take care of it.
> >
> >
> > In the v10 I've fixed a bug with nulls handling, but it appears that
> > test contained wrong expected result.  I've modified this test so that
> > it directly compares sequential scan results with bitmap indexscan
> > results.
>
> Thanks a lot for working on that!  I'm not very familiar with gin
> internals so additional eyes are definitely needed here but I agree
> that this approach is simpler and cleaner.  I didn't find any problem
> with the modified logic, the patch applies cleanly, compiles without
> warning and all regression tests pass, so it all seems good to me.
>
> Here are a few comments:
>
> - In keyGetItem(), it seems that some comments would need to be
> updated wrt. the new excludeOnly flag.  I'm thinking of:
>
>      * Ok, we now know that there are no matches < minItem.
>      *
>      * If minItem is lossy, it means that there were no exact items on the
>      * page among requiredEntries, because lossy pointers sort after exact
>      * items. However, there might be exact items for the same page among
>      * additionalEntries, so we mustn't advance past them.
>
> and
>
>         /*
>          * Normally, none of the items in additionalEntries can have a curItem
>          * larger than minItem. But if minItem is a lossy page, then there
>          * might be exact items on the same page among additionalEntries.
>          */
>         if (ginCompareItemPointers(&entry->curItem, &minItem) < 0)
>         {
>             Assert(ItemPointerIsLossyPage(&minItem) || key->nrequired == 0);
>             minItem = entry->curItem;
>         }

Sure, thank you for pointing these out.  I'm working on improving the
comments and will provide an updated patch soon.

> While at it, IIUC only excludeOnly key can have nrequired == 0 (if
> that's the case, this could be explicitly said in startScanKey
> relevant comment), so it'd be more consistent to also use excludeOnly
> rather than nrequired in this assert?

Make sense.  I'll adjust the assert as well as comment.

> - the pg_trgm regression tests check for the number of rows returned
> with the new "excludeOnly" permutations, but only with an indexscan,
> should we make sure that the same results are returned with a seq
> scan?

Yes, I recently fixed a similar issue in the gin regression test.  I'll
adjust the pg_trgm test as well.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
Updated patch is attached.  It contains more comments as well as commit message.

On Sun, Jan 12, 2020 at 12:10 AM Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
> On Sat, Jan 11, 2020 at 3:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:
> > While at it, IIUC only excludeOnly key can have nrequired == 0 (if
> > that's the case, this could be explicitly said in startScanKey
> > relevant comment), so it'd be more consistent to also use excludeOnly
> > rather than nrequired in this assert?
>
> Make sense.  I'll adjust the assert as well as comment.

The changes to this assertion are not actually needed.  I just
accidentally forgot to revert them.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> Updated patch is attached.  It contains more comments as well as commit message.

I reviewed this a little bit.  I agree this seems way more straightforward
than the patches we've been considering so far.  I wasn't too happy with
the state of the comments, so I rewrote them a bit in the attached v13.

One thing I'm still not happy about is the comment in
collectMatchesForHeapRow.  v12 failed to touch that at all, so I tried to
fill it in, but I'm not sure if my explanation is good.  Also, if we know
that excludeOnly keys are going to be ignored, can we save any work in
the main loop of that function?

The test cases needed some work too.  Notably, some of the places where
you tried to use EXPLAIN ANALYZE are unportable because they expose "Heap
Blocks" counts that are not stable.  (I checked the patch on a 32-bit
machine and indeed some of these failed.)  While it'd be possible to work
around that by filtering the EXPLAIN output, that would not be any simpler
or faster than our traditional style of just doing a plain EXPLAIN and a
separate execution.

It troubles me a bit as well that the test cases don't really expose
any difference between patched and unpatched code --- I checked, and
they "passed" without applying any of the code changes.  Maybe there's
not much to be done about that, since after all this is an optimization
that's not supposed to change any query results.

I didn't repeat any of the performance testing --- it seems fairly
clear that this can't make any cases worse.

Other than the collectMatchesForHeapRow issue, I think this is
committable.

            regards, tom lane

diff --git a/contrib/pg_trgm/expected/pg_trgm.out b/contrib/pg_trgm/expected/pg_trgm.out
index b3e709f..91596f8 100644
--- a/contrib/pg_trgm/expected/pg_trgm.out
+++ b/contrib/pg_trgm/expected/pg_trgm.out
@@ -3498,6 +3498,107 @@ select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';
   1000
 (1 row)

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+    19
+(1 row)
+
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+                               QUERY PLAN
+-------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+         ->  Bitmap Index Scan on trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+(5 rows)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+    19
+(1 row)
+
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on t_test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+         ->  Bitmap Index Scan on t_trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qwerty%'::text))
+(5 rows)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+     1
+(1 row)
+
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+                               QUERY PLAN
+-------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on t_test_trgm
+         Recheck Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+         ->  Bitmap Index Scan on t_trgm_idx
+               Index Cond: ((t ~~ '%99%'::text) AND (t ~~ '%qw%'::text))
+(5 rows)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+     1
+(1 row)
+
+-- run the same queries with sequential scan to check the results
+set enable_bitmapscan=off;
+set enable_seqscan=on;
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+    19
+(1 row)
+
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+    19
+(1 row)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+ count
+-------
+     1
+(1 row)
+
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+ count
+-------
+     1
+(1 row)
+
+reset enable_bitmapscan;
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/contrib/pg_trgm/sql/pg_trgm.sql b/contrib/pg_trgm/sql/pg_trgm.sql
index 08459e6..2019d1f 100644
--- a/contrib/pg_trgm/sql/pg_trgm.sql
+++ b/contrib/pg_trgm/sql/pg_trgm.sql
@@ -55,6 +55,33 @@ select t,similarity(t,'gwertyu0988') as sml from test_trgm where t % 'gwertyu098
 select t,similarity(t,'gwertyu1988') as sml from test_trgm where t % 'gwertyu1988' order by sml desc, t;
 select count(*) from test_trgm where t ~ '[qwerty]{2}-?[qwerty]{2}';

+-- check handling of indexquals that generate no searchable conditions
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+explain (costs off)
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+-- ensure that pending-list items are handled correctly, too
+create temp table t_test_trgm(t text COLLATE "C");
+create index t_trgm_idx on t_test_trgm using gin (t gin_trgm_ops);
+insert into t_test_trgm values ('qwerty99'), ('qwerty01');
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+explain (costs off)
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+
+-- run the same queries with sequential scan to check the results
+set enable_bitmapscan=off;
+set enable_seqscan=on;
+select count(*) from test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from test_trgm where t like '%99%' and t like '%qw%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qwerty%';
+select count(*) from t_test_trgm where t like '%99%' and t like '%qw%';
+reset enable_bitmapscan;
+
 create table test2(t text COLLATE "C");
 insert into test2 values ('abcdef');
 insert into test2 values ('quark');
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 7ae4ef0..86e224f 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -528,8 +528,20 @@ startScanKey(GinState *ginstate, GinScanOpaque so, GinScanKey key)
      * order, until the consistent function says that none of the remaining
      * entries can form a match, without any items from the required set. The
      * rest go to the additional set.
+     *
+     * Exclude-only scan keys are known to have no required entries.
      */
-    if (key->nentries > 1)
+    if (key->excludeOnly)
+    {
+        MemoryContextSwitchTo(so->keyCtx);
+
+        key->nrequired = 0;
+        key->nadditional = key->nentries;
+        key->additionalEntries = palloc(key->nadditional * sizeof(GinScanEntry));
+        for (i = 0; i < key->nadditional; i++)
+            key->additionalEntries[i] = key->scanEntry[i];
+    }
+    else if (key->nentries > 1)
     {
         MemoryContextSwitchTo(so->tempCtx);

@@ -1008,37 +1020,52 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
             minItem = entry->curItem;
     }

-    if (allFinished)
+    if (allFinished && !key->excludeOnly)
     {
         /* all entries are finished */
         key->isFinished = true;
         return;
     }

-    /*
-     * Ok, we now know that there are no matches < minItem.
-     *
-     * If minItem is lossy, it means that there were no exact items on the
-     * page among requiredEntries, because lossy pointers sort after exact
-     * items. However, there might be exact items for the same page among
-     * additionalEntries, so we mustn't advance past them.
-     */
-    if (ItemPointerIsLossyPage(&minItem))
+    if (!key->excludeOnly)
     {
-        if (GinItemPointerGetBlockNumber(&advancePast) <
-            GinItemPointerGetBlockNumber(&minItem))
+        /*
+         * For a normal scan key, we now know there are no matches < minItem.
+         *
+         * If minItem is lossy, it means that there were no exact items on the
+         * page among requiredEntries, because lossy pointers sort after exact
+         * items. However, there might be exact items for the same page among
+         * additionalEntries, so we mustn't advance past them.
+         */
+        if (ItemPointerIsLossyPage(&minItem))
+        {
+            if (GinItemPointerGetBlockNumber(&advancePast) <
+                GinItemPointerGetBlockNumber(&minItem))
+            {
+                ItemPointerSet(&advancePast,
+                               GinItemPointerGetBlockNumber(&minItem),
+                               InvalidOffsetNumber);
+            }
+        }
+        else
         {
+            Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
             ItemPointerSet(&advancePast,
                            GinItemPointerGetBlockNumber(&minItem),
-                           InvalidOffsetNumber);
+                           OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
         }
     }
     else
     {
-        Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
-        ItemPointerSet(&advancePast,
-                       GinItemPointerGetBlockNumber(&minItem),
-                       OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
+        /*
+         * excludeOnly scan keys don't have any entries that are necessarily
+         * present in matching items.  So, we consider the item just after
+         * advancePast.
+         */
+        Assert(key->nrequired == 0);
+        ItemPointerSet(&minItem,
+                       GinItemPointerGetBlockNumber(&advancePast),
+                       OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
     }

     /*
@@ -1736,11 +1763,13 @@ collectMatchesForHeapRow(IndexScanDesc scan, pendingPosition *pos)
     }

     /*
-     * Now return "true" if all scan keys have at least one matching datum
+     * Now return "true" if all scan keys have at least one matching datum.
+     * But we should ignore excludeOnly keys (they mustn't exclude the row,
+     * since their implied GIN_CAT_EMPTY_QUERY scanEntry would match).
      */
     for (i = 0; i < so->nkeys; i++)
     {
-        if (pos->hasMatchKey[i] == false)
+        if (pos->hasMatchKey[i] == false && !so->keys[i].excludeOnly)
             return false;
     }

diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index c15d06c..0a685bd 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -126,6 +126,27 @@ ginFillScanEntry(GinScanOpaque so, OffsetNumber attnum,
 }

 /*
+ * Append hidden scan entry of given category to the scan key.
+ *
+ * NB: this had better be called at most once per scan key, since
+ * ginFillScanKey leaves room for only one hidden entry.  Currently,
+ * it seems sufficiently clear that this is true that we don't bother
+ * with any cross-check logic.
+ */
+static void
+ginScanKeyAddHiddenEntry(GinScanOpaque so, GinScanKey key,
+                         GinNullCategory queryCategory)
+{
+    int            i = key->nentries++;
+
+    /* strategy is of no interest because this is not a partial-match item */
+    key->scanEntry[i] = ginFillScanEntry(so, key->attnum,
+                                         InvalidStrategy, key->searchMode,
+                                         (Datum) 0, queryCategory,
+                                         false, NULL);
+}
+
+/*
  * Initialize the next GinScanKey using the output from the extractQueryFn
  */
 static void
@@ -137,17 +158,16 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
 {
     GinScanKey    key = &(so->keys[so->nkeys++]);
     GinState   *ginstate = &so->ginstate;
-    uint32        nUserQueryValues = nQueryValues;
     uint32        i;

-    /* Non-default search modes add one "hidden" entry to each key */
-    if (searchMode != GIN_SEARCH_MODE_DEFAULT)
-        nQueryValues++;
     key->nentries = nQueryValues;
-    key->nuserentries = nUserQueryValues;
+    key->nuserentries = nQueryValues;

-    key->scanEntry = (GinScanEntry *) palloc(sizeof(GinScanEntry) * nQueryValues);
-    key->entryRes = (GinTernaryValue *) palloc0(sizeof(GinTernaryValue) * nQueryValues);
+    /* Allocate one extra array slot for possible "hidden" entry */
+    key->scanEntry = (GinScanEntry *) palloc(sizeof(GinScanEntry) *
+                                             (nQueryValues + 1));
+    key->entryRes = (GinTernaryValue *) palloc0(sizeof(GinTernaryValue) *
+                                                (nQueryValues + 1));

     key->query = query;
     key->queryValues = queryValues;
@@ -157,6 +177,12 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
     key->searchMode = searchMode;
     key->attnum = attnum;

+    /*
+     * Initially, scan keys of GIN_SEARCH_MODE_ALL mode are marked
+     * excludeOnly.  This might get changed later.
+     */
+    key->excludeOnly = (searchMode == GIN_SEARCH_MODE_ALL);
+
     ItemPointerSetMin(&key->curItem);
     key->curItemMatches = false;
     key->recheckCurItem = false;
@@ -168,6 +194,7 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,

     ginInitConsistentFunction(ginstate, key);

+    /* Set up normal scan entries using extractQueryFn's outputs */
     for (i = 0; i < nQueryValues; i++)
     {
         Datum        queryKey;
@@ -175,54 +202,28 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,
         bool        isPartialMatch;
         Pointer        this_extra;

-        if (i < nUserQueryValues)
-        {
-            /* set up normal entry using extractQueryFn's outputs */
-            queryKey = queryValues[i];
-            queryCategory = queryCategories[i];
-            isPartialMatch =
-                (ginstate->canPartialMatch[attnum - 1] && partial_matches)
-                ? partial_matches[i] : false;
-            this_extra = (extra_data) ? extra_data[i] : NULL;
-        }
-        else
-        {
-            /* set up hidden entry */
-            queryKey = (Datum) 0;
-            switch (searchMode)
-            {
-                case GIN_SEARCH_MODE_INCLUDE_EMPTY:
-                    queryCategory = GIN_CAT_EMPTY_ITEM;
-                    break;
-                case GIN_SEARCH_MODE_ALL:
-                    queryCategory = GIN_CAT_EMPTY_QUERY;
-                    break;
-                case GIN_SEARCH_MODE_EVERYTHING:
-                    queryCategory = GIN_CAT_EMPTY_QUERY;
-                    break;
-                default:
-                    elog(ERROR, "unexpected searchMode: %d", searchMode);
-                    queryCategory = 0;    /* keep compiler quiet */
-                    break;
-            }
-            isPartialMatch = false;
-            this_extra = NULL;
-
-            /*
-             * We set the strategy to a fixed value so that ginFillScanEntry
-             * can combine these entries for different scan keys.  This is
-             * safe because the strategy value in the entry struct is only
-             * used for partial-match cases.  It's OK to overwrite our local
-             * variable here because this is the last loop iteration.
-             */
-            strategy = InvalidStrategy;
-        }
+        queryKey = queryValues[i];
+        queryCategory = queryCategories[i];
+        isPartialMatch =
+            (ginstate->canPartialMatch[attnum - 1] && partial_matches)
+            ? partial_matches[i] : false;
+        this_extra = (extra_data) ? extra_data[i] : NULL;

         key->scanEntry[i] = ginFillScanEntry(so, attnum,
                                              strategy, searchMode,
                                              queryKey, queryCategory,
                                              isPartialMatch, this_extra);
     }
+
+    /*
+     * For GIN_SEARCH_MODE_INCLUDE_EMPTY and GIN_SEARCH_MODE_EVERYTHING search
+     * modes, we add the "hidden" entry immediately.  GIN_SEARCH_MODE_ALL is
+     * handled later, since we might be able to omit the hidden entry for it.
+     */
+    if (searchMode == GIN_SEARCH_MODE_INCLUDE_EMPTY)
+        ginScanKeyAddHiddenEntry(so, key, GIN_CAT_EMPTY_ITEM);
+    else if (searchMode == GIN_SEARCH_MODE_EVERYTHING)
+        ginScanKeyAddHiddenEntry(so, key, GIN_CAT_EMPTY_QUERY);
 }

 /*
@@ -265,6 +266,7 @@ ginNewScanKey(IndexScanDesc scan)
     GinScanOpaque so = (GinScanOpaque) scan->opaque;
     int            i;
     bool        hasNullQuery = false;
+    bool        attrHasNormalScan[INDEX_MAX_KEYS] = {false};
     MemoryContext oldCtx;

     /*
@@ -371,6 +373,33 @@ ginNewScanKey(IndexScanDesc scan)
                        skey->sk_argument, nQueryValues,
                        queryValues, categories,
                        partial_matches, extra_data);
+
+        /* Remember if we had any non-excludeOnly keys */
+        if (searchMode != GIN_SEARCH_MODE_ALL)
+            attrHasNormalScan[skey->sk_attno - 1] = true;
+    }
+
+    /*
+     * Processing GIN_SEARCH_MODE_ALL scan keys requires us to make a second
+     * pass over the scan keys.  Above we marked each such scan key as
+     * excludeOnly.  If the involved column has any normal (not excludeOnly)
+     * scan key as well, then we can leave it like that.  Otherwise, one
+     * excludeOnly scan key must receive a GIN_CAT_EMPTY_QUERY hidden entry
+     * and be set to normal (excludeOnly = false).
+     */
+    for (i = 0; i < so->nkeys; i++)
+    {
+        GinScanKey    key = &so->keys[i];
+
+        if (key->searchMode != GIN_SEARCH_MODE_ALL)
+            continue;
+
+        if (!attrHasNormalScan[key->attnum - 1])
+        {
+            key->excludeOnly = false;
+            ginScanKeyAddHiddenEntry(so, key, GIN_CAT_EMPTY_QUERY);
+            attrHasNormalScan[key->attnum - 1] = true;
+        }
     }

     /*
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 18d77ac..7c6f057 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -6356,7 +6356,8 @@ spgcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,

 typedef struct
 {
-    bool        haveFullScan;
+    bool        attHasFullScan[INDEX_MAX_KEYS];
+    bool        attHasNormalScan[INDEX_MAX_KEYS];
     double        partialEntries;
     double        exactEntries;
     double        searchEntries;
@@ -6452,16 +6453,21 @@ gincost_pattern(IndexOptInfo *index, int indexcol,
         counts->searchEntries++;
     }

-    if (searchMode == GIN_SEARCH_MODE_INCLUDE_EMPTY)
+    if (searchMode == GIN_SEARCH_MODE_DEFAULT)
+    {
+        counts->attHasNormalScan[indexcol] = true;
+    }
+    else if (searchMode == GIN_SEARCH_MODE_INCLUDE_EMPTY)
     {
         /* Treat "include empty" like an exact-match item */
+        counts->attHasNormalScan[indexcol] = true;
         counts->exactEntries++;
         counts->searchEntries++;
     }
-    else if (searchMode != GIN_SEARCH_MODE_DEFAULT)
+    else
     {
         /* It's GIN_SEARCH_MODE_ALL */
-        counts->haveFullScan = true;
+        counts->attHasFullScan[indexcol] = true;
     }

     return true;
@@ -6597,7 +6603,8 @@ gincost_scalararrayopexpr(PlannerInfo *root,
             /* We ignore array elements that are unsatisfiable patterns */
             numPossible++;

-            if (elemcounts.haveFullScan)
+            if (elemcounts.attHasFullScan[indexcol] &&
+                !elemcounts.attHasNormalScan[indexcol])
             {
                 /*
                  * Full index scan will be required.  We treat this as if
@@ -6654,6 +6661,7 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
                 numEntries;
     GinQualCounts counts;
     bool        matchPossible;
+    bool        fullIndexScan;
     double        partialScale;
     double        entryPagesFetched,
                 dataPagesFetched,
@@ -6665,6 +6673,7 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
     Relation    indexRel;
     GinStatsData ginStats;
     ListCell   *lc;
+    int            i;

     /*
      * Obtain statistical information from the meta page, if possible.  Else
@@ -6821,7 +6830,23 @@ gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
         return;
     }

-    if (counts.haveFullScan || indexQuals == NIL)
+    /*
+     * If an attribute has a full scan and at the same time doesn't have a
+     * normal scan, then we'll have to scan all non-null entries of that attribute.
+     * Currently, we don't have per-attribute statistics for GIN.  Thus, we
+     * must assume the whole GIN index has to be scanned in this case.
+     */
+    fullIndexScan = false;
+    for (i = 0; i < index->nkeycolumns; i++)
+    {
+        if (counts.attHasFullScan[i] && !counts.attHasNormalScan[i])
+        {
+            fullIndexScan = true;
+            break;
+        }
+    }
+
+    if (fullIndexScan || indexQuals == NIL)
     {
         /*
          * Full index scan will be required.  We treat this as if every key in
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index a136f7f..71eeac2 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -304,6 +304,20 @@ typedef struct GinScanKeyData
     OffsetNumber attnum;

     /*
+     * An excludeOnly scan key is not able to enumerate all matching tuples.
+     * That is, to be semantically correct on its own, it would need to have a
+     * GIN_CAT_EMPTY_QUERY scanEntry, but it doesn't.  Such a key can still be
+     * used to filter tuples returned by other scan keys, so we will get the
+     * right answers as long as there's at least one non-excludeOnly scan key
+     * for each index attribute considered by the search.  For efficiency
+     * reasons we don't want to have unnecessary GIN_CAT_EMPTY_QUERY entries,
+     * so we will convert an excludeOnly scan key to non-excludeOnly (by
+     * adding a GIN_CAT_EMPTY_QUERY scanEntry) only if there are no other
+     * non-excludeOnly scan keys.
+     */
+    bool        excludeOnly;
+
+    /*
      * Match status data.  curItem is the TID most recently tested (could be a
      * lossy-page pointer).  curItemMatches is true if it passes the
      * consistentFn test; if so, recheckCurItem is the recheck flag.
diff --git a/src/test/regress/expected/gin.out b/src/test/regress/expected/gin.out
index a3911a6..e3c4805 100644
--- a/src/test/regress/expected/gin.out
+++ b/src/test/regress/expected/gin.out
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.
 -- Create and populate a test table with a GIN index.
 create table gin_test_tbl(i int4[]) with (autovacuum_enabled = off);
@@ -35,3 +35,132 @@ insert into gin_test_tbl select array[1, 2, g] from generate_series(1, 1000) g;
 insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;
 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl
+values
+  (null,    null),
+  ('{}',    null),
+  ('{1}',   null),
+  ('{1,2}', null),
+  (null,    '{}'),
+  (null,    '{10}'),
+  ('{1,2}', '{10}'),
+  ('{2}',   '{10}'),
+  ('{1,3}', '{}'),
+  ('{1,1}', '{10}');
+set enable_seqscan = off;
+explain (costs off)
+select * from t_gin_test_tbl where array[0] <@ i;
+                    QUERY PLAN
+---------------------------------------------------
+ Bitmap Heap Scan on t_gin_test_tbl
+   Recheck Cond: ('{0}'::integer[] <@ i)
+   ->  Bitmap Index Scan on t_gin_test_tbl_i_j_idx
+         Index Cond: (i @> '{0}'::integer[])
+(4 rows)
+
+select * from t_gin_test_tbl where array[0] <@ i;
+ i | j
+---+---
+(0 rows)
+
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;
+ i | j
+---+---
+(0 rows)
+
+explain (costs off)
+select * from t_gin_test_tbl where i @> '{}';
+                    QUERY PLAN
+---------------------------------------------------
+ Bitmap Heap Scan on t_gin_test_tbl
+   Recheck Cond: (i @> '{}'::integer[])
+   ->  Bitmap Index Scan on t_gin_test_tbl_i_j_idx
+         Index Cond: (i @> '{}'::integer[])
+(4 rows)
+
+select * from t_gin_test_tbl where i @> '{}';
+   i   |  j
+-------+------
+ {}    |
+ {1}   |
+ {1,2} |
+ {1,2} | {10}
+ {2}   | {10}
+ {1,3} | {}
+ {1,1} | {10}
+(7 rows)
+
+create function explain_query_json(query_sql text)
+returns table (explain_line json)
+language plpgsql as
+$$
+begin
+  set enable_seqscan = off;
+  set enable_bitmapscan = on;
+  return query execute 'EXPLAIN (ANALYZE, FORMAT json) ' || query_sql;
+end;
+$$;
+create function execute_text_query_index(query_sql text)
+returns setof text
+language plpgsql
+as
+$$
+begin
+  set enable_seqscan = off;
+  set enable_bitmapscan = on;
+  return query execute query_sql;
+end;
+$$;
+create function execute_text_query_heap(query_sql text)
+returns setof text
+language plpgsql
+as
+$$
+begin
+  set enable_seqscan = on;
+  set enable_bitmapscan = off;
+  return query execute query_sql;
+end;
+$$;
+-- check number of rows returned by index and removed by recheck
+select
+  query,
+  js->0->'Plan'->'Plans'->0->'Actual Rows' as "return by index",
+  js->0->'Plan'->'Rows Removed by Index Recheck' as "removed by recheck",
+  (res_index = res_heap) as "match"
+from
+  (values
+    ($$ i @> '{}' $$),
+    ($$ j @> '{}' $$),
+    ($$ i @> '{}' and j @> '{}' $$),
+    ($$ i @> '{1}' $$),
+    ($$ i @> '{1}' and j @> '{}' $$),
+    ($$ i @> '{1}' and i @> '{}' and j @> '{}' $$),
+    ($$ j @> '{10}' $$),
+    ($$ j @> '{10}' and i @> '{}' $$),
+    ($$ j @> '{10}' and j @> '{}' and i @> '{}' $$),
+    ($$ i @> '{1}' and j @> '{10}' $$)
+  ) q(query),
+  lateral explain_query_json($$select * from t_gin_test_tbl where $$ || query) js,
+  lateral execute_text_query_index($$select string_agg((i, j)::text, ' ') from t_gin_test_tbl where $$ || query) res_index,
+  lateral execute_text_query_heap($$select string_agg((i, j)::text, ' ') from t_gin_test_tbl where $$ || query) res_heap;
+                   query                   | return by index | removed by recheck | match
+-------------------------------------------+-----------------+--------------------+-------
+  i @> '{}'                                | 7               | 0                  | t
+  j @> '{}'                                | 6               | 0                  | t
+  i @> '{}' and j @> '{}'                  | 4               | 0                  | t
+  i @> '{1}'                               | 5               | 0                  | t
+  i @> '{1}' and j @> '{}'                 | 3               | 0                  | t
+  i @> '{1}' and i @> '{}' and j @> '{}'   | 3               | 0                  | t
+  j @> '{10}'                              | 4               | 0                  | t
+  j @> '{10}' and i @> '{}'                | 3               | 0                  | t
+  j @> '{10}' and j @> '{}' and i @> '{}'  | 3               | 0                  | t
+  i @> '{1}' and j @> '{10}'               | 2               | 0                  | t
+(10 rows)
+
+reset enable_seqscan;
+reset enable_bitmapscan;
+drop table t_gin_test_tbl;
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index 7af2899..fe1cd9d 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -337,6 +337,41 @@ SELECT count(*) FROM test_tsvector WHERE a @@ '!no_such_lexeme';
    508
 (1 row)

+-- Test optimization of non-empty GIN_SEARCH_MODE_ALL queries
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM test_tsvector WHERE a @@ '!qh';
+                     QUERY PLAN
+-----------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_tsvector
+         Recheck Cond: (a @@ '!''qh'''::tsquery)
+         ->  Bitmap Index Scan on wowidx
+               Index Cond: (a @@ '!''qh'''::tsquery)
+(5 rows)
+
+SELECT count(*) FROM test_tsvector WHERE a @@ '!qh';
+ count
+-------
+   410
+(1 row)
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM test_tsvector WHERE a @@ 'wr' AND a @@ '!qh';
+                                     QUERY PLAN
+------------------------------------------------------------------------------------
+ Aggregate
+   ->  Bitmap Heap Scan on test_tsvector
+         Recheck Cond: ((a @@ '''wr'''::tsquery) AND (a @@ '!''qh'''::tsquery))
+         ->  Bitmap Index Scan on wowidx
+               Index Cond: ((a @@ '''wr'''::tsquery) AND (a @@ '!''qh'''::tsquery))
+(5 rows)
+
+SELECT count(*) FROM test_tsvector WHERE a @@ 'wr' AND a @@ '!qh';
+ count
+-------
+    60
+(1 row)
+
 RESET enable_seqscan;
 INSERT INTO test_tsvector VALUES ('???', 'DFG:1A,2B,6C,10 FGH');
 SELECT * FROM ts_stat('SELECT a FROM test_tsvector') ORDER BY ndoc DESC, nentry DESC, word LIMIT 10;
diff --git a/src/test/regress/sql/gin.sql b/src/test/regress/sql/gin.sql
index c566e9b..836717c 100644
--- a/src/test/regress/sql/gin.sql
+++ b/src/test/regress/sql/gin.sql
@@ -1,7 +1,7 @@
 --
 -- Test GIN indexes.
 --
--- There are other tests to test different GIN opclassed. This is for testing
+-- There are other tests to test different GIN opclasses. This is for testing
 -- GIN itself.

 -- Create and populate a test table with a GIN index.
@@ -34,3 +34,92 @@ insert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;

 delete from gin_test_tbl where i @> array[2];
 vacuum gin_test_tbl;
+
+-- Test optimization of empty queries
+create temp table t_gin_test_tbl(i int4[], j int4[]);
+create index on t_gin_test_tbl using gin (i, j);
+insert into t_gin_test_tbl
+values
+  (null,    null),
+  ('{}',    null),
+  ('{1}',   null),
+  ('{1,2}', null),
+  (null,    '{}'),
+  (null,    '{10}'),
+  ('{1,2}', '{10}'),
+  ('{2}',   '{10}'),
+  ('{1,3}', '{}'),
+  ('{1,1}', '{10}');
+
+set enable_seqscan = off;
+explain (costs off)
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i;
+select * from t_gin_test_tbl where array[0] <@ i and '{}'::int4[] <@ j;
+
+explain (costs off)
+select * from t_gin_test_tbl where i @> '{}';
+select * from t_gin_test_tbl where i @> '{}';
+
+create function explain_query_json(query_sql text)
+returns table (explain_line json)
+language plpgsql as
+$$
+begin
+  set enable_seqscan = off;
+  set enable_bitmapscan = on;
+  return query execute 'EXPLAIN (ANALYZE, FORMAT json) ' || query_sql;
+end;
+$$;
+
+create function execute_text_query_index(query_sql text)
+returns setof text
+language plpgsql
+as
+$$
+begin
+  set enable_seqscan = off;
+  set enable_bitmapscan = on;
+  return query execute query_sql;
+end;
+$$;
+
+create function execute_text_query_heap(query_sql text)
+returns setof text
+language plpgsql
+as
+$$
+begin
+  set enable_seqscan = on;
+  set enable_bitmapscan = off;
+  return query execute query_sql;
+end;
+$$;
+
+-- check number of rows returned by index and removed by recheck
+select
+  query,
+  js->0->'Plan'->'Plans'->0->'Actual Rows' as "return by index",
+  js->0->'Plan'->'Rows Removed by Index Recheck' as "removed by recheck",
+  (res_index = res_heap) as "match"
+from
+  (values
+    ($$ i @> '{}' $$),
+    ($$ j @> '{}' $$),
+    ($$ i @> '{}' and j @> '{}' $$),
+    ($$ i @> '{1}' $$),
+    ($$ i @> '{1}' and j @> '{}' $$),
+    ($$ i @> '{1}' and i @> '{}' and j @> '{}' $$),
+    ($$ j @> '{10}' $$),
+    ($$ j @> '{10}' and i @> '{}' $$),
+    ($$ j @> '{10}' and j @> '{}' and i @> '{}' $$),
+    ($$ i @> '{1}' and j @> '{10}' $$)
+  ) q(query),
+  lateral explain_query_json($$select * from t_gin_test_tbl where $$ || query) js,
+  lateral execute_text_query_index($$select string_agg((i, j)::text, ' ') from t_gin_test_tbl where $$ || query) res_index,
+  lateral execute_text_query_heap($$select string_agg((i, j)::text, ' ') from t_gin_test_tbl where $$ || query) res_heap;
+
+reset enable_seqscan;
+reset enable_bitmapscan;
+
+drop table t_gin_test_tbl;
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index ece80b9..14da7ed 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -111,6 +111,15 @@ SELECT count(*) FROM test_tsvector WHERE a @@ any ('{wr,qh}');
 SELECT count(*) FROM test_tsvector WHERE a @@ 'no_such_lexeme';
 SELECT count(*) FROM test_tsvector WHERE a @@ '!no_such_lexeme';

+-- Test optimization of non-empty GIN_SEARCH_MODE_ALL queries
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM test_tsvector WHERE a @@ '!qh';
+SELECT count(*) FROM test_tsvector WHERE a @@ '!qh';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM test_tsvector WHERE a @@ 'wr' AND a @@ '!qh';
+SELECT count(*) FROM test_tsvector WHERE a @@ 'wr' AND a @@ '!qh';
+
 RESET enable_seqscan;

 INSERT INTO test_tsvector VALUES ('???', 'DFG:1A,2B,6C,10 FGH');

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
Hi!

On Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> > Updated patch is attached.  It contains more comments as well as commit message.
>
> I reviewed this a little bit.  I agree this seems way more straightforward
> than the patches we've been considering so far.  I wasn't too happy with
> the state of the comments, so I rewrote them a bit in the attached v13.

Thank you!

> One thing I'm still not happy about is the comment in
> collectMatchesForHeapRow.  v12 failed to touch that at all, so I tried to
> fill it in, but I'm not sure if my explanation is good.

I've tried to rephrase this comment to make it better from my point of
view.  It's hard for me to be sure about this, since I'm not a native
English speaker.  I'd like you to take a look at it.

> Also, if we know
> that excludeOnly keys are going to be ignored, can we save any work in
> the main loop of that function?

It doesn't look that way to me.  We still need to collect matches for
the consistent-function call afterwards.  We could skip calling the
consistent function for excludeOnly keys by forcing a recheck, but
that's not going to be a clear win.

I thought about a different optimization.  We now check for at least one
matching entry.  Could we check for at least one *required* entry
instead?  It seems we could save some consistent-function calls.

> The test cases needed some work too.  Notably, some of the places where
> you tried to use EXPLAIN ANALYZE are unportable because they expose "Heap
> Blocks" counts that are not stable.  (I checked the patch on a 32-bit
> machine and indeed some of these failed.)  While it'd be possible to work
> around that by filtering the EXPLAIN output, that would not be any simpler
> or faster than our traditional style of just doing a plain EXPLAIN and a
> separate execution.

Thanks!

> It troubles me a bit as well that the test cases don't really expose
> any difference between patched and unpatched code --- I checked, and
> they "passed" without applying any of the code changes.  Maybe there's
> not much to be done about that, since after all this is an optimization
> that's not supposed to change any query results.

Yep, it seems like we can't do much here unless we're willing to
expose too many internals at the user level.

I also had concerns about how excludeOnly keys work with lossy pages.
I didn't find an exact error, but I've added code which skips
excludeOnly key checks for lossy pages.  They aren't going to exclude
any lossy page anyway, so we can save some resources by skipping
this.
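
The reasoning can be sketched with a tiny toy model (the type and
function names below are made up for illustration; this is not
PostgreSQL's actual TIDBitmap or GIN code): a lossy page carries no
exact TIDs, only the claim that some tuples on it may match, so an
exclude-only key has nothing concrete to strike out and the bitmap
heap scan's recheck has to filter the page either way.

```c
#include <stdbool.h>

/* Toy model of one page's worth of bitmap output; hypothetical names,
 * not PostgreSQL's actual TBMIterateResult. */
typedef struct
{
    bool        lossy;      /* whole page matched; exact TIDs unknown */
    int         ntids;      /* number of exact TIDs (valid if !lossy) */
} DemoBitmapPage;

/*
 * An excludeOnly key can only strike out individual TIDs it is sure
 * don't match.  A lossy page has no individual TIDs, so running the
 * key's check there is wasted work.
 */
static bool
worth_running_exclude_checks(const DemoBitmapPage *page)
{
    return !page->lossy;
}

/* Apply an exclude-only filter to a page: exact TIDs can be removed,
 * a lossy page passes through untouched (heap recheck handles it). */
static DemoBitmapPage
apply_exclude_filter(DemoBitmapPage page, int tids_excluded)
{
    if (!worth_running_exclude_checks(&page))
        return page;            /* nothing to strike out; skip the work */
    page.ntids -= tids_excluded;
    return page;
}
```

In this model, skipping the check for lossy pages changes nothing in
the result, only in the work done, which matches the argument above.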

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Wed, Jan 15, 2020 at 1:47 AM Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
> I also had concerns about how excludeOnly keys work with lossy pages.
> I didn't find exact error.  But I've added code, which skips
> excludeOnly keys checks for lossy pages.  They aren't going to exclude
> any lossy page anyway.  So, we can save some resources by skipping
> this.

I also found the way we combine lossy pages and exact TIDs pretty
asymmetric.  Imagine one scan key A matches a lossy page, while
another key B has a set of matching TIDs on the same page.  If key A
goes first, we will report a lossy page.  But if key B goes first, we
will report a set of TIDs with the recheck flag set.  It would be nice
to improve this, but that's definitely the subject of a separate patch.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> On Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> One thing I'm still not happy about is the comment in
>> collectMatchesForHeapRow.  v12 failed to touch that at all, so I tried to
>> fill it in, but I'm not sure if my explanation is good.

> I've tried to rephrase this comment making it better from my point of
> view.  It's hard for me to be sure about this, since I'm not native
> English speaker.  I'd like you to take a look on it.

Yeah, that's not great as-is.  Maybe like

+     * All scan keys except excludeOnly require at least one entry to match.
+     * excludeOnly keys are an exception, because their implied
+     * GIN_CAT_EMPTY_QUERY scanEntry always matches.  So return "true"
+     * if all non-excludeOnly scan keys have at least one match.
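
For readers following along, that rule can be illustrated with a
standalone sketch (the struct and function names here are invented for
illustration; this is not PostgreSQL's actual GinScanKeyData or
collectMatchesForHeapRow):

```c
#include <stdbool.h>

/* Toy stand-in for a GIN scan key, reduced to the two fields the
 * matching rule cares about.  Hypothetical names. */
typedef struct
{
    bool        excludeOnly;    /* key can only filter, never enumerate */
    bool        hasMatch;       /* at least one of its entries matched */
} DemoScanKey;

/*
 * A heap row is a candidate only if every non-excludeOnly scan key has
 * at least one matching entry.  excludeOnly keys are skipped here,
 * because the GIN_CAT_EMPTY_QUERY entry they would otherwise carry
 * always matches; they get their chance to reject the row later, in
 * the consistent-function check.
 */
static bool
row_is_candidate(const DemoScanKey *keys, int nkeys)
{
    for (int i = 0; i < nkeys; i++)
    {
        if (!keys[i].excludeOnly && !keys[i].hasMatch)
            return false;
    }
    return true;
}
```

So a row where the normal key matched and an excludeOnly key did not
is still a candidate, while a row whose normal key failed is rejected
immediately regardless of the excludeOnly keys.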

>> Also, if we know
>> that excludeOnly keys are going to be ignored, can we save any work in
>> the main loop of that function?

> It doesn't look so for me.  We still need to collect matches for
> consistent function call afterwards.

Ah, right.

> I also had concerns about how excludeOnly keys work with lossy pages.
> I didn't find exact error.  But I've added code, which skips
> excludeOnly keys checks for lossy pages.  They aren't going to exclude
> any lossy page anyway.  So, we can save some resources by skipping
> this.

Hmm ... yeah, these test cases are not large enough to exercise any
lossy-page cases, are they?  I doubt we should try to make a new regression
test that is that big.  (But if there is one already, maybe we could add
more test queries with it, instead of creating whole new tables?)

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> > On Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> One thing I'm still not happy about is the comment in
> >> collectMatchesForHeapRow.  v12 failed to touch that at all, so I tried to
> >> fill it in, but I'm not sure if my explanation is good.
>
> > I've tried to rephrase this comment making it better from my point of
> > view.  It's hard for me to be sure about this, since I'm not native
> > English speaker.  I'd like you to take a look on it.
>
> Yeah, that's not great as-is.  Maybe like
>
> +        * All scan keys except excludeOnly require at least one entry to match.
> +        * excludeOnly keys are an exception, because their implied
> +        * GIN_CAT_EMPTY_QUERY scanEntry always matches.  So return "true"
> +        * if all non-excludeOnly scan keys have at least one match.

Looks good to me.

> >> Also, if we know
> >> that excludeOnly keys are going to be ignored, can we save any work in
> >> the main loop of that function?
>
> > It doesn't look so for me.  We still need to collect matches for
> > consistent function call afterwards.
>
> Ah, right.
>
> > I also had concerns about how excludeOnly keys work with lossy pages.
> > I didn't find exact error.  But I've added code, which skips
> > excludeOnly keys checks for lossy pages.  They aren't going to exclude
> > any lossy page anyway.  So, we can save some resources by skipping
> > this.
>
> Hmm ... yeah, these test cases are not large enough to exercise any
> lossy-page cases, are they?  I doubt we should try to make a new regression
> test that is that big.  (But if there is one already, maybe we could add
> more test queries with it, instead of creating whole new tables?)

I've checked that none of the existing tests for GIN can produce a
lossy bitmap page with the minimal work_mem = '64kB'.  I tried to
generate a sample table with a single integer column to get a lossy
page.  It appears that we need at least 231425 rows to get one.  With
wider rows we would need fewer rows, but I don't think the total heap
size would be any smaller.

So, I think we don't need such a huge regression test to exercise this corner case.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachment

Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Sat, Jan 18, 2020 at 12:33 AM Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
> So, I think we don't need so huge regression test to exercise this corner case.

Forgot to mention.  I'm going to push v15 if no objections.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Avoid full GIN index scan when possible

From
Tom Lane
Date:
Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Hmm ... yeah, these test cases are not large enough to exercise any
>> lossy-page cases, are they?  I doubt we should try to make a new regression
>> test that is that big.  (But if there is one already, maybe we could add
>> more test queries with it, instead of creating whole new tables?)

> I've checked that none of existing tests for GIN can produce lossy
> bitmap page with minimal work_mem = '64kB'.  I've tried to generate
> sample table with single integer column to get lossy page.  It appears
> that we need at least 231425 rows to get it.  With wider rows, we
> would need less number of rows, but I think total heap size wouldn't
> be less.
> So, I think we don't need so huge regression test to exercise this corner case.

Ugh.  Yeah, I don't want a regression test case that big either.

v15 looks good to me.

            regards, tom lane



Re: Avoid full GIN index scan when possible

From
Alexander Korotkov
Date:
On Sat, Jan 18, 2020 at 12:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:
> > On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> Hmm ... yeah, these test cases are not large enough to exercise any
> >> lossy-page cases, are they?  I doubt we should try to make a new regression
> >> test that is that big.  (But if there is one already, maybe we could add
> >> more test queries with it, instead of creating whole new tables?)
>
> > I've checked that none of existing tests for GIN can produce lossy
> > bitmap page with minimal work_mem = '64kB'.  I've tried to generate
> > sample table with single integer column to get lossy page.  It appears
> > that we need at least 231425 rows to get it.  With wider rows, we
> > would need less number of rows, but I think total heap size wouldn't
> > be less.
> > So, I think we don't need so huge regression test to exercise this corner case.
>
> Ugh.  Yeah, I don't want a regression test case that big either.
>
> v15 looks good to me.

Thanks! Pushed!

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company