On 2017/09/02 12:44, Thomas Munro wrote:
> On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik
> <k.knizhnik@postgrespro.ru> wrote:
>> postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
>>                              QUERY PLAN
>> ----------------------------------------------------------------------
>>  Append  (cost=0.29..15.63 rows=2 width=8)
>>    ->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
>>          Index Cond: (v = 100)
>>    ->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
>>          Index Cond: (v = 100)
>>          Filter: (k <= 20000)
>> (6 rows)
>
> +1
>
> This seems like a good feature to me: filtering stuff that is
> obviously true is a waste of CPU cycles and may even require people to
> add redundant stuff to indexes. I was pondering something related to
> this over in the partition-wise join thread (join quals that are
> implied by partition constraints and should be discarded).
>
> It'd be interesting to get Amit Langote's feedback, so I CC'd him.
> I'd be surprised if he and others haven't got a plan or a patch for
> this down the back of the sofa.

I agree that this is a good optimization in the cases where it's correct.
Given that check_index_predicates() already applies the same optimization
when considering the use of a partial index, it might make sense to try to
do the same even earlier for the table itself, using its CHECK / NOT NULL
constraints as the predicates (I say *earlier* because
relation_excluded_by_constraints() runs for a relation before we look at
its indexes). In fact, the end of relation_excluded_by_constraints() may
not be such a bad place to do this.
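
To illustrate what I mean, here is a rough sketch (not a patch; the helper
name is made up). I'm imagining it living in plancat.c next to
relation_excluded_by_constraints(), so the static get_relation_constraints()
helper is in scope. Whether strong or weak implication is the right test
for CHECK constraints (which only guarantee non-falsity of each stored row,
whereas a filter qual must evaluate to true) is one of the details a real
patch would have to get right:

/*
 * Sketch: drop baserestrictinfo clauses that the rel's CHECK
 * constraints already prove, mirroring what check_index_predicates()
 * does with clauses implied by a partial index predicate.
 * Hypothetical helper; name and placement are illustrative only.
 */
static void
remove_restrictions_implied_by_constraints(PlannerInfo *root,
                                           RelOptInfo *rel,
                                           RangeTblEntry *rte)
{
    List       *constraints;
    List       *remaining = NIL;
    ListCell   *lc;

    /* Collect the rel's CHECK constraints as a list of clauses. */
    constraints = get_relation_constraints(root, rte->relid, rel, false);

    foreach(lc, rel->baserestrictinfo)
    {
        RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);

        /*
         * Keep the clause unless the constraints imply it.  As noted
         * above, strong implication (weak = false) may not be exactly
         * the right test for constraints that only guarantee
         * non-falsity.
         */
        if (!predicate_implied_by(list_make1(rinfo->clause),
                                  constraints, false))
            remaining = lappend(remaining, rinfo);
    }
    rel->baserestrictinfo = remaining;
}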

By the way, a comment in check_index_predicates() says that we should not
apply this optimization if the relation in question is the target of
UPDATE / DELETE / SELECT FOR UPDATE, because the implied quals must stay in
the plan so that EvalPlanQual rechecking still applies them.
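
So any call site for the sketch above would presumably need the same guard;
from memory, check_index_predicates() tests roughly this:

/*
 * Rough equivalent of check_index_predicates()'s guard: skip the
 * optimization for target relations, because quals removed here would
 * no longer be rechecked by EvalPlanQual.
 */
if (rel->relid != root->parse->resultRelation &&
    get_plan_rowmark(root->rowMarks, rel->relid) == NULL)
    remove_restrictions_implied_by_constraints(root, rel, rte);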
Thanks,
Amit