Re: [PATCH] Keeps tracking the uniqueness with UniqueKey - Mailing list pgsql-hackers

From Andy Fan
Subject Re: [PATCH] Keeps tracking the uniqueness with UniqueKey
Msg-id CAKU4AWrGrs0Vk5OrZmS1gbTA2ijDH18NHKnXZTPZNuupn++ing@mail.gmail.com
In response to Re: [PATCH] Keeps tracking the uniqueness with UniqueKey  (David Rowley <dgrowleyml@gmail.com>)
Responses Re: [PATCH] Keeps tracking the uniqueness with UniqueKey  (David Rowley <dgrowleyml@gmail.com>)
List pgsql-hackers
Thanks David for your time.  I will address every item you mentioned
in the updated patch; for now I'll just discuss some of them.


1. There seem to be some cases where joins are no longer being
detected as unique. This is evident in postgres_fdw.out. We shouldn't
be regressing any of these cases.

You are correct.  The issue here is that I didn't distinguish the one-row case
in the UniqueKey struct.  When an outer relation is joined with a relation
which has only one row, there can be at most one matching row for each outer row.
The only-one-row cases in postgres_fdw.out come from aggregate
calls.  I will add a "bool onerow" field to the UniqueKey struct, and
also try your optimization suggestion for the onerow UniqueKey.
 
2. The following change does not seem like it should be part of this
patch.  I understand you perhaps have done as you think it will
improve the performance of checking if an expression is in a list of
expressions.

- COMPARE_SCALAR_FIELD(varno);
+ /* Compare varattno first since it has higher selectivity than varno */
  COMPARE_SCALAR_FIELD(varattno);
+ COMPARE_SCALAR_FIELD(varno);

I did hesitate when I made this change, so I'll roll it back in the
next version of the patch.
 
If you think that is true, then please do it as a separate effort and
provide benchmarks with your findings.

3. list_all_members_in. I think this would be better named as
list_is_subset. Please follow the lead of bms_is_subset().
Additionally, you should Assert that IsPointerList is true as there's
nothing else to indicate that it can't be used for an int or Oid list.

4. guarantee is not a very good name for the field in UniqueKey.
Maybe something like is_not_null?

 
5. I think you should be performing a bms_del_member during join
removal rather than removing this Assert()

- Assert(bms_equal(rel->relids, root->all_baserels));

FWIW, it's far from perfect that you've needed to delay the left join
removal, but I do understand why you've done it. It's also far from
perfect that you're including removed relations in the
total_table_pages calculation. c6e4133fae1 took some measures to
improve this calculation and this is making it worse again.

Since the join removal depends on the UniqueKey, which in the current code
has to be computed after the total_table_pages calculation, that's
something I must do.  But if the relation is not removable, there is no waste
at all, and if it is removable, the gain will be much higher than the loss.
I'm not sure this should be a concern.

Actually it looks like the current remove_useless_join has some limits and
fails to remove some joinrels; I still haven't figured out why.  In the past we
had only a limited ability to detect uniqueness after a join, so that was
acceptable.  Since we have that ability now, this may be another opportunity to
improve the join_is_removable function, but I'd rather not put that in this patch.

Since you said "far from perfect" twice for this point and I can only see one
reason (we may plan a node which we remove later), did I miss the other one?

6. Can you explain why you moved the populate_baserel_uniquekeys()
call out of set_plain_rel_size()?

This is for consistency with populate_partitionedrel_uniquekeys, which
is called from set_append_rel_pathlist.
  
7. I don't think the removal of rel_supports_distinctness() is
warranted.  Is it not ok to check if the relation has any uniquekeys?

I think this is a good suggestion; I will follow it.

8. Your spelling of unique is incorrect in many places:

src/backend/nodes/makefuncs.c: * makeUnqiueKey
src/backend/optimizer/path/uniquekeys.c:static List
*initililze_unqiuecontext_for_joinrel(RelOptInfo *joinrel,
src/backend/optimizer/path/uniquekeys.c: * check if combination of
unqiuekeys from both side is still useful for us,
src/backend/optimizer/path/uniquekeys.c:        outerrel_uniquekey_ctx
= initililze_unqiuecontext_for_joinrel(joinrel, outerrel);
src/backend/optimizer/path/uniquekeys.c:        innerrel_uniquekey_ctx
= initililze_unqiuecontext_for_joinrel(joinrel, innerrel);
src/backend/optimizer/path/uniquekeys.c: * we need to convert the
UnqiueKey from sub_final_rel to currel via the positions info in
src/backend/optimizer/path/uniquekeys.c:                ctx->pos =
pos; /* the position in current targetlist,  will be used to set
UnqiueKey */
src/backend/optimizer/path/uniquekeys.c: * Check if Unqiue key of the
innerrel is valid after join. innerrel's UniqueKey
src/backend/optimizer/path/uniquekeys.c: * initililze_unqiuecontext_for_joinrel
src/backend/optimizer/path/uniquekeys.c: * all the unqiuekeys which
are not possible to use later
src/backend/optimizer/path/uniquekeys.c:initililze_unqiuecontext_for_joinrel(RelOptInfo
*joinrel,  RelOptInfo *inputrel)
src/backend/optimizer/plan/analyzejoins.c:                      /*
This UnqiueKey is what we want */
src/backend/optimizer/plan/planner.c:   /* If we the result if unqiue
already, we just return the input_rel directly */
src/include/nodes/pathnodes.h: * exprs is a list of exprs which is
unqiue on current RelOptInfo.
src/test/regress/expected/join.out:-- XXXX: since b.id is unqiue now
so the group by cluase is erased, so
src/test/regress/expected/select_distinct.out:-- create unqiue index on dist_p
src/test/regress/expected/select_distinct.out:-- we also support
create unqiue index on each child tables
src/test/regress/sql/join.sql:-- XXXX: since b.id is unqiue now so the
group by cluase is erased, so
src/test/regress/sql/select_distinct.sql:-- create unqiue index on dist_p
src/test/regress/sql/select_distinct.sql:-- we also support create
unqiue index on each child tables

9. A few things wrong with the following fragment:

/* set the not null info now */
ListCell *lc;
foreach(lc, find_nonnullable_vars(qual))
{
Var *var = lfirst_node(Var, lc);
RelOptInfo *rel = root->simple_rel_array[var->varno];
if (var->varattno > InvalidAttrNumber)
rel->not_null_cols = bms_add_member(rel->not_null_cols, var->varattno);
}

a. Including a function call in the foreach macro is not a practice
that we really follow. It's true that the macro now assigns the 2nd
param to a variable; prior to 1cff1b95ab6 this was not the case, and
it's best not to leave around any bad examples that backpatched code
might follow.
b. We generally subtract InvalidAttrNumber from varattno when
including in a Bitmapset.
c. not_null_cols is not well named. I think notnullattrs
d. not_null_cols should not be a Relids type, it should be Bitmapset.

If it is a Bitmapset, we usually have to pass it with "&".  Is that our practice?
 
10. add_uniquekey_for_onerow() seems pretty wasteful.  Is there really
a need to add each item in the rel's targetlist to the uniquekey list?
What if we just add an empty list to the unique keys, that way if we
need to test if some expr is a superset of any uniquekey, then we'll
see it is as any set is a superset of an empty set.  Likely the empty
set of uniquekeys should be the only one in the rel's uniquekey list.

11. In create_distinct_paths() the code is now calling
get_sortgrouplist_exprs() multiple times with the same input. I think
it would be better to just call it once and set the result in a local
variable.

12. The comment in the code below is not true. The List contains
Lists, of which contain UniqueKeys

List    *uniquekeys; /* List of UniqueKey */

It is a list of UniqueKeys, and each UniqueKey can have a list of exprs.
 
13. I'm having trouble parsing the final sentence in:

+ * can only guarantee the uniqueness without considering the null values. This
+ * field is necessary for remove_useless_join & reduce_unique_semijions since
+ * these cases don't care about the null values.

Why is the field which stores the nullability of the key required for
code that does not care about the nullability of the key?

The guarantee field was introduced for cases like the following:

create table t1 (a int primary key, b int);
create table t2 (a int primary key, b int);
select .. from t1, (select b from t2 group by b) t2 ..;

-- b is nullable, so t2(b) can't be a normal UniqueKey (b may have
-- duplicated NULL rows) even with:
create unique index t2_uk_b on t2(b);

-- but the left join can still be removed, since t2.b has a unique index
-- and the nullability doesn't matter here:
select t1.* from t1 left join t2 on (t1.b = t2.b);

Do you think we can do some optimization in this case?  I don't understand
your question well.
 
15. The tests you've changed the expected outcome of in join.out
should be updated so that the GROUP BY and DISTINCT clause is not
removed. This will allow the test to continue testing what it was
intended to test. You can do this by changing the columns in the GROUP
BY clause so that the new code does not find uniquekeys for those
columns.   

Thanks for your explanation, very impressive!
 
16. The tests in aggregates.out are in a similar situation. There are
various tests trying to ensure that remove_useless_groupby_columns()
does what it's meant to do. You can modify these tests to add a join
which is non-unique to effectively duplicate the PK column.

17. In your select_distinct tests, can you move away from naming the
tables starting with select_distinct?  It makes reading queries pretty
hard.

e.g. explain (costs off) select distinct uk1, uk2 from
select_distinct_a where uk2 is not null;

When I first glanced that, I failed to see the underscores and the
query looked invalid.

18. Check the spelling if "erased". You have it spelt as "ereased" in
a couple of locations.

OK, I just installed a spell-check plugin for my editor; hopefully it will
catch such errors next time.
 
 
19. Please pay attention to the capitalisation of SQL keywords in the
test files you've modified. I understand we're very inconsistent in
this department in general, but we do at least try not to mix
capitalisation within the same file.  Basically, please upper case the
keywords in select_distinct.sql

20. In addition to the above, please try to wrap long SQL lines so
they're below 80 chars.

I'll review the patch in more detail once the above points have been addressed.

David
