On Sunday, June 23, 2013, Simon Riggs wrote:
> On 23 June 2013 03:16, Stephen Frost <sfrost@snowman.net> wrote:
> > Still doesn't really address the issue of dups though.
>
> Checking for duplicates in all cases would be wasteful, since often we
> are joining to the PK of a smaller table.
Well, that's what ndistinct is there to help us figure out. If we don't trust that though...
> If duplicates are possible at all for a join, then it would make sense
> to build the hash table more carefully to remove dupes. I think we
> should treat that as a separate issue.
We can't simply remove the dups... We have to return all the matching dups in the join. I did write a patch which created a two-level list structure where the first level was the unique keys and the second was the dups, but building the hash table that way was extremely expensive and scanning it wasn't much cheaper.
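For illustration only, a minimal sketch of the kind of two-level bucket that description suggests (this is not the actual patch, and all names here are hypothetical): one first-level entry per distinct join key, with additional tuples for the same key chained below it.

/* Hypothetical sketch, not the actual patch: a hash bucket that keeps
 * one first-level entry per distinct key and chains duplicate tuples
 * beneath it, so a probe finds all matches for a key in one place. */
#include <stdint.h>
#include <stdlib.h>

typedef struct DupNode
{
    struct DupNode *next;   /* next duplicate tuple for the same key */
    void           *tuple;  /* opaque pointer to the stored tuple */
} DupNode;

typedef struct KeyEntry
{
    struct KeyEntry *next;      /* next distinct key in this bucket */
    uint32_t         hashvalue; /* hash of the join key */
    void            *tuple;     /* first tuple seen for this key */
    DupNode         *dups;      /* remaining tuples with an equal key */
} KeyEntry;

/* Insert a tuple: if an entry for this key already exists, push the
 * tuple onto its duplicate chain; otherwise add a new first-level entry. */
static void
bucket_insert(KeyEntry **bucket, uint32_t hashvalue, void *tuple)
{
    KeyEntry *e;

    for (e = *bucket; e != NULL; e = e->next)
    {
        if (e->hashvalue == hashvalue)  /* real code must also compare keys */
        {
            DupNode *d = malloc(sizeof(DupNode));

            d->tuple = tuple;
            d->next = e->dups;
            e->dups = d;
            return;
        }
    }

    e = malloc(sizeof(KeyEntry));
    e->hashvalue = hashvalue;
    e->tuple = tuple;
    e->dups = NULL;
    e->next = *bucket;
    *bucket = e;
}

The extra allocation and pointer chasing per duplicate tuple is presumably part of why building and scanning such a structure turned out to be so expensive.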
Thanks,
Stephen