On Thu, Jun 27, 2013 at 12:01 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> I don't think that sounds all that promising. When the hash table does not
> fit in memory, it is either partitioned into multiple passes, each of which
> do fit in memory, or it chooses a different plan altogether.
Yeah, my point is that we could potentially avoid bringing in all of
the blocks for the hash table. Even when the table is processed in
parts, we still end up reading every one of them.
Why not do a local Bloom filter lookup first, and never bring in the
tuples that test negative against the filter? A negative result
guarantees there is no match, so this could save us some reads anyhow.
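
To make the idea concrete, here is a rough standalone sketch (plain C,
not actual executor code; the names, the double-hashing scheme, and the
sizes are just assumptions for illustration): the filter is built from
the inner side's join keys, and any probe key that tests negative can
be skipped without touching the corresponding hash-table batch.

/*
 * Toy standalone example: build a Bloom filter over the inner side's
 * join keys, then test probe keys against it.  A negative answer means
 * the key is definitely absent, so the matching hash-table batch never
 * has to be read for that tuple.  Everything here (names, the
 * double-hashing scheme, the sizes) is an illustrative assumption,
 * not existing PostgreSQL code.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct BloomFilter
{
    uint64_t   *bits;
    size_t      nbits;      /* number of bits; a power of two */
    int         nhashes;    /* number of simulated hash functions */
} BloomFilter;

/* 64-bit mixer (MurmurHash3 finalizer) used to derive hash values */
static uint64_t
mix64(uint64_t h)
{
    h ^= h >> 33;
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    h *= 0xc4ceb9fe1a85ec53ULL;
    h ^= h >> 33;
    return h;
}

static BloomFilter *
bloom_create(size_t nbits, int nhashes)
{
    BloomFilter *bf = malloc(sizeof(BloomFilter));

    bf->nbits = nbits;
    bf->nhashes = nhashes;
    bf->bits = calloc(nbits / 64, sizeof(uint64_t));
    return bf;
}

/* Set the filter bits for one inner-side join key. */
static void
bloom_add(BloomFilter *bf, uint64_t key)
{
    uint64_t    h1 = mix64(key);
    uint64_t    h2 = mix64(h1);

    for (int i = 0; i < bf->nhashes; i++)
    {
        uint64_t    bit = (h1 + i * h2) & (bf->nbits - 1);

        bf->bits[bit / 64] |= UINT64_C(1) << (bit % 64);
    }
}

/* Returns 0 only if the key is definitely not in the inner relation. */
static int
bloom_maybe_contains(const BloomFilter *bf, uint64_t key)
{
    uint64_t    h1 = mix64(key);
    uint64_t    h2 = mix64(h1);

    for (int i = 0; i < bf->nhashes; i++)
    {
        uint64_t    bit = (h1 + i * h2) & (bf->nbits - 1);

        if ((bf->bits[bit / 64] & (UINT64_C(1) << (bit % 64))) == 0)
            return 0;       /* definitely absent: skip this probe tuple */
    }
    return 1;               /* possibly present: probe the hash batch */
}

int
main(void)
{
    BloomFilter *bf = bloom_create(1 << 20, 3); /* ~1M bits, 3 probes */
    uint64_t    inner_keys[] = {3, 17, 42};
    uint64_t    probe_keys[] = {17, 99, 42, 5};

    for (size_t i = 0; i < 3; i++)
        bloom_add(bf, inner_keys[i]);

    for (size_t i = 0; i < 4; i++)
        printf("key %llu: %s\n",
               (unsigned long long) probe_keys[i],
               bloom_maybe_contains(bf, probe_keys[i]) ?
               "probe the hash table" : "skip, no batch read needed");

    free(bf->bits);
    free(bf);
    return 0;
}

A real version would of course live in the hash join code and size the
filter from the planner's row estimate rather than a hard-coded value.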
> Do we know
> under what conditions a Bloom filter would be superior to those options, and
> could we reliably detect those conditions?
Yes, this needs to be researched.
--
Regards,
Atri
l'apprenant