Re: Group by more efficient than distinct? - Mailing list pgsql-performance

From Mark Mielke
Subject Re: Group by more efficient than distinct?
Date
Msg-id 480DD390.7020405@mark.mielke.cc
In response to Re: Group by more efficient than distinct?  (Matthew Wakeling <matthew@flymine.org>)
Responses Re: Group by more efficient than distinct?
List pgsql-performance
Matthew Wakeling wrote:
> On Mon, 21 Apr 2008, Mark Mielke wrote:
>> This surprises me - hash values are lossy, so it must still need to
>> confirm against the real list of values, which at a minimum should
>> require references to the rows to check against?
>>
>> Is PostgreSQL doing something beyond my imagination? :-)
>
> Not too far beyond your imagination, I hope.
>
> It's simply your assumption that the hash table is lossy. Sure, hash
> values are lossy, but a hash table isn't. Postgres stores in memory
> not only the hash values, but the rows they refer to as well, having
> checked them all on disc beforehand. That way, it doesn't need to look
> up anything on disc for that branch of the join again, and it has a
> rapid in-memory lookup for each row.

I said hash *values* are lossy. I did not say hash table is lossy.

The poster I responded to said that the memory required for a hash join
was relative to the number of distinct values, not the number of rows.
They gave an example of millions of rows, but only a few distinct
values. Above, you agree with me that it would include the rows (or at
least references to the rows) as well. If it stores rows, or references
to rows, then memory *is* proportional to the number of rows, and
millions of records would require millions of stored rows (or row
references).
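To make that concrete, here is a toy sketch (Python, not Postgres internals) of the build phase of a hash join. Even with only a handful of distinct key values, the table has to keep every row (or a reference to it) so that join output can be produced, so memory grows with the row count:

```python
from collections import defaultdict

def build_hash_table(rows, key_index):
    """Build side of a hash join: bucket every row by its join key.

    Hash values alone would be lossy; to emit join results, the table
    must retain the rows themselves (or references to them).
    """
    table = defaultdict(list)
    for row in rows:
        table[row[key_index]].append(row)  # one entry per row, not per key
    return table

# A million rows, but only 3 distinct key values:
rows = [(i % 3, "payload-%d" % i) for i in range(1_000_000)]
table = build_hash_table(rows, 0)
assert len(table) == 3                                   # few distinct keys...
assert sum(len(v) for v in table.values()) == 1_000_000  # ...but every row kept
```

So the few-distinct-values case shrinks the number of buckets, not the number of retained rows.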

Cheers,
mark

--
Mark Mielke <mark@mielke.cc>

