Simon Riggs <simon@2ndquadrant.com> writes:
> Sure hash table is dynamic, but we read all inner rows to create the
> hash table (nodeHash) before we get the outer rows (nodeHJ).
But our idea of the number of batches needed can change during that
process, resulting in some inner tuples being initially assigned to the
wrong temp file. This would also be true for hashagg.
> Why would we continue to dynamically build the hash table after the
> start of the outer scan?
The number of tuples written to a temp file might exceed what we want to
hold in memory; we won't detect this until the batch is read back in,
and in that case we have to split the batch at that time. For hashagg
this point would apply to the aggregate states not the input tuples, but
it's still a live problem (especially if the aggregate states aren't
fixed-size values ... consider a "concat" aggregate for instance).
			regards, tom lane