Re: [ADMIN] question on hash joins - Mailing list pgsql-admin

From Tom Lane
Subject Re: [ADMIN] question on hash joins
Msg-id 30523.1508422449@sss.pgh.pa.us
In response to Re: [ADMIN] question on hash joins  ("Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert.m.hartranft@nasa.gov>)
Responses Re: [ADMIN] question on hash joins  ("Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert.m.hartranft@nasa.gov>)
List pgsql-admin
"Hartranft, Robert M. (GSFC-423.0)[RAYTHEON CO]" <robert.m.hartranft@nasa.gov> writes:
> Sorry if I am being dense, but I still have a question…
> Is it possible for me to estimate the size of the hash and a value for
> the temp_file_limit setting using information in the explain plan?

Well, it'd be (row_overhead + data_width) * number_of_rows.

Poking around in the source code, it looks like the row_overhead in
a tuplestore temp file is 10 bytes (can be more if you have nulls in
the data).  Your example seemed to be storing one bigint column,
so data_width is 8 bytes.  data_width can be a fairly squishy thing
to estimate if the data being passed through the join involves variable-
width columns, but the planner's number is usually an OK place to start.

> For example, one possibility is that the hash contains the entire tuple for each
> matching row.

No, it's just the columns that need to be used in or passed through the
join.  If you want to be clear about this you can use EXPLAIN VERBOSE
and check what columns are emitted by the plan node just below the hash.
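As a sketch of what to look for (table and column names here are
invented, and the plan text is only the general output shape, not real
output): the "Output:" line on the node directly under the Hash shows
exactly which columns get stored in the hash table.

```sql
EXPLAIN VERBOSE
SELECT f.id FROM fact f JOIN dim d ON f.id = d.id;

--  Hash Join
--    Output: f.id
--    Hash Cond: (f.id = d.id)
--    ->  Seq Scan on public.fact f
--          Output: f.id
--    ->  Hash
--          Output: d.id
--          ->  Seq Scan on public.dim d
--                Output: d.id    <- only d.id is stored in the hash
```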
        regards, tom lane

