On Sun, Dec 18, 2022 at 12:48:03PM +0100, Frits Jalvingh wrote:
> Hi list,
>
> I have a misbehaving query which uses all available disk space and then
> terminates with a "cannot write block" error. To prevent other processes
> from running into trouble I've set the following:
>
> temp_file_limit = 100GB
> The comment in the file states that this is a per-session parameter, so
> what is going wrong here?
Do you mean the comment in postgresql.conf?
Commit d1f822e58 changed that comment to say that temp_file_limit is
actually per-process and not per-session.
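Since the limit is per-process, a query with N parallel workers can in
principle write up to (N+1) * temp_file_limit across the leader and its
workers. If it helps, something like this (just the stock GUCs, nothing
specific to your setup) shows the relevant settings in one place:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('temp_file_limit',
                    'max_parallel_workers_per_gather',
                    'log_temp_files');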
Could you send the query plan, preferably "explain analyze" output (if
the query sometimes finishes)?
log_temp_files may be helpful here.
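For example (assuming superuser access, since it needs ALTER SYSTEM),
something like this logs every temp file of 1GB or more, which should
show which process is writing them; setting it to 0 would log them all:

    ALTER SYSTEM SET log_temp_files = '1GB';
    SELECT pg_reload_conf();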
> The query does parallelize and uses one parallel worker while executing,
> but it does not abort when the temp file limit is reached:
>
> 345G pgsql_tmp
>
> It does abort way later, after using around 300+ GB:
> [53400] ERROR: temporary file size exceeds temp_file_limit (104857600kB)
> Where: parallel worker
Are you sure the 345G are from only one instance of the query?
Or is it running multiple times, or alongside other queries each writing
100GB of tempfiles?
It seems possible that it sometimes runs with more than one parallel
worker. Also, are there old/stray tempfiles there which need to be
cleaned up?
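On v12 or later, something like this (it needs superuser or pg_monitor)
lists what is currently under the default tablespace's pgsql_tmp; the
backend's PID is embedded in each file name, so you can see whether the
files belong to live processes or are leftovers from crashed ones:

    SELECT name, pg_size_pretty(size) AS size, modification
      FROM pg_ls_tmpdir()
     ORDER BY size DESC;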
--
Justin