Re: temporary files - Mailing list pgsql-general

From Tomas Vondra
Subject Re: temporary files
Date
Msg-id 20191007210933.7kla65cvnddnvlwm@development
In response to temporary files  (dangal <danielito.gallo@gmail.com>)
Responses Re: temporary files
List pgsql-general
On Mon, Oct 07, 2019 at 01:52:41PM -0700, dangal wrote:
>Hello everyone, I have a question to see if you can help me, I have set
>work_mem in 100 MB but I have the following in the pgbadger
>
>Queries generating the most temporary files (N)
>
>   Count  Total size  Min size   Max size   Avg size
>      58  3.24 GiB    57.15 MiB  57.15 MiB  57.15 MiB
>
>How could it be that if it requires 57 MB, 100 MB is not enough?
>thanks a lot
>

The on-disk and in-memory representations are not the same, and the
on-disk one is often significantly more efficient. Data that needs a 57MB
temporary file may need ~150MB of memory to perform an in-memory sort, for
example.
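
For example (a minimal sketch, using a made-up table "t" and column "x"),
you can see the two forms side by side in EXPLAIN output:

    SET work_mem = '4MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM t ORDER BY x;
    -- with a small work_mem:   Sort Method: external merge  Disk: ...kB
    -- with a larger work_mem:  Sort Method: quicksort  Memory: ...kB

The Disk figure for the same data is usually quite a bit smaller than the
Memory figure, because the spilled format is more compact.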

Yet another reason may be that e.g. a hash join splits the data into
batches, and each one has to fit into work_mem. Whenever we hit the
limit, we double the number of batches, i.e. we cut the batch size in
half. Thus the size ends up somewhere between 50 and 100MB, with ~75MB on
average. Furthermore, we estimate the batch size before running the
query, so we may start with unnecessarily many batches, pushing the
average batch size down even more.
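
You can see the batch count in the plan, e.g. (sketch with hypothetical
tables):

    EXPLAIN (ANALYZE)
    SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
    --   Hash Join ...
    --     ->  Hash ...
    --           Buckets: 65536  Batches: 4  Memory Usage: ...kB

Each batch has to fit into work_mem; when one doesn't, the batch count
doubles at run time (shown as e.g. "Batches: 4 (originally 2)"), halving
the per-batch and temp file size.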


FWIW it's somewhat suspicious that you have 58 temp files with almost no
variability in size. That looks as if a single query were executed
repeatedly. Maybe try looking into which query that is.
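
To track it down, log_temp_files and pg_stat_statements can help, e.g.
(assuming pg_stat_statements is installed):

    -- log every temporary file (value is a size threshold in kB, 0 = all)
    SET log_temp_files = 0;            -- or set it in postgresql.conf

    SELECT query, calls, temp_blks_written
      FROM pg_stat_statements
     ORDER BY temp_blks_written DESC
     LIMIT 10;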


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


