PHJ file leak. - Mailing list pgsql-hackers

From Kyotaro Horiguchi
Subject PHJ file leak.
Date
Msg-id 20191111.212418.2222262873417235945.horikyota.ntt@gmail.com
Responses Re: PHJ file leak.
List pgsql-hackers
Hello. While looking at a patch, I found that PHJ sometimes complains
about file leaks when accompanied by LIMIT.

Repro is very simple:

create table t as (select a, a as b from generate_series(0, 999999) a);
analyze t;
select t.a from t join t t2 on (t.a = t2.a) limit 1;

Once in several (or dozens of) executions, the last query complains as
follows.

WARNING:  temporary file leak: File 15 still referenced
WARNING:  temporary file leak: File 17 still referenced
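Since the warning appears only intermittently, one way to catch it is to run the query in a loop until it shows up (a minimal sketch, assuming a local server where the table `t` above has already been created; the iteration count of 50 is arbitrary):

```shell
# Run the join-with-LIMIT query repeatedly; the temporary file leak
# warning is emitted only occasionally, so stop as soon as it appears.
for i in $(seq 1 50); do
  psql -X -c "select t.a from t join t t2 on (t.a = t2.a) limit 1" 2>&1 \
    | grep -i 'temporary file leak' && break
done
```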

This happens with PHJ, and the leaked file was a shared tuplestore for
outer tuples, which was opened by sts_parallel_scan_next() called from
ExecParallelHashJoinOuterGetTuple(). It seems to me that
ExecHashTableDestroy is forgetting to release the shared tuplestore
accessors. Please find the attached.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 224cbb32ba..8399683569 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -871,6 +871,9 @@ ExecHashTableDestroy(HashJoinTable hashtable)
         }
     }
 
+    /* Also, free up accessors to shared tuplestores if any */
+    ExecParallelHashCloseBatchAccessors(hashtable);
+
     /* Release working memory (batchCxt is a child, so it goes away too) */
     MemoryContextDelete(hashtable->hashCxt);
 
@@ -2991,6 +2994,7 @@ ExecParallelHashCloseBatchAccessors(HashJoinTable hashtable)
         sts_end_parallel_scan(hashtable->batches[i].outer_tuples);
     }
     pfree(hashtable->batches);
+    hashtable->nbatch = 0;
     hashtable->batches = NULL;
 }

