Re: BUG #19438: segfault with temp_file_limit inside cursor - Mailing list pgsql-bugs
| From | Tom Lane |
|---|---|
| Subject | Re: BUG #19438: segfault with temp_file_limit inside cursor |
| Date | |
| Msg-id | 1106026.1774573371@sss.pgh.pa.us |
| In response to | BUG #19438: segfault with temp_file_limit inside cursor (PG Bug reporting form <noreply@postgresql.org>) |
| List | pgsql-bugs |
PG Bug reporting form <noreply@postgresql.org> writes:
> I experimented with setting temp_file_limit within a cursor and discovered a
> segmentation fault under certain circumstances.
> The issue exists in the current minor releases of 14 and 15 (14.22 and 15.17), but I
> was unable to reproduce it in version 16 or higher.
> To reproduce, simply run the following code.
> begin;
> declare cur1 cursor for select c, c c2 from generate_series(0, 1000000)
> x(c) order by c;
> \o /dev/null
> fetch all from cur1;
> set temp_file_limit TO '1MB';
> fetch backward all from cur1;
> rollback ;
Many thanks for the report! I confirm your results that this fails
in v14 and v15 but not later branches. However, I'm quite mystified
why v16 and v17 don't fail. The attached patch fixes it in v15,
and I think we need to apply it to all branches.
What is happening is that the last FETCH is trying to fill the
holdStore of the Portal holding the FETCH execution, and we soon run
out of work_mem and start dumping the tuples into a temp file. While
doing that, we run up against the temp_file_limit and fd.c throws an
error. This leaves the Portal's holdStore in a corrupted state, as a
result of the oversight described and fixed in the attached patch:
we've already deleted some tuples from its in-memory array, but the
tuplestore's state doesn't reflect that. Then during transaction
abort we must clean up the tuplestore (since it's part of a long-lived
data structure), and tuplestore_end therefore tries to delete all the
tuples in the in-memory array. Double free. Kaboom.
At least, that's what happens in v15 and (probably) all prior branches
for a long way back. v18 and later fortuitously avoid the failure
because they got rid of tuplestore_end's retail tuple deletion loop
in favor of a memory context deletion (cf 590b045c3). v16 and v17
*should* fail, but somehow they don't, and I don't understand why not.
I bisected it and determined that the failures stop with this commit,
which git bisect reports as the first new one:

commit c6e0fe1f2a08505544c410f613839664eea9eb21
Author: David Rowley <drowley@postgresql.org>
Date:   Mon Aug 29 17:15:00 2022 +1200

    Improve performance of and reduce overheads of memory management
which makes no sense whatsoever. Somehow, we are not crashing on a
double free with the new memory chunk header infrastructure. David,
have you any idea why not?
Even though no failure manifests with this example in v16+, we are
clearly at risk by leaving corrupted tuplestore state behind,
so I think the attached has to go into all branches.
regards, tom lane
diff --git a/src/backend/utils/sort/tuplestore.c b/src/backend/utils/sort/tuplestore.c
index f605ece721e..f12e8d23a9c 100644
--- a/src/backend/utils/sort/tuplestore.c
+++ b/src/backend/utils/sort/tuplestore.c
@@ -1221,6 +1221,17 @@ dumptuples(Tuplestorestate *state)
if (i >= state->memtupcount)
break;
WRITETUP(state, state->memtuples[i]);
+
+ /*
+ * Increase memtupdeleted to track the fact that we just deleted that
+ * tuple. Think not to remove this on the grounds that we'll reset
+ * memtupdeleted to zero below. We might not get there, if some later
+ * WRITETUP fails (e.g. due to overrunning temp_file_limit). If so,
+ * we'd error out leaving an effectively-corrupt tuplestore, which
+ * would be quite bad if it's a persistent data structure such as a
+ * Portal's holdStore.
+ */
+ state->memtupdeleted++;
}
state->memtupdeleted = 0;
state->memtupcount = 0;