Thread: Avoid memory leaks during ANALYZE's compute_index_stats()?
I looked into the out-of-memory problem reported by Jakub Ouhrabka here:
http://archives.postgresql.org/pgsql-general/2010-11/msg00353.php

It's pretty simple to reproduce, even in HEAD; what you need is an index
expression that computes a bulky intermediate result. His example is

	md5(array_to_string(f1, ''::text))

where f1 is a bytea array occupying typically 15kB per row. Even though
the final result of md5() is only 32 bytes, evaluation of this expression
will eat about 15kB for the detoasted value of f1, roughly double that for
the results of the per-element output function calls done inside
array_to_string, and another 30kB for the final result string of
array_to_string. And *none of that gets freed* until compute_index_stats()
is all done. In my testing, with the default stats target of 100, this
gets repeated for 30k sample rows, requiring something in excess of 2GB of
transient space. Jakub was using stats target 500, so it'd be closer to
10GB for him.

AFAICS the only practical fix for this is to have the inner loop of
compute_index_stats() copy each index expression value out of the
per-tuple memory context and into the per-index "Analyze Index" context.
That would allow it to reset the per-tuple memory context after each
FormIndexDatum call and thus clean up whatever intermediate-result trash
the evaluation left behind. The extra copying is a bit annoying, since it
would add cycles while accomplishing nothing useful for index expressions
with no intermediate results, but I'm thinking this is a must-fix.

Comments?

			regards, tom lane
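A minimal SQL sketch of the reproduction described above - the table name,
data generator, and sizes are illustrative rather than taken from the
report, and note that releases which mark array_to_string() as stable
would need an immutable wrapper function before this index could be built:

    CREATE TABLE leaky (f1 bytea[]);

    -- ~15kB per row: an array of fifteen 1kB bytea elements
    INSERT INTO leaky
    SELECT ARRAY(SELECT decode(repeat('ab', 1000), 'hex')
                 FROM generate_series(1, 15))
    FROM generate_series(1, 100000);

    CREATE INDEX leaky_md5 ON leaky (md5(array_to_string(f1, '')));

    SET default_statistics_target = 100;  -- sample size is 300 * target,
                                          -- i.e. the 30k rows cited above
    ANALYZE leaky;  -- unpatched, each sampled row leaves ~60kB of
                    -- intermediate garbage that is not freed until
                    -- compute_index_stats() is done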
On 11/8/10 5:04 PM, Tom Lane wrote:
> The extra copying is a bit annoying, since it would add cycles while
> accomplishing nothing useful for index expressions with no intermediate
> results, but I'm thinking this is a must-fix.

I'd say that performance numbers are what to check on this. How much does
the copying slow down low-memory expressions?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
Josh Berkus <josh@agliodbs.com> writes:
> On 11/8/10 5:04 PM, Tom Lane wrote:
>> The extra copying is a bit annoying, since it would add cycles while
>> accomplishing nothing useful for index expressions with no intermediate
>> results, but I'm thinking this is a must-fix.

> I'd say that performance numbers are what to check on this. How much
> does the copying slow down low-memory expressions?

It's noticeable but not horrible. I tried this test case:

regression=# \d tst
      Table "public.tst"
 Column |       Type       | Modifiers 
--------+------------------+-----------
 f1     | double precision | 
Indexes:
    "tsti" btree ((f1 + 1.0::double precision))

with 100000 rows on a 32-bit machine (so that float8 is pass-by-ref).
The ANALYZE time went from about 625 msec to about 685 msec. I believe
this is pretty much the worst case percentage-wise: the table is small
enough to fit in RAM, so no I/O is involved; the index expression is about
as simple and cheap to evaluate as it could possibly be; and the amount of
work done analyzing the main table is about as small as it could possibly
be. In any other situation the other components of the ANALYZE cost would
grow proportionally more than the copying cost.

Not-too-well-tested-yet patch attached.

			regards, tom lane


diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 68a92dbac0142ee45be3ceb33e6c76c19c12c25d..15464c3dc66d35b254ba0f2f93228449265fcdfa 100644
*** a/src/backend/commands/analyze.c
--- b/src/backend/commands/analyze.c
*************** compute_index_stats(Relation onerel, dou
*** 695,700 ****
--- 695,706 ----
  		{
  			HeapTuple	heapTuple = rows[rowno];
  
+ 			/*
+ 			 * Reset the per-tuple context each time, to reclaim any cruft
+ 			 * left behind by evaluating the predicate or index expressions.
+ 			 */
+ 			ResetExprContext(econtext);
+ 
  			/* Set up for predicate or expression evaluation */
  			ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
  
*************** compute_index_stats(Relation onerel, dou
*** 719,733 ****
  						   isnull);
  
  			/*
! 			 * Save just the columns we care about.
  			 */
  			for (i = 0; i < attr_cnt; i++)
  			{
  				VacAttrStats *stats = thisdata->vacattrstats[i];
  				int			attnum = stats->attr->attnum;
  
! 				exprvals[tcnt] = values[attnum - 1];
! 				exprnulls[tcnt] = isnull[attnum - 1];
  				tcnt++;
  			}
  		}
--- 725,750 ----
  						   isnull);
  
  			/*
! 			 * Save just the columns we care about.  We copy the values
! 			 * into ind_context from the estate's per-tuple context.
  			 */
  			for (i = 0; i < attr_cnt; i++)
  			{
  				VacAttrStats *stats = thisdata->vacattrstats[i];
  				int			attnum = stats->attr->attnum;
  
! 				if (isnull[attnum - 1])
! 				{
! 					exprvals[tcnt] = (Datum) 0;
! 					exprnulls[tcnt] = true;
! 				}
! 				else
! 				{
! 					exprvals[tcnt] = datumCopy(values[attnum - 1],
! 											   stats->attrtype->typbyval,
! 											   stats->attrtype->typlen);
! 					exprnulls[tcnt] = false;
! 				}
  				tcnt++;
  			}
  		}
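Spelled out, the timing test above looks roughly like this (the row
generator is an assumption; the table and index match the \d output):

    CREATE TABLE tst (f1 double precision);
    INSERT INTO tst SELECT random() FROM generate_series(1, 100000);
    CREATE INDEX tsti ON tst ((f1 + 1.0::double precision));

    \timing
    ANALYZE tst;  -- ~625 msec unpatched vs. ~685 msec patched on the
                  -- 32-bit build described above

The datumCopy() call is the only added per-row cost, and for pass-by-value
results it just returns the Datum without palloc'ing anything; a cheap
pass-by-reference result like this float8 is therefore the worst case,
since the copy is a palloc-plus-memcpy for every sampled row.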
Hi Tom,

thanks for the brilliant analysis - now we know how to avoid the problem.

As a side note: from the user's point of view it would be really nice to
know that the error was caused by auto-ANALYZE - at least on 8.2 it's not
at all obvious from the server log. The error was the first message logged
for the given backend PID, so it looked to me like a problem during
backend startup - and we do have log_connections set to on...

Thanks,

Kuba
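On 8.3 and later the server can log autovacuum and auto-ANALYZE activity
directly, which makes this kind of failure much easier to attribute to the
right backend (a configuration sketch; 8.2 predates this setting):

    # postgresql.conf
    log_autovacuum_min_duration = 0    # 0 = log every autovacuum and
                                       # auto-ANALYZE action; -1 (the
                                       # default) disables this logging

With that set, the autovacuum worker emits log lines along the lines of
"automatic analyze of table ..." under its own PID.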