On Fri, Feb 7, 2020 at 5:32 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:
>
> On Tue, Feb 4, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > I don't think we can just back-patch that part of code as it is linked
> > to the way we are maintaining a cache (~8MB) for frequently allocated
> > objects. See the comments around the definition of
> > max_cached_tuplebufs. But probably, we can do something once we reach
> > such a limit, basically, once we know that we have already allocated
> > max_cached_tuplebufs number of tuples of size MaxHeapTupleSize, we
> > don't need to allocate more of that size. Does this make sense?
> >
>
> Yeah, this makes sense. I've attached a patch that implements the
> same. It solves the problem reported earlier. This solution will at
> least slow down the process of going OOM even for very small tuples.
>
The patch seems to be in the right direction and the test at my end
shows that it resolves the issue. One minor comment:
* those. Thus always allocate at least MaxHeapTupleSize. Note that tuples
* generated for oldtuples can be bigger, as they don't have out-of-line
* toast columns.
+ *
+ * But, if we've already allocated the memory required for building the
+ * cache later, we don't have to allocate memory more than the size of the
+ * tuple.
*/
How about modifying the existing comment as: "Most tuples are below
MaxHeapTupleSize, so we use a slab allocator for those. Thus always
allocate at least MaxHeapTupleSize until the slab cache is filled. Note
that tuples generated for oldtuples can be bigger, as they don't have
out-of-line toast columns."?
Have you tested this in 9.6 and 9.5?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com