On Sat, Feb 8, 2020 at 12:10 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:
> > I performed the same test in pg11 and reproduced the issue on the
> > commit prior to a4ccc1cef5a04 (Generational memory allocator).
> >
> > ulimit -s 1024
> > ulimit -v 300000
> >
> > wal_level = logical
> > max_replication_slots = 4
> >
> > [...]
>
> > After that, I applied the "Generational memory allocator" patch and
> > that solved the issue. From the error message, it is evident that the
> > underlying code is trying to allocate MaxHeapTupleSize bytes of memory
> > for each tuple. So, I re-introduced the following lines (which were
> > removed by a4ccc1cef5a04) on top of the patch:
>
> > --- a/src/backend/replication/logical/reorderbuffer.c
> > +++ b/src/backend/replication/logical/reorderbuffer.c
> > @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)
> >
> > alloc_len = tuple_len + SizeofHeapTupleHeader;
> >
> > + if (alloc_len < MaxHeapTupleSize)
> > + alloc_len = MaxHeapTupleSize;
>
> Maybe I'm being slow here - but what does this actually prove? Before
> the generation contexts were introduced we avoided fragmentation (which
> would make things unusably slow) using a brute force method (namely
> forcing all tuple allocations to be of the same/maximum size).
>
It seems that for this we maintained a cache of max_cached_tuplebufs
objects, so we never needed to keep more than that number of
MaxHeapTupleSize tuple buffers around; anything allocated beyond the
cache was returned to aset.c anyway.
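
For reference, here is a minimal sketch of that scheme as I understand
it, simplified from the pre-a4ccc1cef5a04 ReorderBufferReturnTupleBuf
(details are from memory, so treat them as approximate):

/* Sketch only: simplified from the old reorderbuffer.c. */
void
ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)
{
    /*
     * Only the uniform MaxHeapTupleSize allocations are cached, and only
     * up to max_cached_tuplebufs of them; anything else goes straight
     * back to aset.c via pfree().
     */
    if (tuple->alloc_tuple_size == MaxHeapTupleSize &&
        rb->nr_cached_tuplebufs < max_cached_tuplebufs)
    {
        rb->nr_cached_tuplebufs++;
        slist_push_head(&rb->cached_tuplebufs, &tuple->node);
    }
    else
        pfree(tuple);
}

So all slab-cached allocations were of one size (which is what kept
fragmentation at bay) and the number of cached buffers stayed bounded.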
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com