On Sun, Feb 9, 2020 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> It seems for this we formed a cache of max_cached_tuplebufs number of
> objects and we don't need to allocate more than that number of tuples
> of size MaxHeapTupleSize because we will anyway return that memory to
> aset.c.
>
In the approach suggested by Amit (approach 1), once we have allocated
max_cached_tuplebufs buffers of MaxHeapTupleSize, we can use the actual
length of the tuple for any further allocations. So, if we have m
subtransactions, the worst-case memory usage will be,
(max_cached_tuplebufs * MaxHeapTupleSize) cache +
(maximum changes in a subtransaction before spilling) * m * (actual tuple size)
= 64 MB cache + 4095 * m * (actual tuple size)
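
To make the allocation policy concrete, here is a rough sketch of what
approach 1 could look like in ReorderBufferGetTupleBuf-style code (the
function name and the nr_maxsize_tuplebufs counter are made up for
illustration, this is not the actual patch):

static ReorderBufferTupleBuf *
GetTupleBuf_approach1(ReorderBuffer *rb, Size tuple_len)
{
    ReorderBufferTupleBuf *tuple;
    Size        alloc_len;

    /*
     * Until max_cached_tuplebufs buffers of MaxHeapTupleSize have been
     * handed out, allocate the full MaxHeapTupleSize so the buffer can
     * be reused for any future tuple.  After that, further buffers are
     * returned to aset.c on free anyway, so size them to the actual
     * tuple length.
     */
    if (rb->nr_maxsize_tuplebufs < max_cached_tuplebufs)
    {
        alloc_len = MaxHeapTupleSize;
        rb->nr_maxsize_tuplebufs++;     /* hypothetical counter */
    }
    else
        alloc_len = tuple_len;

    tuple = (ReorderBufferTupleBuf *)
        MemoryContextAlloc(rb->context,
                           sizeof(ReorderBufferTupleBuf) + alloc_len);
    tuple->alloc_tuple_size = alloc_len;

    return tuple;
}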
In the approach suggested by Andres (approach 2), the size of a cached
tuple is reduced to 1024 bytes. So, if we have m subtransactions, the
worst-case memory usage will be,
(max_cached_tuplebufs * 1024 bytes) cache +
(maximum changes in a subtransaction before spilling) * m * 1024 bytes
= 8 MB cache + 4095 * m * 1024 bytes (assuming the size of each tuple
is less than 1024 bytes)
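
And the corresponding sketch for approach 2, as I read it (again, the
function name is illustrative; tuples larger than 1024 bytes would get
an oversized, uncached allocation of the exact length):

static ReorderBufferTupleBuf *
GetTupleBuf_approach2(ReorderBuffer *rb, Size tuple_len)
{
    ReorderBufferTupleBuf *tuple;

    /* cached buffers are capped at 1024 bytes */
    Size        alloc_len = Max(tuple_len, (Size) 1024);

    tuple = (ReorderBufferTupleBuf *)
        MemoryContextAlloc(rb->context,
                           sizeof(ReorderBufferTupleBuf) + alloc_len);
    tuple->alloc_tuple_size = alloc_len;

    return tuple;
}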
Once the cache is filled, for 1000 subtransactions operating on tuples
of, say, 100 bytes each, approach 1 will allocate roughly 390 MB
(4095 * 1000 * 100 bytes) on top of its 64 MB cache, whereas approach 2
will allocate roughly 4 GB (4095 * 1000 * 1024 bytes). If there is no
obvious error that I'm missing, I think we should implement the first
approach.
--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com