Re: Copy data to DSA area - Mailing list pgsql-hackers

From: Thomas Munro
Subject: Re: Copy data to DSA area
Msg-id: CAEepm=1RTLNNsHaxoKONMcX39rpereVCKDCj0K2RVp4ckj+aOA@mail.gmail.com
In response to: RE: Copy data to DSA area ("Ideriha, Takeshi" <ideriha.takeshi@jp.fujitsu.com>)
List: pgsql-hackers
On Tue, Nov 13, 2018 at 10:59 PM Ideriha, Takeshi
<ideriha.takeshi@jp.fujitsu.com> wrote:
> Can I check my understanding?
> The situation you are talking about is the following:
> Data structures A and B will be allocated in DSA.  A has a pointer to B.
> Another structure X is already allocated elsewhere (perhaps on the local heap)
> and has a pointer member X->a, which will be set to point to A.
>
>  old_context = MemoryContextSwitchTo(dsa_memory_context);
>  A = palloc(sizeof(*A));
>  B = palloc(sizeof(*B));
>  A->b = B;
>  X->a = A;
>  MemoryContextSwitchTo(old_context);
>
> If an error happens partway through this flow, the palloc'd data (A and B)
> should be freed, and the pointer to the freed data (X->a) should be restored
> to its original value.  Handling these cases would require a "transaction"
> API like begin_allocate() and end_allocate().

I wasn't thinking of resetting X->a to its original value, just
freeing A and B.  For a shared cache in this memory area, you need to
make sure that you never leak memory: it must either be referenced by
the cache's bookkeeping data structures, so that you can find it and
manually free it eventually (probably with an LRU system?), or it must
somehow be cleaned up if an error occurs before you manage to insert
it into the cache.  During construction of an object graph, before you
attach it to your bookkeeping data structures, there is a danger zone.
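
To make that danger zone concrete, here is a minimal sketch of the
manual-cleanup style, assuming a dsa_area * called "area"; NodeA,
fill_in_keys() and insert_into_cache_index() are invented names:

    dsa_pointer a = dsa_allocate(area, sizeof(NodeA));
    NodeA *A = (NodeA *) dsa_get_address(area, a);

    PG_TRY();
    {
        /* Any allocation in here can throw on out-of-memory. */
        fill_in_keys(A);             /* hypothetical build step */
        insert_into_cache_index(a);  /* after this, the cache owns it */
    }
    PG_CATCH();
    {
        /* Still unreferenced by the cache: free before re-throwing. */
        dsa_free(area, a);
        PG_RE_THROW();
    }
    PG_END_TRY();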

For example, what if, while copying a query plan or a catcache entry,
we successfully allocate a new cache entry with palloc(), but then
another palloc() call fails because we run out of memory when copying
the keys?  It looks like the current catcache.c coding just doesn't
care: memory will be leaked permanently in CacheMemoryContext.  That's
OK because it is process-local.  Such a process is probably doomed
anyway.  If you exit and then start a new backend the problem is gone.
Here we need something better, because once this new kind of memory
fills up with leaked objects, you have to restart the whole cluster to
recover.
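
In pseudo-C, the failure mode is just this (names are illustrative,
not the actual catcache.c code):

    MemoryContextSwitchTo(CacheMemoryContext);
    entry = palloc(sizeof(*entry));    /* succeeds */
    entry->key = pstrdup(key_string);  /* throws on OOM: entry leaks */

In a process-local context the leaked entry dies with the backend; in
shared memory it would outlive us.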

If the solution involves significantly different control flows (like
PG_TRY(), PG_FINALLY() manual cleanup all over the place), then lots
of existing palloc-based code will need to be reviewed and rewritten
very carefully for use in this new kind of memory context.  If it can
be done automagically, then more of our existing code will be able to
participate in the construction of stuff that we want to put in shared
memory.
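
One "automagic" candidate (just a sketch of an idea, and it assumes a
MemoryContext that can live in this new kind of memory) is the
reparenting trick plancache.c uses locally: do all the building in a
scratch context, and only adopt it once the object graph is fully
attached; build_entry() and attach_to_cache() are invented names:

    /* Scratch context; cleaned up with its parent on error. */
    MemoryContext tmp = AllocSetContextCreate(CurrentMemoryContext,
                                              "cache entry build",
                                              ALLOCSET_DEFAULT_SIZES);
    MemoryContext old = MemoryContextSwitchTo(tmp);

    entry = build_entry();   /* every palloc() lands in tmp */

    MemoryContextSwitchTo(old);
    attach_to_cache(entry);  /* now reachable from the bookkeeping */
    MemoryContextSetParent(tmp, CacheMemoryContext);

If build_entry() throws, tmp is still a child of a context that error
cleanup resets, so nothing leaks and no PG_TRY() is needed.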

I first thought about this in the context of shared plan caches, a
topic that appears on this mailing list from time to time.  There are
some other fun problems, like cache invalidation for shared caches,
space recycling (LRUs etc.), and of course locking/concurrency
(dsa_allocate() and dsa_free() are concurrency-safe, but whatever data
structures you build in that memory need their own plan for dealing
with concurrency).  But a strategy for clean-up on errors
seems to be a major subproblem to deal with early in the project.  I
will be very happy if we can come up with something :-)
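
As for the locking part, dshash might already be a reasonable shape
for the bookkeeping, since it does per-partition locking over DSA
memory.  A sketch, with the key and entry types invented for
illustration:

    typedef struct CacheEntry
    {
        Oid         key;    /* dshash wants the key at the front */
        dsa_pointer data;   /* the object graph we built in the area */
    } CacheEntry;

    static const dshash_parameters params = {
        sizeof(Oid),
        sizeof(CacheEntry),
        dshash_memcmp,
        dshash_memhash,
        LWTRANCHE_FIRST_USER_DEFINED  /* placeholder tranche id */
    };

    dshash_table *hash = dshash_create(area, &params, NULL);
    bool        found;
    CacheEntry *entry = dshash_find_or_insert(hash, &key, &found);
    if (!found)
        entry->data = built_object;   /* dsa_pointer from earlier */
    dshash_release_lock(hash, entry);

That covers safe insertion; eviction and invalidation are where it
gets interesting.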

On Wed, Nov 14, 2018 at 12:40 AM Ideriha, Takeshi
<ideriha.takeshi@jp.fujitsu.com> wrote:
> From: Thomas Munro [mailto:thomas.munro@enterprisedb.com]
> >postgres=# call hoge_list(3);
> >NOTICE:  Contents of list:
> >NOTICE:   1
> >NOTICE:   2
> >NOTICE:   3
> >CALL
> >
> >You can call that procedure from various different backends to add and remove
> >integers from a List.  The point is that it could just as easily be a query plan or any
> >other palloc-based stuff in our tree, and the pointers work in any backend.
>
> Thanks for the patch.  I know it's a rough sketch, but that's basically what I want to do.
> I tried it, and it works well.

After your message I couldn't resist a small experiment.  But it seems
there are plenty more problems to be solved to make it useful...
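
For anyone reading along: the experiment was a MemoryContext whose
allocations come out of a DSA area, so that existing palloc-based code
can build things in shared memory unchanged.  Roughly this shape (the
context-creation function here is made up, not a real API):

    /* Hypothetical: a context that allocates from a DSA area. */
    MemoryContext shared_ctx = dsa_memory_context_create(area);
    MemoryContext old = MemoryContextSwitchTo(shared_ctx);

    List *list = NIL;             /* plain List code, nothing special */
    list = lappend_int(list, 1);
    list = lappend_int(list, 2);

    MemoryContextSwitchTo(old);

The raw pointers only work across backends if the area is mapped at
the same address in every backend, which is one of those "plenty more
problems".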

-- 
Thomas Munro
http://www.enterprisedb.com

