Hi,
On 2018-01-30 15:06:02 -0500, Robert Haas wrote:
> On Tue, Jan 30, 2018 at 2:08 PM, Andres Freund <andres@anarazel.de> wrote:
> >> That bites, although it's probably tolerable if we expect such errors
> >> only in exceptional situations such as a needed shared library failing
> >> to load or something. Killing the session when we run out of memory
> >> during JIT compilation is not very nice at all. Does the LLVM library
> >> have any useful hooks that we can leverage here, like a hypothetical
> >> function LLVMProvokeFailureAsSoonAsConvenient()?
> >
> > I don't see how that'd help if a memory allocation fails? We can't just
> > continue in that case? You could arguably have a reserve memory pool
> > that you release in that case and then try to continue, but that seems
> > awfully fragile.
>
> Well, I'm just asking what the library supports. For example:
>
> https://curl.haxx.se/libcurl/c/CURLOPT_PROGRESSFUNCTION.html
I get that type of function; what I don't understand is how that applies
to OOM:
> If you had something like that, you could arrange to safely interrupt
> the library the next time the progress-function was called.
Yea, but how are you going to *get* to the next time, given that the
allocator just failed to allocate memory? You can't simply return a NULL
pointer, because the caller will try to use that memory.
> > The profiling one does dump to ~/.debug/jit/ - it seems a bit annoying
> > if profiling can only be done by a superuser? Hm :/
>
> The server's ~/.debug/jit? Or are you somehow getting the output to the client?
Yes, the server's - I'm not sure I understand the "client" bit? It's
about perf profiling, which isn't available to the client either.
Greetings,
Andres Freund