Re: On-demand running query plans using auto_explain and signals - Mailing list pgsql-hackers

From Pavel Stehule
Subject Re: On-demand running query plans using auto_explain and signals
Msg-id CAFj8pRALc8=Ou8sV-VW-jVJyfWXkDF7SSMv1ywMgnNEhsAn7jQ@mail.gmail.com
In response to Re: On-demand running query plans using auto_explain and signals  ("Shulgin, Oleksandr" <oleksandr.shulgin@zalando.de>)
Responses Re: On-demand running query plans using auto_explain and signals  ("Shulgin, Oleksandr" <oleksandr.shulgin@zalando.de>)
List pgsql-hackers


2015-09-01 17:20 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
I'm not familiar with the shared memory handling, but could we not allocate just enough shared memory to fit the data we're going to write instead of the fixed 8k?  It's not that we cannot know the length of the resulting plan text in advance.

The shared memory cannot be reused (released) :(, so allocating just enough memory is not effective. More to the point, at this stage it makes no sense: a shared memory queue can do almost all of the work.

A-ha, I've discovered the shared memory message queue facility and I see how we can use it.
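For concreteness, here is a minimal sketch of that approach; the helper name and queue size are purely illustrative assumptions, not code from the patch, while the dsm_* and shm_mq_* calls are the 9.5-era PostgreSQL APIs. The backend whose plan is requested creates a throw-away dynamic shared memory segment containing a shm_mq and streams the plan text through it, so the text does not have to fit a fixed 8k buffer:

    /*
     * Sketch only -- not the code under discussion.  Shows the shape of
     * the shm_mq approach for publishing a plan text of arbitrary length.
     */
    #include "postgres.h"

    #include "storage/dsm.h"
    #include "storage/proc.h"
    #include "storage/shm_mq.h"

    #define PLAN_QUEUE_SIZE 16384   /* arbitrary; the queue streams data anyway */

    static void
    publish_plan_text(const char *plan)
    {
        dsm_segment *seg;
        shm_mq     *mq;
        shm_mq_handle *mqh;

        /* A segment just big enough for the queue itself. */
        seg = dsm_create(PLAN_QUEUE_SIZE, 0);
        mq = shm_mq_create(dsm_segment_address(seg), PLAN_QUEUE_SIZE);
        shm_mq_set_sender(mq, MyProc);
        mqh = shm_mq_attach(mq, seg, NULL);

        /*
         * In real code the handle from dsm_segment_handle(seg) would be
         * advertised to the requesting backend here (e.g. through a small
         * fixed-size shared memory area), so it can dsm_attach() and
         * become the queue's receiver.
         */

        /* Blocks until the receiver has attached and consumed the message. */
        (void) shm_mq_send(mqh, strlen(plan) + 1, plan, false);
    }

On the other side, the requesting backend would dsm_attach() the advertised handle, register itself with shm_mq_set_receiver(), attach, and read the text with shm_mq_receive().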

But do we really need the slots mechanism?  Would it not be OK to just let the LWLock do the sequencing of concurrent requests?  Given that we're only going to use one message queue per cluster, there's not much concurrency to be gained by introducing slots, I believe.

I am afraid of problems in production. When a queue is tied to a particular process, any problems with it are gone once that process ends. With one message queue per cluster, some pathological problem would require a cluster restart - and in production you sometimes cannot restart the cluster for weeks. The slots are more robust.
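To make that trade-off concrete, here is a rough sketch of what a slot-based layout could look like; the struct and function names are assumptions for illustration, not the actual patch. Each request (or each backend) gets its own fixed-size slot in plain shared memory, and an LWLock only serializes claiming and releasing slots:

    #include "postgres.h"

    #include "storage/lwlock.h"

    #define EXPLAIN_SLOT_PAYLOAD 8192   /* the fixed 8k mentioned above */

    typedef struct ExplainSlot
    {
        pid_t       target_pid;     /* backend whose plan is requested; 0 = free */
        bool        ready;          /* set once the plan text has been written */
        char        payload[EXPLAIN_SLOT_PAYLOAD];
    } ExplainSlot;

    typedef struct ExplainSlotArray
    {
        LWLock     *lock;           /* serializes slot acquire/release only */
        int         nslots;         /* e.g. sized from MaxBackends */
        ExplainSlot slots[FLEXIBLE_ARRAY_MEMBER];
    } ExplainSlotArray;

    static ExplainSlotArray *slot_array;    /* set up at shared memory init time */

    /* Claim a free slot for a request aimed at target_pid; -1 if none free. */
    static int
    acquire_explain_slot(pid_t target_pid)
    {
        int         result = -1;
        int         i;

        LWLockAcquire(slot_array->lock, LW_EXCLUSIVE);
        for (i = 0; i < slot_array->nslots; i++)
        {
            if (slot_array->slots[i].target_pid == 0)
            {
                slot_array->slots[i].target_pid = target_pid;
                slot_array->slots[i].ready = false;
                result = i;
                break;
            }
        }
        LWLockRelease(slot_array->lock);

        return result;
    }

    /* On backend exit (or error), only that backend's slots need resetting. */
    static void
    release_explain_slot(int slot_no)
    {
        LWLockAcquire(slot_array->lock, LW_EXCLUSIVE);
        slot_array->slots[slot_no].target_pid = 0;
        LWLockRelease(slot_array->lock);
    }

With this layout, cleanup after a misbehaving or dying backend is limited to resetting the slots it owned, which is what makes the scheme survivable without a cluster restart; a single cluster-wide queue offers no equivalent per-process cleanup point.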

Pavel
 


