Thread: Suggestions on message transfer among backends
Hi:

I need some functionality that requires message exchange among different
back-ends (connections). Specifically, I need a shared hash map and a
message queue.

Message queue: it should support many writers and one reader. A POSIX
message queue looks like it would be OK, but Postgres doesn't use one. Is
there an equivalent in PG?

Shared hash map: the number of items can be fixed, and the value size can
be fixed as well.

Any keywords or explanation would be extremely helpful.

Thanks
A note on the shared hash map: it needs multiple writers and multiple readers.
On Mon, Mar 11, 2019 at 9:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:
> I need some functionality that requires message exchange among different
> back-ends (connections). Specifically, I need a shared hash map and a
> message queue.
> [...]
On Mon, Mar 11, 2019 at 10:36, Andy Fan <zhihui.fan1213@gmail.com> wrote:
>
> I need some function which requires some message exchange among different back-ends (connections).
> specially I need a shared hash map and a message queue.
>
It seems you are looking for LISTEN/NOTIFY. However, if it is part of a
complex solution, a background worker with shared memory access is the way
to go.

--
Euler Taveira        Timbira - http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support and Training
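[For illustration, a minimal client-side sketch of the LISTEN/NOTIFY route Euler mentions, using libpq. The connection string, the channel name backend_chat, and the payload handling are invented for this example; a sender in another session would simply run NOTIFY backend_chat, 'some payload' or SELECT pg_notify('backend_chat', 'some payload').]

/* Hypothetical libpq listener; connection string and channel name are
 * examples only. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	PGnotify   *note;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	/* Subscribe this session to the channel. */
	PQclear(PQexec(conn, "LISTEN backend_chat"));

	for (;;)
	{
		/* Pull whatever has arrived on the socket ... */
		PQconsumeInput(conn);

		/* ... and drain any queued notifications. */
		while ((note = PQnotifies(conn)) != NULL)
		{
			printf("notify on '%s' from pid %d, payload '%s'\n",
				   note->relname, note->be_pid, note->extra);
			PQfreemem(note);
		}
		/* A real program would block on PQsocket(conn) with select()/poll()
		 * instead of spinning. */
	}

	PQfinish(conn);		/* not reached in this sketch */
	return 0;
}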
On 03/11/19 19:53, Euler Taveira wrote:
> On Mon, Mar 11, 2019 at 10:36, Andy Fan <zhihui.fan1213@gmail.com> wrote:
>>
>> I need some function which requires some message exchange among different back-ends (connections).
>> specially I need a shared hash map and a message queue.
>>
> It seems you are looking for LISTEN/NOTIFY. However, if it is part of

My own recollection from looking at LISTEN/NOTIFY is that, yes, it offers a
mechanism for message passing among sessions, but the message /reception/
part is very closely bound to the frontend/backend protocol. That is, a
message sent in session B can be received in session A, but it pretty much
goes flying straight out the network connection to /the connected client
associated with session A/.

If you're actually working /in the backend/ of session A (say, in a
server-side PL), it seemed to be unexpectedly difficult to find a way to
hook those notifications. But I looked at it only briefly, and some time
ago.

Regards,
-Chap
Hello.

At Mon, 11 Mar 2019 21:37:32 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in <CAKU4AWqhZn1v5CR85J74AAVXnTijWTzy6y-3pbYxqmpL5ETEig@mail.gmail.com>
> notes on the shared hash map: it needs multi writers and multi readers.
>
> On Mon, Mar 11, 2019 at 9:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:
>
> > Hi:
> > I need some function which requires some message exchange among
> > different back-ends (connections).
> > specially I need a shared hash map and a message queue.
> >
> > Message queue: it should be many writers, 1 reader. Looks POSIX
> > message queue should be OK, but postgre doesn't use it. is there any
> > equivalent in PG?
> >
> > shared hash map: the number of items can be fixed and the value can be
> > fixed as well.
> >
> > any keywords or explanation will be extremely helpful.

I suppose that you are writing an extension or tweaking the core code in C.
dshash (dynamic shared hash) would work for you as a shared hash map, and
is shm_mq usable as the message queue?

--
Kyotaro Horiguchi
NTT Open Source Software Center
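[For illustration, a rough sketch of the dshash route Kyotaro mentions, assuming C code running inside the server and roughly the PG 11-14 API. The entry layout and the names chat_hash_create/chat_hash_put/ChatEntry are invented here, and real code would also publish the dsa and dshash handles so that other backends can dsa_attach()/dshash_attach().]

#include "postgres.h"

#include "lib/dshash.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"

/* Entry layout is invented for this example; dshash requires the key at
 * the start of the entry. */
typedef struct ChatEntry
{
	uint32		key;
	int32		value;
} ChatEntry;

static dsa_area *chat_area = NULL;
static dshash_table *chat_hash = NULL;

/*
 * One backend creates the area and the table; others would dsa_attach()
 * and dshash_attach() using handles published in a small piece of plain
 * shared memory (not shown here).
 */
static void
chat_hash_create(void)
{
	dshash_parameters params;
	int			tranche_id = LWLockNewTrancheId();

	memset(&params, 0, sizeof(params));
	params.key_size = sizeof(uint32);
	params.entry_size = sizeof(ChatEntry);
	params.compare_function = dshash_memcmp;	/* compare keys as raw bytes */
	params.hash_function = dshash_memhash;		/* hash keys as raw bytes */
	params.tranche_id = tranche_id;

	chat_area = dsa_create(tranche_id);
	dsa_pin(chat_area);			/* keep the area alive for other backends */
	chat_hash = dshash_create(chat_area, &params, NULL);
}

/* Multiple readers and writers are fine; entries come back locked. */
static void
chat_hash_put(uint32 key, int32 value)
{
	bool		found;
	ChatEntry  *entry = dshash_find_or_insert(chat_hash, &key, &found);

	entry->value = value;
	dshash_release_lock(chat_hash, entry);
}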
On 11/03/2019 18:36, Andy Fan wrote:
> Hi:
> I need some function which requires some message exchange among
> different back-ends (connections).
> specially I need a shared hash map and a message queue.
>
> Message queue: it should be many writers, 1 reader. Looks POSIX
> message queue should be OK, but postgre doesn't use it. is there any
> equivalent in PG?
>
> shared hash map: the number of items can be fixed and the value can be
> fixed as well.
>
> any keywords or explanation will be extremely helpful.

You may use shm_mq (shared memory queue) and hash tables (dynahash.c) in
shared memory (see ShmemInitHash() + shmem_startup_hook).

> Thanks

--
Andrey Lepikhov
Postgres Professional
https://postgrespro.com
The Russian Postgres Company
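[For illustration, a hedged sketch of the dynahash-in-shared-memory pattern Andrey points to, in the shape commonly used by extensions loaded via shared_preload_libraries. The names chat_hash, ChatEntry, and CHAT_MAX_ENTRIES are invented; PG 15+ would move the RequestAddinShmemSpace() call into a shmem_request_hook.]

/* Hedged sketch: a shared dynahash created from shmem_startup_hook. */
#include "postgres.h"

#include "fmgr.h"
#include "miscadmin.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"
#include "utils/hsearch.h"

PG_MODULE_MAGIC;

typedef struct ChatEntry
{
	uint32		key;			/* dynahash expects the key at the start */
	int32		value;
} ChatEntry;

#define CHAT_MAX_ENTRIES 128

static HTAB *chat_hash = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

static void
chat_shmem_startup(void)
{
	HASHCTL		info;

	if (prev_shmem_startup_hook)
		prev_shmem_startup_hook();

	memset(&info, 0, sizeof(info));
	info.keysize = sizeof(uint32);
	info.entrysize = sizeof(ChatEntry);

	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
	chat_hash = ShmemInitHash("chat shared hash",
							  CHAT_MAX_ENTRIES, CHAT_MAX_ENTRIES,
							  &info,
							  HASH_ELEM | HASH_BLOBS);
	LWLockRelease(AddinShmemInitLock);
}

void
_PG_init(void)
{
	if (!process_shared_preload_libraries_in_progress)
		return;

	/* Reserve shared memory for the table (pre-PG-15 style). */
	RequestAddinShmemSpace(hash_estimate_size(CHAT_MAX_ENTRIES,
											  sizeof(ChatEntry)));

	prev_shmem_startup_hook = shmem_startup_hook;
	shmem_startup_hook = chat_shmem_startup;
}

Lookups and inserts then go through hash_search() with HASH_FIND / HASH_ENTER, typically guarded by an LWLock of your own, since a shared dynahash does not serialize concurrent access by itself.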
On Tue, Mar 12, 2019 at 1:59 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:
> On 11/03/2019 18:36, Andy Fan wrote:
> Hi:
> I need some function which requires some message exchange among
> different back-ends (connections).
> specially I need a shared hash map and a message queue.
>
> Message queue: it should be many writers, 1 reader. Looks POSIX
> message queue should be OK, but postgre doesn't use it. is there any
> equivalent in PG?
>
> shared hash map: the number of items can be fixed and the value can be
> fixed as well.
>
> any keywords or explanation will be extremely helpful.
> You may use shm_mq (shared memory queue) and hash tables (dynahash.c) in
> shared memory (see ShmemInitHash() + shmem_startup_hook)
>
> Thanks
> --
> Andrey Lepikhov
> Postgres Professional
> https://postgrespro.com
> The Russian Postgres Company
I planned to use a POSIX/System V message queue, since those are able to
support multiple readers and multiple writers.
I just don't know why shm_mq is designed to be single-reader & single-writer.
On Tue, Mar 12, 2019 at 2:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:
> On Tue, Mar 12, 2019 at 1:59 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:
> > You may use shm_mq (shared memory queue) and hash tables (dynahash.c) in
> > shared memory (see ShmemInitHash() + shmem_startup_hook).

Thanks Andrey and everyone who replied! dynahash/ShmemInitHash is the one
I'm using, and it is OK for my purposes.

> I planned to use a POSIX/System V message queue, since those are able to
> support multiple readers and multiple writers.
POSIX/System V message queues are not a portable choice for Postgres, since
they are not supported on all operating systems (Darwin, for example). I
think that may be a reason why PG doesn't use them. I'm just hacking for
fun, so a POSIX message queue can still be a solution for me.

> I just don't know why shm_mq is designed to be single-reader & single-writer.

Probably that is simpler and enough for PostgreSQL. That is just my
thinking given my current knowledge.
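[For comparison, a minimal sketch of the many-writers / one-reader POSIX message queue Andy describes. The queue name and sizes are arbitrary; as noted above this is not portable everywhere (macOS provides no POSIX message queues), and on Linux it needs -lrt.]

/* Sketch only: any number of writers, one reader draining the queue. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <mqueue.h>

#define CHAT_QUEUE "/pg_chat_queue"

/* Writers can run concurrently; the reader creates the queue first. */
static void
writer(const char *msg)
{
	mqd_t		mq = mq_open(CHAT_QUEUE, O_WRONLY);

	mq_send(mq, msg, strlen(msg) + 1, 0);
	mq_close(mq);
}

/* The single reader creates the queue and drains it. */
static void
reader(void)
{
	struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 256 };
	mqd_t		mq = mq_open(CHAT_QUEUE, O_CREAT | O_RDONLY, 0600, &attr);
	char		buf[256];		/* must be at least mq_msgsize bytes */

	for (;;)
	{
		ssize_t		n = mq_receive(mq, buf, sizeof(buf), NULL);

		if (n > 0)
			printf("got: %s\n", buf);
	}
}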
Andy Fan <zhihui.fan1213@gmail.com> wrote:

> I just don't know why shm_mq is designed to be single-reader & single-writer.

shm_mq was implemented as part of the infrastructure for parallel query
processing. The leader backend launches multiple parallel workers and sets
up a few queues to communicate with each. One queue is used to send the
request (query plan) to the worker, one queue is there to receive data from
it, and I think there's one more queue to receive error messages.

--
Antonin Houska
https://www.cybertec-postgresql.com
On Tue, Mar 12, 2019 at 4:34 AM Antonin Houska <ah@cybertec.at> wrote:
> Andy Fan <zhihui.fan1213@gmail.com> wrote:
> > I just don't know why shm_mq is designed to be single-reader & single-writer.
>
> shm_mq was implemented as part of the infrastructure for parallel query
> processing. The leader backend launches multiple parallel workers and sets
> up a few queues to communicate with each. One queue is used to send the
> request (query plan) to the worker, one queue is there to receive data
> from it, and I think there's one more queue to receive error messages.

No, the queues aren't used to send anything to the worker. We know the size
of the query plan before we create the DSM, so we can just allocate enough
space to store the whole thing. We don't know the size of the result set,
though, so we use a queue to retrieve that from the worker. And we also
don't know the size of any warnings or errors or other such things that the
worker might generate, so we use a queue to retrieve that stuff, too. It
turned out to be better to have a separate queue for each of those things
rather than a single queue for both.

I admit that I could have designed a system that supported multiple readers
and writers and that it would have been useful, but it also would have been
more work, and there's something to be said for finishing the feature
before your boss fires you. Also, such a system would probably have more
overhead; shm_mq can do a lot of things without locks that would need locks
if you had multiple readers and writers.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> wrote:

> On Tue, Mar 12, 2019 at 4:34 AM Antonin Houska <ah@cybertec.at> wrote:
> > Andy Fan <zhihui.fan1213@gmail.com> wrote:
> > > I just don't know why shm_mq is designed to be single-reader & single-writer.
> >
> > shm_mq was implemented as part of the infrastructure for parallel query
> > processing. The leader backend launches multiple parallel workers and sets
> > up a few queues to communicate with each. One queue is used to send the
> > request (query plan) to the worker, one queue is there to receive data
> > from it, and I think there's one more queue to receive error messages.
>
> No, the queues aren't used to send anything to the worker. We know
> the size of the query plan before we create the DSM, so we can just
> allocate enough space to store the whole thing.

ok, I forgot that. (Last time I saw this part was when reading the parallel
sequential scan patch a few years ago.)

--
Antonin Houska
https://www.cybertec-postgresql.com
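[For illustration, a hedged sketch of the single-writer / single-reader shm_mq pattern Robert describes, roughly as the parallel-query code wires it up on a DSM segment (PG 11-14 era API). Error handling, worker launch, and passing of the segment handle are omitted, and the function names and queue size are invented.]

/* Hedged sketch of shm_mq on a DSM segment. */
#include "postgres.h"

#include "storage/dsm.h"
#include "storage/proc.h"
#include "storage/shm_mq.h"

#define CHAT_QUEUE_SIZE 16384

/*
 * Leader side: create a segment, lay a queue on top of it, and declare
 * this process the (single) receiver.
 */
static shm_mq_handle *
leader_setup_queue(dsm_segment **segp)
{
	dsm_segment *seg = dsm_create(CHAT_QUEUE_SIZE, 0);
	shm_mq	   *mq = shm_mq_create(dsm_segment_address(seg), CHAT_QUEUE_SIZE);

	shm_mq_set_receiver(mq, MyProc);
	*segp = seg;
	return shm_mq_attach(mq, seg, NULL);
}

/*
 * Worker side: after dsm_attach()ing the same segment (not shown), declare
 * this process the (single) sender and stream a message.
 */
static void
worker_send(shm_mq *mq, dsm_segment *seg, const char *msg)
{
	shm_mq_handle *mqh;

	shm_mq_set_sender(mq, MyProc);
	mqh = shm_mq_attach(mq, seg, NULL);
	shm_mq_send(mqh, strlen(msg) + 1, msg, false);	/* blocking send */
}

/* Leader side: blocking receive of one message. */
static void
leader_receive(shm_mq_handle *mqh)
{
	Size		nbytes;
	void	   *data;

	if (shm_mq_receive(mqh, &nbytes, &data, false) == SHM_MQ_SUCCESS)
		elog(LOG, "received %zu bytes from worker", nbytes);
}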