Re: [patch] Imporve pqmq - Mailing list pgsql-hackers
From: Xiaoran Wang
Subject: Re: [patch] Imporve pqmq
Msg-id: CAGjhLkPUQ8NKto4Wat4rJFw5Ha_vSUop6vetaMtXEbeLctQQKA@mail.gmail.com
In response to: Re: [patch] Imporve pqmq (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: [patch] Imporve pqmq
List: pgsql-hackers
> On Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <fanfuxiaoran@gmail.com> wrote:
> > When I use the 'pqmq' recently, I found some issues, just fix them.
> >
> > Allow the param 'dsm_segment *seg' to be NULL in function
> > 'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created
> > in shared memory instead of DSM.
>
> Under what circumstances does this happen?
I just create the shm_mq in static shared memory; compared with DSM, it is simpler, and there is no need to attach to and detach from a DSM segment. A shm_mq in static shared memory meets my requirement: it is used by two different sessions, where session A dumps some information to session B through the shm_mq. Session B is actually a monitor session, which a user can use to monitor the state of slow queries, such as the queries running in session A. Yes, I could use DSM in that situation, but I think it is better to let 'pqmq' support a shm_mq that is not in DSM.
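For context, a minimal sketch of what I mean, using the existing shm_mq API from shm_mq.h and shmem.h (the function name 'MonitorShmemInit' and the queue size are hypothetical; 'ShmemInitStruct' and 'shm_mq_create' are the real APIs):

```c
#include "storage/shm_mq.h"
#include "storage/shmem.h"

#define MONITOR_QUEUE_SIZE 16384	/* assumed size, for illustration */

/*
 * Hypothetical shmem-startup routine: carve the queue out of the main
 * shared-memory segment instead of creating a DSM segment for it.
 */
static void
MonitorShmemInit(void)
{
	bool		found;
	void	   *addr;

	addr = ShmemInitStruct("monitor shm_mq", MONITOR_QUEUE_SIZE, &found);
	if (!found)
		shm_mq_create(addr, MONITOR_QUEUE_SIZE);
}
```

Because the queue lives in static shared memory, there is simply no dsm_segment to hand to 'shm_mq_attach' (whose 'seg' argument is already optional) or to 'pq_redirect_to_shm_mq', which is why the patch allows NULL there.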
> > Add function 'pq_leave_shm_mq' to allow the process to go
> > back to the previous pq environment.
>
> In the code as it currently exists, a parallel worker never has a
> connected client, and it talks to a shm_mq instead. So there's no need
> for this. If a backend needs to communicate with both a connected
> client and also a shm_mq, it probably should not use pqmq but rather
> decide explicitly which messages should be sent to the client and
> which to the shm_mq. Otherwise, it seems hard to avoid possible loss
> of protocol sync.
As described above, session B sends a signal to session A; session A handles the signal and writes a message into the shm_mq using the pq protocol. So session A first calls 'pq_redirect_to_shm_mq', sends the message, and then calls 'pq_leave_shm_mq' to continue its normal work.
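Concretely, the sequence on session A's side would look roughly like this ('pq_leave_shm_mq' is the function this patch proposes and does not exist in core today; 'HandleMonitorRequest' and 'DumpQueryState' are hypothetical names; the shm_mq calls are the real API):

```c
/*
 * Sketch: session A reacts to session B's request by temporarily
 * redirecting libpq output into the shared-memory queue.
 */
static void
HandleMonitorRequest(shm_mq *mq)
{
	shm_mq_handle *mqh;

	shm_mq_set_sender(mq, MyProc);
	/* seg is NULL: the queue lives in static shared memory, not a DSM */
	mqh = shm_mq_attach(mq, NULL, NULL);

	pq_redirect_to_shm_mq(NULL, mqh);	/* first hunk of the patch */
	DumpQueryState();			/* emits protocol messages via pq_* calls */
	pq_leave_shm_mq();			/* proposed: restore the previous pq state */

	shm_mq_detach(mqh);
}
```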
On Fri, Aug 9, 2024 at 03:24, Robert Haas <robertmhaas@gmail.com> wrote: