Re: NOTIFY with tuples - Mailing list pgsql-hackers

From Thomas Munro
Subject Re: NOTIFY with tuples
Date
Msg-id CADLWmXW90E1CKmvzKeb8tcXZoPdkrOYQybbpORZaKzycwTtMOg@mail.gmail.com
In response to Re: NOTIFY with tuples  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: NOTIFY with tuples  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-hackers
On 14 December 2011 04:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Tue, Dec 13, 2011 at 6:30 PM, Thomas Munro <munro@ip9.org> wrote:
>>> I imagine a very simple system like this, somehow built on top of
>>> the existing NOTIFY infrastructure:
>
>> I'm not sure whether we'd want something like this in core, so for a
>> first go-around, you might want to consider building it as an
>> extension. ...  I'm not sure you
>> need NOTIFY for anything anywhere in here.
>
> Actually, what I'd suggest is just some code to serialize and
> deserialize tuples and transmit 'em via the existing NOTIFY payload
> facility.  I agree that presenting it as some functions would be a lot
> less work than inventing bespoke syntax, but what you sketched still
> involves writing a lot of communications infrastructure from scratch,
> and I'm not sure it's worth doing that.

Thank you both for your feedback!

Looking at commands/async.c, it seems as though it would be difficult
for function code running in the backend to get its hands on the
payload containing the serialized tuple: the notification is
immediately passed to the client in NotifyMyFrontEnd, and since there
is only one queue for all notifications, you can't put things back or
defer consuming some of them, IIUC.  Maybe the code could be changed
to handle payloads holding serialized tuples differently, stashing
them somewhere backend-local rather than sending them to the client,
so that a function returning SETOF (or a new executor node type) could
deserialize them when the user asks for them.  Or did you mean that
libpq could support deserializing tuples on the client side?
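
For the serialization itself, one cheap approach along the lines Tom
suggests might be to lean on the composite type's existing text I/O,
something like this (assuming a hypothetical composite type foo_type
with, say, an int and a text field):

```sql
-- Serialize: cast a row to the composite type, then to text, and send
-- it as the NOTIFY payload (pg_notify is the function form of NOTIFY).
SELECT pg_notify('foo', ROW(1, 'hello')::foo_type::text);

-- Deserialize: cast the received payload text back to the composite
-- type and expand its fields.
SELECT ('(1,hello)'::foo_type).*;
```

That sidesteps inventing a wire format, at the cost of the payload
size limit and text-representation quoting rules.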

Thinking about Robert's suggestion of an extension-only
implementation: maybe pg_create_stream could create an unlogged table
with a monotonically increasing primary key plus the columns from the
composite type, along with a high-water mark table to track
subscribers.  foo_write could NOTIFY foo to wake up subscribed clients
only (ie not use the payload for the data; clients would use regular
LISTEN to know when to call foo_read), and foo_read could update the
per-subscriber high-water mark and delete rows once the current
session is the slowest reader.  That does sound hideously
heavyweight...  I guess it wouldn't be anywhere near as fast as a
circular buffer in a plain old file and/or a bit of shared memory.  A
later version could use files as suggested, but I do want these
streams to participate in transactions, and that sounds incompatible
to me (?).

I'm going to prototype that and see how it goes.

I do like the idea of using composite types to declare the stream
structure, and the foo_read function returning the SETOF composite
type seems good because it could be filtered and incorporated into
arbitrary queries with joins and so forth.
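
For instance (with made-up column names), a consumer could do
something like:

```sql
-- Hypothetical usage: consume a stream like any other row source.
SELECT r.customer_id, r.total, c.name
FROM foo_read('reporting-session') AS r
JOIN customers c ON c.id = r.customer_id
WHERE r.total > 100;
```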
