Thread: proposal: LISTEN *

proposal: LISTEN *

From: Marko Tiikkaja

Hi,

I've in the past wanted to listen on all notification channels in the
current database for debugging purposes.  But recently I came across
another use case.  Since having multiple postgres backends listening for
notifications is very inefficient (one Thursday I found out 30% of our
CPU time was spent spinning on s_locks around the notification code), it
makes sense to implement a notification server of sorts which then
passes on notifications from postgres to interested clients.  A server
like this would be a lot simpler to implement if there were a way to
announce that the client wants to see all notifications, regardless of
the name of the channel.
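
To make that concrete, here is a minimal sketch of such a relay (purely illustrative, and not the attached patch: the lib/pq driver, the conninfo, the channel names, and the toy line-based TCP protocol are all assumptions):

// relay.go: a minimal sketch of the notification server described above.
package main

import (
	"bufio"
	"log"
	"net"
	"strings"
	"sync"
	"time"

	"github.com/lib/pq"
)

var (
	mu   sync.Mutex
	subs = make(map[string][]net.Conn) // channel name -> subscribed clients
)

func main() {
	l := pq.NewListener("dbname=app sslmode=disable", time.Second, time.Minute, nil)
	// Today every channel has to be named explicitly; with the proposed
	// LISTEN * this loop would become a single subscription.
	for _, ch := range []string{"orders", "payments"} {
		if err := l.Listen(ch); err != nil {
			log.Fatal(err)
		}
	}

	ln, err := net.Listen("tcp", ":9888")
	if err != nil {
		log.Fatal(err)
	}
	go acceptClients(ln)

	// The only process woken up per notification: one map lookup,
	// then a write to each interested client.
	for n := range l.Notify {
		if n == nil {
			continue // lib/pq sends nil after a reconnect
		}
		mu.Lock()
		for _, c := range subs[n.Channel] {
			c.Write([]byte(n.Channel + " " + n.Extra + "\n"))
		}
		mu.Unlock()
	}
}

func acceptClients(ln net.Listener) {
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		go func(c net.Conn) {
			// The first line from a client names the channel it wants.
			line, err := bufio.NewReader(c).ReadString('\n')
			if err != nil {
				c.Close()
				return
			}
			ch := strings.TrimSpace(line)
			mu.Lock()
			subs[ch] = append(subs[ch], c)
			mu.Unlock()
		}(c)
	}
}

With the proposed LISTEN *, the per-channel Listen() loop above would collapse into a single subscription covering every channel in the database (however the driver chooses to expose that).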

Attached is a very crude proof-of-concept patch implementing this.  Any
thoughts on the idea?


.m

Attachment

Re: proposal: LISTEN *

From: Pavel Stehule

Hi

2015-11-19 4:40 GMT+01:00 Marko Tiikkaja <marko@joh.to>:
> Hi,
>
> I've in the past wanted to listen on all notification channels in the current database for debugging purposes.  But recently I came across another use case.  Since having multiple postgres backends listening for notifications is very inefficient (one Thursday I found out 30% of our CPU time was spent spinning on s_locks around the notification code), it makes sense to implement a notification server of sorts which then passes on notifications from postgres to interested clients.  A server like this would be a lot simpler to implement if there were a way to announce that the client wants to see all notifications, regardless of the name of the channel.
>
> Attached is a very crude proof-of-concept patch implementing this.  Any thoughts on the idea?

This looks much more like a workaround. The proposed feature isn't bad, but optimizing the current implementation would be better.

Isn't it possible to fix the internal implementation?

Pavel


Re: proposal: LISTEN *

From: Marko Tiikkaja

On 11/19/15 4:48 AM, Pavel Stehule wrote:
> 2015-11-19 4:40 GMT+01:00 Marko Tiikkaja <marko@joh.to>:
>> I've in the past wanted to listen on all notification channels in the
>> current database for debugging purposes.  But recently I came across
>> another use case.  Since having multiple postgres backends listening for
>> notifications is very inefficient (one Thursday I found out 30% of our CPU
>> time was spent spinning on s_locks around the notification code), it makes
>> sense to implement a notification server of sorts which then passes on
>> notifications from postgres to interested clients.  A server like this
>> would be a lot simpler to implement if there were a way to announce that
>> the client wants to see all notifications, regardless of the name of the
>> channel.
>>
>> Attached is a very crude proof-of-concept patch implementing this.  Any
>> thoughts on the idea?
>>
>
> This looks much more like a workaround. The proposed feature isn't bad, but
> optimizing the current implementation would be better.
>
> Isn't it possible to fix the internal implementation?

It's probably possible to improve the internal implementation.  But the 
way I see it, it's always going to be less efficient than a dedicated 
process outside the system (or maybe as a background worker?) handing 
out notifications, so I don't see any point in spending my time on that.


.m



Re: proposal: LISTEN *

From: Tom Lane

Marko Tiikkaja <marko@joh.to> writes:
> I've in the past wanted to listen on all notification channels in the 
> current database for debugging purposes.  But recently I came across 
> another use case.  Since having multiple postgres backends listening for 
> notifications is very inefficient (one Thursday I found out 30% of our 
> CPU time was spent spinning on s_locks around the notification code), it 
> makes sense to implement a notification server of sorts which then 
> passes on notifications from postgres to interested clients.

... and then you gotta get the notifications to the clients, so it seems
like this just leaves the performance question hanging.  Why don't we find
and fix the actual performance issue (assuming that 07e4d03fb wasn't it)?

The reason I'm not terribly enthused about this proposal is that some
implementations of LISTEN (for example, our pre-9.0 one) would be unable
to support LISTEN *.  It's not too hard to imagine that at some point we
might wish to redo the existing implementation to reduce the overhead of
all listeners seeing all messages, and then having promised we could do
LISTEN * would be a serious restriction on our flexibility.  So while
I'm not necessarily trying to veto the idea, I think it has significant
opportunity cost, and I'd like to see a more solid rationale than this
one before we commit to it.

In any case, it would be good to understand exactly what's the performance
issue that's biting you.  Can you provide a test case that reproduces
that behavior?
        regards, tom lane



Re: proposal: LISTEN *

From: Marko Tiikkaja

On 11/19/15 4:21 PM, Tom Lane wrote:
> Marko Tiikkaja <marko@joh.to> writes:
>> I've in the past wanted to listen on all notification channels in the
>> current database for debugging purposes.  But recently I came across
>> another use case.  Since having multiple postgres backends listening for
>> notifications is very inefficient (one Thursday I found out 30% of our
>> CPU time was spent spinning on s_locks around the notification code), it
>> makes sense to implement a notification server of sorts which then
>> passes on notifications from postgres to interested clients.
>
> ... and then you gotta get the notifications to the clients, so it seems
> like this just leaves the performance question hanging.

I'm not sure which performance question you think is left hanging.  If
every client is connected to postgres, you're waking up tens if not
hundreds of processes tens if not hundreds of times per second.  Each of
them starts a transaction, checks which notifications are visible in the
queue from clog and friends, goes through the tails of every other process
to see whether it should advance the tail pointer of the queue, commits
the transaction and goes back to sleep, only to be immediately woken up again.

If they're not connected to postgres directly, this kind of complex
processing only happens once, and then the notification server just
unconditionally serves all notifications to the clients based on a
simple map lookup.  It should be trivial to see how the overhead is avoided.

> Why don't we find
> and fix the actual performance issue (assuming that 07e4d03fb wasn't it)?

07e4d03fb wasn't it, no.

> The reason I'm not terribly enthused about this proposal is that some
> implementations of LISTEN (for example, our pre-9.0 one) would be unable
> to support LISTEN *.  It's not too hard to imagine that at some point we
> might wish to redo the existing implementation to reduce the overhead of
> all listeners seeing all messages, and then having promised we could do
> LISTEN * would be a serious restriction on our flexibility.  So while
> I'm not necessarily trying to veto the idea, I think it has significant
> opportunity cost, and I'd like to see a more solid rationale than this
> one before we commit to it.

A reasonable point.

> In any case, it would be good to understand exactly what's the performance
> issue that's biting you.  Can you provide a test case that reproduces
> that behavior?

I've attached a Go program which quite accurately simulates the
LISTEN/NOTIFY part of our setup.  What it does is:

   1) Open 50 connections, and issue a LISTEN in all of them
   2) Open another 50 connections, and deliver one notification every
750 milliseconds from each of them

(My apologies for the fact that it's written in Go.  It's the only thing
I can produce without spending a significant amount of time working on this.)
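
Since the attachment itself isn't reproduced here, a rough sketch of what such a driver might look like (a reconstruction for illustration, not the actual attachment; the lib/pq driver, the conninfo, and the channel name are assumptions):

// notifytest.go: drives 50 listening and 50 notifying connections.
package main

import (
	"database/sql"
	"log"
	"time"

	"github.com/lib/pq"
)

const conninfo = "dbname=test sslmode=disable"

func main() {
	// 1) Open 50 connections and issue a LISTEN in each of them.
	for i := 0; i < 50; i++ {
		l := pq.NewListener(conninfo, time.Second, time.Minute, nil)
		if err := l.Listen("testchannel"); err != nil {
			log.Fatal(err)
		}
		go func(l *pq.Listener) {
			for range l.Notify {
				// Discard; only the server-side cost is interesting.
			}
		}(l)
	}

	// 2) Open another 50 connections and deliver one notification
	//    every 750 milliseconds from each of them.
	for i := 0; i < 50; i++ {
		db, err := sql.Open("postgres", conninfo)
		if err != nil {
			log.Fatal(err)
		}
		db.SetMaxOpenConns(1) // pin each notifier to its own connection
		go func(db *sql.DB) {
			for range time.Tick(750 * time.Millisecond) {
				if _, err := db.Exec("NOTIFY testchannel"); err != nil {
					log.Fatal(err)
				}
			}
		}(db)
	}

	select {} // run until interrupted; profile the server meanwhile
}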

On the test server I'm running on, it doesn't look quite as bad as the
profiles we had in production, but s_lock is still the worst offender in
the profiles, called from:

   - 80.33% LWLockAcquire
     + 48.34% asyncQueueReadAllNotifications
     + 23.09% SIGetDataEntries
     + 16.92% SimpleLruReadPage_ReadOnly
     + 10.21% TransactionIdIsInProgress
     + 1.27% asyncQueueAdvanceTail

which roughly looks like what I recall from our actual production profiles.


.m

Attachment

Re: proposal: LISTEN *

From: Tom Lane

Marko Tiikkaja <marko@joh.to> writes:
> On 11/19/15 4:21 PM, Tom Lane wrote:
>> ... and then you gotta get the notifications to the clients, so it seems
>> like this just leaves the performance question hanging.

> I'm not sure which performance question you think is left hanging.  If 
> every client is connected to postgres, you're waking up tens if not 
> hundreds of processes tens if not hundreds of times per second.  Each of 
> them starts a transaction, checks which notifications are visible in the
> queue from clog and friends, goes through the tails of every other process
> to see whether it should advance the tail pointer of the queue, commits
> the transaction and goes back to sleep, only to be immediately woken up again.

> If they're not connected to postgres directly, this kind of complex 
> processing only happens once, and then the notification server just 
> unconditionally serves all notifications to the clients based on a 
> simple map lookup.  It should be trivial to see how the overhead is avoided.

Meh.  You've gotten rid of one single-point-of-contention by creating
another one.  Essentially, your argument why this is an improvement is
that any random application developer using Postgres is smarter than
the Postgres development team and can therefore produce something
better-performing off the cuff.  Which may indeed be true, but shall we
say it's unproven?

I don't buy the argument about removing transactional overhead, either.
The point of LISTEN/NOTIFY is to inform clients that something has
happened in the database that they ought to react to.  So necessarily,
there's going to be follow-on database activity.  If you're using
LISTEN/NOTIFY to wake up clients who otherwise wouldn't need a DB
connection at all, then you chose the wrong signaling mechanism.

>> In any case, it would be good to understand exactly what's the performance
>> issue that's biting you.  Can you provide a test case that reproduces
>> that behavior?

> I've attached a Go program which quite accurately simulates the
> LISTEN/NOTIFY part of our setup.  What it does is:

Thanks, I'll take a look later.
        regards, tom lane



Re: proposal: LISTEN *

From: Alvaro Herrera

Marko Tiikkaja wrote:

> On the test server I'm running on, it doesn't look quite as bad as the
> profiles we had in production, but s_lock is still the worst offender in the
> profiles, called from:
> 
>   - 80.33% LWLockAcquire
>     + 48.34% asyncQueueReadAllNotifications
>     + 23.09% SIGetDataEntries
>     + 16.92% SimpleLruReadPage_ReadOnly
>     + 10.21% TransactionIdIsInProgress
>     + 1.27% asyncQueueAdvanceTail
> 
> which roughly looks like what I recall from our actual production profiles.

So the problem is in the bad scalability of LWLock, not in async.c itself?
In master, the spinlock there has been replaced with atomics; does that branch
work better?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: proposal: LISTEN *

From: Marko Tiikkaja

On 11/19/15 5:32 PM, Tom Lane wrote:
> Marko Tiikkaja <marko@joh.to> writes:
>> On 11/19/15 4:21 PM, Tom Lane wrote:
>>> ... and then you gotta get the notifications to the clients, so it seems
>>> like this just leaves the performance question hanging.
>
>> I'm not sure which performance question you think is left hanging.  If
>> every client is connected to postgres, you're waking up tens if not
>> hundreds of processes tens if not hundreds of times per second.  Each of
>> them starts a transaction, checks which notifications are visible in the
>> queue from clog and friends, goes through the tails of every other process
>> to see whether it should advance the tail pointer of the queue, commits
>> the transaction and goes back to sleep, only to be immediately woken up again.
>
>> If they're not connected to postgres directly, this kind of complex
>> processing only happens once, and then the notification server just
>> unconditionally serves all notifications to the clients based on a
>> simple map lookup.  It should be trivial to see how the overhead is avoided.
>
> Meh.  You've gotten rid of one single-point-of-contention by creating
> another one.  Essentially, your argument why this is an improvement is
> that any random application developer using Postgres is smarter than
> the Postgres development team and can therefore produce something
> better-performing off the cuff.

It's not about who's smarter and who's not.  All a notification server 
has to do is wait for notifications incoming from one socket and make 
sure they're delivered to the correct sockets in an epoll loop.  There's 
only one process being woken up, no need to synchronize with other 
processes, no need to see where the pointers of other processes are, no 
overhead of transaction visibility, transaction BEGIN, or transaction 
COMMIT.  The total amount of work done is trivially smaller.

> Which may indeed be true, but shall we
> say it's unproven?

Well, it's not proof, but every time we moved an application from
LISTENing in postgres to the notification server, our CPU usage halved.
The last time we did that (from ~100 connections to exactly 1), s_lock
went down from 30% to 0.16% in our CPU profiles, and our CPU usage is
only a fraction of what it used to be.  And this is with the
notification server running on the same server as postgres, so it's
not cheating, either.

There's no way we could keep running postgres if all 400+ clients 
interested in notifications had a LISTEN open in the database.


.m



Re: proposal: LISTEN *

From: Tom Lane

Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Marko Tiikkaja wrote:
>> On the test server I'm running on, it doesn't look quite as bad as the
>> profiles we had in production, but s_lock is still the worst offender in the
>> profiles, called from:
>> 
>> - 80.33% LWLockAcquire
>> + 48.34% asyncQueueReadAllNotifications
>> + 23.09% SIGetDataEntries
>> + 16.92% SimpleLruReadPage_ReadOnly
>> + 10.21% TransactionIdIsInProgress
>> + 1.27% asyncQueueAdvanceTail
>> 
>> which roughly looks like what I recall from our actual production profiles.

> So the problem is in the bad scalability of LWLock, not in async.c itself?
> In master, the spinlock there has been replaced with atomics; does that branch
> work better?

FWIW, I can't get results anything like this.  With 50 notifiers and 50
listeners, I only see LWLockAcquire eating about 6% of the system-wide
runtime:

-    6.23%     0.64%         11850       postmaster  postgres                          [.] LWLockAcquire
   - LWLockAcquire
      + 33.16% asyncQueueReadAllNotifications
      + 20.76% SimpleLruReadPage_ReadOnly
      + 18.02% TransactionIdIsInProgress
      + 8.67% VirtualXactLockTableInsert
      + 3.98% ProcArrayEndTransaction
      + 3.03% VirtualXactLockTableCleanup
      + 1.77% PreCommit_Notify
      + 1.68% ProcessCompletedNotifies
      + 1.62% asyncQueueAdvanceTail
      + 1.28% ProcessNotifyInterrupt
      + 1.04% StartTransaction
      + 1.00% LockAcquireExtended
      + 0.96% ProcSleep
      + 0.81% LockReleaseAll
      + 0.69% TransactionIdSetPageStatus
      + 0.54% GetNewTransactionId

There isn't any really sore-thumb place that we might fix.  Interestingly,
a big chunk of the runtime is getting eaten by the kernel, breaking down
more or less like this:

-   36.52%     0.05%           836       postmaster  [kernel.kallsyms]                 [k] system_call_fastpath
   - system_call_fastpath
      + 37.47% __GI___kill
      + 27.26% __poll
      + 9.55% __write_nocancel
      + 8.87% __GI___semop
      + 7.31% __read_nocancel
      + 5.14% __libc_recv
      + 3.77% __libc_send
      + 0.53% __GI___setitimer

The kill() calls are evidently from async.c's SignalBackends() while most
of the other kernel activity is connected to client/server I/O.

Since the clients aren't doing any actual database work, just notify and
receive notifies, this doesn't seem like a representative workload.
So I can't get that excited about how asyncQueueReadAllNotifications and
subsidiary TransactionIdIsInProgress tests cause a large fraction of the
LWLock acquisitions --- that's only true because nothing else very useful
is happening.

BTW, this is HEAD, but I tried 9.4 quickly and didn't really see anything
much different.  Not sure why the discrepancy with Marko's results.
        regards, tom lane