Thread: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Heiko Wundram
Date:
Hello!

Sorry for that subscribe post I've just sent, that was bad reading on my
part (for the subscribe info on the homepage).

Anyway, the title says it all: is there any possibility to limit the
number of connections that a client can have concurrently with a
PostgreSQL-Server with "on-board" means (where I can't influence which
user/database the clients use, rather, the clients mostly all use the
same user/database, and I want to make sure that a single client which
runs amok doesn't kill connectivity for other clients)? I could surely
implement this with a proxy sitting in front of the server, but I'd
rather implement this with PostgreSQL directly.

I'm using (and need to stick with) PostgreSQL 8.3 because of the
frontend software in question.

Thanks for any hints!

--
--- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Heiko Wundram
Date:
On 29.11.2011 20:44, Filip Rembiałkowski wrote:
> no easy, "standard" way of doing this in postgres.
> before we go into workarounds - what's the underlying OS?

Okay, it's too bad that there's no standard way to do this. The
underlying OS is Linux (Gentoo, to be exact), and I'd already thought
about setting up some form of iptables firewalling, but iptables has no
real framework for this (i.e., "counting" the number of connected TCP
sockets that originate from a single client) - only for throttling the
rate of connections from the same source. That won't cut it: there are
"spikes" in connection setup where many connections are created almost
at once, meaning that hashlimit, recent and the like are simply not
suited to the task at hand. I just need/want to put a "hard" upper
limit on the number of simultaneous connections from a single client as
an anti-DoS measure - the clients aren't hostile, but their programming
is broken...

Is there (meaning do you know of) any form of generic TCP socket proxy
that can achieve this? I've looked through portage (the Gentoo package
set) to find something applicable, but none of the socket proxy packages
I found were able to connection-limit based on source IP out of the box,
either...

Anyway, thanks for your feedback!

--
--- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Filip Rembiałkowski
Date:
On 29 November 2011 23:18, Heiko Wundram
<modelnine@modelnine.org> wrote:

> Okay, it's too bad that there's no standard way to do this. The underlying
> OS is Linux (Gentoo, to be exact), and I'd already thought about setting up
> some form of iptables firewalling, but iptables has no real framework for
> this (i.e., "counting" the number of connected TCP sockets that originate
> from a single client) - only for throttling the rate of connections from the
> same source. That won't cut it: there are "spikes" in connection setup where
> many connections are created almost at once, meaning that hashlimit, recent
> and the like are simply not suited to the task at hand. I just need/want to
> put a "hard" upper limit on the number of simultaneous connections from a
> single client as an anti-DoS measure - the clients aren't hostile, but their
> programming is broken...
>

did you look at connlimit?
http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.
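Untested, but I'd expect the rule to look roughly like this (reject new
connection attempts to port 5432 once a source IP already holds more
than 20 connections; the limit and the reject action are just examples):

    iptables -A INPUT -p tcp --syn --dport 5432 \
        -m connlimit --connlimit-above 20 \
        -j REJECT --reject-with tcp-reset

Matching on --syn means only new attempts are refused; connections that
are already open stay untouched.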

I'm not sure how you want to allow "many connections being created
almost at once" and "limit number of connections from same IP" at the
same time?

anyway, we are going off-topic here...

regards
Filip

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Tom Lane
Date:
Filip Rembiałkowski <plk.zuber@gmail.com> writes:
> On 29 November 2011 23:18, Heiko Wundram
> <modelnine@modelnine.org> wrote:
>> Okay, it's too bad that there's no standard way to do this.

> did you look at connlimit?
> http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit

Another way that we've sometimes recommended people handle custom login
restrictions is
(1) use PAM for authentication
(2) find or write a PAM plugin that makes the kind of check you want

I think that there may well be a plugin out there already that does
this, or something close enough; but you'll have to do your own
research...
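(For reference, wiring PAM in is a one-line pg_hba.conf change; the PAM
service name defaults to "postgresql", so roughly:

    # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
    host    all       all   0.0.0.0/0     pam

and the plugin then goes into that PAM service's stack.)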

            regards, tom lane

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Heiko Wundram
Date:
On 29.11.2011 23:44, Filip Rembiałkowski wrote:
> did you look at connlimit?
> http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
> AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.

No, I didn't, and THANKS! That's exactly the hint I needed. I tried to
use the hashlimit and/or recent matches with high burst rates and low
limits, but that didn't work, and that's what I was hinting at.
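For the record, what I had been trying was roughly this (reconstructed
from memory, so the exact option spellings may be off):

    iptables -A INPUT -p tcp --syn --dport 5432 \
        -m hashlimit --hashlimit-name pgsql --hashlimit-mode srcip \
        --hashlimit-above 10/minute --hashlimit-burst 15 -j DROP

- which rate-limits connection attempts, but puts no cap on how many
connections stay open at the same time.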

> I'm not sure how do you want to allow "many connections being created
> almost at once" and "limit number of connections from same IP" at the
> same time?

The intention being that I'm trying to limit the total amount of
connections per client to something around 20; the behaviour of the
clients is such that they create 10-15 connections in a very short burst
(due to threaded accesses to the database), and (should) disconnect all
of these in an interval of 10 minutes. When a client runs amok (which
I've had twice this week), the batch of connections is not disconnected,
and a single client gradually starts eating up all connections to the
database, and thus hinders other clients from functioning. But: using
connlimit should do the trick. I'll try that out immediately.

Thanks again!

--
--- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Heiko Wundram
Date:
On 29.11.2011 23:49, Tom Lane wrote:
> Another way that we've sometimes recommended people handle custom login
> restrictions is
> (1) use PAM for authentication
> (2) find or write a PAM plugin that makes the kind of check you want

Very interesting - I'll first try the connlimit approach hinted at by
Filip, but if PostgreSQL does normal session setup/teardown using PAM
(I've never used PAM authentication for PostgreSQL before), this should
be a workable solution in case using iptables doesn't turn out to
properly handle disconnected sessions quickly enough.

Thanks for pointing me at PAM!

--
--- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Magnus Hagander
Date:
On Wed, Nov 30, 2011 at 09:23, Heiko Wundram <modelnine@modelnine.org> wrote:
> On 29.11.2011 23:49, Tom Lane wrote:
>>
>> Another way that we've sometimes recommended people handle custom login
>> restrictions is
>> (1) use PAM for authentication
>> (2) find or write a PAM plugin that makes the kind of check you want
>
> Very interesting - I'll first try the connlimit approach hinted at by Filip,
> but if PostgreSQL does normal session setup/teardown using PAM (I've never
> used PAM authentication for PostgreSQL before), this should be a workable
> solution in case using iptables doesn't turn out to properly handle
> disconnected sessions quickly enough.

I don't believe we do teardown using PAM, just session start. So you'd
have to have your PAM module check the current state of postgresql
every time - not keep some internal state.
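E.g. have the module count the client's existing sessions - roughly
like this (the address is just a placeholder for the connecting
client's IP):

    SELECT count(*) FROM pg_stat_activity
     WHERE client_addr = inet '203.0.113.7';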

FWIW, another option for writing your authentication module is to
write a simple RADIUS server running on the same box. It's pretty
trivial to do, especially in a high level language. The end result is
the same as if you use PAM - you get custom authentication that can
apply specific checks.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Heiko Wundram
Date:
On 30.11.2011 09:26, Magnus Hagander wrote:
> I don't believe we do teardown using PAM, just session start. So you'd
> have to have your PAM module check the current state of postgresql
> every time - not keep some internal state.

Okay, that's too bad - if connlimit doesn't do the trick, I'll try and
see how PAM is used, and possibly patch the respective session
teardown-functionality into the server (which shouldn't be too hard, I
guess).

> FWIW, another option for writing your authentication module is to
> write a simple RADIUS server running on the same box. It's pretty
> trivial to do, especially in a high level language. The end result is
> the same as if you use PAM - you get custom authentication that can
> apply specific checks.

I'm much more used to writing PAM modules (which I've already done for
authentication used by an FTP server), so that'd be the first route I'd
go, but keeping this in mind is handy, too. Thanks!

--
--- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Merlin Moncure
Date:
On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine@modelnine.org> wrote:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!

I think the (hypothetical) general solution for these types of
problems is to have logon triggers.  It's one of the (very) few things
I envy from SQL Server -- see here:
http://msdn.microsoft.com/en-us/library/bb326598.aspx.

Barring the above, if you can trust the client to call a function upon
connection, I'd just do that and handle the error on the client with a
connection drop. Barring *that*, I'd put my clients in front of
pgbouncer with some patches to the same to get what I needed
(pgbouncer is single-threaded, making firewall-type features quite
easy to implement in an ad hoc fashion).
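The trusted-client variant could be a function along these lines
(sketch only - it counts the caller's address in pg_stat_activity and
errors out above the limit; that works here because all the clients
share one user, so their rows are visible to each other):

    create or replace function check_conn_limit(max_conn int)
    returns void as $$
    begin
      if (select count(*) from pg_stat_activity
          where client_addr = inet_client_addr()) > max_conn then
        raise exception 'too many connections from %',
          inet_client_addr();
      end if;
    end;
    $$ language plpgsql;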

merlin

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Tomas Vondra
Date:
On 29.11.2011 14:49, Heiko Wundram wrote:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the
> number of connections that a client can have concurrently with a
> PostgreSQL-Server with "on-board" means (where I can't influence which
> user/database the clients use, rather, the clients mostly all use the
> same user/database, and I want to make sure that a single client which
> runs amok doesn't kill connectivity for other clients)? I could surely
> implement this with a proxy sitting in front of the server, but I'd
> rather implement this with PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the
> frontend software in question.
>
> Thanks for any hints!

Hi,

maybe you could use pgbouncer - it won't allow you to limit connections
by source IP, but maybe you can group the clients by company or
something.

For example, like this:

[databases]
conn_a = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a
         pool_size=20
conn_b = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a
         pool_size=10

The users will then connect just like today, but they'll connect to the
pgbouncer using the dbnames conn_a and conn_b. Those using conn_a will
be able to use 20 connections, those using conn_b will be able to use
10 connections.

Each customer will get different credentials and their own database
name (in pgbouncer).
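A client in the first group would then connect through the pooler with
something like this (the host is an example; 6432 is pgbouncer's usual
listen port):

    psql "host=pooler.example.com port=6432 dbname=conn_a user=user_a"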

Tomas

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Filip Rembiałkowski
Date:
no easy, "standard" way of doing this in postgres.
before we go into workarounds - what's the underlying OS?



2011/11/29 Heiko Wundram <modelnine@modelnine.org>:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!
>
> --
> --- Heiko.

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Tomas Vondra
Date:
On 29.11.2011 23:38, Merlin Moncure wrote:
> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine@modelnine.org> wrote:
>> Hello!
>>
>> Sorry for that subscribe post I've just sent, that was bad reading on my
>> part (for the subscribe info on the homepage).
>>
>> Anyway, the title says it all: is there any possibility to limit the number
>> of connections that a client can have concurrently with a PostgreSQL-Server
>> with "on-board" means (where I can't influence which user/database the
>> clients use, rather, the clients mostly all use the same user/database, and
>> I want to make sure that a single client which runs amok doesn't kill
>> connectivity for other clients)? I could surely implement this with a proxy
>> sitting in front of the server, but I'd rather implement this with
>> PostgreSQL directly.
>>
>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>> software in question.
>>
>> Thanks for any hints!
>
> I think the (hypothetical) general solution for these types of
> problems is to have logon triggers.  It's one of the (very) few things
> I envy from SQL Server -- see  here:
> http://msdn.microsoft.com/en-us/library/bb326598.aspx.

I'd like to have logon triggers too, but I don't think that's the right
solution for this problem. For example the logon triggers would be
called after forking the backend, which means overhead.

The connection limits should be checked when creating the connection
(validating username/password etc.), before creating the backend.

Anyway, I do have an idea how this could be done using a shared library
(so it has the same disadvantages as logon triggers). Hopefully I'll
have time to implement a PoC of this over the weekend.

> Barring the above, if you can trust the client to call a function upon
> connection I'd just do that and handle the error on the client with a
> connection drop. Barring *that*, I'd be putting my clients in front of
> pgbouncer with some patches to the same to get what I needed
> (pgbouncer is single threaded making firewally type features quite
> easy to implement in an ad hoc fashion).

The connection pooler is somehow easier and more complex at the same
time.

You can use connect_query to execute whatever you want after connecting
to the database (instead of trusting the user to do that), but the
database will see the IP of the pgbouncer, not the IP of the original
client, so executing such a query is pointless here.

You could modify pgbouncer, and that should be quite simple, but even
without patches you can assign a different username/password and
database name (in pgbouncer) to each customer, and set pool_size for
each of those connections. It won't use the IP to count connections,
but the users won't 'steal' connections from each other.

Tomas

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
"Tomas Vondra"
Date:
On 1 December 2011, 13:47, Magnus Hagander wrote:
> On Thu, Dec 1, 2011 at 01:03, Tomas Vondra <tv@fuzzy.cz> wrote:
>> Anyway, I do have an idea how this could be done using a shared library
>> (so it has the same disadvantages as logon triggers). Hopefully I'll
>> have time to implement a PoC of this over the weekend.
>
> We have an authentication hook that could probably be used to
> implement this. See the auth_delay module for an example that uses it.
> It does require it to be written in C, of course, but for a use case
> like this that is probably not unreasonable.

Hm, I was thinking about that, but my original idea was to keep my own
counters and update them at backend start/end (using
local_preload_libraries). The auth hook handles just the logon event, not
logout, so I would be unable to update the counters when the user
disconnects.

But now I think it might actually work quite well with pg_stat_activity
instead of custom counters. And IIRC it's called before a separate backend
is forked, so it avoids the overhead of forking a backend and then finding
out the user/IP already uses too many connections.

Tomas


Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Merlin Moncure
Date:
On Wed, Nov 30, 2011 at 6:03 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 29.11.2011 23:38, Merlin Moncure wrote:
>> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine@modelnine.org> wrote:
>>> Hello!
>>>
>>> Sorry for that subscribe post I've just sent, that was bad reading on my
>>> part (for the subscribe info on the homepage).
>>>
>>> Anyway, the title says it all: is there any possibility to limit the number
>>> of connections that a client can have concurrently with a PostgreSQL-Server
>>> with "on-board" means (where I can't influence which user/database the
>>> clients use, rather, the clients mostly all use the same user/database, and
>>> I want to make sure that a single client which runs amok doesn't kill
>>> connectivity for other clients)? I could surely implement this with a proxy
>>> sitting in front of the server, but I'd rather implement this with
>>> PostgreSQL directly.
>>>
>>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>>> software in question.
>>>
>>> Thanks for any hints!
>>
>> I think the (hypothetical) general solution for these types of
>> problems is to have logon triggers.  It's one of the (very) few things
>> I envy from SQL Server -- see  here:
>> http://msdn.microsoft.com/en-us/library/bb326598.aspx.
>
> I'd like to have logon triggers too, but I don't think that's the right
> solution for this problem. For example the logon triggers would be
> called after forking the backend, which means overhead.
>
> The connection limits should be checked when creating the connection
> (validating username/password etc.), before creating the backend.

I disagree. I'm not convinced the overhead is really worth worrying
about, and having a trigger under the user's control allows the feature
to cover a much broader array of scenarios.  If the overhead *were* a
big deal, then you should be using a connection pooler anyway.

>> Barring the above, if you can trust the client to call a function upon
>> connection I'd just do that and handle the error on the client with a
>> connection drop. Barring *that*, I'd be putting my clients in front of
>> pgbouncer with some patches to the same to get what I needed
>> (pgbouncer is single threaded making firewally type features quite
>> easy to implement in an ad hoc fashion).
>
> The connection pooler is somehow easier and more complex at the same
> time.
>
> You can use connect_query to execute whatever you want after connecting
> to the database (instead of trusting the user to do that), but the
> database will see the IP of the pgbouncer, not the IP of the original
> client, so executing such a query is pointless here.
>
> You could modify pgbouncer, and that should be quite simple, but even
> without patches you can assign a different username/password and
> database name (in pgbouncer) to each customer, and set pool_size for
> each of those connections. It won't use the IP to count connections,
> but the users won't 'steal' connections from each other.

Yeah, pgbouncer is an ideal platform for coding features like firewall
rules, query whitelists, etc. while still having SQL access to the
database if you need it.  You also have access to the client's real
IP, and can pass that down to some code that would presumably be
shared with your logon trigger.

merlin

Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From
Magnus Hagander
Date:
On Thu, Dec 1, 2011 at 01:03, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 29.11.2011 23:38, Merlin Moncure wrote:
>> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine@modelnine.org> wrote:
>>> Hello!
>>>
>>> Sorry for that subscribe post I've just sent, that was bad reading on my
>>> part (for the subscribe info on the homepage).
>>>
>>> Anyway, the title says it all: is there any possibility to limit the number
>>> of connections that a client can have concurrently with a PostgreSQL-Server
>>> with "on-board" means (where I can't influence which user/database the
>>> clients use, rather, the clients mostly all use the same user/database, and
>>> I want to make sure that a single client which runs amok doesn't kill
>>> connectivity for other clients)? I could surely implement this with a proxy
>>> sitting in front of the server, but I'd rather implement this with
>>> PostgreSQL directly.
>>>
>>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>>> software in question.
>>>
>>> Thanks for any hints!
>>
>> I think the (hypothetical) general solution for these types of
>> problems is to have logon triggers.  It's one of the (very) few things
>> I envy from SQL Server -- see  here:
>> http://msdn.microsoft.com/en-us/library/bb326598.aspx.
>
> I'd like to have logon triggers too, but I don't think that's the right
> solution for this problem. For example the logon triggers would be
> called after forking the backend, which means overhead.
>
> The connection limits should be checked when creating the connection
> (validating username/password etc.), before creating the backend.
>
> Anyway, I do have an idea how this could be done using a shared library
> (so it has the same disadvantages as logon triggers). Hopefully I'll
> have time to implement a PoC of this over the weekend.

We have an authentication hook that could probably be used to
implement this. See the auth_delay module for an example that uses it.
It does require it to be written in C, of course, but for a use case
like this that is probably not unreasonable.
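
The skeleton, modeled on auth_delay, is small. A sketch, with the
actual per-IP counting left as a stub:

    #include "postgres.h"
    #include "fmgr.h"
    #include "libpq/auth.h"

    PG_MODULE_MAGIC;

    static ClientAuthentication_hook_type prev_hook = NULL;

    /* runs for every connection attempt, after authentication */
    static void
    check_per_ip_limit(Port *port, int status)
    {
        if (prev_hook)
            prev_hook(port, status);    /* chain to any earlier hook */

        if (status == STATUS_OK)
        {
            /*
             * port->raddr holds the client address: count existing
             * backends from the same address here and
             * ereport(FATAL, ...) when the limit is exceeded.
             */
        }
    }

    void
    _PG_init(void)
    {
        prev_hook = ClientAuthentication_hook;
        ClientAuthentication_hook = check_per_ip_limit;
    }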

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/