Thread: Spoofing as the postmaster
A few months ago a security concern was sent to core. We have discussed it but see little we can do to address it in the code, so I am posting to hackers in case there is something we didn't think of or if documentation additions are necessary.

Most users understand that if they are connecting to the postmaster over an insecure network, it is possible for a middle-man to intercept passwords, queries, and results. SSL certificates are designed to avoid this problem.

The new attack vector reported involves a local user on the same machine as the postmaster. When the postmaster is running, it has bound port 5432 and created the unix-domain socket file in /tmp. The new attack involves cases where the postmaster is stopped: the attacker can bind to TCP port 5432 or create a socket file in /tmp and get passwords and queries. PGDATA is secure, so results cannot be returned while the postmaster is down. The attacker can also prevent the real server from starting.

It is possible for the attacker to use one of the interfaces (tcp or unix domain) and wait for the postmaster to start. The postmaster will fail to start on the interface in use but will start on the other interface, and the attacker could route queries to the active postmaster interface.

So, what solutions exist?

We could require the use of port numbers less than 1024, which typically require root, and then become a non-root user, but that requires root to start the server.

We could put the unix domain socket file in a secure directory (but where?), but the client has to know that location.

An interesting idea would be for the unix domain client to check that the ownership of the socket file is the same as PGDATA, but clients typically don't know PGDATA, nor do they know who should be running the postmaster. I suppose we could create a poor-man's SSL for unix domain sockets that just checks the ownership of the socket file, but users can already do that by specifying the socket file in a directory that only has write permission for the postmaster user.

Could we create some kind of lock mode that keeps these interfaces locked while the postmaster is down?

Conclusion
----------

The fundamental problem is that because we don't require root, any user's postmaster or pretend postmaster is as legitimate as anyone else's. SSL certificates add legitimacy checks for TCP, but not for unix domain sockets. This is not a Postgres-specific problem; it is probably shared by any server that doesn't need root permissions, but our handling of passwords makes it a larger problem.

I think at a minimum we need to add documentation that states if you don't trust the local users on the postmaster server you should:

    o  create unix domain socket files in a non-world-writable directory
    o  require SSL server certificates for TCP connections

Ideas? Remember, as long as your postmaster is running you are safe. It is only postmaster downtime that has this risk.

--
Bruce Momjian <bruce@momjian.us>  http://momjian.us
EnterpriseDB                      http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
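For illustration, a minimal sketch in C of the socket-ownership check floated above, assuming the client somehow knows which uid should own the server (exactly the information the message notes clients typically lack). The check is also inherently racy, since an attacker could swap the file between stat() and connect(), which is one reason a protected directory is the stronger answer:

    #include <sys/types.h>
    #include <sys/stat.h>

    /* Return 0 if sock_path exists, is a socket, and is owned by
     * expected_uid; -1 otherwise.  "/tmp/.s.PGSQL.5432" is the default
     * socket path. */
    static int
    check_socket_owner(const char *sock_path, uid_t expected_uid)
    {
        struct stat st;

        if (stat(sock_path, &st) != 0)
            return -1;          /* socket file missing */
        if (!S_ISSOCK(st.st_mode))
            return -1;          /* not a socket at all */
        if (st.st_uid != expected_uid)
            return -1;          /* owned by someone else: possible spoofer */
        return 0;
    }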
On Sat, 22 Dec 2007 09:25:05 -0500 (EST) Bruce Momjian <bruce@momjian.us> wrote: > I think at a minimum we need to add documentation that states if you > don't trust the local users on the postmaster server you should: > > o create unix domain socket files in a non-world-writable > directory > o require SSL server certificates for TCP connections > > Ideas? It's generally a bad idea to put your database on a public server anyway but if you do you should definitely disable unix domain sockets and connect over TCP to localhost. That has been our rule for years. It's certainly a corner case. I would think that warnings, perhaps in the config file itself, would be sufficient. -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
Bruce Momjian wrote: > The fundamental problem is that because we don't require root, any user's > postmaster or pretend postmaster is as legitimate as anyone else's. SSL > certificates add legitimacy checks for TCP, but not for unix domain > sockets. Wouldn't SSL work over Unix-domain sockets as well? The API only deals with file descriptors. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Peter Eisentraut wrote:
> Bruce Momjian wrote:
>
>> The fundamental problem is that because we don't require root, any user's
>> postmaster or pretend postmaster is as legitimate as anyone else's. SSL
>> certificates add legitimacy checks for TCP, but not for unix domain
>> sockets.
>
> Wouldn't SSL work over Unix-domain sockets as well? The API only deals with
> file descriptors.

But we don't check the SSL cert's credentials in the client, AFAIK. That means that a postmaster spoofer could just as easily spoof SSL. Communications between the client and the endpoint will be protected, but there is no protection from a man-in-the-middle attack, which is what this is.

cheers

andrew
Andrew Dunstan wrote: > But we don't check the SSL cert's credentials in the client, AFAIK. We do if you configure it so. But I must admit that this fact is not well advertised. It is documented, but you have to look carefully. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Andrew Dunstan wrote: > > > Peter Eisentraut wrote: >> Bruce Momjian wrote: >> >>> The fundamental problem is that because we don't require root, any >>> user's >>> postmaster or pretend postmaster is as legitimate as anyone else's. SSL >>> certificates add legitimacy checks for TCP, but not for unix domain >>> sockets. >>> >> >> Wouldn't SSL work over Unix-domain sockets as well? The API only >> deals with file descriptors. >> >> > > But we don't check the SSL cert's credentials in the client, AFAIK. That > means that postmaster spoofer could just as easily spoof SSL. > Communications between the client and the endpoint will be protected, > but there is no protection from a man in the middle attack, which is > what this is. We do if you put the CA cert on the client. //Magnus
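In concrete terms, server-certificate checking in libpq of this era is triggered by the presence of the CA certificate on the client side; a sketch, assuming the CA certificate sits in a local file named ca.crt (a hypothetical name):

    $ mkdir -m 0700 ~/.postgresql
    $ cp ca.crt ~/.postgresql/root.crt
    $ psql "host=db.example.com dbname=test sslmode=require"

With ~/.postgresql/root.crt present, libpq verifies the server's certificate against that CA during the SSL handshake and fails the connection if it does not check out.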
Peter Eisentraut <peter_e@gmx.net> writes: > Wouldn't SSL work over Unix-domain sockets as well? The API only deals with > file descriptors. Hmm ... we've always thought of SSL as being primarily comm security and thus useless on a Unix socket, but the mutual authentication aspect could come in handy as an answer for this type of threat. Anyone want to try this and see if it really works or not? Does OpenSSL have a mode where it only does mutual auth and not encryption? The encryption would be wasted cycles in this scenario, so being able to turn it off would be nice. regards, tom lane
On Dec 22, 2007 1:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
> > Wouldn't SSL work over Unix-domain sockets as well? The API only deals with
> > file descriptors.
>
> Hmm ... we've always thought of SSL as being primarily comm security
> and thus useless on a Unix socket, but the mutual authentication aspect
> could come in handy as an answer for this type of threat. Anyone want
> to try this and see if it really works or not?
>
> Does OpenSSL have a mode where it only does mutual auth and not
> encryption? The encryption would be wasted cycles in this scenario,
> so being able to turn it off would be nice.

miker@whirly:~$ openssl ciphers -v 'NULL'
NULL-SHA    SSLv3  Kx=RSA  Au=RSA  Enc=None  Mac=SHA1
NULL-MD5    SSLv3  Kx=RSA  Au=RSA  Enc=None  Mac=MD5

I see no way to turn off the message digest, but maybe that's just an added benefit.

--miker
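A sketch of how a null-cipher policy (authentication and integrity only, no encryption) could be applied to an OpenSSL context; SSL_CTX_set_cipher_list is the standard OpenSSL call, and the cipher string matches the listing above:

    #include <openssl/ssl.h>

    /* Restrict ctx to ciphersuites that authenticate and MAC but do
     * not encrypt.  Returns 1 on success, 0 if no cipher matched. */
    static int
    select_null_ciphers(SSL_CTX *ctx)
    {
        return SSL_CTX_set_cipher_list(ctx, "NULL-SHA:NULL-MD5");
    }

Since the MAC cannot be turned off, every message is still integrity-checked, which for an anti-spoofing handshake is arguably a benefit rather than an overhead.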
On 12/22/07, Peter Eisentraut <peter_e@gmx.net> wrote:
> Bruce Momjian wrote:
> > The fundamental problem is that because we don't require root, any user's
> > postmaster or pretend postmaster is as legitimate as anyone else's. SSL
> > certificates add legitimacy checks for TCP, but not for unix domain
> > sockets.
>
> Wouldn't SSL work over Unix-domain sockets as well? The API only deals with
> file descriptors.

For Unix sockets it should be enough to just check the server process uid, no?

(FYI - Debian already puts the unix socket in a directory writable only by the postgres user, so they don't have the problem. Maybe we should encourage distros to move away from /tmp?)

-- marko
"Mike Rylander" <mrylander@gmail.com> writes: > On Dec 22, 2007 1:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: >> Hmm ... we've always thought of SSL as being primarily comm security >> and thus useless on a Unix socket, but the mutual authentication aspect >> could come in handy as an answer for this type of threat. Anyone want >> to try this and see if it really works or not? >> >> Does OpenSSL have a mode where it only does mutual auth and not >> encryption? > miker@whirly:~$ openssl ciphers -v 'NULL' Cool. I took a quick look through the code, and I think that a smoke test could be made just by diking out these lines in src/interfaces/libpq/fe-connect.c: if (IS_AF_UNIX(conn->raddr.addr.ss_family)) { /* Don't bother requesting SSLover a Unix socket */ conn->allow_ssl_try = false; } Actual support would require rather more effort --- for instance, I doubt that the default behavior should be to try to do SSL over a socket, so "sslmode" would need some extension, and we'd want to extend the pg_hba.conf keywords --- but I think this would be enough to allow verifying whether it will work. regards, tom lane
"Marko Kreen" <markokr@gmail.com> writes: > (FYI - Debian already puts unix socket to directory writable > only to postgres user, so they dont have the problem. Maybe > we should encourage distros to move away from /tmp?) No, we shouldn't, and if I had any authority over them I would make Debian stop doing that. It amounts to a unilateral distro-specific change in the protocol, and I think it makes things *less* secure, because any clients who are expecting the socket to be in /tmp will be easy pickings for a spoofer. Debian cannot hope to prevent that scenario, because there are non-libpq-based client implementations. regards, tom lane
On Dec 22, 2007 6:25 AM, Bruce Momjian <bruce@momjian.us> wrote:
> It is possible for the attacker to use one of the interfaces (tcp or
> unix domain) and wait for the postmaster to start. The postmaster will
> fail to start on the interface in use but will start on the other
> interface and the attacker could route queries to the active postmaster
> interface.

I am not very conversant with networking, but I see a possibly simple solution. Why not refuse to start the postmaster if we are unable to bind with any of the interfaces (all that are specified in the conf file).

This way, if the attacker has control of even one interface (and optionally the local socket) that the clients are expected to connect to, the postmaster wouldn't start and the attacker won't have any traffic to peek into.

Best regards,
--
gurjeet[.singh]@EnterpriseDB.com
singh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

17° 29' 34.37"N, 78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N, 73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69"W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device
Gurjeet Singh wrote: > On Dec 22, 2007 6:25 AM, Bruce Momjian <bruce@momjian.us> wrote: > > > > > It is possible for the attacker to use one of the interfaces (tcp or > > unix domain) and wait for the postmaster to start. The postmaster will > > fail to start on the interface in use but will start on the other > > interface and the attacker could route queries to the active postmaster > > interface. > > > > > I am not very conversant with networking, but I see a possibly simple > solution. Why not refuse to start the postmaster if we are unable to bind > with any of the interfaces (all that are specified in the conf file). > > This way, if the attacker has control of even one interface (and > optionally the local socket) that the clients are expected to connect to, > the postmaster wouldn't start and the attacker won't have any traffic to > peek into. Yes, that would fix the problem I mentioned but at that point the attacker already has passwords so they can just connect themselves. Having the server fail if it can't get one interface makes the server less reliable. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
On Dec 23, 2007 12:20 PM, Bruce Momjian <bruce@momjian.us> wrote:
> Gurjeet Singh wrote:
> > On Dec 22, 2007 6:25 AM, Bruce Momjian <bruce@momjian.us> wrote:
> > This way, if the attacker has control of even one interface (and
> > optionally the local socket) that the clients are expected to connect to,
> > the postmaster wouldn't start and the attacker won't have any traffic to
> > peek into.
>
> Yes, that would fix the problem I mentioned but at that point the
> attacker already has passwords so they can just connect themselves.
> Having the server fail if it can't get one interface makes the server
> less reliable.

It doesn't solve the spoofing attack problem, but isn't Gurjeet's idea a good one in any case?

If the postmaster can't bind on one of the specified interfaces, then at the least, haven't you got a serious configuration error the sysadmin would want to know about? Having the postmaster fail seems like a sensible response.

"I can't start with the configuration you've given me, so I won't start at all" is fairly normal behaviour for a server process, no?

Regards, BJ
Brendan Jurd wrote:
> On Dec 23, 2007 12:20 PM, Bruce Momjian <bruce@momjian.us> wrote:
> > Gurjeet Singh wrote:
> > > On Dec 22, 2007 6:25 AM, Bruce Momjian <bruce@momjian.us> wrote:
> > > This way, if the attacker has control of even one interface (and
> > > optionally the local socket) that the clients are expected to connect to,
> > > the postmaster wouldn't start and the attacker won't have any traffic to
> > > peek into.
> >
> > Yes, that would fix the problem I mentioned but at that point the
> > attacker already has passwords so they can just connect themselves.
> > Having the server fail if it can't get one interface makes the server
> > less reliable.
>
> It doesn't solve the spoofing attack problem, but isn't Gurjeet's idea
> a good one in any case?
>
> If the postmaster can't bind on one of the specified interfaces, then
> at the least, haven't you got a serious configuration error the
> sysadmin would want to know about? Having the postmaster fail seems like
> a sensible response.
>
> "I can't start with the configuration you've given me, so I won't
> start at all" is fairly normal behaviour for a server process, no?

Yes, we have talked about this in the past and there were concerns that the server might have some network problem that would prevent binding on all interfaces, particularly IPv6.

--
Bruce Momjian <bruce@momjian.us>  http://momjian.us
EnterpriseDB                      http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote: > I think at a minimum we need to add documentation that states if you > don't trust the local users on the postmaster server you should: > > o create unix domain socket files in a non-world-writable > directory > o require SSL server certificates for TCP connections I have written documentation for this item: http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING Comments? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
On Dec 23, 2007 1:25 PM, Bruce Momjian <bruce@momjian.us> wrote: > I have written documentation for this item: > > http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING > > Comments? I thought the content made sense, but the location didn't. I wouldn't expect to find instructions on configuring Postgres for secure operation under a section about how to shut the server down. I realise that in order for the exploit to occur, the server must be shut down (or not yet started), but unless a user already knows about the way the exploit works, how will they know to look for info about it here? IMO by putting this guidance under "Shutting Down" you're going to hurt the chances of anyone stumbling across it. I doubt you'd get many users reading "Shutting Down" at all because in most cases, it's an easy or obvious thing to do (initscripts provided by package and pg_ctl are self-explanatory). Regards, BJ
Brendan Jurd wrote: > On Dec 23, 2007 1:25 PM, Bruce Momjian <bruce@momjian.us> wrote: > > I have written documentation for this item: > > > > http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING > > > > Comments? > > I thought the content made sense, but the location didn't. I wouldn't > expect to find instructions on configuring Postgres for secure > operation under a section about how to shut the server down. > > I realise that in order for the exploit to occur, the server must be > shut down (or not yet started), but unless a user already knows about > the way the exploit works, how will they know to look for info about > it here? > > IMO by putting this guidance under "Shutting Down" you're going to > hurt the chances of anyone stumbling across it. I doubt you'd get > many users reading "Shutting Down" at all because in most cases, it's > an easy or obvious thing to do (initscripts provided by package and > pg_ctl are self-explanatory). Agreed. I moved it up to its own section: http://momjian.us/tmp/pgsql/preventing-server-spoofing.html I improved the wording slightly too. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Brendan Jurd wrote:
> It doesn't solve the spoofing attack problem, but isn't Gurjeet's idea
> a good one in any case?

What makes it good? It solves no problems. It prevents the server from coming up when it otherwise might still be able to.

> If the postmaster can't bind on one of the specified interfaces, then
> at the least, haven't you got a serious configuration error the
> sysadmin would want to know about? Having the postmaster fail seems like
> a sensible response.

I don't think it really matters what it does in the grand scheme of things, as it's not solving a real problem.

> "I can't start with the configuration you've given me, so I won't
> start at all" is fairly normal behaviour for a server process, no?

None of my servers work this way. If possible, I try to make my servers auto-recover at a later time while they are still up. It means an administrator does not need to log in to a machine at the data center to solve the problem. "Self healing" is a term that is used to describe approaches such as this.

Cheers,
mark

--
Mark Mielke <mark@mielke.cc>
Mark Mielke <mark@mark.mielke.cc> writes: > Brendan Jurd wrote: >> It doesn't solve the spoofing attack problem, but isn't Gurjeet's idea >> a good one in any case? >> > What makes it good? It solves no problems. It prevents the server from > coming up when it otherwise might still be able to. The primary reason things work like that is that there are boatloads of machines that are marginally misconfigured. For instance, userland thinks there is IPv6 support when the kernel thinks not (or vice versa). If we made the postmaster abort every time it couldn't latch onto every address that the listen_addresses setting suggested it might be able to latch onto, what we'd mostly accomplish is to drive away a lot of potential users. Given that everyone agrees that this change wouldn't actually fix anything w.r.t. spoofing, I don't think there's grounds for making it. regards, tom lane
"Tom Lane" <tgl@sss.pgh.pa.us> writes: > "Marko Kreen" <markokr@gmail.com> writes: >> (FYI - Debian already puts unix socket to directory writable >> only to postgres user, so they dont have the problem. Maybe >> we should encourage distros to move away from /tmp?) > > No, we shouldn't, and if I had any authority over them I would make > Debian stop doing that. It amounts to a unilateral distro-specific > change in the protocol, and I think it makes things *less* secure, > because any clients who are expecting the socket to be in /tmp will be > easy pickings for a spoofer. Debian cannot hope to prevent that > scenario, because there are non-libpq-based client implementations. I don't understand this point of view. /tmp is there for users to play with. If you build Postgres as a user and run it from your home directory then you certainly can use /tmp or your home directory to communicate. But if Postgres is provided by the OS then it can't ignore the proper places dedicated to this purpose. Using /tmp for a shared system resource opens the door to serious problems such as the current issue. Also consider denial-of-service attacks by any user creating sockets in the way of Postgres. Bruce summarized the problem pretty well when he said that if Postgres is being run as a non-root user then one non-root user's "postgres" is as good as any other non-root user's "postgres". If you want to authenticate from to another user on the same machine you have to have some kind of credential which sets you apart. I actually think it's quite wrong of a shared resource to not be installed to run as root. It should cause problems like this in many places. Basically if you were never root then you can never really prove you're not an spoof. If you're content to take the "postgres" user as special then you could call getsockopt(SO_PEERCRED) to verify you're really connected to a process run by "postgres". But that isn't going to work if the postgres user could be named something else. In that case what is it you want to verify though? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB'sPostgreSQL training!
"D'Arcy J.M. Cain" <darcy@druid.net> writes: > On Sat, 22 Dec 2007 09:25:05 -0500 (EST) > Bruce Momjian <bruce@momjian.us> wrote: >> I think at a minimum we need to add documentation that states if you >> don't trust the local users on the postmaster server you should: >> >> o create unix domain socket files in a non-world-writable >> directory >> o require SSL server certificates for TCP connections >> >> Ideas? > > It's generally a bad idea to put your database on a public server > anyway but if you do you should definitely disable unix domain sockets > and connect over TCP to localhost. That has been our rule for years. > > It's certainly a corner case. I would think that warnings, perhaps in > the config file itself, would be sufficient. That seems like a terrible idea. At least while you're dealing with unix domain sockets you know there's no way a remote user could possibly interfere with or sniff your data. As soon as you're dealing with TCP it's a whole new ballgame. X famously had a problem on many OSes where you could spoof the first packet (and if you could predict sequence numbers more than that) of a connection allegedly coming from 127.0.0.1. (it helped that a message to open up connections from anywhere fit in one packet...) Modern OSes include network filters to block such spoofs but it's one more thing you're counting on. Also brought into place are things like forged RST packets, routing table attacks, and on and on. And on the performance front you're dealing with smaller mss and much higher protocol overhead. You also lose bulletproof authentication from unix credentials and are instead relying on properly configuring your network authentication. And it's much easier to accidentally be relying on insecure identd. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support!
Bruce Momjian wrote: > Bruce Momjian wrote: > > I think at a minimum we need to add documentation that states if you > > don't trust the local users on the postmaster server you should: > > > > o create unix domain socket files in a non-world-writable > > directory > > o require SSL server certificates for TCP connections > > I have written documentation for this item: > > http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING > > Comments? What you actually need on the client side is ~/.postgresql/root.crt, not ~/.postgresql/postgresql.crt as you wrote. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Bruce Momjian wrote: > Brendan Jurd wrote: >> On Dec 23, 2007 1:25 PM, Bruce Momjian <bruce@momjian.us> wrote: >>> I have written documentation for this item: >>> >>> http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING >>> >>> Comments? >> I thought the content made sense, but the location didn't. I wouldn't >> expect to find instructions on configuring Postgres for secure >> operation under a section about how to shut the server down. >> >> I realise that in order for the exploit to occur, the server must be >> shut down (or not yet started), but unless a user already knows about >> the way the exploit works, how will they know to look for info about >> it here? >> >> IMO by putting this guidance under "Shutting Down" you're going to >> hurt the chances of anyone stumbling across it. I doubt you'd get >> many users reading "Shutting Down" at all because in most cases, it's >> an easy or obvious thing to do (initscripts provided by package and >> pg_ctl are self-explanatory). > > Agreed. I moved it up to its own section: > > http://momjian.us/tmp/pgsql/preventing-server-spoofing.html > > I improved the wording slightly too. > The server doesn't need a root.crt certificate really - but it does need the *server* certificate (server.key/server.crt). root.crt is only used to verify *client* certificates, which is a different thing from what you're outlining here. Out of curiosity, does any of the other databases out there "solve" this somehow? Or any non-databases too, really. To me this seems like a general problem for *any* kind of server processes - at least any that runs with port >1024 on Unix (and any at all on win32, since they don't check the port number there). //Magnus
Magnus Hagander wrote: > Out of curiosity, does any of the other databases out there "solve" this > somehow? Or any non-databases too, really. To me this seems like a > general problem for *any* kind of server processes Most kinds of server processes where you'd send sensitive information do support SSL. Most of these server processes don't run over Unix-domain sockets, though. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Peter Eisentraut wrote:
> Magnus Hagander wrote:
>> Out of curiosity, does any of the other databases out there "solve" this
>> somehow? Or any non-databases too, really. To me this seems like a
>> general problem for *any* kind of server processes
>
> Most kinds of server processes where you'd send sensitive information do
> support SSL. Most of these server processes don't run over Unix-domain
> sockets, though.

Well, the question is not about sensitive information, is it? It's about password disclosure due to spoofing. Which would affect *all* services that accept passwords over any kind of local connections - both unix sockets and TCP localhost.

I'm just saying that pretty much everybody has to be affected by this. And you can't claim it's very common to use SSL to secure localhost connections. Maybe it should be, but I hardly ever see it...

The best way to avoid it is of course not to give untrusted users access to launch arbitrary processes on your server. Something about that should perhaps be added to that new docs section?

//Magnus
Peter Eisentraut wrote:
> Bruce Momjian wrote:
> > Bruce Momjian wrote:
> > > I think at a minimum we need to add documentation that states if you
> > > don't trust the local users on the postmaster server you should:
> > >
> > >   o create unix domain socket files in a non-world-writable
> > >     directory
> > >   o require SSL server certificates for TCP connections
> >
> > I have written documentation for this item:
> >
> >   http://momjian.us/tmp/pgsql/server-shutdown.html#SERVER-SPOOFING
> >
> > Comments?
>
> What you actually need on the client side is ~/.postgresql/root.crt, not
> ~/.postgresql/postgresql.crt as you wrote.

Thanks, updated:

    http://momjian.us/tmp/pgsql/preventing-server-spoofing.html

(I mentioned the file name specifically so people like me wouldn't get confused.) :-)

--
Bruce Momjian <bruce@momjian.us>  http://momjian.us
EnterpriseDB                      http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Magnus Hagander wrote: > The server doesn't need a root.crt certificate really - but it does need > the *server* certificate (server.key/server.crt). root.crt is only used > to verify *client* certificates, which is a different thing from what > you're outlining here. Updated: http://momjian.us/tmp/pgsql/preventing-server-spoofing.html Thanks. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Magnus Hagander wrote: > Well, the question is not about sensitive information, is it? It's about > password disclosure due to spoofing. Which would affect *all* services > that accept passwords over any kind of local connections - both unix > sockets and TCP localhost. > > I'm just saying that pretty much everybody has to be affected by this. > And you can't claim it's very common to use SSL to secure localhost > connections. Maybe it should be, but I hardly ever see it... Yep. I think the big issue is most people think unix domain sockets and localhost are secure, but they are not if the server is down, unless SSL is used or the socket file is in a privileged directory. > The best way to avoid it is of course not to give untrusted users access > to launch arbitrary processes on your server. Something about that > should perhaps be added to that new docs section? Yep, doing that now. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
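A sketch of the privileged-directory setup referred to here, assuming the server runs as user postgres and using the directory name cited elsewhere in this thread:

    # As root, create a directory that only postgres can write to:
    mkdir /var/run/postgresql
    chown postgres:postgres /var/run/postgresql
    chmod 0755 /var/run/postgresql

    # Then, in postgresql.conf:
    unix_socket_directory = '/var/run/postgresql'

Clients must be pointed at the same directory, e.g. psql "host=/var/run/postgresql dbname=test", since libpq accepts a directory path in place of a host name.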
Magnus Hagander wrote: > > Most kinds of server processes where you'd send sensitive information do > > support SSL. Most of these server processes don't run over Unix-domain > > sockets, though. > > Well, the question is not about sensitive information, is it? It's about > password disclosure due to spoofing. I included passwords as sensitive information. > Which would affect *all* services > that accept passwords over any kind of local connections - both unix > sockets and TCP localhost. These services either use a protected port or a protected directory, or they support SSL or something similar (SSH), or they are deprecated, as many traditional Unix services are. If you find a service that is not covered by this, then yes, you have a problem. > The best way to avoid it is of course not to give untrusted users access > to launch arbitrary processes on your server. Something about that > should perhaps be added to that new docs section? That is pretty impractical. PostgreSQL is designed to run on multiuser operating systems, so it should do it correctly. Such suggestions do not raise confidence. -- Peter Eisentraut http://developer.postgresql.org/~petere/
On Sat, Dec 22, 2007 at 02:21:42PM -0500, Tom Lane wrote:
> No, we shouldn't, and if I had any authority over them I would make
> Debian stop doing that. It amounts to a unilateral distro-specific
> change in the protocol, and I think it makes things *less* secure,
> because any clients who are expecting the socket to be in /tmp will be
> easy pickings for a spoofer. Debian cannot hope to prevent that
> scenario, because there are non-libpq-based client implementations.

Well, it's worked for many years and it's a little late to change now. It's arguably safer, since only postmasters owned by "postgres" can create a socket in that directory; any client attempting to connect to a server using that directory knows it's connecting to a server owned by 'postgres'.

I can't think of any non-libpq clients which support Unix domain sockets?

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution inevitable.
> -- John F Kennedy
Peter Eisentraut wrote:
> Magnus Hagander wrote:
>>> Most kinds of server processes where you'd send sensitive information do
>>> support SSL. Most of these server processes don't run over Unix-domain
>>> sockets, though.
>> Well, the question is not about sensitive information, is it? It's about
>> password disclosure due to spoofing.
>
> I included passwords as sensitive information.

Well, it's a different kind of vulnerability than getting at sensitive information. Passwords can be open for a replay attack, for example, even if the transport itself is protected.

>> Which would affect *all* services
>> that accept passwords over any kind of local connections - both unix
>> sockets and TCP localhost.
>
> These services either use a protected port or a protected directory, or they
> support SSL or something similar (SSH), or they are deprecated, as many
> traditional Unix services are. If you find a service that is not covered by
> this, then yes, you have a problem.

It's certainly the default on my SQL Servers. And Sybase. AFAIK it's the default on MySQL, but it's been a while since I installed one. And I'm told it's the default on Oracle, but I don't have an install around to verify it with.

Now, most of these *support* SSL. But I've never come across a recommendation to use it for localhost connections.

>> The best way to avoid it is of course not to give untrusted users access
>> to launch arbitrary processes on your server. Something about that
>> should perhaps be added to that new docs section?
>
> That is pretty impractical. PostgreSQL is designed to run on multiuser
> operating systems, so it should do it correctly. Such suggestions do not
> raise confidence.

Well, I'd still recommend people not to allow arbitrary users access to their db servers, quite regardless of what OS or database they're running. Not necessarily for this reason, but following such a requirement mitigates this problem as well, as a pure side-effect.

//Magnus
Martijn van Oosterhout wrote:
> On Sat, Dec 22, 2007 at 02:21:42PM -0500, Tom Lane wrote:
>> No, we shouldn't, and if I had any authority over them I would make
>> Debian stop doing that. It amounts to a unilateral distro-specific
>> change in the protocol, and I think it makes things *less* secure,
>> because any clients who are expecting the socket to be in /tmp will be
>> easy pickings for a spoofer. Debian cannot hope to prevent that
>> scenario, because there are non-libpq-based client implementations.
>
> Well, it's worked for many years and it's a little late to change now. It's
> arguably safer, since only postmasters owned by "postgres" can create a
> socket in that directory; any client attempting to connect to a server
> using that directory knows it's connecting to a server owned by
> 'postgres'.
>
> I can't think of any non-libpq clients which support Unix domain
> sockets?

A different thought on this - IIRC, you can at least on Linux configure firewall rules based on the uid the talking process is running as. And if I'm not mistaken, you can fiddle something similar on Windows using the ipsec stack (not easily, though).

This would make it impossible for a user to create something binding to the pg port, or at least talking on said port, unless they also manage to hack the postgres service account. And if they do that, they have full access to data files and certificates and everything, so you've really lost already in that case.

This obviously only applies to TCP sockets and not Unix sockets.

(And yes, I still consider this more of a host problem than a db problem)

//Magnus
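One possible Linux sketch of the uid-based filtering described above, using the iptables owner match (an illustration only; the owner module's availability and exact syntax vary by kernel and iptables version):

    # Drop locally generated TCP packets with source port 5432 unless
    # the sending process runs as user postgres, so that a spoofed
    # listener on that port cannot answer local clients.
    iptables -A OUTPUT -p tcp --sport 5432 -m owner ! --uid-owner postgres -j DROP

As the message notes, this does not stop the bind() itself, only the impostor's replies, and it does nothing for Unix-domain sockets.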
On Sun, 23 Dec 2007 07:57:07 +0000 Gregory Stark <stark@enterprisedb.com> wrote:
> "D'Arcy J.M. Cain" <darcy@druid.net> writes:
> > It's generally a bad idea to put your database on a public server
> > anyway but if you do you should definitely disable unix domain sockets
> > and connect over TCP to localhost. That has been our rule for years.
>
> That seems like a terrible idea. At least while you're dealing with unix
> domain sockets you know there's no way a remote user could possibly interfere
> with or sniff your data. As soon as you're dealing with TCP it's a whole new
> ballgame.

Are you suggesting that you would have Unix domain sockets only? I have never seen this scenario other than dedicated db/web/etc servers that don't have public users, so that's not an issue anyway. Once you are allowing untrusted users access you are probably allowing remote access as well. Two different models and two different security requirements, n'est-ce pas?

Certainly the scenario where you have untrusted users on a server and require that only logged-in users can access the database is possible. I have just never seen it and suspect that it is rare. Since I am suggesting that this is really a documentation and warning issue, this possibility can be examined and discussed in the documentation.

> X famously had a problem on many OSes where you could spoof the first packet
> (and if you could predict sequence numbers more than that) of a connection
> allegedly coming from 127.0.0.1. (it helped that a message to open up
> connections from anywhere fit in one packet...) Modern OSes include network
> filters to block such spoofs but it's one more thing you're counting on.

Well, yes, I do count on the OS being reasonably modern and secure. I don't think that that is an unreasonable expectation.

> Also brought into place are things like forged RST packets, routing table
> attacks, and on and on.

If this is an issue then don't allow remote access. In this case Unix domain sockets only make sense.

--
D'Arcy J.M. Cain <darcy@druid.net>  |  Democracy is three wolves
http://www.druid.net/darcy/         |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)   |  what's for dinner.
Bruce Momjian wrote:
>> "I can't start with the configuration you've given me, so I won't
>> start at all" is fairly normal behaviour for a server process, no?
>
> Yes, we have talked about this in the past and there were concerns that
> the server might have some network problem that would prevent
> binding on all interfaces, particularly IPv6.

This used to be our behaviour - IIRC we changed it along with the listen_addresses changes in 8.0 because we so frequently see misconfigured networking. I'm wondering if it might not be reasonable to restore it as switchable, non-default behaviour.

cheers

andrew
Magnus Hagander <magnus@hagander.net> writes:
> Peter Eisentraut wrote:
>> These services either use a protected port or a protected directory, or they
>> support SSL or something similar (SSH), or they are deprecated, as many
>> traditional Unix services are. If you find a service that is not covered by
>> this, then yes, you have a problem.

> It's certainly the default on my SQL Servers. And Sybase. AFAIK it's the
> default on MySQL,

Nyet. I find this in configure.in in mysql 5.0.45 (reasonably current):

    # The port should be constant for a LONG time
    MYSQL_TCP_PORT_DEFAULT=3306
    MYSQL_UNIX_ADDR_DEFAULT="/tmp/mysql.sock"

I see that Red Hat's RPM specfile overrides that:

    --with-unix-socket-path=/var/lib/mysql/mysql.sock

which was a decision that was taken long before I had anything to do with it. Note that neither the out-of-the-box default nor the RH-modified convention appear to support multiple servers on the same box with any degree of convenience; the server doesn't adjust the path name depending on port number.

regards, tom lane
Tom Lane wrote: > Magnus Hagander <magnus@hagander.net> writes: >> Peter Eisentraut wrote: >>> These services either use a protected port or a protected directory, or they >>> support SSL or something similar (SSH), or they are deprecated, as many >>> traditional Unix services are. If you find a service that is not covered by >>> this, then yes, you have a problem. > >> It's certainly the default on my SQL Servers. And Sybase. AFAIK it's the >> default on MySQL, > > Nyet. I find this in configure.in in mysql 5.0.45 (reasonably current): > > # The port should be constant for a LONG time > MYSQL_TCP_PORT_DEFAULT=3306 > MYSQL_UNIX_ADDR_DEFAULT="/tmp/mysql.sock" > > I see that Red Hat's RPM specfile overrides that: > --with-unix-socket-path=/var/lib/mysql/mysql.sock > which was a decision that was taken long before I had anything to do > with it. Note that neither the out-of-the-box default nor the > RH-modified convention appear to support multiple servers on the same > box with any degree of convenience; the server doesn't adjust the path > name depending on port number. I was referring to the listening on TCP connections over localhost without SSL. Port 3306 isn't protected AFAIK, and there's nothing in those lines that says it's SSL only. But then again, neither is the /tmp/mysql.sock file. Am I missing something here, or did you just post a piece of configure that *agreed* with what I said? ;-) //Magnus
Gregory Stark <stark@enterprisedb.com> writes: > Bruce summarized the problem pretty well when he said that if Postgres > is being run as a non-root user then one non-root user's "postgres" is > as good as any other non-root user's "postgres". "Problem"? What we mustn't lose sight of is that that's not a bug but a feature. It would be completely inappropriate for us as upstream to destroy that property, and my fundamental objection to what Debian has done is that they've destroyed that property at the distro level. I have no problem with the admin for a single installation putting in things that prevent there being more than one postmaster on that machine. I just say that software distribution time is not the place for such restrictions. > If you're content to take the "postgres" user as special then you could call > getsockopt(SO_PEERCRED) to verify you're really connected to a process run by > "postgres". But that isn't going to work if the postgres user could be named > something else. In that case what is it you want to verify though? This is basically the same old mutual authentication problem that SSL was designed to solve by using certificates. I don't think we have either the need or the expertise to re-invent that wheel. ISTM we have these action items: 1. Improve the code so that SSL authentication can be used across a Unix-socket connection (we can disable encryption though). 2. Improve our documentation about how to set up mutual authentication under SSL (it's a bit scattered now). 3. Recommend using mutual auth even for local connections, if a server containing sensitive data is to be run on a machine that also hosts untrusted users. As somebody noted, it's probably even better policy to not have any sensitive data on a machine that hosts untrusted users, and it wouldn't hurt for the docs to point that out; but we should have a documented solution available if you have to do it. regards, tom lane
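For reference, a sketch of the file layout that mutual SSL authentication uses in the 8.x documentation (stock default paths):

    On the server, in $PGDATA:
        server.crt      server certificate, presented to clients
        server.key      server private key
        root.crt        CA certificate used to verify client certificates

    On each client, in ~/.postgresql/:
        postgresql.crt  client certificate, presented to the server
        postgresql.key  client private key
        root.crt        CA certificate used to verify the server certificate

With both root.crt files in place, each side rejects the connection unless the other presents a certificate signed by the trusted CA - the mutual check that item 2 asks to document, and that item 1 would extend to Unix-socket connections.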
Magnus Hagander <magnus@hagander.net> writes: > Am I missing something here, or did you just post > a piece of configure that *agreed* with what I said? ;-) Maybe I misread what you said. I thought you were claiming that mysql do this more securely than we do; which they don't. But looking back, >>> It's certainly the default on my SQL Servers. And Sybase. AFAIK it's the >>> default on MySQL, it seems it's not too clear which case you meant by "it". regards, tom lane
Tom Lane wrote:
> Magnus Hagander <magnus@hagander.net> writes:
>> Am I missing something here, or did you just post
>> a piece of configure that *agreed* with what I said? ;-)
>
> Maybe I misread what you said. I thought you were claiming that mysql
> do this more securely than we do; which they don't. But looking back,
>
>>>> It's certainly the default on my SQL Servers. And Sybase. AFAIK it's the
>>>> default on MySQL,
>
> it seems it's not too clear which case you meant by "it".

My bad, then. Probably didn't quote enough :-)

My point is that all these other server products have the exact same issue. And that they deal with it the exact same way we do - pretty much leave it up to the guy who configures the server to realize that's just how things work.

I'm just surprised that people are actually surprised by this. To me, it's just a natural fact that happens to pretty much all systems. And a good reason not to let arbitrary users run processes that can bind to something on your server.

//Magnus
On Sun, Dec 23, 2007 at 02:52:28PM -0500, Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
> > Bruce summarized the problem pretty well when he said that if Postgres
> > is being run as a non-root user then one non-root user's "postgres" is
> > as good as any other non-root user's "postgres".
>
> "Problem"? What we mustn't lose sight of is that that's not a bug but
> a feature. It would be completely inappropriate for us as upstream to
> destroy that property, and my fundamental objection to what Debian
> has done is that they've destroyed that property at the distro level.
>
> I have no problem with the admin for a single installation putting in
> things that prevent there being more than one postmaster on that
> machine. I just say that software distribution time is not the place
> for such restrictions.

The default postgresql.conf in Debian contains a line like this:

    unix_socket_directory = '/var/run/postgresql'

I don't understand what restriction you mean. What was changed is the default location of the unix domain socket. If you still want it in /tmp, you can put it there.

I think there are basically two reasons to move it:
- It's insecure, as this thread shows
- The FHS says they should be placed in /var/run/, probably for the first reason.

Kurt
On Sun, Dec 23, 2007 at 02:52:28PM -0500, Tom Lane wrote:
> "Problem"? What we mustn't lose sight of is that that's not a bug but
> a feature. It would be completely inappropriate for us as upstream to
> destroy that property, and my fundamental objection to what Debian
> has done is that they've destroyed that property at the distro level.

I'm unsure what you think is being prevented. Debian allows parallel installation and execution of four major releases of postgres with no extra effort, something the standard release doesn't do. At cluster creation time you can specify what version, what user and what location should be used, and all the clients can work with this.

> I have no problem with the admin for a single installation putting in
> things that prevent there being more than one postmaster on that
> machine. I just say that software distribution time is not the place
> for such restrictions.

Nothing is being prevented here. Things are being made possible that are otherwise difficult.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution inevitable.
> -- John F Kennedy
Kurt Roeckx <kurt@roeckx.be> writes: > On Sun, Dec 23, 2007 at 02:52:28PM -0500, Tom Lane wrote: >> a feature. It would be completely inappropriate for us as upstream to >> destroy that property, and my fundamental objection to what Debian >> has done is that they've destroyed that property at the distro level. > The default postgresql.conf in Debian contains a line like this: > unix_socket_directory = '/var/run/postgresql' > I don't understand what restriction you mean. What was changed is the > default location of the unix domain socket. If you still want it in > /tmp, you can put it there. Not as easily as all that, because the system copy of libpq.so has the other directory hard-wired into it. Yes, you can sort of make it work if you have to, but it's inconvenient and error-prone. > I think there are basicly two reasons to move it: > - It's insecure, as this thread shows > - The FHS says the they should be placed in /var/run/, probably > for the first reason. We've had that discussion before. regards, tom lane
On Sun, 23 Dec 2007, Magnus Hagander wrote:
> I'm just surprised that people are actually surprised by this. To me,
> it's just a natural fact that happens to pretty much all systems. And a
> good reason not to let arbitrary users run processes that can bind to
> something on your server.

Not everybody works for Enterprise, where price does not matter. I cannot afford dedicated servers for database, DNS, e-mail, antispam, firewall, file, WWW etc. Even the administrative overhead would be too much for a one-person IT staff. I have to run all of this and much more on one machine, so I'm interested in limiting the rights of, for example, the user running WWW, so that when it is, god forbid, compromised, the damage is limited.

I am also not able to run sophisticated security frameworks, limiting every user's rights to just what they need, as maintaining them would require a security full-timer.

So I'm not very fond of this "insecure by default, it's your problem to make it secure" attitude. I'm the one who reported this.

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a moment just before you began to eat it which was better than when you were...
Winnie the Pooh
Martijn van Oosterhout <kleptog@svana.org> writes: > On Sun, Dec 23, 2007 at 02:52:28PM -0500, Tom Lane wrote: >> "Problem"? What we mustn't lose sight of is that that's not a bug but >> a feature. It would be completely inappropriate for us as upstream to >> destroy that property, and my fundamental objection to what Debian >> has done is that they've destroyed that property at the distro level. > I'm unsure what you think is being prevented. Well, use of standard portable PG clients, for one thing, and use of postmasters running under different userids for another. > Debian allows parallel > installation and execution of four major releases of postgres with no > extra effort, something the standard release doesn't do. My hat's off to them for that, but it's utterly unrelated to the topic at hand, and it's not evidence that this particular decision of theirs was well taken. regards, tom lane
Tomasz Ostrowski <tometzky@batory.org.pl> writes: > So I'm not very fond of this "insecure by default, it's your problem > to make it secure" attitude. I'm the one who reported this. IIRC, you started out your argument by also saying that we had to move the TCP socket to the reserved range, so as to prevent the equivalent problem in the TCP case. (And, given the number of clients such as JDBC that can only connect via TCP, it certainly seems there's little point in changing the socket case if we don't change the TCP case.) So let's look at the implications: 1. Postmaster must be started as root, thereby introducing security risks of its own (ie, after breaking into the DB, an attacker might be able to re-acquire root privileges). 2. Can only have one postmaster per machine (ICANN is certainly not going to give us dozens of reserved addresses). 3. Massive confusion and breakage as various people transition to the new standard at different times. 4. Potential to create, rather than remove, spoofing opportunities anyplace there is confusion about which port the postmaster is really listening on. And at the end of the day there are still any number of ways to configure your system insecurely... Fundamentally these are man-in-the-middle attacks, and the only real solution is mutual authentication. Pretending that some quick-fix change eliminates that class of problem is a recipe for building systems that are less secure, not more so. regards, tom lane
On Sun, Dec 23, 2007 at 04:43:54PM -0500, Tom Lane wrote:
> <snip> use of
> postmasters running under different userids for another.

This is specifically allowed and I mentioned it in the email you responded to, so I don't understand why you think it's not possible.

Usage: /usr/bin/pg_createcluster [options] <version> <cluster name>

Options:
  -u <uid>        cluster owner and superuser (default: 'postgres')
  -g <gid>        group for data files (default: primary group of owner)
  -d <dir>        data directory (default:
                  /var/lib/postgresql/<version>/<cluster name>)
  -s <dir>        socket directory (default: /var/run/postgresql for
                  clusters owned by 'postgres', /tmp for other clusters)
  -l <dir>        path to desired log file (default:
                  /var/log/postgresql/postgresql-<version>-<cluster>.log)
  --locale <encoding>
                  set cluster locale (default: inherit from environment)
  --lc-collate/ctype/messages/monetary/numeric/time <locale>
                  like --locale, but only set for a particular category
  -e <encoding>   Default encoding (default: derived from locale)
  -p <port>       port number (default: next free port starting from 5432)
  --start         start the cluster after creating it
  --start-conf auto|manual|disabled
                  Set automatic startup behaviour in start.conf
                  (default: 'auto')

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution inevitable.
> -- John F Kennedy
Tom Lane wrote: > Tomasz Ostrowski <tometzky@batory.org.pl> writes: >> So I'm not very fond of this "insecure by default, it's your problem >> to make it secure" attitude. I'm the one who reported this. > > IIRC, you started out your argument by also saying that we had to move > the TCP socket to the reserved range, so as to prevent the equivalent > problem in the TCP case. (And, given the number of clients such as > JDBC that can only connect via TCP, it certainly seems there's little > point in changing the socket case if we don't change the TCP case.) It should also be noted that not all operating systems even have the concept of a reserved range of ports. > Fundamentally these are man-in-the-middle attacks, and the only real > solution is mutual authentication. Pretending that some quick-fix > change eliminates that class of problem is a recipe for building systems > that are less secure, not more so. And SSL can certainly do that. But I can agree that our SSL documentation could be much clearer on how to do things, and what's a best practice :-) Instead of just adding a section on "preventing spoofing attacks", perhaps what we really need is a general chapter on how to secure your system and what's best practices. Which would also cover things like don't run everything as superuser etc (which is a much more likely problem to be seen in deployments) //Magnus
Martijn van Oosterhout <kleptog@svana.org> writes: > On Sun, Dec 23, 2007 at 04:43:54PM -0500, Tom Lane wrote: >> <snip> use of >> postmasters running under different userids for another. > This is specifically allowed and I mentioned it in the email you > responded to, so I don't understand why you think it's not possible. > -s <dir> socket directory (default: /var/run/postgresql for clusters > owned by 'postgres', /tmp for other clusters) Egad. Is it not apparent to you what a bad idea that is? regards, tom lane
On 12/23/07, Tomasz Ostrowski <tometzky@batory.org.pl> wrote:
> On Sun, 23 Dec 2007, Magnus Hagander wrote:
>> I'm just surprised that people are actually surprised by this. To me,
>> it's just a natural fact that happens to pretty much all systems. And a
>> good reason not to let arbitrary users run processes that can bind to
>> something on your server.
>
> Not everybody works for Enterprise, where price does not matter. I
> cannot afford dedicated servers for database, DNS, e-mail, antispam,
> firewall, file, WWW etc. Even the administrative overhead would be too
> much for a one-person IT staff. I have to run all of this and much more
> on one machine, so I'm interested in limiting the rights of, for
> example, the user running WWW, so that when it is, god forbid,
> compromised, the damage is limited.
>
> I am also not able to run sophisticated security frameworks, limiting
> every user's rights to just what they need, as maintaining them would
> require a security full-timer.
>
> So I'm not very fond of this "insecure by default, it's your problem
> to make it secure" attitude. I'm the one who reported this.

It's not that; the fact that if anyone can run a service on a computer, then anyone connecting to that computer won't necessarily know whose service they're connecting to is a basic thing that should only take a moment's thought to recognize. I wouldn't knock anyone for not automatically realizing it can be a threat to security, but it's so very common that it's hard to see why anyone would really be *surprised* by it. SSL and SSH both address the problem of the client wanting to verify the server, so usually being aware of either of those is enough to make someone aware of the issue in general.

There is no default or automatic solution because the basic issue is one of trust, which requires an external procedure to address. (SSH generates a key on its own, but you are responsible for transferring the signature to the remote client in a secure manner so they can verify it. SSL typically has an external company generate your key after being paid to verify your identity, and presumably the remote client already trusts that company. You can also use the SSH approach with SSL.)

There are various platform-specific security features that might be useful, like reserved port ranges and file permissions, but they are so specific to the scenario they're designed for that it's hard to create a generic solution that works well by default -- especially if you want to run without requiring administrative privileges in the first place. Having the administrator be responsible for organizing what they need is the only thing that seems to work in practice, since the requirements are so different for different environments.
On Sun, 23 Dec 2007, Tom Lane wrote: > IIRC, you started out your argument by also saying that we had to move > the TCP socket to the reserved range, so as to prevent the equivalent > problem in the TCP case. > > 1. Postmaster must be started as root, thereby introducing security > risks of its own (ie, after breaking into the DB, an attacker might be > able to re-acquire root privileges). Not at all, as it won't run as root; it'll just start as root and then give up all root privileges. The only thing it would retain from being root is an open socket. > 2. Can only have one postmaster per machine (ICANN is certainly not > going to give us dozens of reserved addresses). I don't think ICANN would prevent anybody from using a different port. I'm running httpd on port 81, sshd on 222 etc. It's just the default that should be made official through ICANN. > 3. Massive confusion and breakage as various people transition to the > new standard at different times. As with any major version. > 4. Potential to create, rather than remove, spoofing opportunities > anyplace there is confusion about which port the postmaster is really > listening on. I agree. But because it would just not work it'll be easy to notice and correct. And once corrected there would be no more confusion. > Fundamentally these are man-in-the-middle attacks, and the only real > solution is mutual authentication. The problem is that not many people expect a man-in-the-middle attack on a secure LAN, localhost or a local socket connection, so they'll not try to prevent it. Regards Tometzky -- ...although Eating Honey was a very good thing to do, there was a moment just before you began to eat it which was better than when you were... Winnie the Pooh
* Trevor Talbot (quension@gmail.com) wrote: > There are various platform-specific security features that might be > useful, like reserved port ranges and file permissions, but they are > so specific to the scenario they're designed for that it's hard to > create a generic solution that works well by default -- especially if > you want to run without requiring administrative privileges in the > first place. Agreed. A guarantee that the process listening on a particular port is what you're expecting isn't something that upstream can give. It needs to be done through some situation-specific mechanism. There are a number of options here, of course: SSL, Kerberos, SELinux, even things like the tiger IDS. Reserved ports really aren't all that great a solution in the end anyway, to be honest. Enjoy, Stephen
Stephen Frost wrote: > * Trevor Talbot (quension@gmail.com) wrote: > > There are various platform-specific security features that might be > > useful, like reserved port ranges and file permissions, but they are > > so specific to the scenario they're designed for that it's hard to > > create a generic solution that works well by default -- especially if > > you want to run without requiring administrative privileges in the > > first place. > Agreed. A guarantee that the process listening on a particular port is > what you're expecting isn't something that upstream can give. It needs > to be done through some situation-specific mechanism. There are a > number of options here, of course: SSL, Kerberos, SELinux, even things > like the tiger IDS. Reserved ports really aren't all that great a > solution in the end anyway, to be honest. UNIX socket kernel credential passing was mentioned in an earlier post, but I didn't see it raised again. All of the above mechanisms still require a piece of information to validate "trust". SSL requires a copy of the public certificate. UNIX socket credential passing would be much cheaper to validate - all it requires is the userid or username. I prefer UNIX sockets with kernel credential passing over TCP/IP with username/password or the more expensive SSL. I do not like storing passwords or private certificates in a place available to the web user, as other web users would then also have access. I do not have evidence, but I am under the impression that the TCP/IP stack incurs more overhead on connect(), send(), recv(), and close() than UNIX sockets do. Yes, Java doesn't work with UNIX sockets - but both Perl and PHP do. The only reason Java doesn't is because Java itself doesn't support UNIX sockets, and the Java JDBC provider is pure-Java. How expensive would it be to implement a "server_user" db open parameter that would perform reverse credential passing to validate? "dbname=XXX port=5432 server_user=postgres". If the server can't prove it is postgres through UNIX socket credential passing, it fails. Similarly, identd may be usable in reverse? I've seen many people claim identd is insecure - but it is secure if I am the one running it, is it not? Cheers, mark -- Mark Mielke <mark@mielke.cc>
"Mark Mielke" <mark@mark.mielke.cc> writes: > UNIX socket kernel credential passing was mentioned in an earlier post, but I > didn't see it raised again. I mentioned getsockopt(SO_PEERCRED) which isn't the same as credential passing. It just tells you what uid is on the other end of your unix domain socket. I think it's much more widespread and portable than credential passing which was a BSD feature which allowed you to send along your kernel credentials to another process. So you could, for example, open a file in psql then pass the file descriptor to the backend to have the backend read directly from the file. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services!
Gregory Stark wrote: > "Mark Mielke" <mark@mark.mielke.cc> writes: >> UNIX socket kernel credential passing was mentioned in an earlier post, but I >> didn't see it raised again. > > I mentioned getsockopt(SO_PEERCRED) which isn't the same as credential > passing. It just tells you what uid is on the other end of your unix domain > socket. > > I think it's much more widespread and portable than credential passing which > was a BSD feature which allowed you to send along your kernel credentials to > another process. So you could, for example, open a file in psql then pass the > file descriptor to the backend to have the backend read directly from the > file. I agree - I forgot there were different flavours. I think any of these are just as good as SSL with public key authentication, and perhaps a lot cheaper in terms of performance. The only piece of information missing is the uid to compare against, which may as well be provided in the db open parameters the same as any other parameters might be provided. Cheers, mark -- Mark Mielke <mark@mielke.cc>
Mike Rylander wrote: > On Dec 22, 2007 1:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > > Peter Eisentraut <peter_e@gmx.net> writes: > > > Wouldn't SSL work over Unix-domain sockets as well? The API only deals with > > > file descriptors. > > > > Hmm ... we've always thought of SSL as being primarily comm security > > and thus useless on a Unix socket, but the mutual authentication aspect > > could come in handy as an answer for this type of threat. Anyone want > > to try this and see if it really works or not? > > > > Does OpenSSL have a mode where it only does mutual auth and not > > encryption? The encryption would be wasted cycles in this scenario, > > so being able to turn it off would be nice. > > > > miker@whirly:~$ openssl ciphers -v 'NULL' > NULL-SHA SSLv3 Kx=RSA Au=RSA Enc=None Mac=SHA1 > NULL-MD5 SSLv3 Kx=RSA Au=RSA Enc=None Mac=MD5 > > I see no way to turn off the message digest, but maybe that's just an > added benefit. So if we set ssl_ciphers in postgresql.conf to: ssl_ciphers = 'NULL-SHA:NULL-MD5' then SSL does client and server machine authentication with no encryption overhead? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Tomasz Ostrowski wrote: > > Fundamentally these are man-in-the-middle attacks, and the only real > > solution is mutual authentication. > > The problem is that not many people expect a man-in-the-middle attack > on a secure LAN, localhost or a local socket connection, so they'll > not try to prevent it. Agreed. This was the big surprise for me, and hence the new documentation section I wrote. I think, based on this discussion, that there is no way for us to easily avoid these vulnerabilities, so documentation/education seems the most appropriate approach. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Mark Mielke wrote: > Gregory Stark wrote: > > "Mark Mielke" <mark@mark.mielke.cc> writes: > > > >> UNIX socket kernel credential passing was mentioned in an earlier post, but I > >> didn't see it raised again. > >> > > > > I mentioned getsockopt(SO_PEERCRED) which isn't the same as credential > > passing. It just tells you what uid is on the other end of your unix domain > > socket. > > > > I think it's much more widespread and portable than credential passing which > > was a BSD feature which allowed you to send along your kernel credentials to > > another process. So you could, for example, open a file in psql then pass the > > file descriptor to the backend to have the backend read directly from the > > file > I agree - I forgot there were different flavours. I think any of these > are just as good as SSL with public key authentication, and perhaps a > lot cheaper in terms of performance. The only piece of information > missing is the uid to compare against, which may as well be provided in > the db open parameters the same as any other parameters might be provided. True, but if you are going to have the client check a uid we might as well just put the socket file in a secure directory and be done with it. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
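As a sketch of that secure-directory arrangement (directory and database names are illustrative): create a directory owned by, and writable only by, the account that runs the postmaster, point the server at it, and have clients name the same directory explicitly:

    # postgresql.conf on the server; /var/run/postgresql must be
    # writable only by the user running the postmaster
    unix_socket_directory = '/var/run/postgresql'

    # client side: connect through the protected directory
    psql "host=/var/run/postgresql dbname=mydb"

Since no other local user can create a socket file in that directory, any socket found there must belong to the real postmaster.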
Bruce Momjian wrote: > Mark Mielke wrote: > > I agree - I forgot there were different flavours. I think any of these > > are just as good as SSL with public key authentication, and perhaps a > > lot cheaper in terms of performance. The only piece of information > > missing is the uid to compare against, which may as well be provided in > > the db open parameters the same as any other parameters might be provided. > > True, but if you are going to have the client check a uid we might as > well just put the socket file in a secure directory and be done with it. That's a good point too... :-) Cheers, mark -- Mark Mielke <mark@mielke.cc>
Tom Lane wrote: > 2. Improve our documentation about how to set up mutual authentication > under SSL (it's a bit scattered now). > > 3. Recommend using mutual auth even for local connections, if a server > containing sensitive data is to be run on a machine that also hosts > untrusted users. > > As somebody noted, it's probably even better policy to not have any > sensitive data on a machine that hosts untrusted users, and it wouldn't > hurt for the docs to point that out; but we should have a documented > solution available if you have to do it. I have added the section about preventing server spoofing and updated the SSL documentation to be more logical and clearer about certificates. The major updated sections are: http://momjian.us/tmp/pgsql/preventing-server-spoofing.html http://momjian.us/tmp/pgsql/ssl-tcp.html http://momjian.us/tmp/pgsql/libpq-ssl.html I have to say I didn't understand the certificate stuff before, but I think it should be clearer now to anyone who reads it. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote: > Tom Lane wrote: > > 2. Improve our documentation about how to set up mutual authentication > > under SSL (it's a bit scattered now). > > > > 3. Recommend using mutual auth even for local connections, if a server > > containing sensitive data is to be run on a machine that also hosts > > untrusted users. > > > > As somebody noted, it's probably even better policy to not have any > > sensitive data on a machine that hosts untrusted users, and it wouldn't > > hurt for the docs to point that out; but we should have a documented > > solution available if you have to do it. > > I have added the section about preventing server spoofing and updated the SSL > documentation to be more logical and clearer about certificates. > > The major updated sections are: > > http://momjian.us/tmp/pgsql/preventing-server-spoofing.html > http://momjian.us/tmp/pgsql/ssl-tcp.html > http://momjian.us/tmp/pgsql/libpq-ssl.html > > I have to say I didn't understand the certificate stuff before, but I > think it should be clearer now to anyone who reads it. I have just added two documentation tables outlining SSL file usage for client and server. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
On Sun, 23 Dec 2007, Tom Lane wrote: > ISTM we have these action items: > 1. Improve the code so that SSL authentication can be used across a > Unix-socket connection (we can disable encryption though). I've just realised that there's a problem with SSL with disabled encryption on a unix socket / localhost connections for cpu-saving. Any local user using this attack would be able to eavesdrop on everything coming through a socket. If an attacker just acts as a tunnel, hijacking a unix socket and talking to a server using any other interface (or the other way around), then he would not be able to modify the information flow, but he would be able to read and save everything going to and from a server. It is again not obvious, as normally local connections are not susceptible to eavesdropping. And it could go unnoticed for a long time as everything would just work normally. So I think no cpu-saving by turning off encryption should be done. And this would all not help against a denial-of-service attack. Regards Tometzky -- ...although Eating Honey was a very good thing to do, there was a moment just before you began to eat it which was better than when you were... Winnie the Pooh
On Sun, Dec 23, 2007 at 09:52:14PM +0100, Magnus Hagander wrote: > My point is that all these other server products have the exact same > issue. And that they deal with it the exact same we do - pretty much > leave it up to the guy who configure the server to realize that's just > how things work. The problem with that approach is that, in the computer security world, taking that approach is increasingly regarded as negligent. And pointing out that others are similarly negligent is not a response. Note that I am explicitly not subscribing to or disagreeing with that view. A
On Mon, Dec 24, 2007 at 12:04:16AM +0100, Tomasz Ostrowski wrote: > > Not at all, as it won't run as root, it'll just start as root and > then give up all root privileges. The only thing it would have after > being root is just an open socket. If you think that is complete protection against privilege escalation, I encourage you to read some more bugtraq archives. The answer to MITM attacks is not superuser-reserved ports anyway. The privileged port idea was a bad one in retrospect. The answer is strong authentication. A
On Sun, Dec 23, 2007 at 01:45:14AM -0500, Tom Lane wrote: > > The primary reason things work like that is that there are boatloads of > machines that are marginally misconfigured. For instance, userland > thinks there is IPv6 support when the kernel thinks not (or vice versa). Not only "marginally misconfigured", but "broken as shipped", in the case of some OSes. And in those cases, you can't even fix it. A
Andrew Sullivan wrote: > On Sun, Dec 23, 2007 at 09:52:14PM +0100, Magnus Hagander wrote: >> My point is that all these other server products have the exact same >> issue. And that they deal with it the exact same we do - pretty much >> leave it up to the guy who configure the server to realize that's just >> how things work. > > The problem with that approach is that, in the computer security world, > taking that approach is increasingly regarded as negligent. And pointing > out that others are similarly negligent is not a response. Sure. But we *do* provide a way to work around it *if you have to*: use SSL with trusted certificates. In the large number of cases where you *don't* need to worry about it, there's no need to add any extra overhead. And if you're going with SSL already, the extra overhead of TCP vs Unix sockets shouldn't matter *at all*... So I don't really see a motivation for us to support SSL over Unix sockets, if it adds any complexity to the code. //Magnus
Mark Mielke wrote: > I prefer UNIX sockets with kernel credential passing over TCP/IP with > username/password or the more expensive SSL. I do not like storing > passwords or private certificates in a place available to the web user, > as other web users would then also have access. I do not have evidence, > but I am under the impression that the TCP/IP stack incurs additional > overhead on connect(), send(), recv(), and close() than UNIX sockets. I think that was one of the original reasons the Unix sockets code was added at all. > How expensive would it be to implement a "server_user" db open parameter > that would perform reverse credential passing to validate? "dbname=XXX > port=5432 server_user=postgres". If the server can't prove it is > postgres through UNIX socket credential passing, it fails. Similarly, Probably not very, but you should be able to achieve the same thing by moving the socket to a protected directory, I think? > identd may be usable in reverse? I've seen many people claim identd is > insecure - but it is secure if I am the one running it, is it not? AFAIK, it's secure if the host that it's running on can be considered secure. It's not secure over the internet, because by definition wherever the client runs is not under your control. But if you fully control the machine that the client runs on, AFAIK, ident should be secure. //Magnus
Tomasz Ostrowski wrote: > On Sun, 23 Dec 2007, Tom Lane wrote: >> 3. Massive confusion and breakage as various people transition to the >> new standard at different times. > > As with any major version. No, it would introduce a client/server incompatibility. Generally, older clients (libpq) will still work fine with newer servers, or the other way around. Lots of attention is paid to maintaining that. >> 4. Potential to create, rather than remove, spoofing opportunities >> anyplace there is confusion about which port the postmaster is really >> listening on. > > I agree. But because it would just not work it'll be easy to notice > and correct. And when corrected it would be no more confusion. It would be a perfect spot to put in the MITM attack that this whole thread has been about... >> Fundamentally these are man-in-the-middle attacks, and the only real >> solution is mutual authentication. > > The problem is not many people expect man-in-the-middle attack on > secure lan, localhost or local socket connection, so they'll not try > to prevent it. There is no such thing as a secure LAN, unless you control every host and what every user can do on it. (Definition of LAN can be a bit different though. Say you implement proper IPsec isolation on it - in that case, only the machines on the inside of the ipsec "cloud" need to be trusted) Same thing really does go for the host - it's not a secure host if you can't control what the users are doing on it. So you can't treat it as such if that's the case. //Magnus
Magnus Hagander wrote: > > How expensive would it be to implement a "server_user" db open parameter > > that would perform reverse credential passing to validate? "dbname=XXX > > port=5432 server_user=postgres". If the server can't prove it is > > postgres through UNIX socket credential passing, it fails. Similarly, > > Probably not very, but you should be able to achieve the same thing by > moving the socket to a protected directory, I think? What you are ultimately interested in is who runs a given server. Making the inference that if the socket is in a directory that is currently only writable by a certain user, then that user owns the server offering that socket, doesn't sound like a given to me. And let's not forget that it's not really straightforward to find out who has write access to some directory. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Magnus Hagander <magnus@hagander.net> writes: > Sure. But we *do* provide a way to work around it *if you have to*: use > SSL with trusted certificates. In the large number of cases where you > *don't* need to worry about it, there's no need to add any extra overhead. > And if you're going with SSL already, the extra overhead of TCP vs Unix > sockets shouldn't matter *at all*... So I don't really see a motivation > for us to support SSL over Unix sockets, if it adds any complexity to > the code. Well, the problem with the current behavior is that the client app can "require SSL", but the request is silently ignored if the connection is over Unix socket. So you might think you're secure when you aren't. I think that the reason we don't support SSL over Unix socket is mainly that we thought it was useless; but this discussion has exposed reasons to use it. So I'm for just eliminating the asymmetry. regards, tom lane
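To illustrate the asymmetry Tom describes (host and database names are hypothetical), the same connection option behaves differently depending on the transport:

    # TCP: libpq refuses to proceed if the server won't do SSL
    psql "host=db.example.com sslmode=require dbname=mydb"

    # unix socket: today the SSL request is silently ignored
    psql "host=/tmp sslmode=require dbname=mydb"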
Tom Lane wrote: > Magnus Hagander <magnus@hagander.net> writes: > >> Sure. But we *do* provide a way to work around it *if you have to*: use >> SSL with trusted certificates. In the large number of cases where you >> *don't* need to worry about it, there's no need to add any extra overhead. >> > > >> And if you're going with SSL already, the extra overhead of TCP vs Unix >> sockets shouldn't matter *at all*... So I don't really see a motivation >> for us to support SSL over Unix sockets, if it adds any complexity to >> the code. >> > > Well, the problem with the current behavior is that the client app can > "require SSL", but the request is silently ignored if the connection is > over Unix socket. So you might think you're secure when you aren't. > > I think that the reason we don't support SSL over Unix socket is mainly > that we thought it was useless; but this discussion has exposed reasons > to use it. So I'm for just eliminating the asymmetry. > > > I have no problem with that. But it does seem to me that we are going about this all wrong. The OP proposed a "solution" which was intended to ensure at the server end that an untrusted user could not spoof the postmaster if the postmaster were not running. Putting the onus of this on clients seems wrong. I don't have any experience with SELinux, but my impression is that it can be used to control who or what can open files, sockets etc. On Linux at least this strikes me as a more productive approach to the original problem, as it would put the solution in the SA's hands. Maybe other Unices and Windows have similar capabilities? cheers andrew
Andrew Dunstan <andrew@dunslane.net> writes: > I have no problem with that. But it does seem to me that we are going > about this all wrong. The OP proposed a "solution" which was intended to > ensure at the server end that an untrusted user could not spoof the > postmaster if the postmaster were not running. Putting the onus of this > on clients seems wrong. I don't have any experience with SELinux, but my > impression is that it can be used to control who or what can open files, > sockets etc. On Linux at least this strikes me as a more productive > approach to the original problem, as it would put the solution in the > SA's hands. Maybe other Unices and Windows have similar capabilities? Most Linux distros don't have SELinux, AFAIK, so this is probably not a very useful suggestion. Not that I have a problem with Red-Hat-specific solutions ;-) ... but since one of the arguments being made against move-the-socket is that it introduces a lot of platform-specific assumptions, we have to apply that same criterion to alternative answers. As far as ensuring security from the server end, what about extending the pg_hba.conf options to require that the server has both checked a client certificate and presented its own certificate? (I'm not sure whether OpenSSL provides a way to determine that, though.) regards, tom lane
Tom Lane wrote: > Andrew Dunstan <andrew@dunslane.net> writes: >> I have no problem with that. But it does seem to me that we are going >> about this all wrong. The OP proposed a "solution" which was intended to >> ensure at the server end that an untrusted user could not spoof the >> postmaster if the postmaster were not running. Putting the onus of this >> on clients seems wrong. I don't have any experience with SELinux, but my >> impression is that it can be used to control who or what can open files, >> sockets etc. On Linux at least this strikes me as a more productive >> approach to the original problem, as it would put the solution in the >> SA's hands. Maybe other Unices and Windows have similar capabilities? > > Most Linux distros don't have SELinux, AFAIK, so this is probably not a > very useful suggestion. Not that I have a problem with Red-Hat-specific > solutions ;-) ... but since one of the arguments being made against > move-the-socket is that it introduces a lot of platform-specific > assumptions, we have to apply that same criterion to alternative > answers. > > As far as ensuring security from the server end, what about extending > the pg_hba.conf options to require that the server has both checked > a client certificate and presented its own certificate? (I'm not sure > whether OpenSSL provides a way to determine that, though.) A server has *always* presented its certificate. SSL doesn't work otherwise. What we can't know is if the client *verified* the certificate. But there's no way to control that from server-side anyway... And we do request the client certificate if the server is provided with a root certificate store to verify it against... I'm not sure we gain a lot by adding a second option to do the same thing (which still will need said root certificate store to work) //Magnus
* Tom Lane (tgl@sss.pgh.pa.us) wrote: > Most Linux distros don't have SELinux, AFAIK, so this is probably not a > very useful suggestion. Not that I have a problem with Red-Hat-specific > solutions ;-) Debian also has SELinux, if one wishes to configure it. I suspect other Debian-derived distributions also have it as a result. It can certainly be a pain to configure but it's far from impossible, and if an SA has concerns such as those described, well, I'd be kind of surprised if they weren't considering SELinux (if they're on Linux anyway). > ... but since one of the arguments being made against > move-the-socket is that it introduces a lot of platform-specific > assumptions, we have to apply that same criterion to alternative > answers. I don't quite follow how one argues 'against' SELinux in this context as I don't believe upstream changes would be required here. Just a policy configuration whereby only the postgres user can listen on port 5432. > As far as ensuring security from the server end, what about extending > the pg_hba.conf options to require that the server has both checked > a client certificate and presented its own certificate? (I'm not sure > whether OpenSSL provides a way to determine that, though.) It'd be really nice to be able to have client-side certificates used for authentication by having a way to associate a certificate (or maybe at least the DN, but you can have dups) with a user. That's a separate conversation, though, really. Thanks, Stephen
Stephen Frost wrote: > It'd be really nice to be able to have client-side certificates used for > authentication by having a way to associate a certificate (or maybe at > least the DN, but you can have dups) to a user. That's a seperate > conversation tho, really. Absolutely, but as you say a completely different thing. And FYI, it's on my list of things I'd like to work on for 8.4. Usual disclaimers about not actually ending up having time to do it applies, of course :-) //Magnus
On Thu, 27 Dec 2007, Stephen Frost wrote: > Debian also has SELinux, if one wishes to configure it. I suspect other > Debian-derived distributions also have it as a result. It can certainly > be a pain to configure but it's far from impossible That's a good summary. As of Debian Etch (April of this year) the base distribution now includes enough SELinux-compatible userland packages for the fundamental utilities (ssh, sysvinit, pam, cron, some others) that you don't have to run around hacking a set of patches anymore just to get the base system working. There is also a Hardened Gentoo with SELinux. The most notable distribution where SELinux support is seriously dead is SuSE. RHEL/Fedora are the only distributions where SELinux is taken seriously enough that most packages/daemons are patched and have policies set up in a useful state out of the box. But with some work you can customize a reasonable setup on some other distributions. -- * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
The problem with forcing authentication is that an auth-unaware client connecting to a legitimate postmaster would have its connections refused. That same client would have its connections accepted by an impostor postmaster. Thus, there is no way to stop impostor postmasters from carrying out these attacks on auth-unaware clients. The proper solution, as I see it, would be to have an authentication system in the postmaster that is not enforced. If the client requests authentication, the postmaster will provide it; if not, then the postmaster will connect normally without it. This would not result in *any* change in the default behavior of the postmaster, and as far as users who don't want to use it are concerned, they don't even need to bother to turn it off (assuming that having it turned on does not consume extra resources, and I don't think an unused authentication mechanism sitting in the postmaster connection establishment routine would). This may not appear to result in greater security, but it does: it allows DBAs who suspect they are likely to be the target of these attacks to deploy authentication procedures in their client packages. This could be a modification to their applications, or whatever steps are necessary to mandate authenticated connections within their organization. There is no point forcing some auth mechanism within the postmaster, as attackers would simply catch users using software that did not require the server to auth before sending passwords. For this reason it is not the postmaster's responsibility to check that unknown clients do not connect to impostors; it is the postmaster's responsibility, however, to authenticate itself if the client asks for it. So the onus (rightfully, in my opinion) falls upon network administrators / DBAs to ensure that all of their users are using auth-enabled client packages which will not allow connections to be established with a postmaster until authentication has passed, and to disallow the use of other client software to connect to the postmaster. In my view, this puts the security responsibility where it rightfully belongs *and* avoids breaking client packages in the wild. Making a server or anything else that *requires* auth and disallows non-authed clients is pointless, as there is nothing stopping attackers from setting up an auth-disabled impostor and waiting for someone to just connect using psql or some other vanilla connection method. The onus really ought to be with the administrators who give their users the software they use to connect, to ensure that the software adheres to the relevant security policy, in the same way that it's their responsibility to ensure that the client software does not contain keyloggers and other such trashware. In the web world, it is the client's responsibility to ensure that they check the SSL cert and don't do their banking at www.bankofamerica.hax0r.ru, and there is nothing that the real banking site can do to stop them using their malware-infested PC to connect to the phishing site. They can only provide a site that provides authentication. This is analogous to the postmaster: it is only the responsibility of the postmaster to provide the option of authentication; it is the client's responsibility to know if they should use it, and if so, to ensure they do so properly. Regards, - MrNaz.com
On Sat, Dec 29, 2007 at 02:09:23AM +1100, Naz Gassiep wrote: > In the web world, it is the client's responsibility to ensure that they > check the SSL cert and don't do their banking at > www.bankofamerica.hax0r.ru and there is nothing that the real banking > site can do to stop them using their malware infested PC to connect to > the phishing site. The above security model is exactly how we got into the mess we're in: relying entirely on the good sense of a wide community of users is how compromises happen. Strong authentication authenticates both ways. For instance, the web world you describe is not the only one. Banks who take security seriously have multiple levels of authentication, have trained their users how to do this, and regularly provide scan tools to clients in an attempt (IMO possibly doomed) to reduce the chances of input-device sniffing. A
On 12/28/07, Andrew Sullivan <ajs@crankycanuck.ca> wrote: > On Sat, Dec 29, 2007 at 02:09:23AM +1100, Naz Gassiep wrote: > > In the web world, it is the client's responsibility to ensure that they > > check the SSL cert and don't do their banking at > > www.bankofamerica.hax0r.ru and there is nothing that the real banking > > site can do to stop them using their malware infested PC to connect to > > the phishing site. > The above security model is exactly how we got into the mess we're in: > relying entirely on the good sense of a wide community of users is how > compromises happen. Strong authentication authenticates both ways. > For instance, the web world you describe is not the only one. Banks who > take security seriously have multiple levels of authentication, have trained > their users how to do this, and regularly provide scan tools to clients in > an attempt (IMO possibly doomed) to reduce the chances of input-device > sniffing. I don't follow. What are banks doing on the web now to force clients to authenticate them, and how is it any different from the model of training users to check the SSL certificate? There's a fundamental problem that you can't make someone else do authentication if they don't want to, and that's exactly the situation clients are in. I don't see how this can possibly be fixed anywhere other than the client.
"Trevor Talbot" <quension@gmail.com> writes: > There's a fundamental problem that you can't make someone else do > authentication if they don't want to, and that's exactly the situation > clients are in. I don't see how this can possibly be fixed anywhere > other than the client. The point of requiring authentication from the server side is that it will get people to configure their client code properly. Then if a MITM attack is subsequently attempted, the client code will detect it. It's true that this doesn't offer much defense in the case where a new user is getting set up and a MITM attack is already active. But a user who blindly trusts a server that he's never connected to before is open to all sorts of attacks, starting for instance with mistyping the host name. The fact that this approach doesn't (by itself) solve that problem doesn't make it useless. Also, getting people in the habit of setting up for mutual authentication does have value in that scenario too; it makes the new user perhaps a bit more likely to distrust a server that isn't presenting the right certificate. regards, tom lane
On 12/28/07, Tom Lane <tgl@sss.pgh.pa.us> wrote: > "Trevor Talbot" <quension@gmail.com> writes: > > There's a fundamental problem that you can't make someone else do > > authentication if they don't want to, and that's exactly the situation > > clients are in. I don't see how this can possibly be fixed anywhere > > other than the client. > The point of requiring authentication from the server side is that it > will get people to configure their client code properly. Then if a MITM > attack is subsequently attempted, the client code will detect it. But this is essentially just an education/training issue; the security model itself is unchanged. Bank web sites are only going to accept clients via SSL, but if a client does not try to authenticate the site, whether it connects via SSL or not is rather irrelevant. I have no problem with the idea of encouraging clients to authenticate the server, but this configuration doesn't help with defaults. It's just available as a tool for site administrators to use. > Also, getting people in the habit of setting up for mutual > authentication does have value in that scenario too; it makes the new > user perhaps a bit more likely to distrust a server that isn't > presenting the right certificate. I see Naz's argument as addressing this goal. The problem with forcing authentication is that it's an all-or-nothing proposition: either the server and all the clients do it, or none of them do. That's fine when you control all the pieces and are willing to put in the work to configure them all, but not effective for encouraging default behavior. Instead, give the server credentials by default, but let clients choose whether to request them. That makes deployment easier in that all you have to do is configure clients as needed to get authentication of the server. Easier deployment means it's more likely to be used. IOW, put up both http and https out of the box. You might even want to have newer clients default to caching credentials on the first connect. That still doesn't change the security model, but should be more effective at getting clients to do something useful by default.
On Fri, Dec 28, 2007 at 07:48:22AM -0800, Trevor Talbot wrote: > I don't follow. What are banks doing on the web now to force clients > to authenticate them, and how is it any different from the model of > training users to check the SSL certificate? Some banks (mostly Swiss and German, from what I've seen) are requiring two-token authentication, and that second "token" is really the way that the client authenticates the server: when you "install" your banking application, you're really installing the keys you need to authenticate the server and for the server to authenticate you. > There's a fundamental problem that you can't make someone else do > authentication if they don't want to, and that's exactly the situation > clients are in. Right, but you can train users to expect authentication of the server. One way to do that is to require them to use an intrusive enough system that they end up learning what to look for in a phish attack. That said, I tend to agree with you: if we had dnssec everywhere today, it's totally unclear to me what client applications would do in the event they got a "bogus" resolution. A
Andrew Sullivan wrote: > On Fri, Dec 28, 2007 at 07:48:22AM -0800, Trevor Talbot wrote: >> I don't follow. What are banks doing on the web now to force clients >> to authenticate them, and how is it any different from the model of >> training users to check the SSL certificate? > > Some banks (mostly Swiss and German, from what I've seen) are requiring > two-token authentication, and that second "token" is really the way that the > client authenticates the server: when you "install" your banking > application, you're really installing the keys you need to authenticate the > server and for the server to authenticate you. I have done this for my own application before. Although the client and server use standard TLS 1.0 to speak to each other with a required authentication of RSA 1024-bit and a required encryption of AES 128-bit, it still requires that passwords sent from the client to the server are RSA encrypted using the server public certificate, making it impossible for anybody except for the legitimate server to see the password. One benefit of this is that the password itself can be '\0'd out as soon as we have RSA encrypted it, and things like a core dump of the client have a lower chance of including the password in plain text. In my case, the reason I did it is because I was trying to navigate around the US export control regulations that prevent greater than 1024-bit asymmetric or 128-bit symmetric from leaving the US. I was able to use the standard Java SSL and crypto libraries to achieve greater than 128-bit symmetric encryption by combining the two. Now, my implementation isn't perfect with regard to Andrew's comments, as I encrypt using the server's public certificate after authenticating it. Technically, however, I could actually have two server certificates - one to use for authentication, and one to use for encryption. I believe this is becoming common in some circles, and you will find that gpg uses DSA keys for authentication, and signs the RSA keys used for encryption with the DSA key. The DSA key can be more bits, or have a longer lifetime. At what point does prudence become paranoia? I don't know. In my case, I felt 128-bit encryption was insufficient for protecting the passwords in my application. 256-bit encryption would have been sufficient, but that cannot yet be safely exported from the US to the countries I required. Cheers, mark -- Mark Mielke <mark@mielke.cc>
Andrew Sullivan wrote: > On Fri, Dec 28, 2007 at 07:48:22AM -0800, Trevor Talbot wrote: >> I don't follow. What are banks doing on the web now to force clients >> to authenticate them, and how is it any different from the model of >> training users to check the SSL certificate? > > Some banks (mostly Swiss and German, from what I've seen) are requiring > two-token authentication, and that second "token" is really the way that the > client authenticates the server: when you "install" your banking > application, you're really installing the keys you need to authenticate the > server and for the server to authenticate you. Most actually secure banks would be using standalone tokens, and not something that runs on your local machine and can easily be compromised. There needs to be air between the token and the computer. The exact difference in security is always debatable, but "air gap" tokens are what's been used by most banks here for many years - in many cases since they first started doing internet banking 10+ years ago. But that's for authenticating the *client*. Authenticating the server in the end requires you to trust the security of the client machine, and requiring special applications for that just makes it worse :-( And in the end, the only thing they really do is implement the browser the way it should've been implemented in the first place. The bottom line is still that the security against that has to happen on the client side. We could make it so that we *require* the root certificate to be present on the client and make the check, and simply refuse to connect without it. But my guess is that it'll just raise the bar for SSL adoption, whilst most people will find some insecure way to get the root key over there anyway. Unless we want to start shipping our own batch of trusted roots, and only support paid-for certificates or something... >> There's a fundamental problem that you can't make someone else do >> authentication if they don't want to, and that's exactly the situation >> clients are in. > > Right, but you can train users to expect authentication of the server. One > way to do that is to require them to use an intrusive enough system that > they end up learning what to look for in a phish attack. That said, I tend > to agree with you: if we had dnssec everywhere today, it's totally unclear > to me what client applications would do in the event they got a "bogus" > resolution. Well, we all know how well the big warning boxes in the web browsers work... You can't really trust the user to make such a decision, in the end. You can get to a point, but not all the way by far. But what you can do is that as an administrator, you can require these checks. If you only allow connections from machines that are trusted, and you make sure those are configured to require verification of the server cert, then you're safe. //Magnus
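For reference, a sketch of the pieces involved in that client-side check as things stand (file names follow the defaults of this era's docs; ca.crt is an illustrative name for the CA certificate that signed the server's certificate):

    # server side (postgresql.conf): turn on SSL; server.crt and
    # server.key live in the data directory
    ssl = on

    # client side: installing the CA certificate is what enables
    # verification of the server certificate during the handshake
    cp ca.crt ~/.postgresql/root.crt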
Mark Mielke wrote: > Andrew Sullivan wrote: >> On Fri, Dec 28, 2007 at 07:48:22AM -0800, Trevor Talbot wrote: >> >>> I don't follow. What are banks doing on the web now to force clients >>> to authenticate them, and how is it any different from the model of >>> training users to check the SSL certificate? >>> >> >> Some banks (mostly Swiss and German, from what I've seen) are requiring >> two-token authentication, and that second "token" is really the way that the >> client authenticates the server: when you "install" your banking >> application, you're really installing the keys you need to authenticate the >> server and for the server to authenticate you. >> > I have done this for my own application before. Although the client and > server use standard TLS 1.0 to speak to each other with a required > authentication of RSA 1024-bit and a required encryption of AES 128-bit, > it still requires that passwords sent from the client to the server are > RSA encrypted using the server public certificate, making it impossible > for anybody except for the legitimate server to see the password. One > benefit of this is that the password itself can be '\0'd out as soon as > we have RSA encrypted it, and things like a core dump of the client have > a lower chance of including the password in plain text. Why are you even using a password in this case, and not just key-based auth? Wouldn't that be even easier and more secure? > At what point does prudence become paranoia? I don't know. In my case, I > felt 128-bit encryption was insufficient for protecting the passwords in > my application. 256-bit encryption would have been sufficient, but that > cannot yet be safely exported from the US to the countries I required. How do you protect the certificate store on the client? Or the binary that ends up prompting for the password on the client? //Magnus
Magnus Hagander wrote: > Mark Mielke wrote: > > I have done this for my own application before. Although the client and > > server use standard TLS 1.0 to speak to each other with a required > > authentication of RSA 1024-bit and a required encryption of AES 128-bit, > > it still requires that passwords sent from the client to the server are > > RSA encrypted using the server public certificate, making it impossible > > for anybody except for the legitimate server to see the password. > > Why are you even using a password in this case, and not just key-based > auth? Wouldn't that be even easier and more secure? Users of this product don't have keys - they have passwords. The username/password is for per-user authentication. The username defines the access level. Many users will use the same client. The client does have its own private RSA key and public certificate; however, this grants entry to the system. Password login is still required by the users of the client. > > At what point does prudence become paranoia? I don't know. In my case, I > > felt 128-bit encryption was insufficient for protecting the passwords in > > my application. 256-bit encryption would have been sufficient, but that > > cannot yet be safely exported from the US to the countries I required. > > How do you protect the certificate store on the client? Or the binary > that ends up prompting for the password on the client? The certificate on the client grants access to the system. It does not grant access to the resources on the system. Two-level authentication, with mandatory server authentication. You see similar things in physical security: a security badge lets you in the door - but you still need to log in to the computer once you get in. As for protecting the binary that prompts for a password on the client - I didn't bother with this, although Java does allow for signed jar files that would let the user be assured that the client is legitimate. There are always loopholes, though; just because the client is legitimate doesn't mean the keyboard is, and so on. You end up putting in enough effort to mitigate the risk. The risk always exists, but through clever, cryptographic, or obfuscatory measures, the risk can be greatly reduced. Cheers, mark -- Mark Mielke <mark@mielke.cc>
Magnus Hagander wrote: > We could make it so that we *require* the root certificate to be present > on the client and make the check, and simply refuse to connect without > it. But my guess is that it'll just increase the bar for SSL adoption at > all, whilst most people will find some insecure way to get the root key > over there anyway. Unless we want to start shipping our own batch of > trusted roots, and only support paid-for certificates or something... Agreed. Requiring client root certificate checking is heavy-handed. At most we could emit a server log message when a client has no certificate. Of course I am not sure anyone knows how to get that information from SSL. We could do it in the clients we ship but a malicious client will just remove the check. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Bruce Momjian <bruce@momjian.us> writes: > Agreed. Requiring client root certificate checking is heavy-handed. There seems to be some confusion here. I didn't think anyone was proposing that we force every installation to require client root certificate checking. What was under discussion (I thought) was providing the ability for a DBA to *choose* to require it. > Of course I am not sure anyone knows how to get that information from > SSL. Yeah, if OpenSSL doesn't support testing for this then the discussion is moot... regards, tom lane
Tom Lane wrote: > Bruce Momjian <bruce@momjian.us> writes: > > Agreed. Requiring client root certificate checking is heavy-handed. > > There seems to be some confusion here. I didn't think anyone was > proposing that we force every installation to require client root > certificate checking. What was under discussion (I thought) was > providing the ability for a DBA to *choose* to require it. Oh, yea, that would be OK. I am a little worried that the extra configuration required to turn this on/off might be added complexity for little gain. It might be simpler to allow the administrator to control whether non-checking clients are logged, rather than refusing the connection. I think this makes it clearer the root client check is to make sure all your clients are doing it right, rather than an actual security enhancement (if that makes sense). > > Of course I am not sure anyone knows how to get that information from > > SSL. > > Yeah, if OpenSSL doesn't support testing for this then the discussion > is moot... Yea. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Tomasz Ostrowski wrote: > On Sun, 23 Dec 2007, Tom Lane wrote: > > > ISTM we have these action items: > > 1. Improve the code so that SSL authentication can be used across a > > Unix-socket connection (we can disable encryption though). > > I've just realised that there's a problem with SSL with disabled > encryption on a unix socket / localhost connections for cpu-saving. > Any local user using this attack would be able to eavesdrop on > everything coming through a socket. > > If an attacker just acts as a tunnel, hijacking a unix socket and > talking to a server using any other interface (or the other way > around), then he would not be able to modify the information flow, > but he would be able to read and save everything going to and from a > server. It is again not obvious, as normally local connections are > not susceptible to eavesdropping. And it could go unnoticed for a > long time as everything would just work normally. > > So I think no cpu-saving by turning off encryption should be done. > > And this would all not help against a denial-of-service attack. Good point. I have added the last two sentences to the documentation paragraph to highlight this issue: <productname>OpenSSL</productname> supports a wide range of ciphers and authentication algorithms, of varying strength. While a list of ciphers can be specified in the <productname>OpenSSL</productname> configuration file, you can specify ciphers specifically for use by the database server by modifying <xref linkend="guc-ssl-ciphers"> in <filename>postgresql.conf</>. It is possible to have authentication without the overhead of encryption by using <literal>NULL-SHA</> or <literal>NULL-MD5</> ciphers. However, a man-in-the-middle could read and pass communications between client and server. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote: > Good point. I have added the last two sentences to the documentation > paragraph to highlight this issue: > > <productname>OpenSSL</productname> supports a wide range of ciphers > and authentication algorithms, of varying strength. While a list of > ciphers can be specified in the <productname>OpenSSL</productname> > configuration file, you can specify ciphers specifically for use by > the database server by modifying <xref linkend="guc-ssl-ciphers"> in > <filename>postgresql.conf</>. It is possible to have authentication > without the overhead of encryption by using <literal>NULL-SHA</> or > <literal>NULL-MD5</> ciphers. However, a man-in-the-middle could read > and pass communications between client and server. > A fact that the above misses, is that symmetric key encryption is actually quite cheap. It is asymmetric key encryption that is expensive. If you look up information on SSL accelerators, you will find claims that the initial SSL authentication negotiation is 1000X as expensive as the actual data encryption for a running session, and that SSL web services are usually limited by their ability to negotiate NEW sessions. In other words, as well intentioned and accurate as the claim you make above, it may be irrelevant in many real world scenarios. If you are going to go through all the expensive processing of having authentication enabled, you may as well have encryption enabled too. Cheers, mark -- Mark Mielke <mark@mielke.cc>
Tom Lane wrote: > Bruce Momjian <bruce@momjian.us> writes: > > Agreed. Requiring client root certificate checking is heavy-handed. > > There seems to be some confusion here. I didn't think anyone was > proposing that we force every installation to require client root > certificate checking. What was under discussion (I thought) was > providing the ability for a DBA to *choose* to require it. > > > Of course I am not sure anyone knows how to get that information from > > SSL. > > Yeah, if OpenSSL doesn't support testing for this then the discussion > is moot... I believe SSL is only capable of letting you know whether authentication for each end point was 1) not requested, 2) optionally requested, or 3) required. Note that even if the authentication is required, there is no way to know how authentication was performed. For example, did it check the signature chain, requiring it to map to a public root certificate list used by most web browsers? If so, did it check the contents of the certificate, or only that it exists? Did it check a local key store that has a copy of the public key certificate? Or did it just log the certificate subject? OpenSSH, for instance, presents the user with the fingerprint of the certificate and asks you: $ ssh 192.168.0.1 The authenticity of host '192.168.0.1 (192.168.0.1)' can't be established. RSA key fingerprint is 3e:a7:0f:04:60:7e:8e:64:52:bf:81:92:a9:05:c7:36. Are you sure you want to continue connecting (yes/no)? While this certainly gives you the opportunity to challenge it, I don't know of any person who actually checks this fingerprint. Luckily, it stores it to ~/.ssh/known_hosts, and so the real issue is that if it suddenly changes, you get a warning. Still, I've seen the warning before, and realized that "oh yes, that machine was upgraded, so it probably has a new public key". I have never personally checked the fingerprint against a known source. Authentication is only as strong as the person or process confirming it. In the case of trying to force a client to authenticate the server, this requires the client to know who the server is. As most clients will not know who the server is, I see clients implementing an OpenSSH-style authentication model (shown above), or providing their own no-op authentication routine to OpenSSL. I don't think it is worth it, and I don't think it would work. Cheers, mark -- Mark Mielke <mark@mielke.cc>
Mark Mielke wrote:
> Bruce Momjian wrote:
> > Good point.  I have added the last two sentences to the documentation
> > paragraph to highlight this issue:
> >
> >    <productname>OpenSSL</productname> supports a wide range of ciphers
> >    and authentication algorithms, of varying strength.  While a list of
> >    ciphers can be specified in the <productname>OpenSSL</productname>
> >    configuration file, you can specify ciphers specifically for use by
> >    the database server by modifying <xref linkend="guc-ssl-ciphers"> in
> >    <filename>postgresql.conf</>.  It is possible to have authentication
> >    without the overhead of encryption by using <literal>NULL-SHA</> or
> >    <literal>NULL-MD5</> ciphers.  However, a man-in-the-middle could read
> >    and pass communications between client and server.
>
> A fact that the above misses is that symmetric key encryption is
> actually quite cheap. It is asymmetric key encryption that is expensive.
> If you look up information on SSL accelerators, you will find claims
> that the initial SSL authentication negotiation is 1000X as expensive as
> the actual data encryption for a running session, and that SSL web
> services are usually limited by their ability to negotiate NEW sessions.
> In other words, well intentioned and accurate as the claim above is, it
> may be irrelevant in many real-world scenarios. If you are going to go
> through all the expensive processing of having authentication enabled,
> you may as well have encryption enabled too.

OK, updated paragraph:

    It is possible to have authentication without encryption overhead by
    using <literal>NULL-SHA</> or <literal>NULL-MD5</> ciphers.  However,
    a man-in-the-middle could read and pass communications between client
    and server.  Also, encryption overhead is minimal compared to the
    overhead of authentication.  For these reasons NULL ciphers are not
    recommended.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
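For reference, the setting under discussion lives in postgresql.conf and would look something like this (the NULL ciphers are shown only to mirror the paragraph above; as it says, they are not recommended):

    ssl = on
    ssl_ciphers = 'NULL-SHA:NULL-MD5'    # authentication only, no encryption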
Bruce Momjian wrote: > OK, updated paragraph: > > It is possible to have authentication without encryption overhead by > using <literal>NULL-SHA</> or <literal>NULL-MD5</> ciphers. However, > a man-in-the-middle could read and pass communications between client > and server. Also, encryption overhead is minimal compared to the > overhead of authentication. For these reasons NULL ciphers are not > recommended. > Looks good! Cheers, mark -- Mark Mielke <mark@mielke.cc>
Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
>> Agreed.  Requiring client root certificate checking is heavy-handed.
>
> There seems to be some confusion here.  I didn't think anyone was
> proposing that we force every installation to require client root
> certificate checking.  What was under discussion (I thought) was
> providing the ability for a DBA to *choose* to require it.

Ok, at least someone is partly lost in this discussion, and I'm getting a sneaking suspicion it's me :-)

We already *do* allow the DBA to choose this, no? If you put the root certificate on the client, it *will* verify the server cert, and it *will* refuse to connect to a server that can't present a trusted root cert.

Hang on, maybe I get what you're referring to now - we don't check the Common Name field on the certificate, so *any* trusted certificate would be ok. An incorrect common name generally results in a warning in a browser, for example, but we accept it fine. We do store it in conn->peer_cn, so the client can check it if they need to. But we don't enforce it.

Or are you saying that the *server* should require that the client has done verification, by a config string? If so, I just don't see how that's possible in any meaningful way.

>> Of course I am not sure anyone knows how to get that information from
>> SSL.
>
> Yeah, if OpenSSL doesn't support testing for this then the discussion
> is moot...

AFAIK, our current OpenSSL code supports verifying both client and server certificates. If we want to, as Bruce suggested, emit a log message when the client hasn't provided a certificate, we can certainly do so. But I thought this thread was about impersonating the server, not the client...

Emitting such a log message in cases where the system isn't configured to use client certificates at all would cause a whole lot of unnecessary logging for all cases that don't use client certificates. And if you *do* use client certificates, it's not going to get emitted because you can't even *connect* without having one.

Now, if/when we actually support authenticating with client certificates (as I said, I hope to work on this), the equation is different because then you can set it per hba line using the authentication method. But just enabling such a thing globally is a very blunt instrument...

//Magnus
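For illustration, the client-side arrangement Magnus describes is just a file drop in libpq's default location (the host name here is made up):

    $ cp trusted-ca.crt ~/.postgresql/root.crt
    $ PGSSLMODE=require psql -h db.example.com mydb

Once ~/.postgresql/root.crt exists, libpq refuses to complete the connection if the server's certificate isn't signed by that CA. As Magnus notes, matching the certificate's Common Name against the host name is left to the application, which can read it from conn->peer_cn.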
Mark Mielke wrote:
> Tom Lane wrote:
>> Bruce Momjian <bruce@momjian.us> writes:
>>> Agreed.  Requiring client root certificate checking is heavy-handed.
>>
>> There seems to be some confusion here.  I didn't think anyone was
>> proposing that we force every installation to require client root
>> certificate checking.  What was under discussion (I thought) was
>> providing the ability for a DBA to *choose* to require it.
>>
>>> Of course I am not sure anyone knows how to get that information from
>>> SSL.
>>
>> Yeah, if OpenSSL doesn't support testing for this then the discussion
>> is moot...
>
> I believe SSL is only capable of letting you know whether authentication
> for each end point was 1) not requested, 2) optionally requested, or 3)
> required. Note that even if authentication is required, there is no way
> to know how the authentication was performed. For example, did it check
> the signature chain, requiring it to map to the public root certificate
> lists used by most web browsers? If so, did it check the contents of the
> certificate, or did it only check that it exists? Did it check a local
> key store that has a copy of the public key certificate? Or did it just
> log the certificate subject?

That is exactly my point. The server can never know if the client has actually verified anything. It can provide the client with the *means* to verify things, but it can't enforce it.

A naive implementation would have a flag in the protocol that says "enforce client to validate server certs". The MITM attacker could then just remove this flag in the stream before it arrives at the client, and it's gone. And the kind of attack we would be trying to protect from here is exactly the one that can trivially remove such a check. It would just be a no-op with administrative overhead, really.

The bottom line is that the server cannot be responsible for client security. Only the client can be.

> OpenSSH, for instance, presents the user with the fingerprint of the
> certificate and asks you:
>
> $ ssh 192.168.0.1
> The authenticity of host '192.168.0.1 (192.168.0.1)' can't be established.
> RSA key fingerprint is 3e:a7:0f:04:60:7e:8e:64:52:bf:81:92:a9:05:c7:36.
> Are you sure you want to continue connecting (yes/no)?
>
> While this certainly gives you the opportunity to challenge it, I don't
> know of any person who actually checks this fingerprint. Luckily, it
> stores the fingerprint in ~/.ssh/known_hosts, so the real issue is that
> if it suddenly changes, you get a warning. Still, I've seen the warning
> before and realized "oh yes, that machine was upgraded, so it probably
> has a new public key". I have never personally checked the fingerprint
> against a known source. Authentication is only as strong as the person
> or process confirming it. In the case of trying to force a client to
> authenticate the server, this requires the client to know who the server
> is. As most clients will not know who the server is, I see clients
> implementing an OpenSSH-style authentication model (shown above), or
> providing their own no-op authentication routine to OpenSSL. I don't
> think it is worth it, and I don't think it would work.

Yeah, it *is* decent protection against it suddenly changing. But as you say, that requires the original fingerprint to be stored at the client. Just like if you store the root or server cert on a libpq client, it will refuse to connect if the server suddenly presents an untrusted certificate.
(So in that way we're actually *more* secure than OpenSSH, since we don't give you a prompt to ignore an untrusted root cert - we just refuse to connect. Unless you manually disable the check by removing the file.) //Magnus
On Sat, Dec 29, 2007 at 12:40:24PM +0100, Magnus Hagander wrote: > We already *do* allow the DBA to choose this, no? If you put the root > certificate on the client, it *will* verify the server cert, and it > *will* refuse to connect to a server that can't present a trusted root cert. I think Tom's point is that we don't allow this for connections over a Unix Domain socket. And thus we should remove the asymmetry so the verification can work for them also. Personally I quite liked the idea of having a serveruser=foo which is checked by getting the peer credentials. Very low cost, quick setup solution. Have a nice day, -- Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/ > Those who make peaceful revolution impossible will make violent revolution inevitable. > -- John F Kennedy
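A minimal sketch of what that client-side check could look like on Linux follows (SO_PEERCRED is Linux-specific; the BSDs offer getpeereid() instead, and both the serveruser parameter and this helper are hypothetical, not existing libpq code):

    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <pwd.h>
    #include <string.h>

    /* Return 1 if the process on the other end of the connected
     * unix-domain socket fd runs as user "expected", 0 if not,
     * -1 on error. */
    static int
    peer_is_user(int fd, const char *expected)
    {
        struct ucred cred;
        socklen_t len = sizeof(cred);
        struct passwd *pw;

        if (getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0)
            return -1;
        pw = getpwuid(cred.uid);        /* map the peer's uid to a name */
        if (pw == NULL)
            return -1;
        return strcmp(pw->pw_name, expected) == 0;
    }

The appeal is what Martijn says: one getsockopt() call, no cryptography, and an impostor postmaster started by an ordinary user cannot forge the uid the kernel reports for its end of the socket.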
Martijn van Oosterhout wrote: > On Sat, Dec 29, 2007 at 12:40:24PM +0100, Magnus Hagander wrote: >> We already *do* allow the DBA to choose this, no? If you put the root >> certificate on the client, it *will* verify the server cert, and it >> *will* refuse to connect to a server that can't present a trusted root cert. > > I think Tom's point is that we don't allow this for connections over a > Unix Domain socket. And thus we should remove the asymmetry so the > verification can work for them also. If that's where we still are, then I'm all for that provided it doesn't add a whole lot of complexity, as I think I said before. I thought we were now talking general SSL connections. That could be where I lost the thread :-) > Personally I quite liked the idea of having a serveruser=foo which is > checked by getting the peer credentials. Very low cost, quick setup > solution. It would still only tell you the user and not the postmaster ;-) But yes, it does help in the unix domain case (but not TCP-over-localhost). Either that, or a function that returns the peer credentials if available - like we have for SSL today. Then the client could do some more advanced checking if necessary - like allowing multiple different accounts if wanted. //Magnus
Mark Mielke wrote:
> Magnus Hagander wrote:
>> Mark Mielke wrote:
>>> I have done this for my own application before. Although the client and
>>> server use standard TLS 1.0 to speak to each other with a required
>>> authentication of RSA 1024-bit and a required encryption of AES 128-bit,
>>> it still requires that passwords sent from the client to the server are
>>> RSA encrypted using the server public certificate, making it impossible
>>> for anybody except for the legitimate server to see the password. One
>>> benefit of this is that the password itself can be '\0'd out as soon as
>>> we have RSA encrypted it, and things like a core dump of the client have
>>> a lower chance of including the password in plain text.
>>
>> Why are you even using a password in this case, and not just key-based
>> auth? Wouldn't that be even easier and more secure?
>
> Users of this product don't have keys - they have passwords. The
> username/password is for per-user authentication. The username defines
> the access level. Many users will use the same client. The client does
> have its own private RSA key and public certificate; however, this
> grants entry to the system. Password login is still required by the
> users of the client.

And you have one private key *per client*? That's an interesting approach - and actually how pg will work if you enable client cert checking :-)

It's probably about as far as you can get as long as you use passwords. If you want something that's really secure, you just have to give up using passwords. Solutions like one-time passwords from a token or certificates on a smartcard are what people use then :-)

>>> At what point does prudence become paranoia? I don't know. In my case, I
>>> felt 128-bit encryption was insufficient for protecting the passwords in
>>> my application. 256-bit encryption would have been sufficient, but that
>>> cannot yet be safely exported from the US to the countries I required.
>>
>> How do you protect the certificate store on the client? Or the binary
>> that ends up prompting for the password on the client?
>
> The certificate on the client grants access to the system. It does not
> grant access to the resources on the system. Two-level authentication
> with mandatory server authentication. You see similar things in physical
> security instances. A security badge lets you in the door - but you
> still need to log in to the computer once you get in.
>
> As for protecting the binary that prompts for a password on the client -
> I didn't bother with this, although Java does allow for signed jar files
> that would allow the user to be assured that the client is legitimate.

Only as long as you can trust the JRE... And the OS... (yeah, reaching, but it still goes to prove the point that the system *cannot* be secure if people can change your client code/machine. It can be secure from a server or network POV, but not from a client one)

> There are always loops though, just because the client is legitimate
> doesn't mean that the keyboard is, and so on. You end up putting in
> enough effort to mitigate the risk. The risk always exists, but through
> clever, cryptographic, or obfuscatory measures, the risk can be greatly
> reduced.

Right.

//Magnus
On Sat, 29 Dec 2007 12:45:26 +0100 Magnus Hagander <magnus@hagander.net> wrote:
> That is exactly my point. The server can never know if the client has
> actually verified anything. It can provide the client with the *means*
> to verify things, but it can't enforce it.

I know this is probably obvious to most people in this discussion, and I don't mean to impugn Magnus just because I am latching onto his message to make this point, but I suspect that this discussion would go a lot smoother if it branched into two completely different discussions about two completely different issues:

 - 1: How does the client assure that the postmaster is legit
 - 2: How does the postmaster assure that the client is legit

Does anyone think that there is one answer to both?

--
D'Arcy J.M. Cain <darcy@druid.net>         |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.
Magnus Hagander wrote:
> Mark Mielke wrote:
>>> Why are you even using a password in this case, and not just key-based
>>> auth? Wouldn't that be even easier and more secure?
>> Users of this product don't have keys - they have passwords. The
>> username/password is for per-user authentication. The username defines
>> the access level. Many users will use the same client. The client does
>> have its own private RSA key and public certificate; however, this
>> grants entry to the system. Password login is still required by the
>> users of the client.
>
> And you have one private key *per client*? That's an interesting
> approach - and actually how pg will work if you enable client cert
> checking :-)

Yep.

> It's probably about as far as you can get as long as you use passwords.
> If you want something that's really secure, you just have to give up
> using passwords. Solutions like one-time passwords from a token or
> certificates on a smartcard are what people use then :-)

Yes. A pseudo-random number generator on an LCD display that changes every 60 seconds, or one of those government satellite-based systems. :-)

Even still, it's often two forms of authentication. With the SecureID cards, the number proves you have the physical card in your possession, and the password proves you have access to the person's brain (or the piece of paper they stupidly wrote their password on :-) ). Most of these systems are not necessarily effective against kidnapping the person and threatening to kill them. However, they are very effective against random hackers on the Internet who are doing trial and error or some other approach. By denying entry BEFORE the password is provided, they are unable to guess passwords and get lucky.

>> The certificate on the client grants access to the system. It does not
>> grant access to the resources on the system. Two-level authentication
>> with mandatory server authentication. You see similar things in
>> physical security instances. A security badge lets you in the door -
>> but you still need to log in to the computer once you get in.
>>
>> As for protecting the binary that prompts for a password on the client -
>> I didn't bother with this, although Java does allow for signed jar
>> files that would allow the user to be assured that the client is
>> legitimate.
>
> Only as long as you can trust the JRE... And the OS... (yeah, reaching,
> but it still goes to prove the point that the system *cannot* be secure
> if people can change your client code/machine. It can be secure from a
> server or network POV, but not from a client one)

Correct. I believe this is why I didn't bother. I saw value in using better than 128-bit AES for the password (as per US export control regulations), but not for the data (the data was primarily a list of privileged write requests), and I saw value in making the password unreadable as soon as possible (the client might be long running, but it turns the password into RSA-encrypted data soon after you hit ENTER, and reuses this for the length of the session if the password is required again). I didn't see value in protecting the client.

>> There are always loops though, just because the client is legitimate
>> doesn't mean that the keyboard is, and so on. You end up putting in
>> enough effort to mitigate the risk. The risk always exists, but through
>> clever, cryptographic, or obfuscatory measures, the risk can be greatly
>> reduced.
>
> Right.

If it was an easy problem, somebody would have solved it once and for all, and the CIA would be out of business... :-)

Cheers,
mark

--
Mark Mielke <mark@mielke.cc>
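For what it's worth, the password-sealing scheme Mark describes can be sketched with stock OpenSSL calls (the seal_password() helper is illustrative, not code from his product; link with -lcrypto):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/rsa.h>
    #include <openssl/pem.h>

    /* Encrypt the password under the server's public key, then wipe the
     * plaintext so it cannot turn up later in a core dump.  Returns the
     * ciphertext length, or -1 on failure.  (OAEP padding limits the
     * plaintext to RSA_size() - 41 bytes; passwords fit easily.) */
    static int
    seal_password(FILE *server_pubkey_pem, char *password,
                  unsigned char *out, int outlen)
    {
        RSA *rsa = PEM_read_RSA_PUBKEY(server_pubkey_pem, NULL, NULL, NULL);
        int n = -1;

        if (rsa != NULL && RSA_size(rsa) <= outlen)
            n = RSA_public_encrypt(strlen(password),
                                   (unsigned char *) password,
                                   out, rsa, RSA_PKCS1_OAEP_PADDING);
        memset(password, 0, strlen(password));   /* wipe immediately */
        if (rsa != NULL)
            RSA_free(rsa);
        return n;
    }

Only the holder of the matching private key - the legitimate server - can recover the password, so even a man in the middle who terminates the TLS session only ever sees ciphertext.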
D'Arcy J.M. Cain wrote: > - 1: How does the client assure that the postmaster is legit > - 2: How does the postmaster assure that the client is legit > > > And neither answers the original problem: 3. How can the sysadmin prevent a malicious local user from hijacking the sockets if the postmaster isn't running? Prevention is much more valuable than ex post detection, IMNSHO. Probably the first answer is not to run postgres on a machine with untrusted users, but that's not always possible. Maybe we can't find a simple cross-platform answer, but that doesn't mean we should not look at platform-specific answers, at least for documentation. cheers andrew
On Sat, 29 Dec 2007 10:38:13 -0500 Andrew Dunstan <andrew@dunslane.net> wrote: > > > D'Arcy J.M. Cain wrote: > > - 1: How does the client assure that the postmaster is legit > > - 2: How does the postmaster assure that the client is legit > > And neither answers the original problem: Which seems to have been lost in the noise. > 3. How can the sysadmin prevent a malicious local user from hijacking > the sockets if the postmaster isn't running? A better way of stating it for sure. > Prevention is much more valuable than ex post detection, IMNSHO. > > Probably the first answer is not to run postgres on a machine with > untrusted users, but that's not always possible. Maybe we can't find a > simple cross-platform answer, but that doesn't mean we should not look > at platform-specific answers, at least for documentation. Yes, that's what I said at the start of this discussion. If you don't trust the users with actual access to the box, the rest of this is pretty much academic. -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
* D'Arcy J.M. Cain (darcy@druid.net) wrote:
> > Probably the first answer is not to run postgres on a machine with
> > untrusted users, but that's not always possible.  Maybe we can't find a
> > simple cross-platform answer, but that doesn't mean we should not look
> > at platform-specific answers, at least for documentation.
>
> Yes, that's what I said at the start of this discussion.  If you don't
> trust the users with actual access to the box, the rest of this is
> pretty much academic.

Academic from an upstream standpoint, but there are platform-specific / setup-specific things you can do (SELinux, vserver/jails, Kerberos, SSL, etc...). Documenting it is good, but I think it should really be to the extent of saying "look, 5432 is unprivileged, here are some ways to deal with that" and "you should probably put the PG unix socket in a secured directory" (though Debian and I suspect many other distributions do this part for you).

Enjoy,

Stephen
Andrew Dunstan wrote: > D'Arcy J.M. Cain wrote: >> - 1: How does the client assure that the postmaster is legit >> - 2: How does the postmaster assure that the client is legit > And neither answers the original problem: > 3. How can the sysadmin prevent a malicious local user from hijacking > the sockets if the postmaster isn't running? > Prevention is much more valuable than ex post detection, IMNSHO. > Probably the first answer is not to run postgres on a machine with > untrusted users, but that's not always possible. Maybe we can't find a > simple cross-platform answer, but that doesn't mean we should not look > at platform-specific answers, at least for documentation. I thought this answer was already provided: Put the socket in a directory that is only writable by the database owner. The socket is created as part of the bind() process. I think this covers 90%+ of it, and is already in use by distributions. The only thing "better" this team could do would be to formalize it? The "serveruser=" db open parameter might be enough to lock it up tight if there is still a race condition on bind(). It's effectively a very cheap authentication mechanism that does not require expensive cryptographic operations. There is probably value to making SSL consistent for TCP/UNIX sockets as Tom suggests. Removing the inconsistency as it were, and allowing for SSL authentication and encryption for UNIX sockets the same as for TCP sockets. If it was as simple as removing an if statement that would be even cooler... :-) What has come out for me is that this isn't UNIX socket specific at all (although there may be UNIX socket specific options available). The standard PostgreSQL port is above 1024, and anybody could bind()/listen()/accept() on it, assuming it is not running. This is where your first answer of running PostgreSQL on a machine with trusted users comes in as a sensible recommendation, even if only some people are willing to accept this recommendation. :-) Cheers, mark -- Mark Mielke <mark@mielke.cc>
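Spelled out, the directory trick Mark refers to is small (paths are illustrative; unix_socket_directory is the real 8.x parameter):

    # as root, once:
    mkdir /var/run/postgresql
    chown postgres /var/run/postgresql
    chmod 0755 /var/run/postgresql

    # in postgresql.conf:
    unix_socket_directory = '/var/run/postgresql'

    # clients then point at the directory:
    $ PGHOST=/var/run/postgresql psql mydb

Because only the postgres user can create files in that directory, no other local user can plant a decoy socket there, whether or not the postmaster is up.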
Magnus Hagander wrote:
> Martijn van Oosterhout wrote:
> > On Sat, Dec 29, 2007 at 12:40:24PM +0100, Magnus Hagander wrote:
> >> We already *do* allow the DBA to choose this, no? If you put the root
> >> certificate on the client, it *will* verify the server cert, and it
> >> *will* refuse to connect to a server that can't present a trusted root cert.
> >
> > I think Tom's point is that we don't allow this for connections over a
> > Unix Domain socket. And thus we should remove the asymmetry so the
> > verification can work for them also.
>
> If that's where we still are, then I'm all for that provided it doesn't
> add a whole lot of complexity, as I think I said before. I thought we
> were now talking general SSL connections. That could be where I lost the
> thread :-)

I think the user-visible impact of that addition would be to add 'localssl' in pg_hba.conf. It would be nice if we had made SSL control separate from the connection type, and perhaps we will explore that in 8.4. (None of this is for 8.3, I believe.)

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Mark Mielke wrote:
> Andrew Dunstan wrote:
>> D'Arcy J.M. Cain wrote:
>>> - 1: How does the client assure that the postmaster is legit
>>> - 2: How does the postmaster assure that the client is legit
>>
>> And neither answers the original problem:
>> 3. How can the sysadmin prevent a malicious local user from hijacking
>> the sockets if the postmaster isn't running?
>> Prevention is much more valuable than ex post detection, IMNSHO.
>> Probably the first answer is not to run postgres on a machine with
>> untrusted users, but that's not always possible. Maybe we can't find
>> a simple cross-platform answer, but that doesn't mean we should not
>> look at platform-specific answers, at least for documentation.
>
> I thought this answer was already provided: Put the socket in a
> directory that is only writable by the database owner. The socket is
> created as part of the bind() process. I think this covers 90%+ of it,
> and is already in use by distributions. The only thing "better" this
> team could do would be to formalize it? The "serveruser=" db open
> parameter might be enough to lock it up tight if there is still a race
> condition on bind(). It's effectively a very cheap authentication
> mechanism that does not require expensive cryptographic operations.

It's in use by some distributions, hardly all, or even a majority. AFAIK it's only in Debian + descendants.

Anyway, I think it could arguably make matters worse, not better, by guaranteeing that the postmaster can start up even if the TCP socket has been hijacked. That's why I suggested it might be useful to have a switch that says don't start if any interface fails to bind (which was the old pre-8.0 behaviour).

It might well be useful for us to look at drafting an SELinux policy, even if it's not universal. After all, this situation is precisely the sort of thing that SELinux is about, ISTM.

cheers

andrew
Andrew Dunstan wrote: > It might well be useful for us to look at drafting an SELinux policy, > even if it's not universal. After all, this situation is precisely the > sort of thing that SELinux is about, ISTM. http://code.google.com/p/sepgsql/ ??? Sincerely, Joshua D. Drake
Andrew Dunstan <andrew@dunslane.net> writes:
> It might well be useful for us to look at drafting an SELinux policy,

There already is one. However, I'm not sure that it's ever been reviewed by anyone who's Postgres-savvy (I certainly haven't looked at it :-(). It would be useful for that to happen.

Another thing is that we could stand to have some documentation on how to adjust the policy for local needs --- in particular, supporting tablespaces that're outside /var/lib/pgsql.

regards, tom lane
Mark Mielke <mark@mark.mielke.cc> writes: > What has come out for me is that this isn't UNIX socket specific at all > (although there may be UNIX socket specific options available). The > standard PostgreSQL port is above 1024, and anybody could > bind()/listen()/accept() on it, assuming it is not running. Right. The real bottom line is that a socket in /tmp is exactly as secure as a localhost TCP port. There is no value in debating moving the default socket location unless you are prepared to also relocate the default port to below 1024 (and even that helps only on Unix-y platforms). I remain of the opinion that what we should do about this is support SSL usage over sockets and document the issues. regards, tom lane
On Sat, 29 Dec 2007, Joshua D. Drake wrote: > http://code.google.com/p/sepgsql/ > ??? Getting that to work required some obtrusive changes to the source code, which they've only done to 8.2.4. Even that doesn't seem to be production-quality and it's not clear how that will make its way into newer versions yet. The job here is to work on the SELinux policies for PostgreSQL. You can't just re-use whatever work has gone into the SE-PostgreSQL ones, because those presume you're using their modified server instead of the regular one. I started collecting notes and writing a PostgreSQL/SELinux how-to aimed at RHEL 5.0+ but I'm not doing work in that area anymore. On reflection I might just release what I did so far to the developer's wiki and see if anybody else fills in the missing pieces. But unless there's somebody else with a burning need to work on this area I doubt that will happen--there's nothing about SELinux that anybody does just for fun. -- * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Sat, 29 Dec 2007 14:40:29 -0500 (EST) Greg Smith <gsmith@gregsmith.com> wrote:
> On Sat, 29 Dec 2007, Joshua D. Drake wrote:
>
>> http://code.google.com/p/sepgsql/
>> ???
>
> Getting that to work required some obtrusive changes to the source
> code, which they've only done to 8.2.4. Even that doesn't seem to be
> production-quality and it's not clear how that will make its way into
> newer versions yet.

"they've" has the potential to be "we"... As I recall the individual made a reasonable effort to introduce the work that he was doing to the community.

http://archives.postgresql.org/pgsql-hackers/2007-03/msg00271.php
http://archives.postgresql.org/pgsql-hackers/2007-04/msg00664.php

> The job here is to work on the SELinux policies for PostgreSQL. You
> can't just re-use whatever work has gone into the SE-PostgreSQL ones,
> because those presume you're using their modified server instead of
> the regular one.

Fair enough. I was just trying to offer a source to start with.

> But unless there's somebody else with a burning need to work on this
> area I doubt that will happen--there's nothing about SELinux that
> anybody does just for fun.

Ya think? :P

I recognize that this "SE PGSQL" has the potential to be a portability nightmare (as it only works on Linux) but it certainly has potential to give us a leg up on a lot of work. Anyway, not saying it's good code, but I did read the docs and it sure looks cool.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'
On Sat, 29 Dec 2007, Joshua D. Drake wrote:
> "they've" has the potential to be "we"... As I recall the individual
> made a reasonable effort to introduce the work that he was doing to the
> community.

After a bit of hindsight research, I think SE-PostgreSQL suffered from two timing problems combined with a cultural misperception. The first timing issue was that those messages went out just as the 8.3 feature freeze was going on. I know I looked at their stuff for a bit at that point, remembered I had patches to work on, and that was it. The second problem is that just after the first message to the list came out, RedHat released RHEL 5.0, which did a major reworking of SELinux that everyone could use on production systems immediately. I know all my SELinux time at that point immediately switched to working through the major improvements RHEL5 made rather than thinking about their project.

The cultural problem is that their deliverable was a series of RPM packages (for Fedora 7, ack). They also have a nice set of user documentation. But you can't send a message to this hackers list asking for feedback and hand that over as your reference. People here want code. When I wander through the threads that died, I think this message shows the mismatch best: http://archives.postgresql.org/pgsql-hackers/2007-04/msg00722.php

When Tom throws out an objection that a part of the design looks sketchy, the only good way to respond is to put the code out and let him take a look. I never saw the SE-PostgreSQL group even showing diffs of what they did; making it easy to get a fat context diff (with a bit more context than usual) would have done wonders for their project. You're not going to get help from this community if people have to install a source RPM and do their own diff just to figure out what was changed from the base.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
Tom Lane wrote: > This is basically the same old mutual authentication problem that SSL > was designed to solve by using certificates. I don't think we have > either the need or the expertise to re-invent that wheel. > > ISTM we have these action items: > > 1. Improve the code so that SSL authentication can be used across a > Unix-socket connection ... Added to TODO: * Allow SSL authentication/encryption over unix domain sockets -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Greg Smith wrote:
> On Sat, 29 Dec 2007, Joshua D. Drake wrote:
>
>> http://code.google.com/p/sepgsql/
>> ???
>
> Getting that to work required some obtrusive changes to the source code,
> which they've only done to 8.2.4. Even that doesn't seem to be
> production-quality and it's not clear how that will make its way into
> newer versions yet.

Sorry for my late response. I don't dispute your opinion about the quality issue. We indeed need more feedback and improvements from widespread viewpoints.

However, the description of SE-PostgreSQL's current status is a bit out of date. The latest one is sepostgresql-8.2.5-1.66.fc9, based on 8.2.5. See, http://download.fedora.redhat.com/pub/fedora/linux/development/

Currently, we are working to port the SE-PostgreSQL features to 8.3.x based PostgreSQL. (To be precise, it is based on an 8.3beta PostgreSQL.)

> The job here is to work on the SELinux policies for PostgreSQL. You
> can't just re-use whatever work has gone into the SE-PostgreSQL ones,
> because those presume you're using their modified server instead of the
> regular one.

Yes, SE-PostgreSQL requires the regular server to be stopped while it runs. We cannot use both of them at the same time. However, the default security policy is designed so that it behaves like the regular server without any special SELinux configuration. If you find any bug or unclear behavior, I would like you to report it.

> I started collecting notes and writing a PostgreSQL/SELinux how-to aimed
> at RHEL 5.0+ but I'm not doing work in that area anymore.

I'm interested in this effort. Could you tell me the URL?

Thanks,
--
OSS Platform Development Division, NEC
KaiGai Kohei <kaigai@ak.jp.nec.com>
Joshua D. Drake wrote:
> On Sat, 29 Dec 2007 14:40:29 -0500 (EST)
> Greg Smith <gsmith@gregsmith.com> wrote:
>
>> On Sat, 29 Dec 2007, Joshua D. Drake wrote:
>>
>>> http://code.google.com/p/sepgsql/
>>> ???
>> Getting that to work required some obtrusive changes to the source
>> code, which they've only done to 8.2.4. Even that doesn't seem to be
>> production-quality and it's not clear how that will make its way into
>> newer versions yet.
>
> "they've" has the potential to be "we"... As I recall the individual
> made a reasonable effort to introduce the work that he was doing to the
> community.
>
> http://archives.postgresql.org/pgsql-hackers/2007-03/msg00271.php
> http://archives.postgresql.org/pgsql-hackers/2007-04/msg00664.php

If my memory is correct, the alpha implementation was announced after the feature freeze date of 8.3.
# Sorry for my lack of understanding of the PostgreSQL development process.
Therefore, Tom suggested this kind of discussion should be restarted after the release of 8.3, and I agreed.

>> But unless there's somebody else with a burning need to work on this
>> area I doubt that will happen--there's nothing about SELinux that
>> anybody does just for fun.
>
> Ya think? :P
>
> I recognize that this "SE PGSQL" has the potential to be a portability
> nightmare (as it only works on Linux) but it certainly has potential to
> give us a leg up on a lot of work.

Yes, it works only on Linux. I added an --enable-selinux build option to the configure script. It prevents the SE-PostgreSQL feature from being enabled on any other platform.

> Anyway, not saying it's good code, but I did read the docs and it sure
> looks cool.

Thanks,
--
OSS Platform Development Division, NEC
KaiGai Kohei <kaigai@ak.jp.nec.com>
Greg Smith wrote:
> On Sat, 29 Dec 2007, Joshua D. Drake wrote:
>
>> "they've" has the potential to be "we"... As I recall the individual
>> made a reasonable effort to introduce the work that he was doing to the
>> community.
>
> After a bit of hindsight research, I think SE-PostgreSQL suffered from
> two timing problems combined with a cultural misperception. The first
> timing issue was that those messages went out just as the 8.3 feature
> freeze was going on. I know I looked at their stuff for a bit at that
> point, remembered I had patches to work on, and that was it.

Yes, it was my lack of understanding of the PostgreSQL development process.

> The second problem is that just after the first message to the
> list came out, RedHat released RHEL 5.0, which did a major reworking of
> SELinux that everyone could use on production systems immediately. I
> know all my SELinux time at that point immediately switched to working
> through the major improvements RHEL5 made rather than thinking about
> their project.

Most of the SELinux features in RHEL 5.0 are based on Fedora Core 6, which does not contain any SE-PostgreSQL support. We have to wait for the next major release of RHEL to use SE-PostgreSQL on a production system. If you want to try it out on a non-production system, Fedora 8 is the most recommendable environment.

> The cultural problem is that their deliverable was a series of RPM
> packages (for Fedora 7, ack). They also have a nice set of user
> documentation. But you can't send a message to this hackers list asking
> for feedback and hand that over as your reference. People here want
> code. When I wander through the threads that died, I think this message
> shows the mismatch best:
> http://archives.postgresql.org/pgsql-hackers/2007-04/msg00722.php

Hmm... I'll send it as a patch so we can discuss this feature. Please wait until we can port it to the latest PostgreSQL tree. (It is probably nonsense to discuss 8.2.x based patches.)

> When Tom throws out an objection that a part of the design looks
> sketchy, the only good way to respond is to put the code out and let
> him take a look. I never saw the SE-PostgreSQL group even showing diffs
> of what they did; making it easy to get a fat context diff (with a bit
> more context than usual) would have done wonders for their project.
> You're not going to get help from this community if people have to
> install a source RPM and do their own diff just to figure out what was
> changed from the base.

Thanks for your pointers.
--
OSS Platform Development Division, NEC
KaiGai Kohei <kaigai@ak.jp.nec.com>
On Sat, Dec 22, 2007 at 09:25:05AM -0500, Bruce Momjian wrote: > So, what solutions exist? We could require the use of port numbers less > than 1024 which typically require root and then become a non-root user, > but that requires root to start the server. We could put the unix I don't know about *requiring* this, but it would certainly be a nice option to have. Right now there's absolutely no way that you could get Postgres to use a port < 1024. -- Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org Give your computer some brain candy! www.distributed.net Team #1828