Thread: Successor of MD5 authentication, let's use SCRAM
The security of MD5 authentication is brought up every now and then, most recently here: http://archives.postgresql.org/pgsql-hackers/2012-08/msg00586.php. The NIST competition mentioned in that thread just finished. MD5 is still resistant to preimage attacks, which is what matters for our MD5 authentication protocol, but I think we should start thinking about a replacement, if only to avoid ringing the alarm bells in people's minds thinking "MD5 = broken".

Perhaps the biggest weakness in the current scheme is that if an attacker ever sees the contents of pg_shadow, it can use the stored hashes to authenticate as any user. This might not seem like a big deal, you have to be a superuser to read pg_shadow after all, but it makes it a lot more dangerous to e.g. leave old backups lying around. There was some talk about avoiding that in this old thread: http://archives.postgresql.org/pgsql-general/2002-06/msg00553.php. It turns out that it's possible to do this without the kind of commutative hash function discussed in that thread. There's a protocol called Salted Challenge Response Authentication Mechanism (SCRAM) (see RFC 5802) that accomplishes the same with some clever use of a hash function and XOR. I think we should adopt it. Thoughts on that?

There are some other minor issues with the current md5 authentication. SCRAM would address these as well, but if we don't adopt SCRAM for some reason, we should still address them somehow:

1. Salt length. Greg Stark calculated the odds of salt collisions here: http://archives.postgresql.org/pgsql-hackers/2004-08/msg01540.php. It's not too bad as it is, and as Greg pointed out, if you can eavesdrop it's likely you can also hijack an already established connection. Nevertheless I think we should make the salt longer, say, 16 bytes.

2. Make the calculation more expensive, to make dictionary attacks more expensive. An eavesdropper can launch a brute-force or dictionary attack using a captured hash and salt.
Similar to the classic crypt(3) function, it would be good for the calculation to be expensive, although that naturally makes authentication more expensive too. For future-proofing, it would be good to send the number of iterations of the hash as part of the protocol, so that it can be configured in the server, or so that we can just raise the default/hardcoded number without changing the protocol as computers become more powerful (SCRAM does this).

3. Instead of a straightforward hash of (password, salt), use an HMAC construct to combine the password and salt (see RFC 2104). This makes it resistant to length-extension attacks. The current scheme isn't vulnerable to that, but better safe than sorry.

- Heikki
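The pieces Heikki lists (salted, iterated hashing; an HMAC construct; an iteration count carried in the protocol) all appear in SCRAM's key derivation. Below is a minimal sketch of the RFC 5802 derivation and proof check using only stdlib primitives. It is an illustration, not the wire protocol: SHA-256 is used here for clarity (RFC 5802 as published uses SHA-1), and the AuthMessage string is a stand-in for the real client/server nonce exchange.

```python
import hashlib
import hmac
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# -- Setup: what the server stores instead of a plaintext-equivalent hash --
password, salt, iterations = b"secret", os.urandom(16), 4096
salted = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)  # Hi() in RFC 5802
client_key = hmac_sha256(salted, b"Client Key")
stored_key = hashlib.sha256(client_key).digest()  # stored server-side; not usable for login by itself

# -- Authentication: the client proves it knows the password --
# AuthMessage is a simplified placeholder for the concatenated nonce messages.
auth_message = b"client-nonce,server-nonce,salt,i=4096"
client_signature = hmac_sha256(stored_key, auth_message)
client_proof = xor(client_key, client_signature)  # this is what goes over the wire

# -- Verification: the server needs only StoredKey to check the proof --
recovered = xor(client_proof, hmac_sha256(stored_key, auth_message))
assert hashlib.sha256(recovered).digest() == stored_key
```

Note that the iteration count travels in the clear as part of the exchange, which is exactly what lets the server raise it later without a protocol change.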
On 10 October 2012 11:41, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > Thoughts on that? I think there has been enough discussion of md5 problems elsewhere that we should provide an alternative. If we can agree on that bit first, we can move onto exactly what else should be available. -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Oct 10, 2012 at 3:36 PM, Simon Riggs <simon@2ndquadrant.com> wrote: > On 10 October 2012 11:41, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: >> Thoughts on that? > > I think there has been enough discussion of md5 problems elsewhere > that we should provide an alternative. > > If we can agree on that bit first, we can move onto exactly what else > should be available.

The main weakness in the current protocol is that the stored value is plaintext-equivalent: you can use it to log in. The rest of the problems, the use of md5 and how it is used, are relatively minor. (IOW, they don't cause an immediate security incident.) Which means just slapping SHA1 in place of MD5 and calling it a day is a bad idea. Another bad idea is to invent our own algorithm: if a security protocol needs to fulfill more than one requirement, it tends to get tricky.

I have looked at SRP previously, but it's heavy on complex bignum math, which makes it problematic to reimplement in various drivers. Also, the many versions of it make me dubious of the authors..

SCRAM looks good from a quick glance. It uses only basic crypto tools: hash, HMAC, XOR. The "stored auth info cannot be used to log in" property will cause problems for middleware, but SCRAM also defines a concept of logging in as another user, so poolers can have their own user that they use to create connections on behalf of another user. As it works only at connect time, it can actually be secure, unlike user switching with SET ROLE. -- marko
Heikki, Like these proposals in general. * Heikki Linnakangas (hlinnakangas@vmware.com) wrote: > For future-proofing, it would be good to send the > number of iterations the hash is applied as part of the protocol, so > that it can be configured in the server or we can just raise the > default/hardcoded number without changing the protocol as computers > becomes more powerful (SCRAM does this). wrt future-proofing, I don't like the "#-of-iterations" approach. There are a number of examples of how to deal with multiple encryption types being supported by a protocol, I'd expect hash'ing could be done in the same way. For example, Negotiate, SSL, Kerberos, GSSAPI, all have ways of dealing with multiple encryption/hashing options being supported. Multiple iterations could be supported through that same mechanism (as des/des3 were both supported by Kerberos for quite some time). In general, I think it's good to build on existing implementations where possible. Perhaps we could even consider using something which already exists for this? Also, how much should we worry about supporting complicated/strong authentication systems for those who don't actually encrypt the entire communication, which might reduce the need for this additional complexity anyway? Don't get me wrong- I really dislike that we don't have something better today for people who insist on password based auth, but perhaps we should be pushing harder for people to use SSL instead? Thanks, Stephen
* Marko Kreen (markokr@gmail.com) wrote: > As it works only on connect > time, it can actually be secure, unlike user switching > with SET ROLE. I'm guessing your issue with SET ROLE is that a RESET ROLE can be issued later..? If so, I'd suggest that we look at fixing that, but realize it could break poolers. For that matter, I'm not sure how the proposal to allow connections to be authenticated as one user but authorized as another (which we actually already support in some cases, eg: peer) *wouldn't* break poolers, unless you're suggesting they either use a separate connection for every user, or reconnect every time, both of which strike me as defeating a great deal of the point of having a pooler in the first place... Thanks, Stephen
On 10/12/12 12:44 PM, Stephen Frost wrote: > Don't get me wrong- I really dislike that > we don't have something better today for people who insist on password > based auth, but perhaps we should be pushing harder for people to use > SSL instead? Problem is, the fact that setting up SSL correctly is hard is outside of our control. Unless we can give people a "run these three commands on each server and you're now SSL authenticating" script, we can continue to expect the majority of users not to use SSL. And I don't think that level of simplicity is even theoretically possible. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
* Josh Berkus (josh@agliodbs.com) wrote: > Problem is, the fact that setting up SSL correctly is hard is outside of > our control. Agreed, though the packagers do make it easier.. > Unless we can give people a "run these three commands on each server and > you're now SSL authenticating" script, we can continue to expect the > majority of users not to use SSL. And I don't think that level of > simplicity is even theoretically possible. The Debian-based packages do quite a bit to ease this pain. Do the other distributions do anything to set up SSL certificates, etc on install? Perhaps they could be convinced to? Thanks, Stephen
On 10/12/12 4:25 PM, Stephen Frost wrote: > * Josh Berkus (josh@agliodbs.com) wrote: >> >Unless we can give people a "run these three commands on each server and >> >you're now SSL authenticating" script, we can continue to expect the >> >majority of users not to use SSL. And I don't think that level of >> >simplicity is even theoretically possible. > The Debian-based packages do quite a bit to ease this pain. Do the > other distributions do anything to set up SSL certificates, etc on > install? Perhaps they could be convinced to? don't forget, there's OS's other than Linux to consider too... the various BSD's, Solaris, AIX, OSX, and MS Windows are all platforms PostgreSQL runs on. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
Stephen Frost wrote: > * Josh Berkus (josh@agliodbs.com) wrote: >> Problem is, the fact that setting up SSL correctly is hard is outside of >> our control. > > Agreed, though the packagers do make it easier.. > >> Unless we can give people a "run these three commands on each server and >> you're now SSL authenticating" script, we can continue to expect the >> majority of users not to use SSL. And I don't think that level of >> simplicity is even theoretically possible. > > The Debian-based packages do quite a bit to ease this pain. Do the > other distributions do anything to set up SSL certificates, etc on > install? Perhaps they could be convinced to? This has bit me. At my work we started a project on Debian, using the http://packages.debian.org/squeeze-backports/ version of Postgres 9.1, and it included the SSL out of the box, just install that regular Postgres or Pg client package and SSL was ready to go. And now we're migrating to Red Hat for the production launch, using the http://www.postgresql.org/download/linux/redhat/ packages for Postgres 9.1, and these do *not* include the SSL. This change has been a pain, as we then disabled SSL when we otherwise would have used it. (Though all database access would be over a private server-server network, so the situation isn't as bad as going over the public internet.) How much trouble would it be to make the http://www.postgresql.org/download/linux/redhat/ packages include SSL? -- Darren Duncan
On 10/12/12 9:00 PM, Darren Duncan wrote: > And now we're migrating to Red Hat for the production launch, using > the http://www.postgresql.org/download/linux/redhat/ packages for > Postgres 9.1, and these do *not* include the SSL. hmm? I'm using the 9.1 for CentOS 6(RHEL 6) and libpq.so certainly has libssl3.so, etc as references. ditto the postmaster/postgres main program has libssl3.so too. maybe your certificate chains don't come pre-built, I dunno, I haven't dealt with that end of things. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
John R Pierce wrote: > On 10/12/12 9:00 PM, Darren Duncan wrote: >> And now we're migrating to Red Hat for the production launch, using >> the http://www.postgresql.org/download/linux/redhat/ packages for >> Postgres 9.1, and these do *not* include the SSL. > > hmm? I'm using the 9.1 for CentOS 6(RHEL 6) and libpq.so certainly has > libssl3.so, etc as references. ditto the postmaster/postgres main > program has libssl3.so too. maybe your certificate chains don't come > pre-built, I dunno, I haven't dealt with that end of things. Okay, I'll have to look into that. All I know is out of the box SSL just worked on Debian and it didn't on Red Hat; trying to enable SSL on out of the box Postgres on Red Hat gave a fatal error on server start, at the very least needing the installation of SSL keys/certs, which I didn't have to do on Debian. -- Darren Duncan
On 10/13/2012 01:55 AM, Darren Duncan wrote: > John R Pierce wrote: >> On 10/12/12 9:00 PM, Darren Duncan wrote: >>> And now we're migrating to Red Hat for the production launch, using >>> the http://www.postgresql.org/download/linux/redhat/ packages for >>> Postgres 9.1, and these do *not* include the SSL. >> >> hmm? I'm using the 9.1 for CentOS 6 (RHEL 6) and libpq.so certainly >> has libssl3.so, etc as references. ditto the postmaster/postgres >> main program has libssl3.so too. maybe your certificate chains >> don't come pre-built, I dunno, I haven't dealt with that end of things. > > Okay, I'll have to look into that. All I know is out of the box SSL > just worked on Debian and it didn't on Red Hat; trying to enable SSL > on out of the box Postgres on Red Hat gave a fatal error on server > start, at the very least needing the installation of SSL keys/certs, > which I didn't have to do on Debian. -- Darren Duncan

Of course RedHat RPMs are built with SSL. Does Debian create a self-signed certificate? If so, count me as unimpressed. I'd argue that's worse than doing nothing. Here's what the docs say (rightly) about such certificates:

A self-signed certificate can be used for testing, but a certificate signed by a certificate authority (CA) (either one of the global CAs or a local one) should be used in production so that clients can verify the server's identity. If all the clients are local to the organization, using a local CA is recommended.

Creation of properly signed certificates is entirely outside the scope of Postgres, and I would not expect packagers to do it. I have created a local CA for RedHat and friends any number of times, and created signed certs for Postgres, both server and client, using them. It's not terribly hard. cheers andrew
* Andrew Dunstan (andrew@dunslane.net) wrote: > Does Debian create a self-signed certificate? If so, count me > as unimpressed. I'd argue that's worse than doing nothing. Here's > what the docs say (rightly) about such certificates: Self-signed certificates do provide for in-transit encryption. I agree that they don't provide a guarantee of the remote side being who you think it is, but setting up a MITM attack is more difficult than eavesdropping on a connection and more likely to be noticed. You can, of course, set up your own CA and sign certs off of it under Debian as well. Unfortunately, most end users aren't going to do that. Many of those same users do benefit from at least having an encrypted connection when it's all done for them. Thanks, Stephen
On Wed, Oct 10, 2012 at 11:41 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > 1. Salt length. Greg Stark calculated the odds of salt collisions here: > http://archives.postgresql.org/pgsql-hackers/2004-08/msg01540.php. It's not > too bad as it is, and as Greg pointed out, if you can eavesdrop it's likely > you can also hijack an already established connection. Nevertheless I think > we should make the salt longer, say, 16 bytes. Fwiw that calculation was based on the rule of thumb that a collision is likely when you have sqrt(hash space) elements. Wikipedia has a better formula which comes up with 77,163. For 16 bytes that formula gives 2,171,938,135,516,356,249 salts before you expect a collision. -- greg
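Both figures above follow from the standard birthday approximation: with N possible salt values, a collision becomes likely (probability p) after roughly sqrt(2 N ln(1/(1-p))) draws, about 1.18 sqrt(N) for p = 0.5. A quick sanity check for the current 4-byte salt, where the 77,163 figure comes from:

```python
import math

def salts_before_collision(salt_bytes: int, p: float = 0.5) -> float:
    """Birthday approximation: number of uniformly random salts after
    which a collision has occurred with probability p."""
    n = 2 ** (8 * salt_bytes)
    return math.sqrt(2 * n * math.log(1 / (1 - p)))

print(round(salts_before_collision(4)))  # prints 77163, the figure quoted above
print(salts_before_collision(16))        # a 16-byte salt: on the order of 10**19
```

The exact count doesn't matter much at 16 bytes; the point is that the expected-collision threshold moves from tens of thousands of sessions to a number no eavesdropper will ever approach.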
On Sat, Oct 13, 2012 at 7:00 AM, Andrew Dunstan <andrew@dunslane.net> wrote: > Does Debian create a self-signed certificate? If so, count me as > unimpressed. I'd argue that's worse than doing nothing. Here's what the docs > say (rightly) about such certificates: Debian will give you a self-signed certificate by default. Protecting against passive eavesdroppers is not an inconsiderable benefit to get for "free", and definitely not a marginal attack technique: it's probably the most common. For what they can possibly know about the end user, Debian has it right here. -- fdr
On Sun, Oct 14, 2012 at 5:59 AM, Daniel Farina <daniel@heroku.com> wrote: > On Sat, Oct 13, 2012 at 7:00 AM, Andrew Dunstan <andrew@dunslane.net> wrote: >> Does Debian create a self-signed certificate? If so, count me as >> unimpressed. I'd argue that's worse than doing nothing. Here's what the docs >> say (rightly) about such certificates: > > Debian will give you a self-signed certificate by default. Protecting > against passive eavesdroppers is not an inconsiderable benefit to get > for "free", and definitely not a marginal attack technique: it's > probably the most common. > > For what they can possibly know about the end user, Debian has it right here. There are a lot of shades of gray to that one. Way too many to say they're right *or* wrong, IMHO. It *does* make people think they have "full ssl security by default", which they *don't*. They do have partial protection, which helps in some (fairly common) scenarios. But if you compare it to the requirements that people *do* have when they use SSL, it usually *doesn't* protect them the whole way, but they get the illusion that it does. Sure, they'd have to read up on the details in order to get secure whether it's on by default or not; that's why I think it's hard to call it either right or wrong, but rather somewhere in between. They also enable things like encryption on all localhost connections. I consider that plain wrong, regardless. Though it provides for some easy "performance tuning" for consultants... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
On Sun, Oct 14, 2012 at 2:04 AM, Magnus Hagander <magnus@hagander.net> wrote: > There's a lot of shades of gray to that one. Way too many to say > they're right *or* wrong, IMHO. We can agree it is 'sub-ideal', but there is not one doubt in my mind that it is 'right' given the scope of Debian's task, which does *not* include pushing applied cryptography beyond its current pitiful state. Debian not making self-signed certs available by default would just result in a huge amount of plaintext database authentication and traffic available over the internet, especially when you consider the sslmode=prefer default, and as a result eliminate protection from the most common class of attack for users with low-value (or just low-vigilance) use cases. In aggregate, that is important, because there are a lot of them. It would be a net disaster for security. > It *does* make people think they have "full ssl security by default", > which they *don't*. They do have partial protection, which helps in > some (fairly common) scenarios. But if you compare it to the > requirements that people *do* have when they use SSL, it usually > *doesn't* protect them the whole way - but they get the illusion that > it does. Sure, they'd have to read up on the details in order to get > secure whether it's on by default or not - that's why I think it's > hard to call it either right or wrong, but it's rather somewhere in > between. If there is blame to go around, I place it squarely on clients. The JDBC library is more secure here: it makes you opt in, via configuration, to logging into a server that has no verified identity. The problem there is that it's a pain to get signed certs in, say, a test environment, so "don't check certs" will make its way into the default configuration, and now you have all pain and no gain. -- fdr
On 14 October 2012 22:17, Daniel Farina <daniel@heroku.com> wrote: > The problem there is that it's a pain to get signed certs in, say, a > test environment, so "don't check certs" will make its way into the > default configuration, and now you have all pain and no gain. This is precisely the issue that Debian deals with in providing the "default Snake Oil" certificate; software development teams - especially small shops with one or two developers - don't want to spend time learning about CAs and creating their own, etc, and often their managers would see this as wasted time for setting up development environments and staging systems. Not saying they're right, of course; but it can be an uphill struggle, and as long as you get a real certificate for your production environment, it's hard to see what harm this (providing the "snake oil" certificate) actually causes.
On Mon, Oct 15, 2012 at 1:21 PM, Will Crawford <billcrawford1970@gmail.com> wrote: > On 14 October 2012 22:17, Daniel Farina <daniel@heroku.com> wrote: > >> The problem there is that it's a pain to get signed certs in, say, a >> test environment, so "don't check certs" will make its way into the >> default configuration, and now you have all pain and no gain. > > This is precisely the issue that Debian deals with in providing the > "default Snake Oil" certificate; software development teams - > especially small shops with one or two developers - don't want to > spend time learning about CAs and creating their own, etc, and often > their managers would see this as wasted time for setting up > development environments and staging systems. Not saying they're > right, of course; but it can be an uphill struggle, and as long as you > get a real certificate for your production environment, it's hard to > see what harm this (providing the "snake oil" certificate) actually > causes. I don't see a problem at all with providing the snakeoil cert. In fact, it's quite useful. I see a problem with enabling it by default. Because it makes people think they are more secure than they are. In a browser, they will get a big fat warning every time, so they will know it. There is no such warning in psql. Actually, maybe we should *add* such a warning. We could do it in psql. We can't do it in libpq for everyone, but we can do it in our own tools... Particularly since we do print the SSL information already - we could just add a "warning: cert not verified" or something like that to the same piece of information. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
On Sun, Oct 21, 2012 at 09:55:50AM +0200, Magnus Hagander wrote: > I don't see a problem at all with providing the snakeoil cert. In > fact, it's quite useful. > > I see a problem with enabling it by default. Because it makes people > think they are more secure than they are. So, what you're suggesting is that any use of ssl to a remote machine without the sslrootcert option should generate a warning. Something along the lines of "remote server not verified"? For completeness it should also show this for any non-SSL connection. libpq should export a "serververified" flag which would always be false unless the connection is SSL and the CA is verified. > In a browser, they will get a big fat warning every time, so they will > know it. There is no such warning in psql. Actually, maybe we should > *add* such a warning. We could do it in psql. We can't do it in libpq > for everyone, but we can do it in our own tools... Particularly since > we do print the SSL information already - we could just add a > "warning: cert not verified" or something like that to the same piece > of information. It bugs me that every time you have to jump through hoops and get red warnings for an unknown CA, whereas no encryption whatsoever is treated as fine while actually being even worse. Transport encryption is a *good thing*; we should be encouraging it wherever possible. If it weren't for the performance issues I'd suggest defaulting to SSL everywhere transparently with ephemeral certs. It would protect against any number of passive attacks. Have a nice day, -- Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/ > He who writes carelessly confesses thereby at the very outset that he does > not attach much importance to his own thoughts. -- Arthur Schopenhauer
Magnus Hagander <magnus@hagander.net> writes: > I don't see a problem at all with providing the snakeoil cert. In > fact, it's quite useful. > I see a problem with enabling it by default. Because it makes people > think they are more secure than they are. I am far from an SSL expert, but I had the idea that the only problem with a self-signed cert is that the client can't trace it to a trusted cert --- so if the user took the further step of copying the cert to the client machines' ~/.postgresql/root.crt files, wouldn't things be just fine? > In a browser, they will get a big fat warning every time, so they will > know it. There is no such warning in psql. Actually, maybe we should > *add* such a warning. We could do it in psql. We can't do it in libpq > for everyone, but we can do it in our own tools... Particularly since > we do print the SSL information already - we could just add a > "warning: cert not verified" or something like that to the same piece > of information. No objection to that. I do have an objection to trying to force people to use SSL, which is how I read some of the other proposals in this thread --- but if they are already choosing to use SSL, and it's not as secure as it could be, some sort of notice seems reasonable. What happens in the other direction, ie if a client presents a self-signed cert that the server can't verify? regards, tom lane
On Sun, Oct 21, 2012 at 11:02 AM, Martijn van Oosterhout <kleptog@svana.org> wrote: > It bugs me every time you have to jump through hoops and get red > warnings for an unknown CA, whereas no encryption whatsoever is treated > as fine while being actually even worse. +1. Amen, brother. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 10/22/2012 10:18 AM, Robert Haas wrote: > On Sun, Oct 21, 2012 at 11:02 AM, Martijn van Oosterhout > <kleptog@svana.org> wrote: >> It bugs me every time you have to jump through hoops and get red >> warnings for an unknown CA, whereas no encryption whatsoever is treated >> as fine while being actually even worse. > +1. Amen, brother. > Not really, IMNSHO. The difference is that an unencrypted session isn't pretending to be secure. In any case, it doesn't seem too intrusive for us to warn, at least in psql, with something like:

SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Host Certificate Unverified

If people want to get more paranoid they can always set PGSSLMODE to verify-ca or verify-full. cheers andrew
On 10/12/12 3:44 PM, Stephen Frost wrote: > wrt future-proofing, I don't like the "#-of-iterations" approach. There > are a number of examples of how to deal with multiple encryption types > being supported by a protocol, I'd expect hash'ing could be done in the > same way. For example, Negotiate, SSL, Kerberos, GSSAPI, all have ways > of dealing with multiple encryption/hashing options being supported. > Multiple iterations could be supported through that same mechanism (as > des/des3 were both supported by Kerberos for quite some time). > > In general, I think it's good to build on existing implementations where > possible. Perhaps we could even consider using something which already > exists for this? Sounds like SASL to me.
* Peter Eisentraut (peter_e@gmx.net) wrote: > On 10/12/12 3:44 PM, Stephen Frost wrote: > > In general, I think it's good to build on existing implementations where > > possible. Perhaps we could even consider using something which already > > exists for this? > > Sounds like SASL to me. aiui, that would allow us to support SCRAM and we could support Kerberos/GSSAPI under SASL as well... Not sure how comfortable folks would be with moving to that though. Thanks, Stephen
On Mon, Oct 22, 2012 at 10:57 AM, Andrew Dunstan <andrew@dunslane.net> wrote: > On 10/22/2012 10:18 AM, Robert Haas wrote: >> On Sun, Oct 21, 2012 at 11:02 AM, Martijn van Oosterhout >> <kleptog@svana.org> wrote: >>> >>> It bugs me every time you have to jump through hoops and get red >>> warnings for an unknown CA, whereas no encryption whatsoever is treated >>> as fine while being actually even worse. >> >> +1. Amen, brother. > > Not really, IMNSHO. The difference is that an unencrypted session isn't > pretending to be secure. In any case, it doesn't seem too intrusive for us > to warn, at least in psql, with something like: > > SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256) Host Certificate > Unverified Well, that change wouldn't bother me at all; in fact, I like it. But Firefox, for example, makes me do three or four clicks every time I got to a website with an invalid SSL certificate, whereas a web site that does not use SSL requires no clicks at all. What's the sense in that? If we imagine that all activity is user-initiated - that is, the user is always careful to ask for SSL when and only when they need a higher level of security - then that's pretty sensible. But in fact the world doesn't work that way. Most web pages are downloaded automatically when you click on a link, and you don't normally look to see whether SSL is in use unless you have a security concern (e.g. because you are logging into your bank's web site). If somebody went and trojaned my bank's web page, they wouldn't need to break the SSL certificate; they could just remove SSL from the login page altogether. Odds are very good that 95% of people wouldn't notice. I think it's great to have a full-paranoia mode where anything not kosher on the SSL connection is grounds for extreme panic. But it shouldn't be the default. What Ubuntu is doing does not solve every problem, but it does solve some problems, and we shouldn't go out of our way to break it. 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Fri, Oct 12, 2012 at 10:47 PM, Stephen Frost <sfrost@snowman.net> wrote: > * Marko Kreen (markokr@gmail.com) wrote: >> As it works only on connect >> time, it can actually be secure, unlike user switching >> with SET ROLE. > > I'm guessing your issue with SET ROLE is that a RESET ROLE can be issued > later..? If so, I'd suggest that we look at fixing that, but realize it > could break poolers. For that matter, I'm not sure how the proposal to > allow connections to be authenticated as one user but authorized as > another (which we actually already support in some cases, eg: peer) > *wouldn't* break poolers, unless you're suggesting they either use a > separate connection for every user, or reconnect every time, both of > which strike me as defeating a great deal of the point of having a > pooler in the first place... The point of a pooler is to cache things. The TCP connection is only one thing to be cached; all the backend-internal caches are just as interesting: prepared plans, compiled functions. The fact that on role reset you need to drop all those things is what breaks pooling. Of course, I'm speaking only about high-performance situations. Maybe there are cases where the authenticated TCP connection is indeed the only thing worth caching, e.g. with a dumb client issuing raw sql only, where there is nothing to cache in the backend. But that does not seem like the primary scenario we should optimize for. -- marko
On Wed, Oct 10, 2012 at 4:24 PM, Marko Kreen <markokr@gmail.com> wrote: > The SCRAM looks good from the quick glance. SCRAM does have a weakness - the info necessary to log in as the client (ClientKey) is exposed during the authentication process. IOW, the stored auth info can be used to log in as the client, if the attacker can listen in on or participate in the login process. The random nonces used during auth do not matter; what matters is that the target server has the same StoredKey (same password, salt and iter). It seems this particular attack is avoided by SRP. This weakness can be seen as a feature though - it can be used by poolers to "cache" auth info and re-connect to the server later. They still need full access to the stored keys. But it does mean SCRAM gives different security guarantees depending on whether SSL is in use or not. -- marko
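Marko's observation can be demonstrated directly: ClientProof is ClientKey XOR HMAC(StoredKey, AuthMessage), so anyone who holds StoredKey (say, from a leaked pg_shadow) *and* observes one unencrypted exchange can strip off the signature and recover ClientKey, which is enough to answer any future challenge. A sketch under the same simplifying assumptions as before (SHA-256 instead of the RFC's SHA-1, a placeholder AuthMessage):

```python
import hashlib
import hmac

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hm(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# A legitimate client's keys; the attacker never sees these directly.
salted = hashlib.pbkdf2_hmac("sha256", b"password", b"salt0123", 4096)
client_key = hm(salted, b"Client Key")
stored_key = hashlib.sha256(client_key).digest()  # this is what leaks from pg_shadow

# The attacker observes one authentication exchange on an unencrypted link.
auth_message = b"observed nonces and attributes"
observed_proof = xor(client_key, hm(stored_key, auth_message))

# StoredKey + one observed proof => ClientKey, a login-capable secret.
recovered_client_key = xor(observed_proof, hm(stored_key, auth_message))
assert recovered_client_key == client_key

# In a later session with fresh nonces, the attacker can now forge a valid proof
# without ever knowing the password:
new_msg = b"fresh nonces"
forged_proof = xor(recovered_client_key, hm(stored_key, new_msg))
assert hashlib.sha256(xor(forged_proof, hm(stored_key, new_msg))).digest() == stored_key
```

Note the attacker here already needs StoredKey; without it, the observed proof alone reveals nothing, and with SSL on the wire the proof is never visible to an eavesdropper in the first place, which is the different-guarantees point.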
On Sun, Oct 21, 2012 at 5:49 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Magnus Hagander <magnus@hagander.net> writes: >> I don't see a problem at all with providing the snakeoil cert. In >> fact, it's quite useful. > >> I see a problem with enabling it by default. Because it makes people >> think they are more secure than they are. > > I am far from an SSL expert, but I had the idea that the only problem > with a self-signed cert is that the client can't trace it to a trusted > cert --- so if the user took the further step of copying the cert to the > client machines' ~/.postgresql/root.crt files, wouldn't things be just > fine? I'm not sure if server certs are supposed to go in the root.crt file or somewhere else. It's a bit tricky to distribute them securely but most people will just scp them and call that good since they ignore the ssh host key messages anyways. Fwiw the main problem here is that you're vulnerable to a MitM attack. In theory we could work around that by doing something like encrypting the ssl public key in a key based on a query provided by the user. That query would have to include some server data that the client can predict and that only the correct server would have access to. There are obvious problems with this though and inventing our own security protocol is almost certainly a bad idea even if we can fix them. >> In a browser, they will get a big fat warning every time, so they will >> know it. There is no such warning in psql. Actually, maybe we should >> *add* such a warning. We could do it in psql. We can't do it in libpq >> for everyone, but we can do it in our own tools... Particularly since >> we do print the SSL information already - we could just add a >> "warning: cert not verified" or something like that to the same piece >> of information. I think we can provide a much better warning however. 
I think we want something like 'WARNING: Server identity signed by unknown and untrusted authority "Snakeoil CA"' We could go even further: INFO: Server identity "ACME Debian Machine" certified by "Snakeoil CA" WARNING: Server identity signed by unknown and untrusted authority "Snakeoil CA" HINT: Add either the server certificate or the CA certificate to "/usr/lib/ssl/certs" after verifying the identity and certificate hash SSL is notoriously hard to set up; it would go a long way to give the sysadmin an immediate pointer to which certificates are being used and where to find or install the CA certs. It might be worth mentioning the GUC parameter names that control these things too. > What happens in the other direction, ie if a client presents a > self-signed cert that the server can't verify? Surely that's just a failure. The server always expects client authentication, and a connection authenticated using an unverified cert could be anyone at all. Clients traditionally didn't authenticate the server until encrypted connections entered the picture and preventing MitM attacks became relevant. -- greg
On Mon, Oct 22, 2012 at 3:54 PM, Greg Stark <stark@mit.edu> wrote: > We could go even further: > INFO: Server identity "ACME Debian Machine" certified by "Snakeoil CA" > WARNING: Server identity signed by unknown and untrusted authority "Snakeoil CA" > HINT: Add either the server certificate or the CA certificate to > "/usr/lib/ssl/certs" after verifying the identity and certificate hash > > SSL is notoriously hard to set up, it would go a long way to give the > sysadmin an immediate pointer to what certificates are being used and > where to find or install the CA certs. It might be worth mentioning > the GUC parameter names to control these things too. Is the set of locations that libpq reads certs from always so short and definitive? Is it clear that the user would always want to fix the cert situation in that way? What if they don't have file system access to the remote database and would like to learn its public key anyway (a la SSH's trust-on-first-use)? Overall, I do very much like the sentiment: less guesswork around where the heck to put things or what to search for in the documentation. -- fdr
On Mon, Oct 22, 2012 at 6:54 PM, Greg Stark <stark@mit.edu> wrote: > I think we can provide a much better warning however. I think we want > something like 'WARNING: Server identity signed by unknown and > untrusted authority "Snakeoil CA"' > > We could go even further: > INFO: Server identity "ACME Debian Machine" certified by "Snakeoil CA" > WARNING: Server identity signed by unknown and untrusted authority "Snakeoil CA" > HINT: Add either the server certificate or the CA certificate to > "/usr/lib/ssl/certs" after verifying the identity and certificate hash > > SSL is notoriously hard to set up, it would go a long way to give the > sysadmin an immediate pointer to what certificates are being used and > where to find or install the CA certs. It might be worth mentioning > the GUC parameter names to control these things too. Yeah, this seems like a nice idea if we can do it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 10/22/12 1:25 PM, Stephen Frost wrote: > * Peter Eisentraut (peter_e@gmx.net) wrote: >> On 10/12/12 3:44 PM, Stephen Frost wrote: >>> In general, I think it's good to build on existing implementations where >>> possible. Perhaps we could even consider using something which already >>> exists for this? >> >> Sounds like SASL to me. > > aiui, that would allow us to support SCRAM and we could support > Kerberos/GSSAPI under SASL as well... Not sure how comfortable folks > would be with moving to that though. Considering all the design and implementation challenges that have been brought up in this thread:

- not using MD5
- not using whatever we replace MD5 with when that gets broken
- content of pg_shadow can be used to log in
- questions about salt collisions
- making the hash more expensive
- negotiating how much more expensive, allowing changes in the future
- using HMAC to guard against length-extension attacks
- support for poolers/proxies

I think I would be less comfortable with a hand-crafted solution to each of these issues, and would be more comfortable with using an existing solution that, from the look of it, already does all of that, and which is used by mail and LDAP servers everywhere. That said, I don't have any experience programming SASL clients or servers, only managing existing implementations. But I'd say it's definitely worth a look.
(reviving an old thread) On 23.10.2012 19:53, Peter Eisentraut wrote: > On 10/22/12 1:25 PM, Stephen Frost wrote: >> * Peter Eisentraut (peter_e@gmx.net) wrote: >>> On 10/12/12 3:44 PM, Stephen Frost wrote: >>>> In general, I think it's good to build on existing implementations where >>>> possible. Perhaps we could even consider using something which already >>>> exists for this? >>> >>> Sounds like SASL to me. >> >> aiui, that would allow us to support SCRAM and we could support >> Kerberos/GSSAPI under SASL as well... Not sure how comfortable folks >> would be with moving to that though. > > Considering all the design and implementation challenges that have been > brought up in this thread:
> - not using MD5
> - not using whatever we replace MD5 with when that gets broken
> - content of pg_shadow can be used to log in
> - questions about salt collisions
> - making the hash more expensive
> - negotiating how much more expensive, allowing changes in the future
> - using HMAC to guard against length-extension attacks
> - support for poolers/proxies
> > I think I would be less comfortable with a hand-crafted solution to each > of these issues, and would be more comfortable with using an existing > solution that, from the look of it, already does all of that, and which > is used by mail and LDAP servers everywhere. > > That said, I don't have any experience programming SASL clients or > servers, only managing existing implementations. But I'd say it's > definitely worth a look. SASL seems like a good approach to me. The SASL specification leaves it up to the application protocol how the SASL messages are transported. Each application protocol that uses SASL defines a "SASL profile", which specifies that. So for PostgreSQL, we would need to document how to do SASL authentication. That's pretty straightforward: the SASL messages can be carried in Authentication and PasswordMessage messages, just like we do for GSS.
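As a rough illustration only (the auth code and payloads below are hypothetical; no SASL code exists in the v3 protocol), such a profile could reuse the existing message envelopes the same way the GSSAPI token exchange does. A Python sketch of the framing:

```python
import struct

AUTH_SASL_CONTINUE = 10  # hypothetical authentication code, for this sketch only

def server_sasl_challenge(payload: bytes) -> bytes:
    # Authentication message: 'R' | int32 length (includes itself) | int32 code | data
    return b"R" + struct.pack("!ii", 4 + 4 + len(payload), AUTH_SASL_CONTINUE) + payload

def client_sasl_response(payload: bytes) -> bytes:
    # PasswordMessage envelope: 'p' | int32 length (includes itself) | data
    return b"p" + struct.pack("!i", 4 + len(payload)) + payload

# One SCRAM-ish round trip, carried in the existing message types.
challenge = server_sasl_challenge(b"r=server-nonce,s=c2FsdA==,i=4096")
response = client_sasl_response(b"c=biws,r=server-nonce,p=proof")
assert challenge[0:1] == b"R" and response[0:1] == b"p"
```

Since the mechanism data is opaque to the envelope, the server can keep sending continue messages until the chosen SASL mechanism completes, without any new top-level message types.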
SASL specifies several "methods", like PLAIN and GSSAPI. It also has a couple of methods that use MD5, which could probably be used with the hashes we already store in pg_authid. I believe we could map all the existing authentication methods to SASL methods. In the distant future, we could deprecate and remove the existing built-in authentication handshakes, and always use SASL for authentication. The SASL specification is quite simple, so I think we could easily implement it ourselves without relying on an external library, for the authentication methods we already support. That doesn't buy us much, but it would be required if we want to always use SASL for authentication. On top of that, we could also provide a configure option to use an external SASL library, which could provide more exotic authentication methods. Now, to a completely different approach: I just found out that OpenSSL has added support for SRP in version 1.0.1. We're already using OpenSSL, so all we need to do is provide a couple of callbacks to OpenSSL, and store SRP verifiers in pg_authid instead of MD5 hashes, and we're golden. Well, not quite. There's one little problem: currently, we first initialize SSL, then read the startup packet, which contains the username and database to connect to. After that, we initialize database access to the specified database, and only then do we proceed with authentication. That's not a problem for certificate authentication, because certificate authentication doesn't require any database access, but if we are to store the SRP verifiers in pg_authid, we'll need database access much earlier, before we even know which database to connect to. But that's just an implementation detail - no protocol changes would be required. - Heikki
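To make the verifier idea concrete, here is a toy Python sketch of the SRP-6a math (per RFC 5054, with the hashing details simplified and a deliberately tiny, insecure modulus; real groups are 1024 bits and up). The server stores only (salt, v), and unlike SCRAM's StoredKey, turning v back into the client-side secret x is a discrete-log problem:

```python
import hashlib, secrets

# Toy SRP-6a sketch with a deliberately tiny, INSECURE modulus, only to show
# what a stored verifier is. RFC 5054 pins down the exact group and hashing;
# this simplifies both.
N = 2**127 - 1          # prime, but far too small for real use
g = 7

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else p.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big")

username, password, salt = b"alice", b"secret", secrets.token_bytes(16)

# x is derived from the password; the server stores only (salt, v), never x.
x = H(salt, hashlib.sha256(username + b":" + password).digest())
v = pow(g, x, N)        # the verifier that would go in pg_authid

k = H(N, g)

# Handshake: both sides derive the same shared secret without sending x or v.
a = secrets.randbelow(N); A = pow(g, a, N)                # client -> server
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N  # server -> client
u = H(A, B)

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
assert S_client == S_server

# A stolen pg_authid row gives an attacker v, but recovering x from v is a
# discrete-log problem, so the row alone does not let them play the client.
```

This is also why Marko noted upthread that SRP avoids the stored-key impersonation weakness that SCRAM has.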
On 09/12/2013 09:10 AM, Heikki Linnakangas wrote: > > > Now, to a completely different approach: > > I just found out that OpenSSL has added support for SRP in version > 1.0.1. We're already using OpenSSL, so all we need to do is to provide > a couple of callbacks to OpenSSL, and store SRP verifiers in pg_authid > instead of MD5 hashes, and we're golden. > > Well, not quite. There's one little problem: Currently, we first > initialize SSL, then read the startup packet which contains the > username and database to connect to. After that, we initialize > database access to the specified database, and only then we proceed > with authentication. That's not a problem for certificate > authentication, because certificate authentication doesn't require any > database access, but if we are to store the SRP verifiers in > pg_authid, we'll need to database access much earlier. Before we know > which database to connect to. > > You forgot to mention that we'd actually like to get away from being tied closely to OpenSSL because it has caused license grief in the past (not to mention that it's fairly dirty to manage). cheers andrew
* Andrew Dunstan (andrew@dunslane.net) wrote: > You forgot to mention that we'd actually like to get away from being > tied closely to OpenSSL because it has caused license grief in the > past (not to mention that it's fairly dirty to manage). While I agree with this sentiment (and have complained bitterly about OpenSSL's license in the past), I'd rather see us implement this (perhaps with a shim layer, if that's possible/sensible) even if only OpenSSL is supported than to not have the capability at all. It seems highly unlikely we'd ever be able to drop support for OpenSSL completely; we've certainly not made any progress towards that and I don't think forgoing adding new features would make such a change any more or less likely to happen. Thanks, Stephen
On 12.09.2013 17:30, Andrew Dunstan wrote: > > On 09/12/2013 09:10 AM, Heikki Linnakangas wrote: >> >> I just found out that OpenSSL has added support for SRP in version >> 1.0.1. We're already using OpenSSL, so all we need to do is to provide >> a couple of callbacks to OpenSSL, and store SRP verifiers in pg_authid >> instead of MD5 hashes, and we're golden. >> >> Well, not quite. There's one little problem: Currently, we first >> initialize SSL, then read the startup packet which contains the >> username and database to connect to. After that, we initialize >> database access to the specified database, and only then we proceed >> with authentication. That's not a problem for certificate >> authentication, because certificate authentication doesn't require any >> database access, but if we are to store the SRP verifiers in >> pg_authid, we'll need to database access much earlier. Before we know >> which database to connect to. > > You forgot to mention that we'd actually like to get away from being > tied closely to OpenSSL because it has caused license grief in the past > (not to mention that it's fairly dirty to manage). Yeah. I've been looking more closely at the SRP API in OpenSSL; it's completely undocumented. There are examples on the web and mailing lists on how to use it, but no documentation. Hopefully that gets fixed eventually. GnuTLS also supports SRP. They even have documentation for it :-). The API is slightly different than OpenSSL's, but not radically so. If we are to start supporting multiple TLS libraries, we're going to need some kind of a shim layer to abstract away the differences. Writing such a shim for the SRP stuff wouldn't be much additional effort, once you have the shim for all the other stuff in place. - Heikki
On Thu, Sep 12, 2013 at 4:41 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote: > On 12.09.2013 17:30, Andrew Dunstan wrote: >> >> >> On 09/12/2013 09:10 AM, Heikki Linnakangas wrote: >>> >>> >>> I just found out that OpenSSL has added support for SRP in version >>> 1.0.1. We're already using OpenSSL, so all we need to do is to provide >>> a couple of callbacks to OpenSSL, and store SRP verifiers in pg_authid >>> instead of MD5 hashes, and we're golden. >>> >>> Well, not quite. There's one little problem: Currently, we first >>> initialize SSL, then read the startup packet which contains the >>> username and database to connect to. After that, we initialize >>> database access to the specified database, and only then we proceed >>> with authentication. That's not a problem for certificate >>> authentication, because certificate authentication doesn't require any >>> database access, but if we are to store the SRP verifiers in >>> pg_authid, we'll need to database access much earlier. Before we know >>> which database to connect to. >> >> >> You forgot to mention that we'd actually like to get away from being >> tied closely to OpenSSL because it has caused license grief in the past >> (not to mention that it's fairly dirty to manage). > > > Yeah. I've been looking more closely at the SRP API in OpenSSL; it's > completely undocumented. There are examples on the web and mailing lists on > how to use it, but no documentation. Hopefully that gets fixed eventually. Well, undocumented and OpenSSL tend to go hand in hand a lot. Or, well, it might be documented, but not in a useful way. I wouldn't count on it. > GnuTLS also supports SRP. They even have documentation for it :-). The API > is slightly different than OpenSSL's, but not radically so. If we are to > start supporting multiple TLS libraries, we're going to need some kind of a > shim layer to abstract away the differences. 
> Writing such a shim for the SRP stuff wouldn't be much additional effort, once you have the shim for all the other stuff in place. http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol#Real_world_implementations That does not paint a very good picture. I'd say the most likely library that we'd *want* instead of openssl is NSS, if we have to pick one of the big ones. Or one of the newer implementations that are a lot more focused and lean. And none of them support SRP. I fear starting to use that is going to make it even harder to break out from our openssl dependency, which people do complain about at least semi-regularly. I wonder how much work it would be to build something on top of the lower-level primitives that are provided in them all. For example, https://github.com/cocagne/csrp/ implements it on top of openssl, and it's around 1000 LOC (BSD licensed). And that's generic - it might well be shorter if we do something ourselves. And if it's BSD licensed, we could import it. (And then extend it to run on top of other crypto libraries, of course) -- Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Thu, Sep 12, 2013 at 11:33 AM, Magnus Hagander <magnus@hagander.net> wrote: > Well, undocumented and OpenSSL tend to go hand in hand a lot. Or, > well, it might be documented, but not in a useful way. I wouldn't > count on it. The OpenSSL code is some of the worst-formatted spaghetti code I've ever seen, and the reason I know that is because whenever I try to do anything with OpenSSL I generally end up having to read it, precisely because, as you say, the documentation is extremely incomplete. I hate to be critical of other projects, but everything I've ever done with OpenSSL has been difficult, and I really think we should try to get less dependent on it rather than more. > I fear starting to use that is going to make it even harder to break > out from our openssl dependency, which people do complain about at > least semi-regularly. +1. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Fri, Sep 13, 2013 at 5:31 PM, Robert Haas <robertmhaas@gmail.com> wrote: > On Thu, Sep 12, 2013 at 11:33 AM, Magnus Hagander <magnus@hagander.net> wrote: >> Well, undocumented and OpenSSL tend to go hand in hand a lot. Or, >> well, it might be documented, but not in a useful way. I wouldn't >> count on it. > > The OpenSSL code is some of the worst-formatted spaghetti code I've > ever seen, and the reason I know that is because whenever I try to do > anything with OpenSSL I generally end up having to read it, precisely > because, as you say, the documentation is extremely incomplete. I > hate to be critical of other projects, but everything I've ever done > with OpenSSL has been difficult, and I really think we should try to > get less dependent on it rather than more. I have nothing exciting to add but I happened to be reading this old thread and thought the above post was relevant again these days. -- greg