Thread: Speed of SSL connections; cost of renegotiation
I just derived the following rather interesting numbers concerning the
cost of SSL encryption in CVS tip.  The test case is to COPY about 40MB
of data to or from a server running on the same machine (RHL 8.0).

Without SSL:

[tgl@rh1 tmp]$ time psql -c "\copy foo to 'foo2'" regression

real    0m16.592s
user    0m0.521s
sys     0m0.270s

[tgl@rh1 tmp]$ time psql -c "\copy foo from 'foo3'" regression

real    0m20.032s
user    0m2.223s
sys     0m0.217s

With SSL:

[tgl@rh1 tmp]$ time psql -c "\copy foo to 'foo2'" -h localhost regression

real    4m18.912s
user    2m30.842s
sys     1m4.076s

[tgl@rh1 tmp]$ time psql -c "\copy foo from 'foo3'" -h localhost regression

real    1m10.774s
user    0m29.461s
sys     0m23.494s

In other words, bulk data transfer *to* the server is about 3.5x slower
than it is over an unencrypted Unix socket.  Okay, I can live with that.
But bulk transfer *from* the server is more than 15x slower.  That's
above my threshold of pain.  And considering client and server are the
same machine, why should there be any asymmetry in transfer rate?

It looks to me like the culprit is SSL renegotiation.  The server is
currently programmed to force a renegotiation after every 64K of data
transferred to or from the client.  However, the test to decide to do a
renegotiation was placed only in SSL_write, so a large COPY-to-server
escapes triggering the renegotiation except at the very end, whereas the
COPY-to-file case is indeed executing a renegotiation about every 64K.
Apparently, those renegotiations are horridly expensive.

As an experiment, I increased the renegotiation interval by a factor of
10 (from 64K to 640K).  This brought the COPY-to-file time down to about
47sec, which is more in line with the in/out speed ratio for the
non-encrypted case, and more than a factor of 5 faster than what's in
CVS.

So, questions for the group: where did the decision to renegotiate every
64K come from?  Do we need it at all?  Do we need it at such a short
interval?  And if we do need it, shouldn't the logic be symmetric, so
that renegotiations are forced during large input transfers as well as
large output transfers?

			regards, tom lane
> So, questions for the group: where did the decision to renegotiate
> every 64K come from?  Do we need it at all?  Do we need it at such a
> short interval?  And if we do need it, shouldn't the logic be
> symmetric, so that renegotiations are forced during large input
> transfers as well as large output transfers?

It doesn't look like there's any guidance from mod_ssl in Apache 2.0.

http://cvs.apache.org/viewcvs.cgi/httpd-2.0/modules/ssl/ssl_engine_kernel.c?rev=1.92&content-type=text/vnd.viewcvs-markup

'Round line 536 begins a good set of comments, but I think the tail end
of the file has the best commentary:

 * Because SSL renegotiations can happen at any time (not only after
 * SSL_accept()), the best way to log the current connection details is
 * right after a finished handshake.

I think the correct solution to this is to have some way of specifying
this via libpq or some external configuration file, as it is supposed
to conform to the client's or server's security policy.  I'd say by
default that 640K is OK, but that it should be tunable and part of the
connection's properties.  Ex:

Index: libpq-fe.h
===================================================================
RCS file: /projects/cvsroot/pgsql-server/src/interfaces/libpq/libpq-fe.h,v
retrieving revision 1.91
diff -u -r1.91 libpq-fe.h
--- libpq-fe.h	2003/03/25 02:44:36	1.91
+++ libpq-fe.h	2003/04/11 01:12:32
@@ -154,6 +154,9 @@
 							 * Password field - hide value "D" Debug
 							 * option - don't show by default */
 	int			dispsize;		/* Field size in characters for dialog */
+#ifdef USE_SSL
+	int			ssl_reneg_size;	/* Rate at which the connection renegotiates keys */
+#endif
 } PQconninfoOption;

 /* ----------------

Someone on IRC suggested that this value be tuned automatically
depending on the cypher used.  The more secure the cypher, the less
frequently rekeying is needed.  DES = 64K, 3DES = 256K, AES = 512K?
Total WAG on the values there, but it conveys the point.  -sc

-- 
Sean Chittenden
Sean Chittenden <sean@chittenden.org> writes:
>> So, questions for the group: where did the decision to renegotiate
>> every 64K come from?  Do we need it at all?  Do we need it at such a
>> short interval?  And if we do need it, shouldn't the logic be
>> symmetric, so that renegotiations are forced during large input
>> transfers as well as large output transfers?

> It doesn't look like there's any guidance from mod_ssl in Apache 2.0.

Yeah, I looked at mod_ssl before sending in my gripe.  AFAICT Apache
*never* forces a renegotiation based on amount of data sent --- all that
code is intended just to handle transitions between different webpages
with different security settings.  So is that a precedent we can follow;
or is it an optimization based on the assumption that not a lot of data
will be transferred on any one web page?

(But even if you assume the latter, there are plenty of web pages with
more than 64K of data.  It's hard to believe mod_ssl would be built like
that if security demands a renegotiation every 64K or so.)

			regards, tom lane
> >> So, questions for the group: where did the decision to renegotiate
> >> every 64K come from?  Do we need it at all?  Do we need it at such a
> >> short interval?  And if we do need it, shouldn't the logic be
> >> symmetric, so that renegotiations are forced during large input
> >> transfers as well as large output transfers?
>
> > It doesn't look like there's any guidance from mod_ssl in Apache 2.0.
>
> Yeah, I looked at mod_ssl before sending in my gripe.  AFAICT Apache
> *never* forces a renegotiation based on amount of data sent --- all
> that code is intended just to handle transitions between different
> webpages with different security settings.  So is that a precedent
> we can follow; or is it an optimization based on the assumption that
> not a lot of data will be transferred on any one web page?

I'd assume it's a precedent we can follow given that mod_ssl was written
by the same crew that did OpenSSL.  That said, I hope that Ralf knows
his stuff, never mind that I haven't seen anyone jump all over Apache
for not renegotiating its keys (and quite a few folks have looked at
that).  My best guess is that you only have to key the session once and
only need to renegotiate that key if you change cyphers or are worried
about someone obtaining a key... that said, OpenSSH does rekey
periodically, but I think those guys are overly paranoid.  Even still,
OpenSSH rekeys every 10min I think, not every 64K.  From sshd(8):

     -k key_gen_time
             Specifies how often the ephemeral protocol version 1 server
             key is regenerated (default 3600 seconds, or one hour).  The
             motivation for regenerating the key fairly often is that the
             key is not stored anywhere, and after about an hour, it
             becomes impossible to recover the key for decrypting
             intercepted communications even if the machine is cracked
             into or physically seized.  A value of zero indicates that
             the key will never be regenerated.

Drat, close: once every hour.  I think it'd be safe to jack that puppy
pretty high or to use time-based rekeying, not data-transfer-based.
Seconds since epoch since last rekeying should always be less than 3600?
Don't know that we'd want to poll gettimeofday(); does PostgreSQL have
any timer code sitting around in the tree?

> (But even if you assume the latter, there are plenty of web pages
> with more than 64K of data.  It's hard to believe mod_ssl would be
> built like that if security demands a renegotiation every 64K or
> so.)

Hopefully it takes less than one hour for an HTTP request to go through,
regardless of the size.

-- 
Sean Chittenden
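[Editor's note: a time-based check along the lines sketched above would not need to poll gettimeofday() on every write; one-second resolution from plain time(2) is plenty for an hourly interval. The following is a hypothetical standalone sketch, not anything in the tree; the names, and passing `now` in as a parameter, are purely illustrative.]

```c
#include <stdbool.h>
#include <time.h>

#define REKEY_INTERVAL 3600		/* seconds, borrowing sshd's default */

/*
 * Returns true when at least REKEY_INTERVAL seconds have passed since
 * the last rekey, and resets the clock; the caller would then kick off
 * an SSL renegotiation.  'now' is passed in to keep this testable.
 */
static bool
rekey_due(time_t now, time_t *last_rekey)
{
	if (*last_rekey == 0)
		*last_rekey = now;		/* first call: start the clock */
	if (now - *last_rekey >= REKEY_INTERVAL)
	{
		*last_rekey = now;
		return true;
	}
	return false;
}
```

The read/write wrappers would then call `rekey_due(time(NULL), &last_rekey)` instead of comparing byte counts.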
Sean Chittenden <sean@chittenden.org> writes:
>> From sshd(8):
>      -k key_gen_time
>              Specifies how often the ephemeral protocol version 1 server
>              key is regenerated (default 3600 seconds, or one hour).

Hmmm.  But a server key isn't the same as a session key, is it?  Is this
an argument for renegotiating session keys at all?

In any case, you can pump a heck of a lot of data through ssh in an
hour.  Based on that, it sure looks to me like every-64K is a
ridiculously small setting.  If we were to crank it up to a few meg, the
performance issue would go away, and we'd not really need to think about
changing to a time-based criterion.

			regards, tom lane
> Yeah, I looked at mod_ssl before sending in my gripe.  AFAICT Apache
> *never* forces a renegotiation based on amount of data sent --- all that
> code is intended just to handle transitions between different webpages
> with different security settings.  So is that a precedent we can follow;
> or is it an optimization based on the assumption that not a lot of data
> will be transferred on any one web page?

How about a GUC variable:

ssl_renegotiation = 0      # no unnecessary renegotiation
ssl_renegotiation = 64000  # renegotiate every 64000 bytes

Chris
On Thu, 10 Apr 2003, Tom Lane wrote:

> So, questions for the group: where did the decision to renegotiate every
> 64K come from?  Do we need it at all?  Do we need it at such a short
> interval?  And if we do need it, shouldn't the logic be symmetric, so
> that renegotiations are forced during large input transfers as well as
> large output transfers?

Yes, you do want renegotiations, for two reasons.  One is that if you
use the same key over a long period of time, you offer too much
same-keyed cryptographic material to an attacker, and increase his
chances of a successful attack.  The second is that you limit the amount
of data that can be compromised should someone get hold of your current
key.  (Though if they've got that from your server, they've probably got
access to the database itself, too, so I wouldn't worry so much about
this.)

I don't actually know how often you should renegotiate, but I'd guess
that 64K is really very much not the right value.  It's probably not
enough for DES, and is way too much for anything else.  One hour seems
to be a popular session key renegotiation interval for SSH and IPSec;
why not start with that?  If you really are concerned, I can ask an
expert.

And yes, both ends should renegotiate.

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC
> >> From sshd(8):
>
> > -k key_gen_time
> >         Specifies how often the ephemeral protocol version 1 server key
> >         is regenerated (default 3600 seconds, or one hour).
>
> Hmmm.  But a server key isn't the same as a session key, is it?  Is this
> an argument for renegotiating session keys at all?

The server and client can kick off a key renegotiation.  Generally it's
left up to the client from what I can tell.  The key specified above is
the public key used before the session is encrypted, so that makes sense
to rekey... once encrypted, though, I don't think it's necessary to
rekey that often.  10MB would likely be a nice and conservative level
that should be outside of the scope of most PostgreSQL transactions.
-sc

-- 
Sean Chittenden
> It looks to me like the culprit is SSL renegotiation.  The server is
> currently programmed to force a renegotiation after every 64K of data
> transferred to or from the client.  However, the test to decide to do
> a renegotiation was placed only in SSL_write, so a large COPY-to-server
> escapes triggering the renegotiation except at the very end, whereas the
> COPY-to-file case is indeed executing a renegotiation about every 64K.
> Apparently, those renegotiations are horridly expensive.

BEA has a configuration parameter (ISL -- Interval for Session
Renegotiation) allowing you to specify the frequency in whole minutes,
the default being 0, i.e. renegotiation disabled.

http://www.ietf.org/rfc/rfc2246.txt

Dealing with session ID:

   F.1.4. Resuming sessions

      When a connection is established by resuming a session, new
      ClientHello.random and ServerHello.random values are hashed with
      the session's master_secret.  Provided that the master_secret has
      not been compromised and that the secure hash operations used to
      produce the encryption keys and MAC secrets are secure, the
      connection should be secure and effectively independent from
      previous connections.  Attackers cannot use known encryption keys
      or MAC secrets to compromise the master_secret without breaking
      the secure hash operations (which use both SHA and MD5).

      Sessions cannot be resumed unless both the client and server
      agree.  If either party suspects that the session may have been
      compromised, or that certificates may have expired or been
      revoked, it should force a full handshake.  An upper limit of 24
      hours is suggested for session ID lifetimes, since an attacker
      who obtains a master_secret may be able to impersonate the
      compromised party until the corresponding session ID is retired.
      Applications that may be run in relatively insecure environments
      should not write session IDs to stable storage.

http://www.ssl-technology.com/ssl_persistence.htm

It looks like IE5 (and up) will renegotiate the Session ID every two
minutes:

   "Beginning with IE5, Microsoft changed the behavior of their secure
   channel libraries to force a renegotiation of a new SSL session every
   two minutes.  This meant that all IE5+ users would change SSL Session
   ID every two minutes, breaking the only method of secure persistence
   available."

-- 
Rod Taylor <rbt@rbt.ca>

PGP Key: http://www.rbt.ca/rbtpub.asc
On Fri, 11 Apr 2003, Curt Sampson wrote:

> On Thu, 10 Apr 2003, Tom Lane wrote:
>
> > So, questions for the group: where did the decision to renegotiate every
> > 64K come from?  Do we need it at all?  Do we need it at such a short
> > interval?  And if we do need it, shouldn't the logic be symmetric, so
> > that renegotiations are forced during large input transfers as well as
> > large output transfers?
>
> Yes, you do want renegotiations, for two reasons.  One is that if you use
> the same key over a long period of time, you offer too much same-keyed
> cryptographic material to an attacker, and increase his chances of a
> successful attack.  The second is that you limit the amount of data that
> can be compromised should someone get hold of your current key.  (Though
> if they've got that from your server, they've probably got access to the
> database itself, too, so I wouldn't worry so much about this.)
>
> I don't actually know how often you should renegotiate, but I'd guess
> that 64K is really very much not the right value.  It's probably not
> enough for DES, and is way too much for anything else.  One hour seems to
> be a popular session key renegotiation interval for SSH and IPSec; why
> not start with that?

Ummm.  I'm not comfortable with using a time-based period for
renegotiation.  What can move in an hour from a P100 on ARCnet versus a
32-CPU Altix on switched fabric are two entirely different things.  If
there is a "sweet spot" for how often to renegotiate, it would be based
on amount.  Basing it on time introduces too much variability.  You'd
have to basically say that x bytes is as much as you should encrypt with
one key, then base time t on t = x/r, where r is the max rate you can
expect on a given server, and rate can vary too wildly.  In fact,
setting a time period of 5 minutes for a large server might well be too
seldom, and 30 minutes on the small slow Sparc IPC in the back room is
too often.

If it is a GUC then the user can adjust it.  I'm comfortable with that,
since there's a lot of variability to where PostgreSQL gets used and
what it gets used for.
"scott.marlowe" <scott.marlowe@ihs.com> writes:
> Ummm.  I'm not comfortable with using a time-based period for
> renegotiation.  What can move in an hour from a P100 on ARCnet versus a
> 32-CPU Altix on switched fabric are two entirely different things.  If
> there is a "sweet spot" for how often to renegotiate, it would be based
> on amount.

That's what I would think, too.  So we already have the right mechanism,
it's just a question of what the setting ought to be.

I realized this morning that there's probably a security tradeoff
involved: renegotiating the session key limits the amount of session
data encrypted with any one key, which is good; but each renegotiation
requires another use of the server key, increasing the odds that an
eavesdropper could break *that* (which'd let him into all sessions, not
just the one).  So a too-short renegotiation interval is not only
expensive time-wise, but could actually be a net loss for security.

I'm beginning to think we need to consult some experts to find out what
the right tradeoff is.

			regards, tom lane

PS: the sshd setting that was quoted refers to how often a new server
key is chosen, which is really independent of choosing new session keys.
Does our SSL code even have the facility to choose new server keys?  If
not, perhaps someone had better add it.
> Ummm.  I'm not comfortable with using a time-based period for
> renegotiation.

I think the time-based approach sees it more from the angle of the
attacker.  You don't want to leave him enough time to crack your
encryption and read happily on in real time, no?

Since some of the data is actually predictable (as with HTML), I think
you will actually want larger blocks, and not smaller.  Seems like a
tradeoff to me.  Most of this encryption stuff is actually only good for
delaying a skilled attacker.

Andreas
> "scott.marlowe" <scott.marlowe@ihs.com> writes:
> > Ummm.  I'm not comfortable with using a time-based period for
> > renegotiation.  What can move in an hour from a P100 on ARCnet versus
> > a 32-CPU Altix on switched fabric are two entirely different things.
> > If there is a "sweet spot" for how often to renegotiate, it would be
> > based on amount.

Well, I suspect that the amount is high enough that you might leave a
connection from, say, a web server using the same key for days on end if
you set a reasonably high amount.  Probably the ideal is to have a
maximum time and amount.  But as you point out, if it's a GUC variable,
it can be set by the user.

Personally, I would tend to go for a time limit over a size limit
because a size limit leaves an open-ended time, whereas a time limit
puts a definite limit on size as well.  (And a very powerful system
going full bore for a long period of time over a high-speed connection
is pretty rare.)

On Fri, 11 Apr 2003, Tom Lane wrote:

> I realized this morning that there's probably a security tradeoff
> involved: renegotiating the session key limits the amount of session
> data encrypted with any one key, which is good; but each renegotiation
> requires another use of the server key, increasing the odds that an
> eavesdropper could break *that* (which'd let him into all sessions not
> just the one).

This seems extremely low-risk to me; there's very little data
transferred using the server key.

> I'm beginning to think we need to consult some experts to find out what
> the right tradeoff is.

If you really want to know, yes.  I would think there would be a paper
or something out there, but I failed to dig one up.

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC
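[Editor's note: the "maximum time and amount" idea above combines the two criteria debated in this thread; whichever cap is hit first forces a renegotiation. The sketch below is hypothetical (names and defaults are illustrative only, borrowing the 10MB guess and sshd's hourly interval from earlier messages), not a proposed patch.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define RENEG_BYTES   (10u * 1024 * 1024)	/* size cap, per the 10MB guess */
#define RENEG_SECONDS 3600					/* time cap, per sshd's default */

typedef struct
{
	size_t		bytes;		/* traffic since the last renegotiation */
	time_t		started;	/* when the current session key took effect */
} RenegState;

/*
 * True when either cap is exceeded; resets both counters so the caller
 * can kick off the renegotiation.  Time caps the key's lifetime even on
 * an idle link; bytes cap it even on a very fast one.
 */
static bool
reneg_due(RenegState *st, size_t just_sent, time_t now)
{
	st->bytes += just_sent;
	if (st->bytes >= RENEG_BYTES || now - st->started >= RENEG_SECONDS)
	{
		st->bytes = 0;
		st->started = now;
		return true;
	}
	return false;
}
```

Either limit (or both) could then be exposed as a GUC, which addresses the P100-versus-Altix objection: each installation picks caps suited to its own throughput.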
Curt Sampson <cjs@cynic.net> writes:
> On Fri, 11 Apr 2003, Tom Lane wrote:
>> I realized this morning that there's probably a security tradeoff
>> involved: renegotiating the session key limits the amount of session
>> data encrypted with any one key, which is good; but each renegotiation
>> requires another use of the server key, increasing the odds that an
>> eavesdropper could break *that* (which'd let him into all sessions not
>> just the one).

> This seems extremely low-risk to me; there's very little data
> transferred using the server key.

Perhaps, but the downside if the server key is broken is much worse than
the loss if any one session key is broken.  Also, I don't know how
stylized the key-renegotiation exchange is --- there might be a
substantial known-plaintext risk there.  The fact that sshd thinks it
necessary to choose a new server key as often as once an hour indicates
to me that they consider the risks nonnegligible.

			regards, tom lane
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Fri, 11 Apr 2003, Tom Lane wrote:

> I realized this morning that there's probably a security tradeoff
> involved: renegotiating the session key limits the amount of session
> data encrypted with any one key, which is good; but each
> renegotiation requires another use of the server key, increasing the
> odds that an eavesdropper could break *that* (which'd let him into
> all sessions not just the one).
>
> So a too-short renegotiation interval is not only expensive
> time-wise, but could actually be a net loss for security.
>
> I'm beginning to think we need to consult some experts to find out
> what the right tradeoff is.

Late follow-up, but a data point for this: "Practical Cryptography"[0]
p. 82 suggests limiting CBC mode to 2^32 128-bit blocks and CTR mode to
2^60 blocks before rekeying, because of information leakage from
collisions (they warn against using OFB at all).  That gives us:

    2^32 blocks * 2^7 bits/block
    ---------------------------- = 64GB
          2^33 bits/GB

I'd add a fudge factor of a few powers of two in there for chattiness of
protocols and general paranoia, and suggest the cap on data transferred
before rekeying should be no higher than 1GB.  Pretty big limit, but
that's the only real suggestion I've found so far.  This doesn't address
the potential issue of more ciphertext making an attack on the key
easier, which could dramatically lower the safe bound.

The book is a relatively quick, entertaining, and very clear read on the
topic of actually implementing and using cryptosystems, and the degree
of conservatism they show is reassuring.

[0] Niels Ferguson, Bruce Schneier.  "Practical Cryptography".  Wiley
    Publishing, Inc., 2003.  ISBN 0-471-22357-3

- -- 
Jonathan Conway				rise@knavery.net

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.7 (GNU/Linux)
Comment: Made with pgp4pine 1.76

iD8DBQE+pJkPx9v8xy9f0yoRAhuHAJ96e4wYyfL6JYJFbg4qftjFDlMoLwCbBUy6
pFKlJs//AOkVRk+PQztiIFo=
=wJ5/
-----END PGP SIGNATURE-----