OpenSSL 1.0.1e cipher suites and TLS 1.2: more mixed signals than my ex-gf

I am currently building a hardened web server on Windows Server 2012 (I know, I know; if it were up to me it would be OpenBSD).

While researching the choice of cipher suites and getting up to speed on which are considered strongest and which are not, a few questions came up.

  • I understand that with OpenSSL 1.0.1e (or current TLS 1.2), block ciphers (in particular, AES and Camellia) are no longer vulnerable to timing side-channel attacks. Is that correct?

  • Given No. 1, is it now safe to say that block ciphers in CBC mode are safe again, even though several known but weak attacks chip away at them a bit?

  • SHA1 has known collisions, so SHA2-256 is the new minimum known-safe standard, right?

  • For all intents and purposes, RC4 is completely broken; don't use it. Is that a fair statement?

  • Ephemeral keys are the only way to achieve perfect forward secrecy with OpenSSL or TLS 1.2, right?

And finally: is there a mathematical or probabilistic reason to consider GCM more secure than CBC after the current round of OpenSSL updates?

Thanks guys; this is a lot of BS to shovel through on Google and the wikis, and I could not find a straight answer to these questions.

1 answer

Sorry for the format below. I will try to group things by topic, which means some questions are visited several times and out of order.


I understand that with OpenSSL 1.0.1e (or current TLS 1.2), block ciphers (in particular, AES and Camellia) are no longer vulnerable to timing side-channel attacks.

Well, OpenSSL branches on secret data, so it is susceptible to timing attacks (both local and over the network). I remember reading a paper on it, but I don't have the link at the moment. I believe Bernstein discusses it in the presentation below. I know one group of cryptographers is quite upset with OpenSSL because the project would not accept patches to fix some of the pain points.

For an approachable discussion of the subject, see Daniel Bernstein's Cryptography Worst Practices.

As far as I know, Bernstein's NaCl is the only library that attempts to remove all side channels. But NaCl is not a general-purpose SSL/TLS library.


I understand that with OpenSSL 1.0.1e (or current TLS 1.2), block ciphers (in particular, AES and Camellia) are no longer vulnerable to timing side-channel attacks.

SSL/TLS uses an Authenticate-then-Encrypt (AtE) scheme. The scheme is essentially:

 T = Auth(m)
 C = Enc(m || T)
 Send {C} to peer

Because the authentication tag is encrypted, the protocol must decrypt the message before it can verify the message's authenticity. That is where Serge Vaudenay's padding oracles come in, and it is what Duong and Rizzo's BEAST exploited. It happens because the ciphertext is used before it is authenticated.

It is fine to use Authenticate-then-Encrypt, but the details can be tricky to get right. If it is used with a stream cipher that XORs in a keystream, then it is usually OK. If it is used with a block cipher, then you have to be careful. A formal treatment can be found in Hugo Krawczyk's The Order of Encryption and Authentication for Protecting Communications.

OpenSSL and other libraries have patched the recent padding-oracle attacks on block ciphers.

Stream ciphers are not really viable, since RC4 is not suitable for use in TLS. See the answer to question 4.

This is why SSL/TLS was in such a bad way around 2011 or so: both the stream cipher and the block ciphers were broken, and we had no good choice/alternative to use. (Most people chose RC4 over the block ciphers.)

Going forward, you should expect more side-channel attacks due to architectural flaws in the protocol and defects in implementations.


For completeness, this is what you would want in an ideal world. It is the Encrypt-then-Authenticate (EtA) scheme, and it is used by IPsec. The IETF refuses to specify it for SSL/TLS, even though it is provably secure under generic composition (see Krawczyk's paper):

 C = Enc(m)
 T = Auth(C)
 Send {C || T} to peer

In the scheme above, the peer rejects any ciphertext C that does not authenticate against the authentication tag T. A padding oracle is never revealed because decryption is never performed on unauthenticated data.

There is now an IETF draft for using Encrypt-then-Authenticate in SSL/TLS. See Peter Gutmann's Encrypt-then-MAC for TLS and DTLS.

And for even more completeness, this is what SSH does. It is an Encrypt-and-Authenticate (E&A) scheme, and it too uses the ciphertext before it is authenticated:

 C = Enc(m)
 T = Auth(m)
 Send {C || T} to peer

Because the authentication tag is computed over the plaintext, the ciphertext must be decrypted to recover the plaintext. This means the ciphertext is used before it is authenticated.

A programmer-friendly treatment of authenticated encryption is available in Code Project's Authenticated Encryption article.


For all intents and purposes, RC4 is completely broken; don't use it. Is that a fair statement?

It's not completely broken, but its biases are a real problem in TLS. From AlFardan, Bernstein, et al., On the Security of RC4 in TLS and WPA:

 ... While the RC4 algorithm is known to have a variety of cryptographic weaknesses (see [26] for an excellent survey), it has not been previously explored how these weaknesses can be exploited in the context of TLS. Here we show that new and recently discovered biases in the RC4 keystream do create serious vulnerabilities in TLS when using RC4 as its encryption algorithm. 

Given No. 1, is it now safe to say that block ciphers in CBC mode are safe again, even though several known but weak attacks chip away at them a bit?

Well, that is a matter of your tolerance for (or aversion to) risk. If you accept that RC4 is not suitable for use in TLS, that leaves only block ciphers. But we know OpenSSL still suffers side-channel attacks when using block ciphers, because it branches on secret key material. So pick your poison.


Given No. 1, is it now safe to say that block ciphers in CBC mode are safe again, even though several known but weak attacks chip away at them a bit?

And the block cipher is not the only vector. An attacker could recover the AES (or Camellia) key during key transport. See Bleichenbacher's "million message" attack on RSA, and its later improvement that brings the attack down to about 15,000 messages. This means a busy site might have to change its long-term signature/encryption key every 10 minutes or so.

There are also other side-channel/oracle attacks, such as Duong and Rizzo's compression attacks. Their attacks target both the socket layer (CRIME) and the application layer (BREACH), and apply to HTTPS and related protocols such as SPDY.


SHA1 has known collisions, so SHA2-256 is the new minimum known-safe standard, right?

It depends on how you use it and whom you ask. If you use it as a pseudo-random function (PRF), for example in a random number generator, then SHA1 is OK to use. If you use it where collision resistance is required, such as in a digital signature, then SHA1 is below its theoretical security level of 2^80. In fact, we know it is closer to 2^60 thanks to Marc Stevens (see HashClash). That means it may be within reach of some attackers.

For SSL/TLS, SHA1 only needs to be collision resistant for as long as an SSL/TLS record is in flight over the air or on the wire. 2^60 is probably good enough for that, because the window is short (on the order of the network's 2MSL). In other words, an attacker probably cannot forge a network packet in under 2 minutes or so.

On the other hand, you probably want X509 certificates that use SHA256, because their lifetime is nearly unbounded, unlike a TLS record.

The nearly unbounded lifetime of a long-lived X509 certificate is effectively why FLAME worked. The attackers had virtually unlimited time to find an MD5 chosen-prefix collision against Microsoft's Terminal Server certificate. Once found, it could be used against any Microsoft box running the service.


SHA1 has known collisions, so SHA2-256 is the new minimum known-safe standard, right?

Several bodies publish these minimum standards, and 112 bits of security seems to be the consistent minimum. For the usual algorithms, that means:

  • DH-2048
  • RSA-2048
  • SHA-224
  • 3-key TDEA (Triple DES)
  • AES128 (or higher)

Those are the customary US algorithms. You can also use Camellia (an AES equivalent) and Whirlpool (a SHA equivalent) if desired. 2-key TDEA provides 80 bits of security and should not be used. (3-key TDEA uses 24-byte keys, while 2-key uses 16-byte keys. SSL/TLS specifies the 24-byte variety in RFC 2246.)

There are also minimum sizes for elliptic curves, based on the size of the prime field or the characteristic of the binary field. For example, if you want an elliptic curve over a prime field at 112-bit security, then you would use P-224 or higher (or binary fields of size 233 or higher).

I think a good read on the subject is Crypto++'s Security Levels page. It discusses security levels and calls out standards bodies such as ECRYPT (Europe), ISO/IEC (worldwide), NESSIE (Europe) and NIST (US).

Even the NSA publishes a minimum security level. Theirs is 128 bits (rather than 112) for SECRET, and the algorithms are specified in Suite B.


SHA1 has known collisions, so SHA2-256 is the new minimum known-safe standard, right?

You have to be careful when dropping SHA1. Doing so can remove all the common TLSv1.0 algorithms, which could affect interoperability. Personally, I would love to bury TLSv1.0, but I think it is needed for interop, because so few clients and servers fully implement TLSv1.1 and TLSv1.2.

In addition, Ubuntu 12.04 LTS disables TLSv1.1 and TLSv1.2. So you will want the best hash you can use under TLSv1.0, and I believe that is SHA1. See Ubuntu 12.04 LTS: OpenSSL downlevel version is 1.0.0, and does not support TLS 1.2.

In that case, you can probably still use SHA1, but push it down in the list of preferred ciphers.


Ephemeral keys are the only way to achieve perfect forward secrecy with OpenSSL or TLS 1.2, right?

Ephemeral key exchanges provide Perfect Forward Secrecy (PFS). That means if the long-term signing key is compromised, past sessions are not put at risk. That is, an attacker cannot recover the plaintext of past sessions.

Ephemeral key exchanges first appeared in SSL 3.0. Here are the algorithms of interest (I am omitting the RC4 and MD5 variants because I don't use them):

  • EDH-DSS-DES-CBC3-SHA
  • EDH-RSA-DES-CBC3-SHA

TLS 1.0 added the following:

  • DHE-DSS-AES256-SHA
  • DHE-RSA-AES256-SHA
  • DHE-DSS-AES128-SHA
  • DHE-RSA-AES128-SHA

TLS 1.1 has not added any algorithms.

TLS 1.2 added the following:

  • ECDHE-ECDSA-AES256-GCM-SHA384
  • ECDHE-RSA-AES256-GCM-SHA384
  • ECDHE-ECDSA-AES128-GCM-SHA256
  • ECDHE-RSA-AES128-GCM-SHA256
  • DHE-DSS-AES256-GCM-SHA384
  • DHE-RSA-AES256-GCM-SHA384
  • DHE-DSS-AES128-GCM-SHA256
  • DHE-RSA-AES128-GCM-SHA256

Ephemeral keys are the only way to achieve perfect forward secrecy with OpenSSL or TLS 1.2, right?

There are ways to spoil PFS even when using ephemeral key exchanges. For example, session resumption requires retaining the premaster secret. Retaining premaster secrets to perform a next = Hash(current) style of resumption destroys the property.

In an Apache server farm, it also means the premaster secret gets written to disk. (Last time I checked, Apache could not distribute it in memory to the servers in the farm.)


Is there a mathematical or probabilistic reason to consider GCM more secure than CBC after the current round of OpenSSL updates?

GCM is a streaming mode, so it does not suffer from the padding attacks. However, some claim you are trading the devil you know for the devil you don't. See, for example, the Real World Cryptography Workshop discussion on the cryptography mailing list.


Related: by selecting the cipher suites (e.g., ECDHE-RSA-AES256-GCM-SHA384), you control the algorithms (e.g., ECDHE, RSA, AES256, SHA384), and you indirectly control the protocol (e.g., TLSv1.2).

In OpenSSL, it is a three-step process to manage these things (step 2 below is optional):

  • Remove the broken/wounded protocols. Use the SSLv23_method method, then call SSL_CTX_set_options with SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3. That gives you TLSv1.0 and above.
  • SSL_OP_NO_COMPRESSION is also a good idea because of CRIME. You still have to watch for compression leaks at higher layers, due to BREACH, for protocols such as SPDY and HTTP.
  • Choose your cipher suites, then set them with SSL_set_cipher_list. Do not use cipher suites with algorithms such as RC4 or MD5. An example string that removes RC4 and MD5 (and other lesser ciphers) is shown below.
  • On the server, do not let the client choose a weak or wounded cipher (by default, per the RFC, the server honors the client's preference). Take the choice away from the client by having the server choose, using SSL_CTX_set_options with SSL_OP_CIPHER_SERVER_PREFERENCE.

Here is what the code looks like to set the cipher list, controlling both cipher suites and protocols:

 const char* const PREFERRED_CIPHERS =
     "kEECDH:kEDH:kRSA:AESGCM:AES256:AES128:3DES:"
     "!MD5:!RC4:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM:!ADH:!AECDH";

 int res = SSL_set_cipher_list(ssl, PREFERRED_CIPHERS);
 if (1 != res) handleFailure();

Here is another way to set the cipher list:

 const char* const PREFERRED_CIPHERS =
     /* TLS 1.2 only */
     "ECDHE-ECDSA-AES256-GCM-SHA384:"
     "ECDHE-RSA-AES256-GCM-SHA384:"
     "ECDHE-ECDSA-AES128-GCM-SHA256:"
     "ECDHE-RSA-AES128-GCM-SHA256:"
     /* TLS 1.2 only */
     "DHE-DSS-AES256-GCM-SHA384:"
     "DHE-RSA-AES256-GCM-SHA384:"
     "DHE-DSS-AES128-GCM-SHA256:"
     "DHE-RSA-AES128-GCM-SHA256:"
     /* TLS 1.0 and above */
     "DHE-DSS-AES256-SHA:"
     "DHE-RSA-AES256-SHA:"
     "DHE-DSS-AES128-SHA:"
     "DHE-RSA-AES128-SHA:"
     /* SSL 3.0 and TLS 1.0 */
     "EDH-DSS-DES-CBC3-SHA:"
     "EDH-RSA-DES-CBC3-SHA:"
     "DH-DSS-DES-CBC3-SHA:"
     "DH-RSA-DES-CBC3-SHA";

Related: according to Steffen, F5 BIG-IP load balancers have a bug where they reject a ClientHello that is too large. That is another reason to limit the number of cipher suites offered, since each suite takes two bytes. For example, you can shrink the cipher-suite list from 160 bytes (80 cipher suites) to about 30 bytes (about 15 cipher suites).

Below is the ClientHello from openssl s_client -connect www.google.com:443, viewed in Wireshark. Note the 79 cipher suites.

[Wireshark trace of ClientHello]


Related: Apple has a bug in its TLSv1.2 code where Safari cannot negotiate the ECDHE-ECDSA ciphers it advertises. The bug is present in OS X 10.8 through 10.8.3, and is claimed to be fixed in OS X 10.8.4. Apple has not said whether it will back-port the fix to the affected versions of its SecureTransport, so 10.8 through 10.8.3 may remain broken. Some versions of iOS are likely affected as well.

Be sure to use OpenSSL's SSL_OP_SAFARI_ECDHE_ECDSA_BUG as a context option. See the SSL_OP_SAFARI_ECDHE_ECDSA_BUG documentation and the related mailing-list discussion for details.


Related: there are still devils in the details with OpenSSL and elliptic curves. Currently, you cannot specify the fields when using OpenSSL 1.0.1e. So you can end up with a weak curve (for example, P-160, at 80 bits of security) while negotiating AES256 (256 bits of security). Obviously, your attacker will go after the weak curve rather than the strong block cipher. That is why it is important to match security levels.

If you want matched security levels, you will have to patch OpenSSL's t1_lib.c around line 1690 to ensure pref_list does not include the weaker curves.

OpenSSL 1.0.2 lets you set the elliptic curves. See SSL_CTX_set1_curves.


Related: PSK is preshared key, and SRP is Secure Remote Password. I see a lot of people removing PSK and SRP as if they were bad ciphers (for example, a preferred-cipher list that includes "!PSK:!SRP"). In practice they are usually not needed, because no one uses them, but they are not bad like RC4 or MD5. In fact, they should be preferred, since they have desirable security properties.

PSK and SRP are most useful for applications that use passwords or shared secrets. They are desirable because they provide mutual authentication of client and server, and they do not suffer man-in-the-middle interception. That is, either both the client and the server know the password or secret and channel setup succeeds, or one (or both) does not know it and channel setup fails. They never put the username or password on the wire in the plain, so a server compromise or a man in the middle gains nothing from the attempt.

PSK and SRP have the "channel binding" property, which is an important security feature. In classic RSA-based SSL/TLS, a secure channel is set up, and then the user's password is pushed down the wire in a disjoint step. If the client errs and sets up the channel with a rogue server, the username and password are surrendered to the rogue server. Here the authentication mechanism is a disjoint step, "unbound" from the workings of the layer below. PSK and SRP do not suffer from unbound channels, because the binding is built into the protocol.


Source: https://habr.com/ru/post/948818/

