More KRACK patches

Tad 2017-11-02 16:48:37 -04:00
parent c3c75e7b73
commit 6cb184876a
4 changed files with 671 additions and 0 deletions


@@ -0,0 +1,631 @@
From 3f83649ff66900caab28f576b5169e011afe1580 Mon Sep 17 00:00:00 2001
From: James Yonan <james@openvpn.net>
Date: Thu, 26 Sep 2013 02:20:39 -0600
Subject: [PATCH 1/3] crypto: crypto_memneq - add equality testing of memory
regions w/o timing leaks

When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it is
important not to leak timing information to a potential attacker, e.g.
when communication happens over a network.

Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g.,
on x86_64/i5 for 512 bytes this can be ~50 cycles for a full mismatch
and up to ~850 cycles for a full match (cold). This early-return
behavior can leak timing information as a side channel, allowing an
attacker to iteratively guess the correct result.

This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). In other words, best-case
and worst-case behaviour take the same amount of time to complete, in
contrast to memcmp.

Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use cases within the crypto API.
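
For illustration, a minimal user-space sketch (not part of the patch)
contrasting an early-return comparison with the XOR-accumulate
technique crypto_memneq uses; note that a real implementation must also
keep the compiler from undoing this, which is what the -Os build flag
below (and the optimization barrier of a later patch) addresses:

    #include <stddef.h>

    /* Early return: runtime depends on where the first mismatch
     * occurs, which is exactly the timing leak described above. */
    static int leaky_eq(const unsigned char *a, const unsigned char *b,
                        size_t n)
    {
            size_t i;

            for (i = 0; i < n; i++)
                    if (a[i] != b[i])
                            return 0;
            return 1;
    }

    /* XOR-accumulate: every byte is always touched, so best and worst
     * case take (roughly) the same time. Returns nonzero if the
     * regions differ, mirroring crypto_memneq semantics. */
    static unsigned long ct_memneq(const unsigned char *a,
                                   const unsigned char *b, size_t n)
    {
            unsigned long neq = 0;
            size_t i;

            for (i = 0; i < n; i++)
                    neq |= a[i] ^ b[i];
            return neq;
    }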

We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.

crypto_memneq is declared noinline, placed in its own source file, and
compiled with -Os, because a smart compiler (or LTO) might otherwise
notice that the return value is always compared against zero/nonzero
and reintroduce the very early-return optimization that we are trying
to avoid.

Disabling optimization with #pragma or __attribute__ annotations was
avoided, as that mechanism seems to have been considered broken or
unmaintained in GCC for a long time [1]. We therefore work around it by
specifying the compile flag for memneq.o directly in the Makefile,
which seems to be the most appropriate approach.

Since memneq.o is built with -Os, this patch also provides a loop-free
fast path for the frequently used 16-byte digest size. As with the
kernel library string functions, the door is left open for even further
optimized, architecture-specific assembler implementations in the
future.

This was joint work of James Yonan and Daniel Borkmann. Thanks also to
Florian Weimer for feedback on this and earlier proposals [2].

[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131

Change-Id: Ib2f5b485e28bd274b6d26b945e91acdf3bec8674
Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/Makefile | 7 ++-
crypto/authenc.c | 6 +--
crypto/authencesn.c | 8 +--
crypto/ccm.c | 4 +-
crypto/gcm.c | 2 +-
crypto/memneq.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++
include/crypto/algapi.h | 18 ++++++-
7 files changed, 172 insertions(+), 12 deletions(-)
create mode 100644 crypto/memneq.c
diff --git a/crypto/Makefile b/crypto/Makefile
index 30f33d67533..ae3684d16f3 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -2,8 +2,13 @@
# Cryptographic API
#
+# memneq MUST be built with -Os or -O0 to prevent early-return optimizations
+# that will defeat memneq's actual purpose to prevent timing attacks.
+CFLAGS_REMOVE_memneq.o := -O1 -O2 -O3
+CFLAGS_memneq.o := -Os
+
obj-$(CONFIG_CRYPTO) += crypto.o
-crypto-y := api.o cipher.o compress.o
+crypto-y := api.o cipher.o compress.o memneq.o
obj-$(CONFIG_CRYPTO_WORKQUEUE) += crypto_wq.o
diff --git a/crypto/authenc.c b/crypto/authenc.c
index 5ef7ba6b6a7..5ea49b331a7 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -188,7 +188,7 @@ static void authenc_verify_ahash_update_done(struct crypto_async_request *areq,
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
@@ -227,7 +227,7 @@ static void authenc_verify_ahash_done(struct crypto_async_request *areq,
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
@@ -462,7 +462,7 @@ static int crypto_authenc_verify(struct aead_request *req,
ihash = ohash + authsize;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0;
}
static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
diff --git a/crypto/authencesn.c b/crypto/authencesn.c
index 136b68b9d8d..9f9a03c0c27 100644
--- a/crypto/authencesn.c
+++ b/crypto/authencesn.c
@@ -247,7 +247,7 @@ static void authenc_esn_verify_ahash_update_done(struct crypto_async_request *areq,
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
@@ -296,7 +296,7 @@ static void authenc_esn_verify_ahash_update_done2(struct crypto_async_request *areq,
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
@@ -336,7 +336,7 @@ static void authenc_esn_verify_ahash_done(struct crypto_async_request *areq,
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
@@ -568,7 +568,7 @@ static int crypto_authenc_esn_verify(struct aead_request *req)
ihash = ohash + authsize;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0;
}
static int crypto_authenc_esn_iverify(struct aead_request *req, u8 *iv,
diff --git a/crypto/ccm.c b/crypto/ccm.c
index 32fe1bb5dec..44fbe81ff2c 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -363,7 +363,7 @@ static void crypto_ccm_decrypt_done(struct crypto_async_request *areq,
if (!err) {
err = crypto_ccm_auth(req, req->dst, cryptlen);
- if (!err && memcmp(pctx->auth_tag, pctx->odata, authsize))
+ if (!err && crypto_memneq(pctx->auth_tag, pctx->odata, authsize))
err = -EBADMSG;
}
aead_request_complete(req, err);
@@ -422,7 +422,7 @@ static int crypto_ccm_decrypt(struct aead_request *req)
return err;
/* verify */
- if (memcmp(authtag, odata, authsize))
+ if (crypto_memneq(authtag, odata, authsize))
return -EBADMSG;
return err;
diff --git a/crypto/gcm.c b/crypto/gcm.c
index 1a252639ef9..57153f9a1c2 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -575,7 +575,7 @@ static int crypto_gcm_verify(struct aead_request *req,
crypto_xor(auth_tag, iauth_tag, 16);
scatterwalk_map_and_copy(iauth_tag, req->src, cryptlen, authsize, 0);
- return memcmp(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
}
static void gcm_decrypt_done(struct crypto_async_request *areq, int err)
diff --git a/crypto/memneq.c b/crypto/memneq.c
new file mode 100644
index 00000000000..40dfa50d39b
--- /dev/null
+++ b/crypto/memneq.c
@@ -0,0 +1,139 @@
+/*
+ * Constant-time equality testing of memory regions.
+ *
+ * Authors:
+ *
+ * James Yonan <james@openvpn.net>
+ * Daniel Borkmann <dborkman@redhat.com>
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of OpenVPN Technologies nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <crypto/algapi.h>
+#include <linux/export.h>
+
+#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
+
+/* Generic path for arbitrary size */
+static inline unsigned long
+__crypto_memneq_generic(const void *a, const void *b, size_t size)
+{
+ unsigned long neq = 0;
+
+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+ while (size >= sizeof(unsigned long)) {
+ neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+ a += sizeof(unsigned long);
+ b += sizeof(unsigned long);
+ size -= sizeof(unsigned long);
+ }
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+ while (size > 0) {
+ neq |= *(unsigned char *)a ^ *(unsigned char *)b;
+ a += 1;
+ b += 1;
+ size -= 1;
+ }
+ return neq;
+}
+
+/* Loop-free fast-path for frequently used 16-byte size */
+static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
+{
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ if (sizeof(unsigned long) == 8)
+ return ((*(unsigned long *)(a) ^ *(unsigned long *)(b))
+ | (*(unsigned long *)(a+8) ^ *(unsigned long *)(b+8)));
+ else if (sizeof(unsigned int) == 4)
+ return ((*(unsigned int *)(a) ^ *(unsigned int *)(b))
+ | (*(unsigned int *)(a+4) ^ *(unsigned int *)(b+4))
+ | (*(unsigned int *)(a+8) ^ *(unsigned int *)(b+8))
+ | (*(unsigned int *)(a+12) ^ *(unsigned int *)(b+12)));
+ else
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+ return ((*(unsigned char *)(a) ^ *(unsigned char *)(b))
+ | (*(unsigned char *)(a+1) ^ *(unsigned char *)(b+1))
+ | (*(unsigned char *)(a+2) ^ *(unsigned char *)(b+2))
+ | (*(unsigned char *)(a+3) ^ *(unsigned char *)(b+3))
+ | (*(unsigned char *)(a+4) ^ *(unsigned char *)(b+4))
+ | (*(unsigned char *)(a+5) ^ *(unsigned char *)(b+5))
+ | (*(unsigned char *)(a+6) ^ *(unsigned char *)(b+6))
+ | (*(unsigned char *)(a+7) ^ *(unsigned char *)(b+7))
+ | (*(unsigned char *)(a+8) ^ *(unsigned char *)(b+8))
+ | (*(unsigned char *)(a+9) ^ *(unsigned char *)(b+9))
+ | (*(unsigned char *)(a+10) ^ *(unsigned char *)(b+10))
+ | (*(unsigned char *)(a+11) ^ *(unsigned char *)(b+11))
+ | (*(unsigned char *)(a+12) ^ *(unsigned char *)(b+12))
+ | (*(unsigned char *)(a+13) ^ *(unsigned char *)(b+13))
+ | (*(unsigned char *)(a+14) ^ *(unsigned char *)(b+14))
+ | (*(unsigned char *)(a+15) ^ *(unsigned char *)(b+15)));
+}
+
+/* Compare two areas of memory without leaking timing information,
+ * and with special optimizations for common sizes. Users should
+ * not call this function directly, but should instead use
+ * crypto_memneq defined in crypto/algapi.h.
+ */
+noinline unsigned long __crypto_memneq(const void *a, const void *b,
+ size_t size)
+{
+ switch (size) {
+ case 16:
+ return __crypto_memneq_16(a, b);
+ default:
+ return __crypto_memneq_generic(a, b, size);
+ }
+}
+EXPORT_SYMBOL(__crypto_memneq);
+
+#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 418d270e180..e73c19e90e3 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -386,5 +386,21 @@ static inline int crypto_requires_sync(u32 type, u32 mask)
return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
}
-#endif /* _CRYPTO_ALGAPI_H */
+noinline unsigned long __crypto_memneq(const void *a, const void *b, size_t size);
+
+/**
+ * crypto_memneq - Compare two areas of memory without leaking
+ * timing information.
+ *
+ * @a: One area of memory
+ * @b: Another area of memory
+ * @size: The size of the area.
+ *
+ * Returns 0 when data is equal, 1 otherwise.
+ */
+static inline int crypto_memneq(const void *a, const void *b, size_t size)
+{
+ return __crypto_memneq(a, b, size) != 0UL ? 1 : 0;
+}
+#endif /* _CRYPTO_ALGAPI_H */
--
2.15.0
From cc5616af91b2ac309c62afe2b784fde49f92ff18 Mon Sep 17 00:00:00 2001
From: Cesar Eduardo Barros <cesarb@cesarb.eti.br>
Date: Mon, 25 Nov 2013 22:00:41 -0200
Subject: [PATCH 2/3] crypto: more robust crypto_memneq

Disabling compiler optimizations can be fragile, since a new
optimization could be added to -O0 or -Os that breaks the assumptions
the code is making.

Instead of disabling compiler optimizations, use a dummy inline
assembly statement (based on RELOC_HIDE) to block the problematic kinds
of optimization, while still allowing other optimizations to be applied
to the code.

The dummy inline assembly is added after every OR, and has the
accumulator variable as its input and output. The compiler is forced to
assume that the dummy inline assembly could both depend on the
accumulator variable and change the accumulator variable, so it is
forced to compute the value correctly before the inline assembly, and
cannot assume anything about its value after the inline assembly.
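
To make the mechanism concrete, a minimal sketch (GCC-style inline
assembly, mirroring the OPTIMIZER_HIDE_VAR macro this patch adds):

    /* The empty asm claims to consume 'var' in a register and produce
     * a new value in the same register ("0" ties the input to the
     * output). The compiler must compute 'var' fully before this point
     * and may assume nothing about it afterwards, so it cannot prove
     * the accumulator is already nonzero and short-circuit the rest. */
    #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

    unsigned long accumulate(unsigned long neq, unsigned long diff)
    {
            neq |= diff;
            OPTIMIZER_HIDE_VAR(neq);
            return neq;
    }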

This change should be enough to make crypto_memneq work correctly (with
data-independent timing) even if it is inlined at its call sites. That
can be done later in a followup patch.

Compile-tested on x86_64.

Change-Id: Ib82641bedec576d2be3793db4d8da36a4ccbbe75
Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.eti.br>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/Makefile | 5 ---
crypto/memneq.c | 79 +++++++++++++++++++++++++++++-------------
include/linux/compiler-gcc.h | 3 ++
include/linux/compiler-intel.h | 7 ++++
include/linux/compiler.h | 4 +++
5 files changed, 68 insertions(+), 30 deletions(-)
diff --git a/crypto/Makefile b/crypto/Makefile
index ae3684d16f3..4c75316f7d6 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -2,11 +2,6 @@
# Cryptographic API
#
-# memneq MUST be built with -Os or -O0 to prevent early-return optimizations
-# that will defeat memneq's actual purpose to prevent timing attacks.
-CFLAGS_REMOVE_memneq.o := -O1 -O2 -O3
-CFLAGS_memneq.o := -Os
-
obj-$(CONFIG_CRYPTO) += crypto.o
crypto-y := api.o cipher.o compress.o memneq.o
diff --git a/crypto/memneq.c b/crypto/memneq.c
index 40dfa50d39b..a285a744bc7 100644
--- a/crypto/memneq.c
+++ b/crypto/memneq.c
@@ -73,6 +73,7 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size)
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
while (size >= sizeof(unsigned long)) {
neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+ OPTIMIZER_HIDE_VAR(neq);
a += sizeof(unsigned long);
b += sizeof(unsigned long);
size -= sizeof(unsigned long);
@@ -80,6 +81,7 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size)
#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
while (size > 0) {
neq |= *(unsigned char *)a ^ *(unsigned char *)b;
+ OPTIMIZER_HIDE_VAR(neq);
a += 1;
b += 1;
size -= 1;
@@ -90,33 +92,60 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size)
/* Loop-free fast-path for frequently used 16-byte size */
static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
{
+ unsigned long neq = 0;
+
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
- if (sizeof(unsigned long) == 8)
- return ((*(unsigned long *)(a) ^ *(unsigned long *)(b))
- | (*(unsigned long *)(a+8) ^ *(unsigned long *)(b+8)));
- else if (sizeof(unsigned int) == 4)
- return ((*(unsigned int *)(a) ^ *(unsigned int *)(b))
- | (*(unsigned int *)(a+4) ^ *(unsigned int *)(b+4))
- | (*(unsigned int *)(a+8) ^ *(unsigned int *)(b+8))
- | (*(unsigned int *)(a+12) ^ *(unsigned int *)(b+12)));
- else
+ if (sizeof(unsigned long) == 8) {
+ neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+ OPTIMIZER_HIDE_VAR(neq);
+ } else if (sizeof(unsigned int) == 4) {
+ neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+ OPTIMIZER_HIDE_VAR(neq);
+ } else {
#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
- return ((*(unsigned char *)(a) ^ *(unsigned char *)(b))
- | (*(unsigned char *)(a+1) ^ *(unsigned char *)(b+1))
- | (*(unsigned char *)(a+2) ^ *(unsigned char *)(b+2))
- | (*(unsigned char *)(a+3) ^ *(unsigned char *)(b+3))
- | (*(unsigned char *)(a+4) ^ *(unsigned char *)(b+4))
- | (*(unsigned char *)(a+5) ^ *(unsigned char *)(b+5))
- | (*(unsigned char *)(a+6) ^ *(unsigned char *)(b+6))
- | (*(unsigned char *)(a+7) ^ *(unsigned char *)(b+7))
- | (*(unsigned char *)(a+8) ^ *(unsigned char *)(b+8))
- | (*(unsigned char *)(a+9) ^ *(unsigned char *)(b+9))
- | (*(unsigned char *)(a+10) ^ *(unsigned char *)(b+10))
- | (*(unsigned char *)(a+11) ^ *(unsigned char *)(b+11))
- | (*(unsigned char *)(a+12) ^ *(unsigned char *)(b+12))
- | (*(unsigned char *)(a+13) ^ *(unsigned char *)(b+13))
- | (*(unsigned char *)(a+14) ^ *(unsigned char *)(b+14))
- | (*(unsigned char *)(a+15) ^ *(unsigned char *)(b+15)));
+ neq |= *(unsigned char *)(a) ^ *(unsigned char *)(b);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+1) ^ *(unsigned char *)(b+1);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+2) ^ *(unsigned char *)(b+2);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+3) ^ *(unsigned char *)(b+3);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+4) ^ *(unsigned char *)(b+4);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+5) ^ *(unsigned char *)(b+5);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+6) ^ *(unsigned char *)(b+6);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+7) ^ *(unsigned char *)(b+7);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+8) ^ *(unsigned char *)(b+8);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+9) ^ *(unsigned char *)(b+9);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
+ OPTIMIZER_HIDE_VAR(neq);
+ neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
+ OPTIMIZER_HIDE_VAR(neq);
+ }
+
+ return neq;
}
/* Compare two areas of memory without leaking timing information,
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index e5834aa24b9..8c999ad4545 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -34,6 +34,9 @@
__asm__ ("" : "=r"(__ptr) : "0"(ptr)); \
(typeof(ptr)) (__ptr + (off)); })
+/* Make the optimizer believe the variable can be manipulated arbitrarily. */
+#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
+
#ifdef __CHECKER__
#define __must_be_array(arr) 0
#else
diff --git a/include/linux/compiler-intel.h b/include/linux/compiler-intel.h
index d8e636e5607..966fa6820d9 100644
--- a/include/linux/compiler-intel.h
+++ b/include/linux/compiler-intel.h
@@ -15,6 +15,7 @@
*/
#undef barrier
#undef RELOC_HIDE
+#undef OPTIMIZER_HIDE_VAR
#define barrier() __memory_barrier()
@@ -23,6 +24,12 @@
__ptr = (unsigned long) (ptr); \
(typeof(ptr)) (__ptr + (off)); })
+/* This should act as an optimization barrier on var.
+ * Given that this compiler does not have inline assembly, a compiler barrier
+ * is the best we can do.
+ */
+#define OPTIMIZER_HIDE_VAR(var) barrier()
+
/* Intel ECC compiler doesn't support __builtin_types_compatible_p() */
#define __must_be_array(a) 0
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 923d093c9ce..a8ef3ca7af2 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -164,6 +164,10 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
(typeof(ptr)) (__ptr + (off)); })
#endif
+#ifndef OPTIMIZER_HIDE_VAR
+#define OPTIMIZER_HIDE_VAR(var) barrier()
+#endif
+
#endif /* __KERNEL__ */
#endif /* __ASSEMBLY__ */
--
2.15.0
From df07b54fc3e647ef666b3d3c6da4badfba273ad9 Mon Sep 17 00:00:00 2001
From: Daniel Borkmann <dborkman@redhat.com>
Date: Fri, 6 Dec 2013 00:33:33 +0100
Subject: [PATCH 3/3] crypto: memneq - fix for archs without efficient
unaligned access

Commit fe8c8a126806 introduced a possible build error for archs
that do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set.
Fix this up by bringing the else braces outside of the #ifdef.
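
The problem and fix, sketched (simplified from the three-way chain in
memneq.c): with "} else {" straddling the #ifdef, a configuration
without the symbol sees an unmatched closing brace; moving the braces
outside turns the fallback path into a bare block that compiles either
way:

    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
            if (sizeof(unsigned long) == 8) {
                    /* word-at-a-time path */
            } else
    #endif
            {
                    /* byte-at-a-time path: with the symbol unset this
                     * is simply a bare block, so braces stay balanced */
            }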
Change-Id: I08195a468653062a87eaaa01031b6ee6ab8c7508
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Fixes: fe8c8a126806 ("crypto: more robust crypto_memneq")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-By: Cesar Eduardo Barros <cesarb@cesarb.eti.br>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/memneq.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/crypto/memneq.c b/crypto/memneq.c
index a285a744bc7..3cfae80ed48 100644
--- a/crypto/memneq.c
+++ b/crypto/memneq.c
@@ -109,8 +109,9 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
OPTIMIZER_HIDE_VAR(neq);
neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
OPTIMIZER_HIDE_VAR(neq);
- } else {
+ } else
#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+ {
neq |= *(unsigned char *)(a) ^ *(unsigned char *)(b);
OPTIMIZER_HIDE_VAR(neq);
neq |= *(unsigned char *)(a+1) ^ *(unsigned char *)(b+1);
--
2.15.0


@@ -0,0 +1,36 @@
From 706ccb5adc54e349c491ebeb462c121d6467c863 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Tue, 17 Oct 2017 20:32:07 +0200
Subject: [PATCH] mac80211: use constant time comparison with keys

Otherwise we risk leaking information via a timing side channel.
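
The negated test is a drop-in replacement because memcmp() and
crypto_memneq() both return zero exactly when the regions are equal;
only the timing behaviour changes. A hypothetical helper (not part of
the patch) to illustrate:

    #include <crypto/algapi.h>      /* crypto_memneq() */
    #include <linux/types.h>        /* bool, u8 */

    /* Constant-time "same key?" check, matching the hunk below. */
    static bool same_key(const u8 *a, const u8 *b, unsigned int len)
    {
            return !crypto_memneq(a, b, len);
    }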
Change-Id: I9d6f1a4606312d3854746aa2eec2ec46d5629a2b
Fixes: fdf7cb4185b6 ("mac80211: accept key reinstall without changing anything")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
net/mac80211/key.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index cebe30315d9..6d804cc2736 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -18,6 +18,7 @@
#include <linux/slab.h>
#include <linux/export.h>
#include <net/mac80211.h>
+#include <crypto/algapi.h>
#include <asm/unaligned.h>
#include "ieee80211_i.h"
#include "driver-ops.h"
@@ -494,7 +495,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
* new version of the key to avoid nonce reuse or replay issues.
*/
if (old_key && key->conf.keylen == old_key->conf.keylen &&
- !memcmp(key->conf.key, old_key->conf.key, key->conf.keylen)) {
+ !crypto_memneq(key->conf.key, old_key->conf.key, key->conf.keylen)) {
ieee80211_key_free_unused(key);
ret = 0;
goto out;


@@ -37,6 +37,8 @@ git apply $cvePatches/CVE-2017-0786/ANY/0.patch
git apply $cvePatches/CVE-2017-10662/ANY/0.patch
git apply $cvePatches/CVE-2017-11000/ANY/0.patch
git apply $cvePatches/CVE-2017-13080/ANY/0.patch
git apply $cvePatches/CVE-2017-13080-Extra/ANY/0.patch
git apply $cvePatches/CVE-2017-13080-Extra/ANY/1.patch
git apply $cvePatches/CVE-2017-15265/ANY/0.patch
git apply $cvePatches/CVE-2017-2671/ANY/0.patch
git apply $cvePatches/CVE-2017-5970/ANY/0.patch


@@ -11,5 +11,7 @@ git apply $cvePatches/CVE-2017-0750/ANY/0.patch
git apply $cvePatches/CVE-2017-0751/ANY/0.patch
git apply $cvePatches/CVE-2017-0786/ANY/0.patch
git apply $cvePatches/CVE-2017-13080/ANY/0.patch
git apply $cvePatches/CVE-2017-13080-Extra/ANY/0.patch
git apply $cvePatches/CVE-2017-13080-Extra/ANY/1.patch
git apply $cvePatches/CVE-2017-15265/ANY/0.patch
cd $base