From 4256415b296348ff16cd17a5b8f8dce4dea37328 Mon Sep 17 00:00:00 2001
From: Larry Bassel
Date: Mon, 29 Jul 2013 13:43:17 -0700
Subject: msm: Make CONFIG_STRICT_MEMORY_RWX even stricter

If CONFIG_STRICT_MEMORY_RWX was set, the first section (containing the
kernel page table and the initial code) and the section containing the
init code were both given RWX permission, which is a potential security
hole.

Pad the first section after the initial code (which will never be
executed when the MMU is on) so that the rest of the kernel text starts
in the second section, and make the first section RW.

Move some data that had ended up in the "init text" section into the
"init data" section, as init data is RW, not RX. Make the "init text"
section RX. We will not free the section containing the "init text",
because if we did, the kernel would allocate memory for RW data there.

Change-Id: I6ca5f4e07342c374246f04a3fee18042fd47c33b
CRs-fixed: 513919
Signed-off-by: Larry Bassel
---
 arch/arm/kernel/vmlinux.lds.S | 12 +++++++-----
 arch/arm/mm/init.c            |  9 +++++++++
 arch/arm/mm/mmu.c             | 15 +++++++--------
 3 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index ae59e5a..0bf55ae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -93,6 +93,9 @@ SECTIONS
 		_text = .;
 		HEAD_TEXT
 	}
+#ifdef CONFIG_STRICT_MEMORY_RWX
+	. = ALIGN(1<<SECTION_SHIFT);
+#endif