path: root/arch/arm64
Age  Commit message  Author
2015-05-26  Merge branch 'v3.10/topic/arm64-hmp' into linux-linaro-lsk-v3.10  [HEAD, lsk-v3.10-15.05, linux-linaro-lsk]  Kevin Hilman
* v3.10/topic/arm64-hmp:
  arm64: topology: fix cpu power calculation
2015-05-26  arm64: topology: fix cpu power calculation  [v3.10/topic/arm64-hmp]  Jorge Ramirez-Ortiz
This commit sets the power of the average CPU in SMP systems to SCHED_CAPACITY_SCALE. Ignoring the condition "min_capacity==max_capacity" causes the function update_cpu_power( .. ) to generate out-of-range values, because the default value of middle_capacity is used in the final calculation instead of a valid scaling factor. Incidentally, when out-of-range values are generated and SCHED_FEAT(ARCH_POWER) is true, the load-balancing algorithm makes incorrect scheduling decisions, typically over-allocating all the work to one of the CPU cores.

This proposed solution for arm64 is in line with the upstream solution present in arm32 since the commit below was merged:
* SHA: 816a8de0017f16c32e747abc5367bf379515b20a
* From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
* Date: Mon, 17 Jun 2013 14:20:00 +0100
* Subject: ARM: topology: remove hwid/MPIDR dependency from cpu_capac

Signed-off-by: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
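For context, the guard amounts to skipping the per-cpu scaling when all CPUs report the same capacity. A rough sketch in the style of arch/arm/kernel/topology.c (illustrative only; the exact LSK arm64 patch differs in detail, and cpu_capacity()/set_power_scale() are the arm32 helper names):

/*
 * Sketch: update_cpu_power() must only divide by middle_capacity when a real
 * scaling factor was computed. On a symmetric (SMP) system, where
 * min_capacity == max_capacity, no capacity is recorded and every CPU keeps
 * the default SCHED_CAPACITY_SCALE.
 */
static void update_cpu_power(unsigned int cpu)
{
    if (!cpu_capacity(cpu))
        return; /* symmetric system: leave cpu_power at SCHED_CAPACITY_SCALE */

    set_power_scale(cpu, cpu_capacity(cpu) / middle_capacity);

    pr_info("CPU%u: update cpu_power %lu\n",
            cpu, arch_scale_freq_power(NULL, cpu));
}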
2015-05-18  Merge branch 'v3.10/topic/arm64-errata' into linux-linaro-lsk-v3.10  Kevin Hilman
* v3.10/topic/arm64-errata:
  arm64: errata: add workaround for cortex-a53 erratum #845719
  arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S
2015-05-18  arm64: errata: add workaround for cortex-a53 erratum #845719  [v3.10/topic/arm64-errata]  Will Deacon
When running a compat (AArch32) userspace on Cortex-A53, a load at EL0 from a virtual address that matches the bottom 32 bits of the virtual address used by a recent load at (AArch64) EL1 might return incorrect data.

This patch works around the issue by writing to the contextidr_el1 register on the exception return path when returning to a 32-bit task.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 905e8c5dcaa147163672b06fe9dcb5abaacbc711)
[khilman: modified to remove dependency on alternatives framework. Feature is now only compile-time selectable, and defaults to off.]
Signed-off-by: Kevin Hilman <khilman@linaro.org>
2015-05-18  arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S  Catalin Marinas
This string has been moved to arch/arm64/kernel/cputable.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit f3a1d7d53dccf51959aec16b574617cc6bfeca09) Signed-off-by: Kevin Hilman <khilman@linaro.org>
2015-04-28  Merge branch 'lsk-3.10-armlt-fixes' of git://git.linaro.org/landing-teams/working/arm/kernel into linux-linaro-lsk-v3.10  Kevin Hilman
* 'lsk-3.10-armlt-fixes' of git://git.linaro.org/landing-teams/working/arm/kernel:
  arm64: psci: move psci firmware calls out of line
  configs: Remove duplicate CONFIG_FUNCTION_TRACER
  kconfig: Fix warning "‘jump’ may be used uninitialized"
  kconfig: fix bug in search results string: use strlen(gstr->s), not gstr->len
  scripts/sortextable: suppress warning: `relocs_size' may be used uninitialized
  netfilter: nfnetlink_queue: Fix "discards ‘const’ qualifier" warning
  drm/cma: Fix printk formats in drm_gem_cma_describe
2015-04-24  arm64: psci: move psci firmware calls out of line  Will Deacon
An arm64 allmodconfig fails to build with GCC 5 because the __asmeq assertions in the PSCI firmware calling code fire when mcount preambles break our assumptions about register allocation of function arguments:

  /tmp/ccDqJsJ6.s: Assembler messages:
  /tmp/ccDqJsJ6.s:60: Error: .err encountered
  /tmp/ccDqJsJ6.s:61: Error: .err encountered
  /tmp/ccDqJsJ6.s:62: Error: .err encountered
  /tmp/ccDqJsJ6.s:99: Error: .err encountered
  /tmp/ccDqJsJ6.s:100: Error: .err encountered
  /tmp/ccDqJsJ6.s:101: Error: .err encountered

This patch fixes the issue by moving the PSCI calls out-of-line into their own assembly files, which are safe from the compiler's meddling fingers.

Reported-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit f5e0a12ca2d939e47995f73428d9bf1ad372b289)
Signed-off-by: Jon Medhurst <tixy@linaro.org>
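For reference, the pre-patch wrapper looked roughly like the C/inline-asm below (based on the upstream arch/arm64/kernel/psci.c of that era; details may differ in this tree). The __asmeq() assertions are what turn the GCC 5 register reshuffling into the .err build failures quoted above:

/* Pre-patch style: the __asmeq() checks insist that function_id..arg2 really
 * live in x0..x3; an mcount preamble can make GCC 5 place them elsewhere,
 * so the assembler hits .err. Moving the SMC/HVC into its own .S file
 * removes the compiler from the picture entirely. */
static noinline int __invoke_psci_fn_smc(u64 function_id, u64 arg0, u64 arg1,
                                         u64 arg2)
{
    asm volatile(
            __asmeq("%0", "x0")
            __asmeq("%1", "x1")
            __asmeq("%2", "x2")
            __asmeq("%3", "x3")
            "smc    #0\n"
        : "+r" (function_id)
        : "r" (arg0), "r" (arg1), "r" (arg2));

    return function_id;
}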
2015-04-23  arm64: respect mem= for EFI  Mark Rutland
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the option useless as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions.

Instead, we can log the limit and apply this later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init.

Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6083fe74b7bfffc2c7be8c711596608bda0cda6e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
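The mechanism is small; a sketch of the upstream shape in arch/arm64/mm/init.c (the LSK backport may differ slightly):

static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

/* Record the "mem=" limit at early-param time... */
static int __init early_mem(char *p)
{
    if (!p)
        return 1;

    memory_limit = memparse(p, &p) & PAGE_MASK;
    pr_notice("Memory limited to %lldMB\n", memory_limit >> 20);

    return 0;
}
early_param("mem", early_mem);

/* ...and only apply it once all memblocks (including EFI's) are present. */
void __init arm64_memblock_init(void)
{
    memblock_enforce_memory_limit(memory_limit);
    /* ... rest of memblock setup ... */
}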
2015-03-17  Merge remote-tracking branch 'lsk/v3.10/topic/gcov' into linux-linaro-lsk  Mark Brown
Conflicts: arch/arm64/Kconfig
2015-03-17  gcov: enable GCOV_PROFILE_ALL from ARCH Kconfigs  [v3.10/topic/gcov]  Riku Voipio
Following the suggestions from Andrew Morton and Stephen Rothwell, don't expand the ARCH list in kernel/gcov/Kconfig. Instead, define an ARCH_HAS_GCOV_PROFILE_ALL bool which architectures can enable. Set ARCH_HAS_GCOV_PROFILE_ALL on the architectures where it was previously allowed, plus ARM64, which I tested.

Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Cc: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 957e3facd147510f2cf8780e38606f1d707f0e33)
Signed-off-by: Mark Brown <broonie@kernel.org>
Conflicts:
  arch/arm/Kconfig
  arch/arm64/Kconfig
  arch/microblaze/Kconfig
  arch/s390/Kconfig
  arch/x86/Kconfig
2015-03-10  Merge tag 'v3.10.71' into linux-linaro-lsk  Alex Shi
This is the 3.10.71 stable release

Conflicts:
  arch/arm64/kernel/setup.c
2015-03-06  arm64: compat Fix siginfo_t -> compat_siginfo_t conversion on big endian  Catalin Marinas
commit 9d42d48a342aee208c1154696196497fdc556bbf upstream. The native (64-bit) sigval_t union contains sival_int (32-bit) and sival_ptr (64-bit). When a compat application invokes a syscall that takes a sigval_t value (as part of a larger structure, e.g. compat_sys_mq_notify, compat_sys_timer_create), the compat_sigval_t union is converted to the native sigval_t with sival_int overlapping with either the least or the most significant half of sival_ptr, depending on endianness. When the corresponding signal is delivered to a compat application, on big endian the current (compat_uptr_t)sival_ptr cast always returns 0 since sival_int corresponds to the top part of sival_ptr. This patch fixes copy_siginfo_to_user32() so that sival_int is copied to the compat_siginfo_t structure. Reported-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Tested-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-02-11  arm64: Fix up /proc/cpuinfo  Mark Rutland
commit 44b82b7700d05a52cd983799d3ecde1a976b3bed upstream.

Commit d7a49086f263164a (arm64: cpuinfo: print info for all CPUs) attempted to clean up /proc/cpuinfo, but due to concerns regarding further changes was reverted in commit 5e39977edf6500fd (Revert "arm64: cpuinfo: print info for all CPUs").

There are two major issues with the arm64 /proc/cpuinfo format currently:

* The "Features" line describes (only) the 64-bit hwcaps, which is problematic for some 32-bit applications which attempt to parse it. As the same names are used for analogous ISA features (e.g. aes) despite these generally being architecturally unrelated, it is not possible to simply append the 64-bit and 32-bit hwcaps in a manner that might not be misleading to some applications. Various potential solutions have appeared in vendor kernels. Typically the format of the Features line varies depending on whether the task is 32-bit.

* Information is only printed regarding a single CPU. This does not match the ARM format, and does not provide sufficient information in big.LITTLE systems where CPUs are heterogeneous. The CPU information printed is queried from the current CPU's registers, which is racy w.r.t. cross-cpu migration.

This patch attempts to solve these issues. The following changes are made:

* When a task with a LINUX32 personality attempts to read /proc/cpuinfo, the "Features" line contains the decoded 32-bit hwcaps, as with the arm port. Otherwise, the decoded 64-bit hwcaps are shown. This aligns with the behaviour of COMPAT_UTS_MACHINE and COMPAT_ELF_PLATFORM. In the absence of compat support, the Features line is empty. The set of hwcaps injected into a task's auxval are unaffected.

* Properties are printed per-cpu, as with the ARM port. The per-cpu information is queried from pre-recorded cpu information (as used by the sanity checks).

* As with the previous attempt at fixing up /proc/cpuinfo, the hardware field is removed. The only users so far are 32-bit applications tied to particular boards, so no portable applications should be affected, and this should prevent future tying to particular boards.

The following differences remain:

* No model_name is printed, as this cannot be queried from the hardware and cannot be provided in a stable fashion. Use of the CPU {implementor,variant,part,revision} fields is sufficient to identify a CPU and is portable across arm and arm64.

* The following system-wide properties are not provided, as they are not possible to provide generally. Programs relying on these are already tied to particular (32-bit only) boards:
  - Hardware
  - Revision
  - Serial

No software has yet been identified for which these remaining differences are problematic.

Cc: Greg Hackmann <ghackmann@google.com>
Cc: Ian Campbell <ijc@hellion.org.uk>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: cross-distro@lists.linaro.org
Cc: linux-api@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
[Mark: backport to v3.10.x]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-23  Merge remote-tracking branch 'lsk/v3.10/topic/arm64-misc' into linux-linaro-lsk  [lsk-v3.10-15.01]  Mark Brown
Conflicts:
  arch/arm64/include/asm/proc-fns.h
  arch/arm64/kernel/debug-monitors.c
  arch/arm64/kernel/psci.c
2015-01-23  arm64: add atomic pool for non-coherent and CMA allocations  [v3.10/topic/arm64-misc]  Laura Abbott
Neither CMA nor noncoherent allocations support atomic allocations. Add a dedicated atomic pool to support this. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Riley <davidriley@chromium.org> Cc: Olof Johansson <olof@lixom.net> Cc: Ritesh Harjain <ritesh.harjani@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit d4932f9e81ae7a7bf3c3967e48373909b9c98ee5) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-23Revert "arm64: vdso: move to _install_special_mapping and remove arch_vma_name"Mark Brown
This reverts commit 95c91bdd8eafdc74337049c45d74c903b7dac49c. Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-23Revert "arm64: vdso: move data page before code pages"Mark Brown
This reverts commit 82a95d0521cb6258559d18dca736da8272ba05a7. Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-23  Merge remote-tracking branch 'lsk/v3.10/topic/arm64-perf' into linux-linaro-lsk  Mark Brown
2015-01-22  arm64: Add brackets around user_stack_pointer()  [v3.10/topic/arm64-perf]  Catalin Marinas
Commit 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode) changes user_stack_pointer() to return the compat SP for 32-bit tasks but without brackets around the whole definition, with possible issues on the call sites (noticed with a subsequent fix for KSTK_ESP). Fixes: 5f888a1d33c4 (ARM64: perf: support dwarf unwinding in compat mode) Reported-by: Sudeep Holla <sudeep.holla@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 2520d039728b2a3c5ae7f79fe2a0e9d182855b12) Signed-off-by: Mark Brown <broonie@kernel.org>
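The fix is pure macro hygiene; roughly (a sketch of the post-fix asm/ptrace.h definition):

/* Without the outer parentheses the conditional expands badly at call sites
 * that combine the result with other operators (e.g. KSTK_ESP below). */
#define user_stack_pointer(regs) \
    (!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)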
2015-01-22  arm64: LLVMLinux: Use global stack register variable for aarch64  Mark Charlebois
To support both Clang and GCC, use the global stack register variable vs a local register variable. Author: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 34ccf8f455f1ae7761810a74308f82daca67ced1) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: LLVMLinux: Use current_stack_pointer in kernel/traps.c  Behan Webster
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 2128df143d840a20e12818290eb6e40b95cc4ac0) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: LLVMLinux: Calculate current_thread_info from current_stack_pointer  Behan Webster
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 786248705ecf5290f26534e8eef62ba6dd63b806) Signed-off-by: Mark Brown <broonie@kernel.org>
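A sketch of the resulting helper (as merged upstream in asm/thread_info.h; shown here for illustration):

/* Derive thread_info from the stack pointer by masking off THREAD_SIZE,
 * using the global current_stack_pointer that both gcc and clang accept. */
static inline struct thread_info *current_thread_info(void)
{
    return (struct thread_info *)
        (current_stack_pointer & ~(THREAD_SIZE - 1));
}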
2015-01-22  arm64: LLVMLinux: Use current_stack_pointer in save_stack_trace_tsk  Behan Webster
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit bb28cec4ea2f5151c08e061c6de825a8c853bbd6) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: LLVMLinux: Add current_stack_pointer() for arm64  Behan Webster
Define a global named register for current_stack_pointer. The use of this new variable guarantees that both gcc and clang can access this register in C code. Signed-off-by: Behan Webster <behanw@converseincode.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 3337a10e0d0cbc9225cefc23aa7a604b698367ed) Signed-off-by: Mark Brown <broonie@kernel.org>
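The definition itself is a one-liner (upstream asm/thread_info.h; reproduced here as a sketch):

/*
 * How to get the current stack pointer from C. A file-scope named register
 * variable is accepted by both gcc and clang, unlike the function-local
 * "register unsigned long sp asm ("sp")" idiom it replaces.
 */
register unsigned long current_stack_pointer asm ("sp");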
2015-01-22  arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support  Laura Abbott
In a similar fashion to other architectures, add the infrastructure and Kconfig to enable DEBUG_SET_MODULE_RONX support. When enabled, module ranges will be marked read-only/no-execute as appropriate.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: fixed off-by-one in module end check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 11d91a770f1fff44dafdf88d6089a3451f99c9b6)
Signed-off-by: Mark Brown <broonie@kernel.org>
Conflicts:
  arch/arm64/Kconfig.debug
2015-01-22  arm64: Introduce {set,clear}_pte_bit  Laura Abbott
It's useful to be able to change individual bits in ptes at times. Introduce functions for this and update existing pte_mk* functions to use these primitives.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: added missing inline keyword for new header functions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit b6d4f2800b7bad654caf00654f4bff21594ef838)
Signed-off-by: Mark Brown <broonie@kernel.org>
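For reference, the new primitives look roughly like this (a sketch of the upstream asm/pgtable.h helpers; the pte_mkwrite() user shown is illustrative):

static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
{
    pte_val(pte) &= ~pgprot_val(prot);
    return pte;
}

static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
{
    pte_val(pte) |= pgprot_val(prot);
    return pte;
}

/* Existing pte_mk* helpers are then written in terms of the primitives: */
static inline pte_t pte_mkwrite(pte_t pte)
{
    return set_pte_bit(pte, __pgprot(PTE_WRITE));
}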
2015-01-22  arm64: convert part of soft_restart() to assembly  Arun Chandran
The current soft_restart() and setup_restart implementations incorrectly assume that the compiler will not spill/fill values to/from the stack. However this assumption seems to be wrong, revealed by the disassembly of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05:

  ffffffc000085224 <soft_restart>:
  ffffffc000085224: a9be7bfd  stp  x29, x30, [sp,#-32]!
  ffffffc000085228: 910003fd  mov  x29, sp
  ffffffc00008522c: f9000fa0  str  x0, [x29,#24]
  ffffffc000085230: 94003d21  bl   ffffffc0000946b4 <setup_mm_for_reboot>
  ffffffc000085234: 94003b33  bl   ffffffc000093f00 <flush_cache_all>
  ffffffc000085238: 94003dfa  bl   ffffffc000094a20 <cpu_cache_off>
  ffffffc00008523c: 94003b31  bl   ffffffc000093f00 <flush_cache_all>
  ffffffc000085240: b0003321  adrp x1, ffffffc0006ea000 <reset_devices>
  ffffffc000085244: f9400fa0  ldr  x0, [x29,#24]   ----> spilled addr
  ffffffc000085248: f942fc22  ldr  x2, [x1,#1528]  ----> global memstart_addr
  ffffffc00008524c: f0000061  adrp x1, ffffffc000094000 <__inval_cache_range+0x40>
  ffffffc000085250: 91290021  add  x1, x1, #0xa40
  ffffffc000085254: 8b010041  add  x1, x2, x1
  ffffffc000085258: d2c00802  mov  x2, #0x4000000000  // #274877906944
  ffffffc00008525c: 8b020021  add  x1, x1, x2
  ffffffc000085260: d63f0020  blr  x1
  ...

Here the compiler generates memory accesses after the cache is disabled, loading stale values for the spilled value and global variable. As we cannot control when the compiler will access memory we must rewrite the functions in assembly to stash values we need in registers prior to disabling the cache, avoiding the use of memory.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 5e051531447259e5df95c44bccb69979537c19e4)
Signed-off-by: Mark Brown <broonie@kernel.org>
Conflicts:
  arch/arm64/include/asm/proc-fns.h
2015-01-22  arm64: use irq_set_affinity with force=false when migrating irqs  Sudeep Holla
The arm64 interrupt migration code on cpu offline calls irqchip.irq_set_affinity() with the argument force=true. Originally this argument had no effect because it was not used by any interrupt chip driver and there was no semantics defined.

This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu affinity of interrupts") which made the force argument useful to route interrupts to not yet online cpus without checking the target cpu against the cpu online mask. The following commit ffde1de64012 ("irqchip: gic: Support forced affinity setting") implemented this for the GIC interrupt controller.

As a consequence the cpu offline irq migration fails if CPU0 is offlined, because CPU0 is still set in the affinity mask and the validation against the cpu online mask is skipped due to the force argument being true. The following first_cpu(mask) selection always selects CPU0 as the target.

Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity") intended to fix the above mentioned issue but introduced another issue where affinity can be migrated to a wrong CPU due to the unconditional copy of cpu_online_mask.

As with arm, solve the issue by calling irq_set_affinity() with force=false from the CPU offline irq migration code so the GIC driver validates the affinity mask against the CPU online mask and therefore removes CPU0 from the possible target candidates. Also revert the changes done in commit 601c942176d8 as they are no longer needed.

Tested on Juno platform.

Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 3d8afe3099ebc602848aa7f09235cce3a9a023ce)
Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: report correct stack pointer in KSTK_ESP for compat tasks  Will Deacon
The KSTK_ESP macro is used to determine the user stack pointer for a given task. In particular, this is used to report the '[stack]' VMA in /proc/self/maps, which is used by Android to determine the stack location for children of the main thread.

This patch fixes the macro to use user_stack_pointer instead of directly returning sp. This means that we report w13 instead of sp, since the former is used as the stack pointer when executing in AArch32 state.

Cc: <stable@vger.kernel.org>
Reported-by: Serban Constantinescu <Serban.Constantinescu@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 3168a743461ecf86adf3e7dcfcd79271828fb263)
Signed-off-by: Mark Brown <broonie@kernel.org>
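The end result is a one-line macro change (a sketch of the upstream asm/processor.h definition):

/* Report the user stack pointer via user_stack_pointer(), which selects
 * compat_sp (w13) for tasks executing in AArch32 state, rather than
 * unconditionally returning regs->sp. */
#define KSTK_ESP(tsk)   user_stack_pointer(task_pt_regs(tsk))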
2015-01-22  arm64: ptrace: fix compat reg getter/setter return values  Will Deacon
copy_{to,from}_user return the number of bytes remaining on failure, not an error code. This patch returns -EFAULT when the copy operation didn't complete, rather than expose the number of bytes not copied directly to userspace. Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 85487edd252fa04718dcd735bc0f41213bbb9546) Signed-off-by: Mark Brown <broonie@kernel.org>
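The bug is the classic copy_{to,from}_user misuse; a minimal illustration of the corrected shape (compat_reg_to_user() is a hypothetical name, not the actual ptrace helper):

/* copy_to_user() returns the number of bytes NOT copied, not an errno. */
static int compat_reg_to_user(void __user *ubuf, const void *kbuf, size_t count)
{
    if (copy_to_user(ubuf, kbuf, count))
        return -EFAULT; /* don't hand the residual byte count to userspace */

    return 0;
}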
2015-01-22  arm64: ptrace: fix compat hardware watchpoint reporting  Will Deacon
I'm not sure what I was on when I wrote this, but when iterating over the hardware watchpoint array (hbp_watch_array), our index is off by ARM_MAX_BRP, so we walk off the end of our thread_struct... ... except, a dodgy condition in the loop means that it never executes at all (bp cannot be NULL). This patch fixes the code so that we remove the bp check and use the correct index for accessing the watchpoint structures. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 27d7ff273c2aad37b28f6ff0cab2cfa35b51e648) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: Remove unused variable in head.S  Geoff Levand
Remove an unused local variable from head.S. It seems this was never used even from the initial commit 9703d9d7f77ce129621f7d80a844822e2daa7008 (arm64: Kernel booting and initialisation), and is a left over from a previous implementation of __calc_phys_offset. Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 5843be2279d7a91ef48c20ac31715d1eb9607a84) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-22  arm64: mm: update max pa bits to 48  Ganapatrao Kulkarni
Now that we support 48-bit physical addressing, update MAX_PHYSMEM_BITS accordingly. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 07a15dd55a3d65f81b4b09eab293f4afc720b082) Signed-off-by: Mark Brown <broonie@kernel.org>
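The change itself is a single constant in asm/sparsemem.h (a sketch of the upstream commit):

/* SPARSEMEM must cover the full 48-bit physical address space now that
 * 48-bit PA support is in place. */
#define MAX_PHYSMEM_BITS    48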
2015-01-22  arm64: align randomized TEXT_OFFSET on 4 kB boundary  Ard Biesheuvel
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and the embedded EFI stub is executed in place. The EFI stub relocates the Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into the kernel proper. In AArch64, PC relative symbol references are emitted using adrp/add or adrp/ldr pairs, where the offset into a 4 kB page is resolved using a separate :lo12: relocation. This implicitly assumes that the code will always be executed at the same relative offset with respect to a 4 kB boundary, or the references will point to the wrong address. This means we should link the kernel at a 4 kB aligned base address in order to remain compatible with the base address the UEFI loader uses when doing the initial load of Image. So update the code that generates TEXT_OFFSET to choose a multiple of 4 kB. At the same time, update the code so it chooses from the interval [0..2MB) as the author originally intended. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 4190312beb2acfb7bfb1bb971e24a759aa96b0e8) Signed-off-by: Mark Brown <broonie@kernel.org> Conflicts: arch/arm64/Makefile arch/arm64/kernel/head.S
2015-01-22  arm64: Limit the CMA buffer to 32-bit if ZONE_DMA  Catalin Marinas
When the CMA buffer is allocated, it is too early to know whether devices will require ZONE_DMA memory. This patch limits the CMA buffer to (DMA_BIT_MASK(32) + 1) if CONFIG_ZONE_DMA is enabled. In addition, it computes the dma_to_phys(DMA_BIT_MASK(32)) before the increment (no current functional change). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 2d5a5612bceda8edd25b29f363c4e2c6cda28bab) Signed-off-by: Mark Brown <broonie@kernel.org> Conflicts: arch/arm64/mm/init.c
2015-01-22  arm64: kernel: initialize broadcast hrtimer based clock event device  Lorenzo Pieralisi
On platforms implementing CPU power management, the CPUidle subsystem can allow CPUs to enter idle states where local timers logic is lost on power down. To keep the software timers functional the kernel relies on an always-on broadcast timer to be present in the platform to relay the interrupt signalling the timer expiries. For platforms implementing CPU core gating that do not implement an always-on HW timer or implement it in a broken way, this patch adds code to initialize the kernel hrtimer based clock event device upon boot (which can be chosen as tick broadcast device by the kernel). It relies on a dynamically chosen CPU to be always powered-up. This CPU then relays the timer interrupt to CPUs in deep-idle states through its HW local timer device. Having a CPU always-on has implications on power management platform capabilities and makes CPUidle suboptimal, since at least a CPU is kept always in a shallow idle state by the kernel to relay timer interrupts, but at least leaves the kernel with a functional system with some working power management capabilities. The hrtimer based clock event device is unconditionally registered, but has the lowest possible rating such that any broadcast-capable HW clock event device present will be chosen in preference as the tick broadcast device. Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 9358d755bd5cba8965ea79f2a446e689323409f9, again after temporary revert) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: don't call break hooks for BRK exceptions from EL0  Will Deacon
Our break hooks are used to handle brk exceptions from kgdb (and potentially kprobes if that code ever resurfaces), so don't bother calling them if the BRK exception comes from userspace. This prevents userspace from trapping to a kdb shell on systems where kgdb is enabled and active. Cc: <stable@vger.kernel.org> Reported-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit c878e0cff5c5e56b216951cbe75f7a3dd500a736) Signed-off-by: Mark Brown <broonie@kernel.org> Conflicts: arch/arm64/kernel/debug-monitors.c
2015-01-21  arm64: vdso: fix build error when switching from LE to BE  Arun Chandran
Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an if_changed rule in the VDSO Makefile, we attempt to link a stale LE object into the new BE kernel. According to Documentation/kbuild/makefiles.txt, FORCE is required for if_changed rules and forgetting it is a common mistake, so fix it by 'Forcing' the build of vdso. This patch fixes build errors like these: arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian failed to merge target specific data of file arch/arm64/kernel/vdso/note.o arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arun Chandran <achandran@mvista.com> Signed-off-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit 1915e2ad1cf548217c963121e4076b3d44dd0169) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: fix soft lockup due to large tlb flush range  Mark Salter
Under certain loads, this soft lockup has been observed:

  BUG: soft lockup - CPU#2 stuck for 22s! [ip6tables:1016]
  Modules linked in: ip6t_rpfilter ip6t_REJECT cfg80211 rfkill xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw vfat fat efivarfs xfs libcrc32c
  CPU: 2 PID: 1016 Comm: ip6tables Not tainted 3.13.0-0.rc7.30.sa2.aarch64 #1
  task: fffffe03e81d1400 ti: fffffe03f01f8000 task.ti: fffffe03f01f8000
  PC is at __cpu_flush_kern_tlb_range+0xc/0x40
  LR is at __purge_vmap_area_lazy+0x28c/0x3ac
  pc : [<fffffe000009c5cc>] lr : [<fffffe0000182710>] pstate: 80000145
  sp : fffffe03f01fbb70
  x29: fffffe03f01fbb70 x28: fffffe03f01f8000
  x27: fffffe0000b19000 x26: 00000000000000d0
  x25: 000000000000001c x24: fffffe03f01fbc50
  x23: fffffe03f01fbc58 x22: fffffe03f01fbc10
  x21: fffffe0000b2a3f8 x20: 0000000000000802
  x19: fffffe0000b2a3c8 x18: 000003fffdf52710
  x17: 000003ff9d8bb910 x16: fffffe000050fbfc
  x15: 0000000000005735 x14: 000003ff9d7e1a5c
  x13: 0000000000000000 x12: 000003ff9d7e1a5c
  x11: 0000000000000007 x10: fffffe0000c09af0
  x9 : fffffe0000ad1000 x8 : 000000000000005c
  x7 : fffffe03e8624000 x6 : 0000000000000000
  x5 : 0000000000000000 x4 : 0000000000000000
  x3 : fffffe0000c09cc8 x2 : 0000000000000000
  x1 : 000fffffdfffca80 x0 : 000fffffcd742150

The __cpu_flush_kern_tlb_range() function looks like:

  ENTRY(__cpu_flush_kern_tlb_range)
      dsb   sy
      lsr   x0, x0, #12
      lsr   x1, x1, #12
  1:  tlbi  vaae1is, x0
      add   x0, x0, #1
      cmp   x0, x1
      b.lo  1b
      dsb   sy
      isb
      ret
  ENDPROC(__cpu_flush_kern_tlb_range)

The above soft lockup shows the PC at the tlbi insn with:

  x0 = 0x000fffffcd742150
  x1 = 0x000fffffdfffca80

So __cpu_flush_kern_tlb_range has 0x128ba930 tlbi flushes left after it has already been looping for 23 seconds!

Looking up one frame at __purge_vmap_area_lazy(), there is:

  ...
  list_for_each_entry_rcu(va, &vmap_area_list, list) {
      if (va->flags & VM_LAZY_FREE) {
          if (va->va_start < *start)
              *start = va->va_start;
          if (va->va_end > *end)
              *end = va->va_end;
          nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
          list_add_tail(&va->purge_list, &valist);
          va->flags |= VM_LAZY_FREEING;
          va->flags &= ~VM_LAZY_FREE;
      }
  }
  ...
  if (nr || force_flush)
      flush_tlb_kernel_range(*start, *end);

So if two areas are being freed, the range passed to flush_tlb_kernel_range() may be as large as the vmalloc space. For arm64, this is ~240GB for 4k page size and ~2TB for 64k page size.

This patch works around this problem by adding a loop limit. If the range is larger than the limit, use flush_tlb_all() rather than flushing based on individual pages. The limit chosen is arbitrary as the TLB size is implementation specific and not accessible in an architected way. The aim of the arbitrary limit is to avoid soft lockup.

Signed-off-by: Mark Salter <msalter@redhat.com>
[catalin.marinas@arm.com: commit log update]
[catalin.marinas@arm.com: marginal optimisation]
[catalin.marinas@arm.com: changed to MAX_TLB_RANGE and added comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 05ac65305437e8ef63d2d19cac704138a2a05aa5)
Signed-off-by: Mark Brown <broonie@kernel.org>
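The resulting check looks roughly like the following (a sketch based on the upstream commit; the fallback helper name is taken from the log above and may differ in this tree):

/*
 * The TLB size is implementation defined, so there is no architected
 * threshold; 1024 pages is an arbitrary cut-off that keeps the per-page
 * tlbi loop short enough to avoid soft lockups.
 */
#define MAX_TLB_RANGE   (1024UL << PAGE_SHIFT)

static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
    if ((end - start) > MAX_TLB_RANGE)
        flush_tlb_all();                        /* one global invalidate */
    else
        __cpu_flush_kern_tlb_range(start, end); /* per-page tlbi loop */
}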
2015-01-21  arm64: Do not initialise the fixmap page tables in head.S  Catalin Marinas
The early_ioremap_init() function already handles fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com> (cherry picked from commit 7edd88ad7e59c2b7b49da0e00f251884fb785d4f) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: Create non-empty ZONE_DMA when DRAM starts above 4GB  Catalin Marinas
ZONE_DMA is created to allow 32-bit only devices to access memory in the absence of an IOMMU. On systems where the memory starts above 4GB, it is expected that some devices have a DMA offset hardwired to be able to access the bottom of the memory. Linux currently supports DT bindings for the DMA offsets but they are not (easily) available early during boot. This patch tries to guess a DMA offset and assumes that ZONE_DMA corresponds to the 32-bit mask above the start of DRAM. Fixes: 2d5a5612bc (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA) Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Mark Salter <msalter@redhat.com> Tested-by: Mark Salter <msalter@redhat.com> Tested-by: Anup Patel <anup.patel@linaro.org> (cherry picked from commit d50314a6b0702c630c35b88148c1acb76d2e4ede) Signed-off-by: Mark Brown <broonie@kernel.org> Conflicts: arch/arm64/mm/init.c
2015-01-21  arm64: add MIDR_EL1 field accessors  Mark Rutland
The MIDR_EL1 register is composed of a number of bitfields, and use of the fields has so far involved open-coding of the shifts and masks required.

This patch adds shifts and masks for each of the MIDR_EL1 subfields, and also provides accessors built atop of these. Existing uses within cputype.h are updated to use these accessors.

The read_cpuid_part_number macro is modified to return the extracted bitfield rather than returning the value in-place with all other fields (including revision) masked out, to better match the other accessors. As the value is only used in comparison with the *_CPU_PART_* macros which are similarly updated, and these values are never exposed to userspace, this change should not affect any functionality.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 89c4a306e7631bcb71cc537c8a029172af6047fe)
Signed-off-by: Mark Brown <broonie@kernel.org>
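A sketch of the part-number accessors this adds (field layout per the ARMv8 MIDR_EL1 definition; the exact macro set merged upstream may differ slightly):

#define MIDR_PARTNUM_SHIFT  4
#define MIDR_PARTNUM_MASK   (0xfff << MIDR_PARTNUM_SHIFT)
#define MIDR_PARTNUM(midr) \
    (((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT)

/* read_cpuid_part_number() now returns just the extracted bitfield... */
#define read_cpuid_part_number()    MIDR_PARTNUM(read_cpuid_id())

/* ...so the part constants become plain field values rather than in-place masks: */
#define ARM_CPU_PART_CORTEX_A53     0xD03
#define ARM_CPU_PART_CORTEX_A57     0xD07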
2015-01-21  arm64: kernel: add __init marker to PSCI init functions  Lorenzo Pieralisi
PSCI init functions must be marked as __init so that they are freed by the kernel upon boot. This patch marks the PSCI init functions as such since they need not be persistent in the kernel address space after the kernel has booted. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit b9e97ef93c630404f305350d88d09391d1a55648) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: kernel: enable PSCI cpu operations on UP systems  Lorenzo Pieralisi
PSCI CPU operations have to be enabled on UP kernels so that calls like eg cpu_suspend can be made functional on UP too. This patch reworks the PSCI CPU operations so that they can be enabled on UP systems. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 756854d9b99a735f86bc3b86df5c19be12e8746e) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: vdso: move data page before code pages  Will Deacon
Andy pointed out that binutils generates additional sections in the vdso image (e.g. section string table) which, if our .text section gets big enough, could cross a page boundary and end up screwing up the location where the kernel expects to put the data page. This patch solves the issue in the same manner as x86_32, by moving the data page before the code pages. Cc: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 601255ae3c98fdeeee3a8bb4696425e4f868b4f1) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: vdso: move to _install_special_mapping and remove arch_vma_name  Will Deacon
_install_special_mapping replaces install_special_mapping and removes the need to detect special VMA in arch_vma_name. This patch moves the vdso and compat vectors page code over to the new API. Cc: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 2fea7f6c98f5957e539eb8aa0ce849729b900342) Signed-off-by: Mark Brown <broonie@kernel.org>
2015-01-21  arm64: Use pr_* instead of printk  Jungseok Lee
This patch fixes the following checkpatch complaint by using pr_* instead of printk:

  WARNING: printk() should include KERN_ facility level

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ac7b406c1a9d50ddbf5e5cbce8ca4d68d36ac2db)
Signed-off-by: Mark Brown <broonie@kernel.org>
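The conversion is mechanical; an illustrative before/after (the helper and message text below are made up for the example):

#include <linux/printk.h>

static void __init report_setup_done(void) /* hypothetical helper */
{
    /* Before: a bare printk() with no KERN_ level trips the checkpatch warning. */
    printk("page table setup complete\n");

    /* After: pr_info() supplies KERN_INFO (and the pr_fmt() prefix) itself. */
    pr_info("page table setup complete\n");
}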
2015-01-18  Merge remote-tracking branch 'lsk/v3.10/topic/arm64-cpuidle' into linux-linaro-lsk  Mark Brown
Conflicts:
  arch/arm64/kernel/Makefile
  drivers/cpuidle/Makefile
2015-01-18  arm64: kernel: introduce cpu_init_idle CPU operation  Lorenzo Pieralisi
The CPUidle subsystem on ARM64 machines requires the idle states implementation back-end to initialize the idle state parameters upon boot.

This patch adds a hook in the CPU operations structure that should be initialized by the CPU operations back-end in order to provide a function that initializes cpu idle states. This patch also adds the infrastructure to the arm64 kernel required to export the CPU operations based initialization interface, so that drivers (i.e. CPUidle) can use it when they are initialized at probe time.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit d64f84f696463c58e1908510e45b0f5d450f737a)
Signed-off-by: Mark Brown <broonie@kernel.org>
Conflicts:
  arch/arm64/kernel/Makefile
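A sketch of where the hook sits (loosely based on the upstream asm/cpu_ops.h; the member set and config guards in the LSK backport may differ):

struct device_node;

struct cpu_operations {
    const char  *name;
    int         (*cpu_init)(struct device_node *, unsigned int);
    int         (*cpu_prepare)(unsigned int);
    int         (*cpu_boot)(unsigned int);
    void        (*cpu_postboot)(void);
#ifdef CONFIG_CPU_IDLE
    /* New hook: lets the enable-method back-end (e.g. PSCI) parse and
     * validate the DT idle states for a cpu at driver probe time. */
    int         (*cpu_init_idle)(struct device_node *, unsigned int);
#endif
    /* ... hotplug/suspend hooks elided ... */
};

/* Wrapper exported for the CPUidle driver to call per cpu. */
int cpu_init_idle(unsigned int cpu);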
2015-01-17  Merge remote-tracking branch 'lsk/v3.10/topic/arm64-misc' into linux-linaro-lsk  Mark Brown
Conflicts:
  arch/arm/include/asm/psci.h
  arch/arm/kernel/psci.c
  arch/arm/kernel/psci_smp.c
  arch/arm/kernel/setup.c
  arch/arm64/Kconfig
  arch/arm64/include/asm/cpu_ops.h
  arch/arm64/include/asm/pgtable.h
  arch/arm64/include/asm/psci.h
  arch/arm64/kernel/head.S
  arch/arm64/kernel/psci.c
  arch/arm64/kernel/ptrace.c