path: root/arch/arm64
Age  Commit message  Author
2015-04-23  arm64: respect mem= for EFI  [lsk-v3.14-15.04]  (Mark Rutland)
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the mem= option useless as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply this later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 6083fe74b7bfffc2c7be8c711596608bda0cda6e) Signed-off-by: Alex Shi <alex.shi@linaro.org>
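A minimal sketch of the deferred approach described above, assuming the upstream helper names (memparse, memblock_enforce_memory_limit); it is illustrative only, not the exact code in this tree:

    /* Record the mem= limit early, apply it once memblocks exist. */
    static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;    /* assumed default */

    static int __init early_mem(char *p)
    {
            if (!p)
                    return 1;
            memory_limit = memparse(p, &p) & PAGE_MASK;
            return 0;
    }
    early_param("mem", early_mem);

    void __init arm64_memblock_init(void)
    {
            /*
             * By this point EFI (or DT) memory has been added to memblock,
             * so enforcing the limit here actually removes something.
             */
            memblock_enforce_memory_limit(memory_limit);
            /* ... remaining memblock setup ... */
    }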
2015-04-18  Merge remote-tracking branch 'origin/v3.14/topic/overlayfs' into linux-linaro-lsk-v3.14  (Alex Shi)
2015-04-13  arm64: Use the reserved TTBR0 if context switching to the init_mm  (Catalin Marinas)
commit e53f21bce4d35a93b23d8fa1a840860f6c74f59e upstream. The idle_task_exit() function may call switch_mm() with next == &init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so this patch simply sets the reserved TTBR0. Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org> Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
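A sketch of the check this adds to switch_mm() (based on the upstream fix; the surrounding code in arch/arm64/include/asm/mmu_context.h may differ slightly in this tree):

    static inline void
    switch_mm(struct mm_struct *prev, struct mm_struct *next,
              struct task_struct *tsk)
    {
            unsigned int cpu = smp_processor_id();

            /*
             * init_mm.pgd does not contain any user mappings and it is
             * always active for kernel addresses via TTBR1. Just set the
             * reserved TTBR0 instead of pointing TTBR0_EL1 at swapper_pg_dir.
             */
            if (next == &init_mm) {
                    cpu_set_reserved_ttbr0();
                    return;
            }

            if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
                    check_and_switch_context(next, tsk);
    }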
2015-03-30  Merge tag 'v3.14.37' into linux-linaro-lsk-v3.14  (Alex Shi)
This is the 3.14.37 stable release
2015-03-26  arm64: Honor __GFP_ZERO in dma allocations  (Suzuki K. Poulose)
commit 7132813c384515c9dede1ae20e56f3895feb7f1e upstream. Current implementation doesn't zero out the pages allocated. Honor the __GFP_ZERO flag and zero out if set. Cc: <stable@vger.kernel.org> # v3.14+ Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
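For illustration, a loose sketch of the CMA path of the arm64 coherent allocator with the fix applied; the function name below is made up and the real code in arch/arm64/mm/dma-mapping.c handles more cases:

    static void *example_dma_alloc_coherent(struct device *dev, size_t size,
                                            dma_addr_t *dma_handle, gfp_t flags)
    {
            struct page *page;
            void *addr;

            page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
                                             get_order(size));
            if (!page)
                    return NULL;

            *dma_handle = phys_to_dma(dev, page_to_phys(page));
            addr = page_address(page);

            /* CMA pages are not zeroed by the allocator: honour __GFP_ZERO. */
            if (flags & __GFP_ZERO)
                    memset(addr, 0, size);

            return addr;
    }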
2015-03-19  Merge remote-tracking branch 'lsk/v3.14/topic/gcov' into linux-linaro-lsk-v3.14  (Mark Brown)
2015-03-19  gcov: enable GCOV_PROFILE_ALL from ARCH Kconfigs  (Riku Voipio)
Following the suggestions from Andrew Morton and Stephen Rothwell, don't expand the ARCH list in kernel/gcov/Kconfig. Instead, define an ARCH_HAS_GCOV_PROFILE_ALL bool which architectures can enable. Set ARCH_HAS_GCOV_PROFILE_ALL on the architectures where it was previously allowed, plus ARM64, which I tested. Signed-off-by: Riku Voipio <riku.voipio@linaro.org> Cc: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 957e3facd147510f2cf8780e38606f1d707f0e33) Signed-off-by: Mark Brown <broonie@kernel.org> Conflicts: arch/arm64/Kconfig arch/x86/Kconfig
2015-03-10  Merge tag 'v3.14.35' into linux-linaro-lsk-v3.14  (Alex Shi)
This is the 3.14.35 stable release
2015-03-06  arm64: compat: Fix siginfo_t -> compat_siginfo_t conversion on big endian  (Catalin Marinas)
commit 9d42d48a342aee208c1154696196497fdc556bbf upstream. The native (64-bit) sigval_t union contains sival_int (32-bit) and sival_ptr (64-bit). When a compat application invokes a syscall that takes a sigval_t value (as part of a larger structure, e.g. compat_sys_mq_notify, compat_sys_timer_create), the compat_sigval_t union is converted to the native sigval_t with sival_int overlapping with either the least or the most significant half of sival_ptr, depending on endianness. When the corresponding signal is delivered to a compat application, on big endian the current (compat_uptr_t)sival_ptr cast always returns 0 since sival_int corresponds to the top part of sival_ptr. This patch fixes copy_siginfo_to_user32() so that sival_int is copied to the compat_siginfo_t structure. Reported-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Tested-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
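A simplified sketch of the relevant part of copy_siginfo_to_user32(); the real function switches on si_code and handles many more fields:

    int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
    {
            int err = 0;

            err |= __put_user(from->si_signo, &to->si_signo);
            err |= __put_user(from->si_errno, &to->si_errno);
            err |= __put_user(from->si_code, &to->si_code);
            /*
             * On big endian, (compat_uptr_t)from->si_ptr picks up the top
             * half of the 64-bit sival_ptr, which is always zero here.
             * Copy si_int instead so the 32-bit value survives the round trip.
             */
            err |= __put_user(from->si_int, &to->si_int);

            return err;
    }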
2015-03-03  Merge tag 'v3.14.34' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.34 stable release Conflicts: arch/arm64/kernel/setup.c
2015-02-11  arm64: Fix up /proc/cpuinfo  (Mark Rutland)
commit 44b82b7700d05a52cd983799d3ecde1a976b3bed upstream. Commit d7a49086f263164a (arm64: cpuinfo: print info for all CPUs) attempted to clean up /proc/cpuinfo, but due to concerns regarding further changes was reverted in commit 5e39977edf6500fd (Revert "arm64: cpuinfo: print info for all CPUs"). There are two major issues with the arm64 /proc/cpuinfo format currently:
* The "Features" line describes (only) the 64-bit hwcaps, which is problematic for some 32-bit applications which attempt to parse it. As the same names are used for analogous ISA features (e.g. aes) despite these generally being architecturally unrelated, it is not possible to simply append the 64-bit and 32-bit hwcaps in a manner that might not be misleading to some applications. Various potential solutions have appeared in vendor kernels. Typically the format of the Features line varies depending on whether the task is 32-bit.
* Information is only printed regarding a single CPU. This does not match the ARM format, and does not provide sufficient information in big.LITTLE systems where CPUs are heterogeneous. The CPU information printed is queried from the current CPU's registers, which is racy w.r.t. cross-cpu migration.
This patch attempts to solve these issues. The following changes are made:
* When a task with a LINUX32 personality attempts to read /proc/cpuinfo, the "Features" line contains the decoded 32-bit hwcaps, as with the arm port. Otherwise, the decoded 64-bit hwcaps are shown. This aligns with the behaviour of COMPAT_UTS_MACHINE and COMPAT_ELF_PLATFORM. In the absence of compat support, the Features line is empty. The set of hwcaps injected into a task's auxval are unaffected.
* Properties are printed per-cpu, as with the ARM port. The per-cpu information is queried from pre-recorded cpu information (as used by the sanity checks).
* As with the previous attempt at fixing up /proc/cpuinfo, the hardware field is removed. The only users so far are 32-bit applications tied to particular boards, so no portable applications should be affected, and this should prevent future tying to particular boards.
The following differences remain:
* No model_name is printed, as this cannot be queried from the hardware and cannot be provided in a stable fashion. Use of the CPU {implementor,variant,part,revision} fields is sufficient to identify a CPU and is portable across arm and arm64.
* The following system-wide properties are not provided, as they are not possible to provide generally. Programs relying on these are already tied to particular (32-bit only) boards: - Hardware - Revision - Serial
No software has yet been identified for which these remaining differences are problematic.
Cc: Greg Hackmann <ghackmann@google.com> Cc: Ian Campbell <ijc@hellion.org.uk> Cc: Serban Constantinescu <serban.constantinescu@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: cross-distro@lists.linaro.org Cc: linux-api@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-efi' into linux-linaro-lsk-v3.14  [lsk-v3.14-15.01]  (Mark Brown)
Conflicts: arch/arm64/kernel/Makefile drivers/firmware/efi/efi-stub-helper.c
2015-01-16  Merge tag 'v3.14.29' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.29 stable release
2015-01-16  arm64: kernel: fix __cpu_suspend mm switch on warm-boot  (Lorenzo Pieralisi)
commit f43c27188a49111b58e9611afa2f0365b0b55625 upstream. On arm64 the TTBR0_EL1 register is set to either the reserved TTBR0 page tables on boot or to the active_mm mappings belonging to user space processes, it must never be set to swapper_pg_dir page tables mappings. When a CPU is booted its active_mm is set to init_mm even though its TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies that when __cpu_suspend is triggered the active_mm can point at init_mm even if the current TTBR0_EL1 register contains the reserved TTBR0_EL1 mappings. Therefore, the mm save and restore executed in __cpu_suspend might turn out to be erroneous in that, if the current->active_mm corresponds to init_mm, on resume from low power it ends up restoring in the TTBR0_EL1 the init_mm mappings that are global and can cause speculation of TLB entries which end up being propagated to user space. This patch fixes the issue by checking the active_mm pointer before restoring the TTBR0 mappings. If the current active_mm == &init_mm, the code sets the TTBR0_EL1 to the reserved TTBR0 mapping instead of switching back to the active_mm, which is the expected behaviour corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered. Fixes: 95322526ef62 ("arm64: kernel: cpu_{suspend/resume} implementation") Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
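For illustration, the post-resume logic described above, written as a hypothetical helper (the real code is inline in cpu_suspend()/__cpu_suspend() in arch/arm64/kernel/suspend.c, and the helper name below is made up):

    void notrace example_cpu_suspend_exit_mm_restore(struct mm_struct *mm)
    {
            /*
             * Never restore swapper_pg_dir / init_mm mappings into TTBR0_EL1:
             * use the reserved TTBR0 page tables instead, matching the state
             * the CPU was in when __cpu_suspend() was entered.
             */
            if (mm == &init_mm)
                    cpu_set_reserved_ttbr0();
            else
                    cpu_switch_mm(mm->pgd, mm);

            flush_tlb_all();
    }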
2015-01-16  arm64: Move cpu_resume into the text section  (Laura Abbott)
commit c3684fbb446501b48dec6677a6a9f61c215053de upstream. The function cpu_resume currently lives in the .data section. There's no reason for it to be there since we can use relative instructions without a problem. Move a few cpu_resume data structures out of the assembly file so the .data annotation can be dropped completely and cpu_resume ends up in the read only text section. Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Kees Cook <keescook@chromium.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-16  arm64: kernel: refactor the CPU suspend API for retention states  (Lorenzo Pieralisi)
commit 714f59925595b9c2ea9c22b107b340d38e3b3bc9 upstream. CPU suspend is the standard kernel interface to be used to enter low-power states on ARM64 systems. Current cpu_suspend implementation by default assumes that all low power states are losing the CPU context, so the CPU registers must be saved and cleaned to DRAM upon state entry. Furthermore, the current cpu_suspend() implementation assumes that if the CPU suspend back-end method returns when called, this has to be considered an error regardless of the return code (which can be successful) since the CPU was not expected to return from a code path that is different from cpu_resume code path - eg returning from the reset vector. All in all this means that the current API does not cope well with low-power states that preserve the CPU context when entered (ie retention states), since first of all the context is saved for nothing on state entry for those states and a successful state entry can return as a normal function return, which is considered an error by the current CPU suspend implementation. This patch refactors the cpu_suspend() API so that it can be split in two separate functionalities. The arm64 cpu_suspend API just provides a wrapper around CPU suspend operation hook. A new function is introduced (for architecture code use only) for states that require context saving upon entry: __cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) __cpu_suspend() saves the context on function entry and calls the so called suspend finisher (ie fn) to complete the suspend operation. The finisher is not expected to return, unless it fails in which case the error is propagated back to the __cpu_suspend caller. The API refactoring results in the following pseudo code call sequence for a suspending CPU, when triggered from a kernel subsystem:

    /*
     * int cpu_suspend(unsigned long idx)
     * @idx: idle state index
     */
    {
    -> cpu_suspend(idx)
        |---> CPU operations suspend hook called, if present
                |--> if (retention_state)
                        |--> direct suspend back-end call (eg PSCI suspend)
                     else
                        |--> __cpu_suspend(idx, &back_end_finisher);
    }

By refactoring the cpu_suspend API this way, the CPU operations back-end has a chance to detect whether idle states require state saving or not and can call the required suspend operations accordingly either through simple function call or indirectly through __cpu_suspend() which carries out state saving and suspend finisher dispatching to complete idle state entry. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-16  arm64: kernel: add missing __init section marker to cpu_suspend_init  (Lorenzo Pieralisi)
commit 18ab7db6b749ac27aac08d572afbbd2f4d937934 upstream. Suspend init function must be marked as __init, since it is not needed after the kernel has booted. This patch moves the cpu_suspend_init() function to the __init section. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-08  Merge tag 'v3.14.28' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.28 stable release
2015-01-08  arm64: Add COMPAT_HWCAP_LPAE  (Catalin Marinas)
commit 7d57511d2dba03a8046c8b428dd9192a4bfc1e73 upstream. Commit a469abd0f868 (ARM: elf: add new hwcap for identifying atomic ldrd/strd instructions) introduces HWCAP_ELF for 32-bit ARM applications. As LPAE is always present on arm64, report the corresponding compat HWCAP to user space. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
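A sketch of the change: the bit position is assumed to mirror the arch/arm HWCAP_LPAE definition, and the initializer below lists only a few of the flags the real compat_elf_hwcap carries:

    /* assumed to mirror arch/arm's HWCAP_LPAE */
    #define COMPAT_HWCAP_LPAE       (1 << 20)

    unsigned int compat_elf_hwcap __read_mostly =
            COMPAT_HWCAP_HALF | COMPAT_HWCAP_THUMB | COMPAT_HWCAP_FAST_MULT |
            COMPAT_HWCAP_VFP  | COMPAT_HWCAP_NEON  |
            COMPAT_HWCAP_LPAE;      /* LPAE is always present on arm64 */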
2014-12-04  arm64: mm: Fix mismerge in pgtable.h  (Mark Brown)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-11-21  Merge tag 'v3.14.25' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.25 stable release
2014-11-21  Correct the race condition in aarch64_insn_patch_text_sync()  (William Cohen)
commit 899d5933b2dd2720f2b20b01eaa07871aa6ad096 upstream. When experimenting with patches to provide kprobes support for aarch64, smp machines would hang when inserting breakpoints into kernel code. The hangs were caused by a race condition in the code called by aarch64_insn_patch_text_sync(). The first processor in the aarch64_insn_patch_text_cb() function would patch the code while other processors were still entering the function and incrementing the cpu_count field. This resulted in some processors never observing the exit condition and thus never exiting the function. Thus, processors in the system hung. The first processor to enter the patching function performs the patching and signals that the patching is complete with an increment of the cpu_count field. When all the processors have incremented the cpu_count field the cpu_count will be num_online_cpus()+1 and they will return to normal execution. Fixes: ae16480785de ("arm64: introduce interfaces to hotpatch kernel and module code") Signed-off-by: William Cohen <wcohen@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
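A sketch of the rendezvous after the fix, closely following the upstream aarch64_insn_patch_text_cb() in arch/arm64/kernel/insn.c (comments are ours):

    static int aarch64_insn_patch_text_cb(void *arg)
    {
            int i, ret = 0;
            struct aarch64_insn_patch *pp = arg;

            /* The first CPU becomes the master: patch, then signal completion. */
            if (atomic_inc_return(&pp->cpu_count) == 1) {
                    for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
                            ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
                                                                 pp->new_insns[i]);
                    /*
                     * Second increment: cpu_count reaches num_online_cpus() + 1
                     * only once every CPU (including this one) has checked in,
                     * so no CPU can miss the exit condition below.
                     */
                    atomic_inc(&pp->cpu_count);
            } else {
                    while (atomic_read(&pp->cpu_count) <= num_online_cpus())
                            cpu_relax();
                    smp_mb();
            }

            return ret;
    }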
2014-11-21  arm64: __clear_user: handle exceptions on strb  (Kyle McMartin)
commit 97fc15436b36ee3956efad83e22a557991f7d19d upstream. ARM64 currently doesn't fix up faults on the single-byte (strb) case of __clear_user... which means that we can cause a nasty kernel panic as an ordinary user with any multiple PAGE_SIZE+1 read from /dev/zero. i.e.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.) This is a pretty obscure bug in the general case since we'll only __do_kernel_fault (since there's no extable entry for pc) if the mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll always fault.

    if (!down_read_trylock(&mm->mmap_sem)) {
            if (!user_mode(regs) && !search_exception_tables(regs->pc))
                    goto no_context;
    retry:
            down_read(&mm->mmap_sem);
    } else {
            /*
             * The above down_read_trylock() might have succeeded in which
             * case, we'll have missed the might_sleep() from down_read().
             */
            might_sleep();
            if (!user_mode(regs) && !search_exception_tables(regs->pc))
                    goto no_context;
    }

Fix that by adding an extable entry for the strb instruction, since it touches user memory, similar to the other stores in __clear_user. Signed-off-by: Kyle McMartin <kyle@redhat.com> Reported-by: Miloš Prchlík <mprchlik@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-11-18  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-fpsimd' into linux-linaro-lsk-v3.14  (Mark Brown)
Conflicts: arch/arm64/include/asm/thread_info.h
2014-11-18  arm64: add support for kernel mode NEON in interrupt context  (Ard Biesheuvel)
This patch modifies kernel_neon_begin() and kernel_neon_end(), so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> (cherry picked from commit 190f1ca85d071114930dd7abe6b5d103e9d5572f) Signed-off-by: Mark Brown <broonie@kernel.org>
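An illustrative use of the partial-save interface from (soft)irq context; the caller below is made up, only kernel_neon_begin_partial()/kernel_neon_end() come from the patch:

    #include <asm/neon.h>

    static void example_neon_xor_block(void *dst, const void *src, size_t len)
    {
            /* Only q0-q3 are needed, so ask for the bottom 4 NEON registers. */
            kernel_neon_begin_partial(4);

            /* ... NEON-accelerated work on dst/src (len bytes) goes here ... */

            kernel_neon_end();
    }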
2014-11-18  arm64: defer reloading a task's FPSIMD state to userland resume  (Ard Biesheuvel)
If a task gets scheduled out and back in again and nothing has touched its FPSIMD state in the mean time, there is really no reason to reload it from memory. Similarly, repeated calls to kernel_neon_begin() and kernel_neon_end() will preserve and restore the FPSIMD state every time. This patch defers the FPSIMD state restore to the last possible moment, i.e., right before the task returns to userland. If a task does not return to userland at all (for any reason), the existing FPSIMD state is preserved and may be reused by the owning task if it gets scheduled in again on the same CPU. This patch adds two more functions to abstract away from straight FPSIMD register file saves and restores: - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> (cherry picked from commit 005f78cd88494457ed38ce817f4e3fe5d372f0cb) Signed-off-by: Mark Brown <broonie@kernel.org>
2014-11-18  arm64: add abstractions for FPSIMD state manipulation  (Ard Biesheuvel)
There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch that optimizes away some FPSIMD state restores: . the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current'; . when switching to a task, its FPSIMD state will always be restored from memory. This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores: - fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved - fpsimd_update_current_state -> replace current's FPSIMD state Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> (cherry picked from commit c51f92693c35c141cf7d9b7e2fcbb81128324eb4) Signed-off-by: Mark Brown <broonie@kernel.org>
2014-10-31  Merge tag 'v3.14.23' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.23 stable release Conflicts: arch/s390/kvm/interrupt.c
2014-10-30  arm64: compat: fix compat types affecting struct compat_elf_prpsinfo  (Victor Kamensky)
commit 971a5b6fe634bb7b617d8c5f25b6a3ddbc600194 upstream. The compat_elf_prpsinfo structure does not match the arch/arm struct elf_prpsinfo definition. As a result, the NT_PRPSINFO note in a core file created by the arm64 kernel for an aarch32 (compat) process has the wrong size, so gdb cannot display the command that caused the process to crash. The fix is to change the size of __compat_uid_t and __compat_gid_t so that they match the size of the corresponding fields in the arch/arm case. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-10-15  arm64: enable context tracking  (Larry Bassel)
Backport of the following patch to 3.14 LSK: commit 6c81fe7925cc4c42de49e17be21eb86d1173c3a7 Author: Larry Bassel <larry.bassel@linaro.org> Date: Fri May 30 12:34:15 2014 -0700 arm64: enable context tracking Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths). These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series). The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq). The call to ct_user_enter is made at the beginning of the kernel_exit macro. This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Kevin Hilman <khilman@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Larry Bassel <larry.bassel@linaro.org> Signed-off-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
2014-10-15  arm64: adjust el0_sync so that a function can be called  (Larry Bassel)
Backport of the following patch to 3.14 LSK: commit 6ab6463aeb5fbc75fa3227befb508fc33b34dbf1 Author: Larry Bassel <larry.bassel@linaro.org> Date: Fri May 30 20:34:14 2014 +0100 arm64: adjust el0_sync so that a function can be called To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed the function will not return to the correct place. For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead). Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Kevin Hilman <khilman@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Larry Bassel <larry.bassel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
2014-10-15  arm64: Support arch_irq_work_raise() via self IPIs  (Larry Bassel)
Backport of the following patch to LSK 3.14: commit eb631bb5bf5b042202aaaee4a8dd8f863ba2a900 Author: Larry Bassel <larry.bassel@linaro.org> Date: Mon May 12 16:48:51 2014 +0100 arm64: Support arch_irq_work_raise() via self IPIs Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1. commit bf18525fd793101df42a1344ecc48b49b62e48c9 Author: Stephen Boyd <sboyd@codeaurora.org> Date: Tue Oct 29 20:32:56 2013 +0100 ARM: 7872/1: Support arch_irq_work_raise() via self IPIs By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Larry Bassel <larry.bassel@linaro.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
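A sketch of the arm64 implementation this backport adds (modelled on the arm version referenced above; the __smp_cross_call pointer and IPI_IRQ_WORK name may differ slightly in this tree):

    #ifdef CONFIG_IRQ_WORK
    void arch_irq_work_raise(void)
    {
            /*
             * Only available on SMP: send an IPI to ourselves so pending
             * IRQ work runs from interrupt context, which lets the tick be
             * restarted under full NOHZ.
             */
            if (__smp_cross_call)
                    smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    }
    #endif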
2014-10-09  Merge remote-tracking branch 'lsk/v3.14/topic/kvm' into linux-linaro-lsk-v3.14  (Mark Brown)
2014-10-08  Merge remote-tracking branch 'lsk/v3.14/topic/gicv3' into linux-linaro-lsk-v3.14  (Mark Brown)
2014-10-08  Merge branch 'lsk/v3.14/topic/kvm' into lsk/lsk-with-kvm-v3.14  (Christoffer Dall)
Conflicts: arch/arm64/include/asm/debug-monitors.h
2014-10-08  Merge branch 'lsk/v3.14/topic/gic-v3' into lsk/lsk-with-kvm-v3.14  (Christoffer Dall)
2014-10-06  Merge tag 'v3.14.20' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.20 stable release
2014-10-05  arm64: ptrace: fix compat hardware watchpoint reporting  (Will Deacon)
commit 27d7ff273c2aad37b28f6ff0cab2cfa35b51e648 upstream. I'm not sure what I was on when I wrote this, but when iterating over the hardware watchpoint array (hbp_watch_array), our index is off by ARM_MAX_BRP, so we walk off the end of our thread_struct... ... except, a dodgy condition in the loop means that it never executes at all (bp cannot be NULL). This patch fixes the code so that we remove the bp check and use the correct index for accessing the watchpoint structures. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-10-05  ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU  (Pranavkumar Sawargaonkar)
commit f6edbbf36da3a27b298b66c7955fc84e1dcca305 upstream. X-Gene u-boot runs in EL2 mode with the MMU enabled, hence we might have stale EL2 TLB entries when we enable the EL2 MMU on each host CPU. This can happen on any ARM/ARM64 board running a bootloader in Hyp-mode (or EL2-mode) with the MMU enabled. This patch ensures that we flush all Hyp-mode (or EL2-mode) TLBs on each host CPU before enabling the Hyp-mode (or EL2-mode) MMU. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-10-05  arm/arm64: KVM: Complete WFI/WFE instructions  (Christoffer Dall)
commit 05e0127f9e362b36aa35f17b1a3d52bca9322a3a upstream. The architecture specifies that when the processor wakes up from a WFE or WFI instruction, the instruction is considered complete, however we currently return to EL1 (or EL0) at the WFI/WFE instruction itself. While most guests may not be affected by this because their local exception handler performs an exception return with the event bit set or with an interrupt pending, some guests like UEFI will get wedged due to this little mishap. Simply skip the instruction when we have completed the emulation. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
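A sketch of the fixed arm64 handler; the helper and macro names follow the arm64 KVM code of this era and may differ slightly in this tree:

    static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            if (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EC_WFI_ISS_WFE)
                    kvm_vcpu_on_spin(vcpu);         /* WFE */
            else
                    kvm_vcpu_block(vcpu);           /* WFI */

            /*
             * The architecture treats the instruction as complete on wake-up,
             * so advance the PC past WFI/WFE instead of re-executing it.
             */
            kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));

            return 1;
    }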
2014-10-05  arm64: use irq_set_affinity with force=false when migrating irqs  (Sudeep Holla)
commit 3d8afe3099ebc602848aa7f09235cce3a9a023ce upstream. The arm64 interrupt migration code on cpu offline calls irqchip.irq_set_affinity() with the argument force=true. Originally this argument had no effect because it was not used by any interrupt chip driver and there was no semantics defined. This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu affinity of interrupts") which made the force argument useful to route interrupts to not yet online cpus without checking the target cpu against the cpu online mask. The following commit ffde1de64012 ("irqchip: gic: Support forced affinity setting") implemented this for the GIC interrupt controller. As a consequence the cpu offline irq migration fails if CPU0 is offlined, because CPU0 is still set in the affinity mask and the validation against the cpu online mask is skipped due to the force argument being true. The following first_cpu(mask) selection always selects CPU0 as the target. Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity") intended to fix the above mentioned issue but introduced another issue where affinity can be migrated to a wrong CPU due to the unconditional copy of cpu_online_mask. As with arm, solve the issue by calling irq_set_affinity() with force=false from the CPU offline irq migration code so the GIC driver validates the affinity mask against the CPU online mask and therefore removes CPU0 from the possible target candidates. Also revert the changes done in commit 601c942176d8 as they are no longer needed. Tested on Juno platform. Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity") Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-10-05  arm64: flush TLS registers during exec  (Will Deacon)
commit eb35bdd7bca29a13c8ecd44e6fd747a84ce675db upstream. Nathan reports that we leak TLS information from the parent context during an exec, as we don't clear the TLS registers when flushing the thread state. This patch updates the flushing code so that we: (1) Unconditionally zero the tpidr_el0 register (since this is fully context switched for native tasks and zeroed for compat tasks) (2) Zero the tp_value state in thread_info before clearing the tpidrro_el0 register for compat tasks (since this is only writable by the set_tls compat syscall and therefore not fully switched). A missing compiler barrier is also added to the compat set_tls syscall. Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com> Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
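A sketch of the flush helper described in (1) and (2) above, written in the style of arm64 kernels of this vintage (raw msr instructions rather than later register accessors); exact placement in arch/arm64/kernel/process.c may differ:

    static void tls_thread_flush(void)
    {
            asm ("msr tpidr_el0, xzr");             /* (1) always zero tpidr_el0 */

            if (is_compat_task()) {
                    current->thread.tp_value = 0;   /* (2) clear saved tp_value first */
                    /*
                     * Compiler barrier so the zeroed tp_value is written before
                     * the register, mirroring the ordering fix in set_tls.
                     */
                    barrier();
                    asm ("msr tpidrro_el0, xzr");
            }
    }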
2014-10-03  arm64: barriers: wire up new barrier options  (Will Deacon)
Now that all callers of the barrier macros are updated to pass the mandatory options, update the macros so the option is actually used. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 493e68747e07b69da3d746352525a1ebd6b61d82) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
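The change amounts to the following shape in arch/arm64/include/asm/barrier.h (shown without the rest of the file for brevity):

    #define isb()           asm volatile("isb" : : : "memory")
    #define dmb(opt)        asm volatile("dmb " #opt : : : "memory")
    #define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

    /* callers now write e.g. dsb(sy), dsb(ish), dmb(ishld), ... */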
2014-10-03  arm/arm64: KVM: Report correct FSC for unsupported fault types  (Christoffer Dall)
When we catch something that's not a permission fault or a translation fault, we log the unsupported FSC in the kernel log, but we were masking off the bottom bits of the FSC which was not very helpful. Also correctly report the FSC for data and instruction faults rather than telling people it was a DFCS, which doesn't exist in the ARM ARM. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> (cherry picked from commit 0496daa5cf99741ce8db82686b4c7446a37feabb) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-03  arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc  (Joel Schopp)
The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range. [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40 bit IPA range instead of the maximum possible 48 bit PA range. - Christoffer ] Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joel Schopp <joel.schopp@amd.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> (cherry picked from commit dbff124e29fa24aff9705b354b5f4648cd96e0bb) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-03  arm/arm64: KVM: vgic: make number of irqs a configurable attribute  (Marc Zyngier)
In order to make the number of interrupts configurable, use the new fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as a VGIC configurable attribute. Userspace can now specify the exact size of the GIC (by increments of 32 interrupts). Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> (cherry picked from commit a98f26f183801685ef57333de4bafd4bbc692c7c) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()  (Ard Biesheuvel)
The ISS encoding for an exception from a Data Abort has a WnR bit[6] that indicates whether the Data Abort was caused by a read or a write instruction. While there are several fields in the encoding that are only valid if the ISV bit[24] is set, WnR is not one of them, so we can read it unconditionally. Instead of fixing both implementations of kvm_is_write_fault() in place, reimplement it just once using kvm_vcpu_dabt_iswrite(), which already does the right thing with respect to the WnR bit. Also fix up the callers to pass 'vcpu' Acked-by: Laszlo Ersek <lersek@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> (cherry picked from commit a7d079cea2dffb112e26da2566dd84c0ef1fce97) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
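This is essentially the single shared implementation the change introduces (comments are ours):

    static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
    {
            if (kvm_vcpu_trap_is_iabt(vcpu))
                    return false;   /* instruction aborts are never writes */

            /* WnR bit, valid for Data Aborts regardless of ISV */
            return kvm_vcpu_dabt_iswrite(vcpu);
    }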
2014-10-02  KVM: remove garbage arg to *hardware_{en,dis}able  (Christoffer Dall)
In the beginning was on_each_cpu(), which required an unused argument to kvm_arch_ops.hardware_{en,dis}able, but this was soon forgotten. Remove unnecessary arguments that stem from this. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 13a34e067eab24fec882e1834fbf2cc31911d474) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  KVM: static inline empty kvm_arch functions  (Christoffer Dall)
Using static inline is going to save a few bytes and cycles. For example on powerpc, the difference is 700 B after stripping (5 kB before). This patch also deals with two overlooked empty functions: kvm_arch_flush_shadow was not removed from arch/mips/kvm/mips.c by 2df72e9bc ("KVM: split kvm_arch_flush_shadow"), and kvm_arch_sched_in never made it into arch/ia64/kvm/kvm-ia64.c with e790d9ef6 ("KVM: add kvm_arch_sched_in"). Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 0865e636aef751966e6e0f8950a26bc7391e923c) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
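For illustration, the shape of the change for one of the hooks named above (the full set of converted functions in include/linux/kvm_host.h varies by architecture):

    /*
     * Before: declared in the header and defined as an empty function in
     * each arch.  After: a single empty static inline in the common header.
     */
    static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}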
2014-10-02  KVM: forward declare structs in kvm_types.h  (Christoffer Dall)
Opaque KVM structs are useful for prototypes in asm/kvm_host.h, to avoid "'struct foo' declared inside parameter list" warnings (and consequent breakage due to conflicting types). Move them from individual files to a generic place in linux/kvm_types.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 656473003bc7e056c3bbd4a4d9832dad01e86f76) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>