path: root/target/arm/helper.c
Age  Commit message  Author
2021-05-25  target/arm: Add ID_AA64ZFR0 fields and isar_feature_aa64_sve2  (Richard Henderson)
Will be used for SVE2 isa subset enablement. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210525010358.152808-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
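A minimal sketch of how such a feature test is typically expressed in QEMU's cpu.h, assuming the usual FIELD()/FIELD_EX64() register-field helpers; only the fields relevant here are shown:

    /* ID_AA64ZFR0 field declarations (subset) */
    FIELD(ID_AA64ZFR0, SVEVER, 0, 4)
    FIELD(ID_AA64ZFR0, AES, 4, 4)

    /* SVE2 is present when the SVE version field is non-zero */
    static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
    }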
2021-05-25  target/arm: Add support for FEAT_TLBIOS  (Rebecca Cran)
ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI maintenance instructions that extend to the Outer Shareable domain. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210512182337.18563-3-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Add support for FEAT_TLBIRANGE  (Rebecca Cran)
ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI maintenance instructions that apply to a range of input addresses. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210512182337.18563-2-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
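Both features are reported through the ID_AA64ISAR0.TLB field (0b0001: outer-shareable TLBIs, 0b0010: outer-shareable plus range TLBIs), so the corresponding feature tests plausibly take this shape (a sketch in the existing isar_feature style):

    static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
    }

    static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
    }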
2021-05-10  target/arm: Fix tlbbits calculation in tlbi_aa64_vae2is_write()  (Peter Maydell)
In tlbi_aa64_vae2is_write() the calculation bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_E2 : ARMMMUIdx_SE2, pageaddr) has the two arms of the ?: expression reversed. Fix the bug. Fixes: b6ad6062f1e5 Reported-by: Rebecca Cran <rebecca@nuviainc.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Rebecca Cran <rebecca@nuviainc.com> Message-id: 20210420123106.10861-1-peter.maydell@linaro.org
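In concrete terms the fix simply swaps the two arms of the conditional; a sketch of the corrected line:

    /* secure EL2 must use the secure MMU index, non-secure EL2 the non-secure one */
    bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
                              pageaddr);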
2021-04-30  target/arm: Add ALIGN_MEM to TBFLAG_ANY  (Richard Henderson)
Use this to signal when memory access alignment is required. This value comes from the CCR register for M-profile, and from the SCTLR register for A-profile. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-30  target/arm: Move mode specific TB flags to tb->cs_base  (Richard Henderson)
Now that we have all of the proper macros defined, expanding the CPUARMTBFlags structure and populating the two TB fields is relatively simple. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
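A sketch of the resulting arrangement, assuming the CPUARMTBFlags structure introduced below in this series: the CPU-wide flags stay in tb->flags, while the mode-specific flags travel in tb->cs_base, which was previously always 0 on Arm.

    typedef struct CPUARMTBFlags {
        uint32_t flags;        /* TBFLAG_ANY bits, ends up in tb->flags */
        target_ulong flags2;   /* TBFLAG_A32/A64/M32 bits, ends up in tb->cs_base */
    } CPUARMTBFlags;

    /* at the end of cpu_get_tb_cpu_state(): */
    *pflags = flags.flags;
    *cs_base = flags.flags2;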
2021-04-30  target/arm: Introduce CPUARMTBFlags  (Richard Henderson)
In preparation for splitting tb->flags across multiple fields, introduce a structure to hold the value(s). So far this only migrates the one uint32_t and fixes all of the places that require adjustment to match. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-30  target/arm: Add wrapper macros for accessing tbflags  (Richard Henderson)
We're about to split tbflags into two parts. These macros will ensure that the correct part is used with the correct set of bits. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
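These end up as thin wrappers over the registerfields.h helpers; a sketch of the ANY pair (the A32/A64/M32 variants follow the same pattern against their own flag word):

    #define DP_TBFLAG_ANY(DST, WHICH, VAL) \
        (DST.flags = FIELD_DP32(DST.flags, TBFLAG_ANY, WHICH, VAL))
    #define EX_TBFLAG_ANY(IN, WHICH) \
        FIELD_EX32(IN.flags, TBFLAG_ANY, WHICH)

    /* e.g. DP_TBFLAG_ANY(flags, MMUIDX, mmuidx);  idx = EX_TBFLAG_ANY(tbflags, MMUIDX); */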
2021-04-30  target/arm: Rename TBFLAG_ANY, PSTATE_SS  (Richard Henderson)
We're about to rearrange the macro expansion surrounding tbflags, and this field name will be expanded using the bit definition of the same name, resulting in a token pasting error. So PSTATE_SS -> PSTATE__SS in the uses, and document it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
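The failure mode is ordinary C preprocessing, illustrated by this stand-alone sketch (the field macro is simplified; the PSTATE_SS value matches the architectural bit):

    #define PSTATE_SS (1U << 21)                 /* pre-existing bit definition */
    #define FIELD_MASK(reg, fld)   R_ ## reg ## _ ## fld ## _MASK
    #define EX_TBFLAG_ANY(which)   FIELD_MASK(TBFLAG_ANY, which)

    /*
     * In EX_TBFLAG_ANY(PSTATE_SS), 'which' is not an operand of ## at that
     * level, so it is expanded to (1U << 21) before FIELD_MASK pastes it;
     * the paste then tries to glue '_' onto '(' and ')' onto '_MASK',
     * which is not a valid token, and compilation fails.  Renaming the
     * field to PSTATE__SS avoids the collision.
     */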
2021-04-30  target/arm: Rename TBFLAG_A32, SCTLR_B  (Richard Henderson)
We're about to rearrange the macro expansion surrounding tbflags, and this field name will be expanded using the bit definition of the same name, resulting in a token pasting error. So SCTLR_B -> SCTLR__B in the 3 uses, and document it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210419202257.161730-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-06  Revert "target/arm: Make number of counters in PMCR follow the CPU"  (Peter Maydell)
This reverts commit f7fb73b8cdd3f77e26f9fcff8cf24ff1b58d200f. This change turned out to be a bit half-baked, and doesn't work with KVM, which fails with the error: "qemu-system-aarch64: Failed to retrieve host CPU features" because KVM does not allow accessing of the PMCR_EL0 value in the scratch "query CPU ID registers" VM unless we have first set the KVM_ARM_VCPU_PMU_V3 feature on the VM. Revert the change for 6.0. Reported-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Zenghui Yu <yuzenghui@huawei.com> Message-id: 20210331154822.23332-1-peter.maydell@linaro.org
2021-03-30  target/arm: Make number of counters in PMCR follow the CPU  (Peter Maydell)
Currently we give all the v7-and-up CPUs a PMU with 4 counters. This means that we don't provide the 6 counters that are required by the Arm BSA (Base System Architecture) specification if the CPU supports the Virtualization extensions. Instead of having a single PMCR_NUM_COUNTERS, make each CPU type specify the PMCR reset value (obtained from the appropriate TRM), and use the 'N' field of that value to define the number of counters provided. This means that we now supply 6 counters for Cortex-A53, A57, A72, A15 and A9 as well as '-cpu max'; Cortex-A7 and A8 stay at 4; and Cortex-R5 goes down to 3. Note that because we now use the PMCR reset value of the specific implementation, we no longer set the LC bit out of reset. This has an UNKNOWN value out of reset for all cores with any AArch32 support, so guest software should be setting it anyway if it wants it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> Message-id: 20210311165947.27470-1-peter.maydell@linaro.org Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
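The approach (later reverted for 6.0, see above) boils down to reading PMCR_EL0.N, bits [15:11] of the per-CPU reset value, instead of a compile-time constant; a hedged sketch:

    #define PMCRN_MASK   0xf800
    #define PMCRN_SHIFT  11

    static inline uint32_t pmu_num_counters(CPUARMState *env)
    {
        /* 'N' field of the CPU-specific PMCR value */
        return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
    }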
2021-03-10  semihosting: Move include/hw/semihosting/ -> include/semihosting/  (Philippe Mathieu-Daudé)
We want to move the semihosting code out of hw/ in the next patch. This patch contains the mechanical steps, created using:
  $ git mv include/hw/semihosting/ include/
  $ sed -i s,hw/semihosting,semihosting, $(git grep -l hw/semihosting)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210226131356.3964782-2-f4bug@amsat.org> Message-Id: <20210305135451.15427-2-alex.bennee@linaro.org>
2021-03-05  target/arm: Use TCF0 and TFSRE0 for unprivileged tag checks  (Peter Collingbourne)
Section D6.7 of the ARM ARM states: For the purpose of determining Tag Check Fault handling, unprivileged load and store instructions are treated as if executed at EL0 when executed at either:
- EL1, when the Effective value of PSTATE.UAO is 0.
- EL2, when both the Effective value of HCR_EL2.{E2H, TGE} is {1, 1} and the Effective value of PSTATE.UAO is 0.
ARM has confirmed a defect in the pseudocode function AArch64.TagCheckFault that makes it inconsistent with the above wording. The remedy is to adjust references to PSTATE.EL in that function to instead refer to AArch64.AccessUsesEL(acctype), so that unprivileged instructions use SCTLR_EL1.TCF0 and TFSRE0_EL1. The exception type for synchronous tag check faults remains unchanged. This patch implements the described change by partially reverting commits 50244cc76abc and cc97b0019bb5. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210219201820.2672077-1-pcc@google.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-03-05  target/arm: Add support for FEAT_SSBS, Speculative Store Bypass Safe  (Rebecca Cran)
Add support for FEAT_SSBS. SSBS (Speculative Store Bypass Safe) is an optional feature in ARMv8.0, and mandatory in ARMv8.5. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210216224543.16142-2-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-11  target/arm: Correctly initialize MDCR_EL2.HPMN  [pull-target-arm-20210211-1]  (Daniel Müller)
When working with performance monitoring counters, we look at MDCR_EL2.HPMN as part of the check whether a counter is enabled. This check fails, because MDCR_EL2.HPMN is reset to 0, meaning that no counters are "enabled" for < EL2. That's in violation of the Arm specification, which states that:
> On a Warm reset, this field [MDCR_EL2.HPMN] resets to the value in
> PMCR_EL0.N
That's also what a comment in the code acknowledges, but the necessary adjustment seems to have been forgotten when support for more counters was added. This change fixes the issue by setting the reset value to PMCR.N, which is four. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
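Concretely the fix amounts to one line in the MDCR_EL2 register description; a sketch with the surrounding fields abbreviated, where PMCR_NUM_COUNTERS is the constant (4) mentioned earlier in this log:

    { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
      .access = PL2_RW,
      .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2),
      .resetvalue = PMCR_NUM_COUNTERS },   /* was 0: HPMN must reset to PMCR_EL0.N */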
2021-02-11  target/arm: Support AA32 DIT by moving PSTATE_SS from cpsr into env->pstate  (Rebecca Cran)
cpsr has been treated as being the same as spsr, but it isn't. Since PSTATE_SS isn't in cpsr, remove it and move it into env->pstate. This allows us to add support for CPSR_DIT, adding helper functions to merge SPSR_ELx to and from CPSR. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210208065700.19454-3-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-11  target/arm: Add support for FEAT_DIT, Data Independent Timing  (Rebecca Cran)
Add support for FEAT_DIT. DIT (Data Independent Timing) is a required feature for ARMv8.4. Since virtual machine execution is largely nondeterministic and TCG is outside of the security domain, it's implemented as a NOP. Signed-off-by: Rebecca Cran <rebecca@nuviainc.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210208065700.19454-2-rebecca@nuviainc.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-11  target/arm: Fix SCR RES1 handling  (Mike Nawrocki)
The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them to 1 only when there is no support for AArch32 at EL1 or above. The reset value will be 0x30 only if the CPU is AArch64-only; if there is support for AArch32 at EL1 or above, it will be reset to 0. Also adds helper function isar_feature_aa64_aa32_el1 to check if AArch32 is supported at EL1 or above. Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
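A sketch of the two pieces, in the existing isar_feature/scr_write style: the new predicate, and the now-conditional forcing of the FW/AW bits (bits 5:4, i.e. 0x30):

    static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
    {
        /* AArch32 is supported at EL1 when ID_AA64PFR0.EL1 >= 0b0010 */
        return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
    }

    /* in scr_write(): FW and AW are RES1 only if EL1 cannot be AArch32 */
    if (!cpu_isar_feature(aa64_aa32_el1, cpu)) {
        value |= SCR_FW | SCR_AW;
    }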
2021-02-05  target/arm: do not use cc->do_interrupt for KVM directly  (Claudio Fontana)
cc->do_interrupt is in theory a TCG callback used in accel/tcg only, to prepare the emulated architecture to take an interrupt as defined in the hardware specifications, but in reality the _do_interrupt style of functions in targets are also occasionally reused by KVM to prepare the architecture state in a similar way where userspace code has identified that it needs to deliver an exception to the guest. In the case of ARM, that includes:
1) the vcpu thread got a SIGBUS indicating a memory error, and we need to deliver a Synchronous External Abort to the guest to let it know about the error.
2) the kernel told us about a debug exception (breakpoint, watchpoint) but it is not for one of QEMU's own gdbstub breakpoints/watchpoints so it must be a breakpoint the guest itself has set up, therefore we need to deliver it to the guest.
So in order to reuse code, the same arm_do_interrupt function is used. This is all fine, but we need to avoid calling it using the callback registered in CPUClass, since that one is now TCG-only. Fortunately this is easily solved by replacing calls to CPUClass::do_interrupt() with explicit calls to arm_do_interrupt(). Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20210204163931.7358-9-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-01-29  target/arm: Replace magic value by MMU_DATA_LOAD definition  (Philippe Mathieu-Daudé)
cpu_get_phys_page_debug() uses 'DATA LOAD' MMU access type. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210127232822.3530782-1-f4bug@amsat.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-29  target/arm: Conditionalize DBGDIDR  (Richard Henderson)
Only define the register if it exists for the cpu. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210120031656.737646-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-29  target/arm: Implement ID_PFR2  (Richard Henderson)
This was defined at some point before ARMv8.4, and will shortly be used by new processor descriptions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210120204400.1056582-1-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
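Implementing a new ID register is mostly a matter of adding a constant cpreg entry; a sketch of what the ID_PFR2 description plausibly looks like (encoding per the architecture, value taken from the per-CPU ARMISARegisters block):

    { .name = "ID_PFR2", .state = ARM_CP_STATE_BOTH,
      .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 4,
      .access = PL1_R, .type = ARM_CP_CONST,
      .accessfn = access_aa64_tid3,
      .resetvalue = cpu->isar.id_pfr2 },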
2021-01-19  target/arm: refactor vae1_tlbmask()  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-19-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: Implement SCR_EL3.EEL2  (Rémi Denis-Courmont)
This adds handling for the SCR_EL3.EEL2 bit. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
[PMM: Applied fixes for review issues noted by RTH:
 - check for FEATURE_AARCH64 before checking sel2 isar feature
 - correct the commit message subject line]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: set HPFAR_EL2.NS on secure stage 2 faults  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: secure stage 2 translation regime  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: generalize 2-stage page-walk condition  (Rémi Denis-Courmont)
The stage_1_mmu_idx() already effectively keeps track of which translation regimes have two stages. Don't hard-code another test. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: translate NS bit in page-walks  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-12-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: do S1_ptw_translate() before address space lookup  (Rémi Denis-Courmont)
In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW bits can invert the secure flag for pagetable walks. This patch allows S1_ptw_translate() to change the non-secure bit. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: handle VMID change in secure state  (Rémi Denis-Courmont)
The VTTBR write callback so far assumes that the underlying VM lies in non-secure state. This handles the secure state scenario. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: add ARMv8.4-SEL2 system registers  (Rémi Denis-Courmont)
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-9-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: add MMU stage 1 for Secure EL2  (Rémi Denis-Courmont)
This adds the MMU indices for EL2 stage 1 in secure state. To keep the code contained (it is largely identical between secure and non-secure modes), the MMU indices are reassigned. The new assignments provide a systematic pattern with a non-secure bit. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: add 64-bit S-EL2 to EL exception table  (Rémi Denis-Courmont)
With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in secure mode, though it can only be AArch64. This patch adds the target EL for exceptions from 64-bit S-EL2. It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure mode. Those values were never used in practice as the effective value of HCR was always 0 in secure mode. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: factor MDCR_EL2 common handling  (Rémi Denis-Courmont)
This adds a common helper to compute the effective value of MDCR_EL2. That is the actual register value if EL2 is enabled in the current security context, and 0 otherwise. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
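A minimal sketch of such a helper, in the same spirit as arm_hcr_el2_eff():

    static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
    {
        return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
    }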
2021-01-19  target/arm: use arm_hcr_el2_eff() where applicable  (Rémi Denis-Courmont)
This will simplify accessing HCR conditionally in secure state. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-19  target/arm: use arm_is_el2_enabled() where applicable  (Rémi Denis-Courmont)
Do not assume that EL2 is available in and only in non-secure context. That equivalence is broken by ARMv8.4-SEL2. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
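A sketch of the predicate this series relies on; with Secure EL2, enablement in secure state additionally depends on SCR_EL3.EEL2:

    static inline bool arm_is_el2_enabled(CPUARMState *env)
    {
        if (arm_feature(env, ARM_FEATURE_EL2)) {
            if (arm_is_secure_below_el3(env)) {
                return (env->cp15.scr_el3 & SCR_EEL2) != 0;
            }
            return true;
        }
        return false;
    }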
2021-01-19  target/arm: remove redundant tests  (Rémi Denis-Courmont)
In this context, the HCR value is the effective value, and thus is zero in secure mode. The tests for HCR.{F,I}MO are sufficient. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
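In other words, guards of the form "!secure && (hcr & HCR_xMO)" lose their first half; a sketch with a hypothetical helper name:

    static bool irq_routed_to_el2(CPUARMState *env)
    {
        uint64_t hcr_el2 = arm_hcr_el2_eff(env);

        /* before: return !arm_is_secure(env) && (hcr_el2 & HCR_IMO);
         * The !secure test is redundant because the effective HCR value
         * is already 0 in secure state (and if Secure EL2 is enabled,
         * honouring IMO is exactly what we want). */
        return (hcr_el2 & HCR_IMO) != 0;
    }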
2021-01-18  semihosting: Change common-semi API to be architecture-independent  (Keith Packard)
The public API is now defined in hw/semihosting/common-semi.h. do_common_semihosting takes CPUState * instead of CPUARMState *. All internal functions have been renamed common_semi_ instead of arm_semi_ or arm_. Aside from the API change, there are no functional changes in this patch. Signed-off-by: Keith Packard <keithp@keithp.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20210107170717.2098982-3-keithp@keithp.com> Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
2021-01-18  target/arm: use official org.gnu.gdb.aarch64.sve layout for registers  (Alex Bennée)
While GDB can work with any XML description given to it, there is special handling for SVE registers on the GDB side which makes the user's life a little better. The changes aren't that major and all the registers except $vg report the same. All that changes is:
- report org.gnu.gdb.aarch64.sve
- use gdb nomenclature for names and types
- minor re-ordering of the types to match reference
- re-enable ieee_half (as we know gdb supports it now)
- $vg is now a 64 bit int
- check $vN and $zN aliasing in test
Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Luis Machado <luis.machado@linaro.org> Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
2021-01-12  target/arm: ARMv8.4-TTST extension  (Rémi Denis-Courmont)
This adds support for the Small Translation tables extension (ARMv8.4-TTST) in AArch64 state. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-01-08  target/arm: Fix MTE0_ACTIVE  (Richard Henderson)
In 50244cc76abc we updated mte_check_fail to match the ARM pseudocode, using the correct EL to select the TCF field. But we failed to update MTE0_ACTIVE the same way, which led to g_assert_not_reached(). Cc: qemu-stable@nongnu.org Buglink: https://bugs.launchpad.net/bugs/1907137 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201221204426.88514-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-12-19  qapi: Use QAPI_LIST_PREPEND() where possible  (Eric Blake)
Anywhere we create a list of just one item or by prepending items (typically because order doesn't matter), we can use QAPI_LIST_PREPEND(). But places where we must keep the list in order by appending remain open-coded until later patches. Note that as a side effect, this also performs a cleanup of two minor issues in qga/commands-posix.c: the old code was performing
  new = g_malloc0(sizeof(*ret));
which 1) is confusing because you have to verify whether 'new' and 'ret' are variables with the same type, and 2) would conflict with C++ compilation (not an actual problem for this file, but makes copy-and-paste harder). Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20201113011340.463563-5-eblake@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> [Straightforward conflicts due to commit a8aa94b5f8 "qga: update schema for guest-get-disks 'dependents' field" and commit a10b453a52 "target/mips: Move mips_cpu_add_definition() from helper.c to cpu.c" resolved. Commit message tweaked.] Signed-off-by: Markus Armbruster <armbru@redhat.com>
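For reference, a hedged sketch of the macro in use, building a QAPI list without open-coded allocation (the helper name is hypothetical; strList is the generated list-of-string type):

    #include "qemu/osdep.h"
    #include "qapi/util.h"
    #include "qapi/qapi-builtin-types.h"

    static strList *make_example_list(void)
    {
        strList *list = NULL;

        /* prepending is fine here because the order does not matter */
        QAPI_LIST_PREPEND(list, g_strdup("first"));
        QAPI_LIST_PREPEND(list, g_strdup("second"));
        return list;
    }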
2020-12-10  target/arm: Implement v8.1M PXN extension  (Peter Maydell)
In v8.1M the PXN architecture extension adds a new PXN bit to the MPU_RLAR registers, which forbids execution of code in the region from a privileged mode. This is another feature which is just in the generic "in v8.1M" set and has no ID register field indicating its presence. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
2020-11-23  target/arm: fix stage 2 page-walks in 32-bit emulation  (Rémi Denis-Courmont)
Using a target unsigned long would limit the Input Address to an LPAE page-walk to 32 bits on AArch32 and 64 bits on AArch64. This is okay for stage 1 or on AArch64, but it is insufficient for stage 2 on AArch32. In the latter case, the Input Address can have up to 40 bits. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201118150414.18360-1-remi@remlab.net Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-10  target/arm: add spaces around operator  (Xinhao Zhang)
Fix code style: operators need spaces on both sides. Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com> Signed-off-by: Kai Deng <dengkai1@huawei.com> Message-id: 20201103114529.638233-1-zhangxinhao1@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02  target/arm: fix LORID_EL1 access check  (Rémi Denis-Courmont)
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
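A sketch of the corrected access function: both trap tests are applied regardless of security state, with the EL2 test going through the effective HCR value so it only bites once Secure EL2 is enabled:

    static CPAccessResult access_lor_ns(CPUARMState *env,
                                        const ARMCPRegInfo *ri, bool isread)
    {
        int el = arm_current_el(env);

        if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TLOR)) {
            return CP_ACCESS_TRAP_EL2;
        }
        if (el < 3 && (env->cp15.scr_el3 & SCR_TLOR)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }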
2020-11-02  target/arm: fix handling of HCR.FB  (Rémi Denis-Courmont)
HCR should be applied when NS is set, not when it is cleared. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11  (Richard Henderson)
Unlike many other bits in HCR_EL2, the description for this bit does not contain the phrase "if ... this field behaves as 0 for all purposes other than", so do not squash the bit in arm_hcr_el2_eff. Instead, replicate the E2H+TGE test in the two places that require it. Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Message-id: 20201008162155.161886-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-10-20  target/arm: Use tlb_flush_page_bits_by_mmuidx*  (Richard Henderson)
When TBI is enabled in a given regime, 56 bits of the address are significant and we need to clear out any other matching virtual addresses with differing tags. The other uses of tlb_flush_page (without mmuidx) in this file are only used by aarch32 mode. Fixes: 38d931687fa1 Reported-by: Jordan Frank <jordanfrank@fb.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201016210754.818257-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
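A simplified sketch of the call pattern in a TLBI-by-VA write handler after this change (force-broadcast handling and the exact mask/index selection are elided): work out how many low address bits are significant for the regime and flush only matching entries:

    static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                     uint64_t value)
    {
        CPUState *cs = env_cpu(env);
        uint64_t pageaddr = sextract64(value << 12, 0, 56);
        int bits = tlbbits_for_regime(env, ARMMMUIdx_E10_1, pageaddr);

        /* 56 when TBI applies to this address, else the full 64 bits */
        tlb_flush_page_bits_by_mmuidx(cs, pageaddr, vae1_tlbmask(env), bits);
    }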