path: root/target/arm/cpu64.c
2022-07-11  target/arm: Enable SME for -cpu max  (Richard Henderson)
Note that SME remains effectively disabled for user-only, because we do not yet set CPACR_EL1.SMEN. This needs to wait until the kernel ABI is implemented. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220708151540.18136-33-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-07-07  target/arm: Implement AArch32 DBGDEVID, DBGDEVID1, DBGDEVID2  (Peter Maydell)
Starting with v7 of the debug architecture, there are three extra ID registers that add information on top of that provided in DBGDIDR. These are DBGDEVID, DBGDEVID1 and DBGDEVID2. In the v7 debug architecture, DBGDEVID is optional, present only if DBGDIDR.DEVID_imp is set. In v7.1 all three must be present. Implement the missing registers. Note that we only need to set the values in the ARMISARegisters struct for the CPUs Cortex-A7, A15, A53, A57 and A72 (plus the 32-bit 'max' which uses the Cortex-A53 values): earlier CPUs didn't implement v7 of the architecture, and our other 64-bit CPUs (Cortex-A76, Neoverse-N1 and A64fx) don't have AArch32 support at EL1. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220630194116.3438513-5-peter.maydell@linaro.org
2022-06-27  target/arm: Add cpu properties for SME  (Richard Henderson)
Mirror the properties for SVE. The main difference is that any arbitrary set of powers of 2 may be supported, and not the stricter constraints that apply to SVE. Include a property to control FEAT_SME_FA64, as failing to restrict the runtime to the proper subset of insns could be a major point for bugs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20220620175235.60881-18-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
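As a rough illustration of the constraint described above (an editorial sketch, not the actual QEMU property code; names are hypothetical): an SME vector-length map may contain any subset of the power-of-two lengths, so validation only needs to reject non-power-of-two entries rather than enforce SVE's stricter rules.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical: bit (vq - 1) of vq_map is set when a vector length of
     * vq * 128 bits is enabled.  For SME, any subset of the power-of-two
     * lengths is acceptable. */
    static bool sme_vq_map_is_valid(uint32_t vq_map)
    {
        for (uint32_t vq = 1; vq <= 16; vq++) {
            if ((vq_map & (1u << (vq - 1))) && (vq & (vq - 1)) != 0) {
                return false;   /* an enabled length is not a power of two */
            }
        }
        return true;
    }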
2022-06-27  target/arm: Unexport aarch64_add_*_properties  (Richard Henderson)
These functions are not used outside cpu64.c, so make them static. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220620175235.60881-17-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-27  target/arm: Generalize cpu_arm_{get, set}_default_vec_len  (Richard Henderson)
Rename from cpu_arm_{get,set}_sve_default_vec_len, and take the pointer to default_vq from opaque. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220620175235.60881-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-27  target/arm: Generalize cpu_arm_{get,set}_vq  (Richard Henderson)
Rename from cpu_arm_{get,set}_sve_vq, and take the ARMVQMap as the opaque parameter. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220620175235.60881-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-27  target/arm: Create ARMVQMap  (Richard Henderson)
Pull the three sve_vq_* values into a structure. This will be reused for SME. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220620175235.60881-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
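A minimal sketch of the shape such a structure might take (field names follow the three sve_vq_* values the commit message mentions; see cpu.h for the authoritative definition):

    #include <stdint.h>

    /* One bit per vq: bit (vq - 1) set means a vector length of vq * 128 bits. */
    typedef struct ARMVQMap {
        uint32_t map;        /* lengths selected via properties */
        uint32_t init;       /* which bits the user explicitly set */
        uint32_t supported;  /* lengths the CPU implementation supports */
    } ARMVQMap;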
2022-06-27  target/arm: Move error for sve%d property to arm_cpu_sve_finalize  (Richard Henderson)
Keep all of the error messages together. This does mean that when setting many sve length properties we'll only generate one error, but we only really need one. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220620175235.60881-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-06-08  target/arm: Use uint32_t instead of bitmap for sve vq's  (Richard Henderson)
The bitmap need only hold 15 bits; bitmap is over-complicated. We can simplify operations quite a bit with plain logical ops. The introduction of SVE_VQ_POW2_MAP eliminates the need for looping in order to search for powers of two. Simply perform the logical ops and use count leading or trailing zeros as required to find the result. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220607203306.657998-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
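A sketch of the style of operation this enables (illustrative, not the exact QEMU code): with the map held in a plain uint32_t, "largest enabled power-of-two length no bigger than a given vq" becomes a mask plus a count-leading-zeros.

    #include <stdint.h>

    /* Bits for vq = 1, 2, 4, 8, 16: the power-of-two vector lengths. */
    #define SVE_VQ_POW2_MAP \
        ((1u << 0) | (1u << 1) | (1u << 3) | (1u << 7) | (1u << 15))

    /* Highest power-of-two vq <= max_vq that is present in map, or 0. */
    static uint32_t sve_max_pow2_vq(uint32_t map, uint32_t max_vq)
    {
        uint32_t m = map & SVE_VQ_POW2_MAP & ((1u << max_vq) - 1);
        /* 31 - clz is the index of the highest set bit; +1 turns a bit
         * index back into a vq value. */
        return m ? 32 - (uint32_t)__builtin_clz(m) : 0;
    }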
2022-06-08  target/arm: Implement FEAT_DoubleFault  (Peter Maydell)
The FEAT_DoubleFault extension adds the following:

* All external aborts on instruction fetches and translation table walks for instruction fetches must be synchronous. For QEMU this is already true.
* SCR_EL3 has a new bit NMEA which disables the masking of SError interrupts by PSTATE.A when the SError interrupt is taken to EL3. For QEMU we only need to make the bit writable, because we have no sources of SError interrupts.
* SCR_EL3 has a new bit EASE which causes synchronous external aborts taken to EL3 to be taken at the same entry point as SError. (Note that this does not mean that they are SErrors for purposes of PSTATE.A masking or that the syndrome register reports them as SErrors: it just means that the vector offset is different.)
* The existing SCTLR_EL3.IESB has an effective value of 1 when SCR_EL3.NMEA is 1. For QEMU this is a no-op because we don't need different behaviour based on IESB (we don't need to do anything to ensure that error exceptions are synchronized).

So for QEMU the things we need to change are:

* Make SCR_EL3.{NMEA,EASE} writable
* When taking a synchronous external abort at EL3, adjust the vector entry point if SCR_EL3.EASE is set
* Advertise the feature in the ID registers

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531151431.949322-1-peter.maydell@linaro.org
2022-06-08  target/arm: Declare support for FEAT_RASv1p1  (Peter Maydell)
The architectural feature RASv1p1 introduces the following new features:

* new registers ERXPFGCDN_EL1, ERXPFGCTL_EL1 and ERXPFGF_EL1
* new bits in the fine-grained trap registers that control traps for these new registers
* new trap bits HCR_EL2.FIEN and SCR_EL3.FIEN that control traps for ERXPFGCDN_EL1, ERXPFGCTL_EL1, ERXPFGF_EL1
* a larger number of the ERXMISC<n>_EL1 registers
* the format of ERR<n>STATUS registers changes

The architecture permits that if ERRIDR_EL1.NUM is 0 (as it is for QEMU) then all these new registers may UNDEF, and the HCR_EL2.FIEN and SCR_EL3.FIEN bits may be RES0. We don't have any ERR<n>STATUS registers (again, because ERRIDR_EL1.NUM is 0). QEMU does not yet implement the fine-grained-trap extension. So there is nothing we need to implement to be compliant with the feature spec. Make the 'max' CPU report the feature in its ID registers, and document it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531114258.855804-1-peter.maydell@linaro.org
2022-05-19  target/arm: Enable FEAT_HCX for -cpu max  (Richard Henderson)
This feature adds a new register, HCRX_EL2, which controls many of the newer AArch64 features. So far the register is effectively RES0, because none of the new features are done. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220517054850.177016-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-19  target/arm: Make number of counters in PMCR follow the CPU  (Peter Maydell)
Currently we give all the v7-and-up CPUs a PMU with 4 counters. This means that we don't provide the 6 counters that are required by the Arm BSA (Base System Architecture) specification if the CPU supports the Virtualization extensions.

Instead of having a single PMCR_NUM_COUNTERS, make each CPU type specify the PMCR reset value (obtained from the appropriate TRM), and use the 'N' field of that value to define the number of counters provided.

This means that we now supply 6 counters instead of 4 for: Cortex-A9, Cortex-A15, Cortex-A53, Cortex-A57, Cortex-A72, Cortex-A76, Neoverse-N1, '-cpu max'
This CPU goes from 4 to 8 counters: A64FX
These CPUs remain with 4 counters: Cortex-A7, Cortex-A8
This CPU goes down from 4 to 3 counters: Cortex-R5

Note that because we now use the PMCR reset value of the specific implementation, we no longer set the LC bit out of reset. This has an UNKNOWN value out of reset for all cores with any AArch32 support, so guest software should be setting it anyway if it wants it.

This change was originally landed in commit f7fb73b8cdd3f7 (during the 6.0 release cycle) but was then reverted by commit 21c2dd77a6aa517 before that release because it did not work with KVM. This version fixes that by creating the scratch vCPU in kvm_arm_get_host_cpu_features() with the KVM_ARM_VCPU_PMU_V3 feature if KVM supports it, and then only asking KVM for the PMCR_EL0 value if the vCPU has a PMU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Added the correct value for a64fx]
Message-id: 20220513122852.4063586-1-peter.maydell@linaro.org
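For illustration of the 'N' field mentioned above (a sketch, not the actual QEMU helper): PMCR.N lives in bits [15:11], so the number of counters falls directly out of the per-CPU reset value.

    #include <stdint.h>

    /* PMCR.N, bits [15:11]: number of event counters implemented. */
    static unsigned pmcr_num_counters(uint64_t pmcr_reset_value)
    {
        return (pmcr_reset_value >> 11) & 0x1f;
    }

    /* The cores listed above as gaining 6 counters have N = 6 in their
     * TRM reset values; the A64FX value gives N = 8. */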
2022-05-19  hw/intc/arm_gicv3: Use correct number of priority bits for the CPU  (Peter Maydell)
Make the GICv3 set its number of bits of physical priority from the implementation-specific value provided in the CPU state struct, in the same way we already do for virtual priority bits. Because this would be a migration compatibility break, we provide a property force-8-bit-prio which is enabled for 7.0 and earlier versioned board models to retain the legacy "always use 8 bits" behaviour. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220512151457.3899052-6-peter.maydell@linaro.org Message-id: 20220506162129.2896966-5-peter.maydell@linaro.org
2022-05-19  target/arm: Implement FEAT_IDST  (Peter Maydell)
The Armv8.4 feature FEAT_IDST specifies that exceptions generated by read accesses to the feature ID space should report a syndrome code of 0x18 (EC_SYSTEMREGISTERTRAP) rather than 0x00 (EC_UNCATEGORIZED). The feature ID space is defined to be: op0 == 3, op1 == {0,1,3}, CRn == 0, CRm == {0-7}, op2 == {0-7}

In our implementation we might return the EC_UNCATEGORIZED syndrome value for a system register access in four cases:

* no reginfo struct in the hashtable
* cp_access_ok() fails (ie ri->access doesn't permit the access)
* ri->accessfn returns CP_ACCESS_TRAP_UNCATEGORIZED at runtime
* ri->type includes ARM_CP_RAISES_EXC, and the readfn raises an UNDEF exception at runtime

We have very few regdefs that set ARM_CP_RAISES_EXC, and none of them are in the feature ID space. (In the unlikely event that any are added in future they would need to take care of setting the correct syndrome themselves.) This patch deals with the other three cases, and enables FEAT_IDST for AArch64 -cpu max.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509155457.3560724-1-peter.maydell@linaro.org
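The feature ID space quoted above is straightforward to express as a predicate; an illustrative sketch (not the actual QEMU code):

    #include <stdbool.h>

    /* op0 == 3, op1 in {0, 1, 3}, CRn == 0, CRm in 0..7, op2 in 0..7,
     * exactly as listed in the commit message. */
    static bool is_feat_id_space(int op0, int op1, int crn, int crm, int op2)
    {
        return op0 == 3 &&
               (op1 == 0 || op1 == 1 || op1 == 3) &&
               crn == 0 &&
               crm >= 0 && crm <= 7 &&
               op2 >= 0 && op2 <= 7;
    }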
2022-05-19  target/arm: Enable FEAT_S2FWB for -cpu max  (Peter Maydell)
Enable the FEAT_S2FWB for -cpu max. Since FEAT_S2FWB requires that CLIDR_EL1.{LoUU,LoUIS} are zero, we explicitly squash these (the inherited CLIDR_EL1 value from the Cortex-A57 has them as 1). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220505183950.2781801-5-peter.maydell@linaro.org
2022-05-09  target/arm: Define neoverse-n1  (Richard Henderson)
Enable the n1 for virt and sbsa board use. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-25-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Define cortex-a76  (Richard Henderson)
Enable the a76 for virt and sbsa board use. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-24-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_DGH for -cpu max  (Richard Henderson)
This extension concerns not merging memory access, which TCG does not implement. Thus we can trivially enable this feature. Add a comment to handle_hint for the DGH instruction, but no code. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-23-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_CSV3 for -cpu max  (Richard Henderson)
This extension concerns cache speculation, which TCG does not implement. Thus we can trivially enable this feature. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-22-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_CSV2_2 for -cpu max  (Richard Henderson)
There is no branch prediction in TCG, therefore there is no need to actually include the context number into the predictor. Therefore all we need to do is add the state for SCXTNUM_ELx. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-21-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_CSV2 for -cpu max  (Richard Henderson)
This extension concerns branch speculation, which TCG does not implement. Thus we can trivially enable this feature. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-20-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_IESB for -cpu max  (Richard Henderson)
This feature is AArch64 only, and applies to physical SErrors, which QEMU does not implement, thus the feature is a nop. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-19-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_RAS for -cpu max  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-18-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_Debugv8p4 for -cpu max  (Richard Henderson)
This extension concerns changes to the External Debug interface, with Secure and Non-secure access to the debug registers, and all of it is outside the scope of QEMU. Indicating support for this is mandatory with FEAT_SEL2, which we do implement. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Enable FEAT_Debugv8p2 for -cpu max  (Richard Henderson)
The only portion of FEAT_Debugv8p2 that is relevant to QEMU is CONTEXTIDR_EL2, which is also conditionally implemented with FEAT_VHE. The rest of the debug extension concerns the External debug interface, which is outside the scope of QEMU. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Annotate arm_max_initfn with FEAT identifiers  (Richard Henderson)
Update the legacy feature names to the current names. Provide feature names for id changes that were not marked. Sort the field updates into increasing bitfield order. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Split out aa32_max_features  (Richard Henderson)
Share the code to set AArch32 max features so that we no longer have code drift between qemu{-system,}-{arm,aarch64}. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-09  target/arm: Move cortex impdef sysregs to cpu_tcg.c  (Richard Henderson)
Previously we were defining some of these in user-only mode, but none of them are accessible from user-only, therefore define them only in system mode. This will shortly be used from cpu_tcg.c also. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220506180242.216785-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-05-05  target/arm: Replace sentinels with ARRAY_SIZE in cpregs.h  (Richard Henderson)
Remove a possible source of error by removing REGINFO_SENTINEL and using ARRAY_SIZE (conveniently hidden inside a macro) to find the end of the set of regs being registered or modified. The space saved by not having the extra array element reduces the executable's .data.rel.ro section by about 9k. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220501055028.646596-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
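The shape of the change, sketched generically (these are not the actual QEMU macros or types): instead of scanning until a sentinel entry, the registration loop is bounded by ARRAY_SIZE, which a wrapper macro supplies so callers still pass just the array.

    #include <stddef.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct reginfo { const char *name; /* ... */ };

    /* Before: loop until a zeroed REGINFO_SENTINEL entry was found.
     * After: the element count is computed at the call site. */
    static void define_regs(const struct reginfo *regs, size_t nregs)
    {
        for (size_t i = 0; i < nregs; i++) {
            const struct reginfo *ri = &regs[i];
            (void)ri;   /* ... hash-insert ri here ... */
        }
    }

    /* Wrapper hiding ARRAY_SIZE, as the commit message describes. */
    #define DEFINE_REGS(regs) define_regs((regs), ARRAY_SIZE(regs))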
2022-05-05  target/arm: Split out cpregs.h  (Richard Henderson)
Move ARMCPRegInfo and all related declarations to a new internal header, out of the public cpu.h. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220501055028.646596-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-28  target/arm: Advertise support for FEAT_BBM level 2  (Peter Maydell)
The description in the Arm ARM of the requirements of FEAT_BBM is admirably clear on the guarantees it provides software, but slightly more obscure on what that means for implementations. The description of the equivalent SMMU feature in the SMMU specification (IHI0070D.b section 3.21.1) is perhaps a bit more detailed and includes some example valid implementation choices. (The SMMU version of this feature is slightly tighter than the CPU version: the CPU is permitted to raise TLB Conflict aborts in some situations that the SMMU may not. This doesn't matter for QEMU because we don't want to do TLB Conflict aborts anyway.)

The informal summary of FEAT_BBM is that it is about permitting an OS to switch a range of memory between "covered by a huge page" and "covered by a sequence of normal pages" without having to engage in the 'break-before-make' dance that has traditionally been necessary. The 'break-before-make' sequence is:

* replace the old translation table entry with an invalid entry
* execute a DSB insn
* execute a broadcast TLB invalidate insn
* execute a DSB insn
* write the new translation table entry
* execute a DSB insn

The point of this is to ensure that no TLB can simultaneously contain TLB entries for the old and the new entry, which would traditionally be UNPREDICTABLE (allowing the CPU to generate a TLB Conflict fault or to use a random mishmash of values from the old and the new entry). FEAT_BBM level 2 says "for the specific case where the only thing that changed is the size of the block, the TLB is guaranteed not to do weird things even if there are multiple entries for an address", which means that software can now do:

* replace old translation table entry with new entry
* DSB
* broadcast TLB invalidate
* DSB

As the SMMU spec notes, valid ways to do this include:

* if there are multiple entries in the TLB for an address, choose one of them and use it, ignoring the others
* if there are multiple entries in the TLB for an address, throw them all out and do a page table walk to get a new one

QEMU's page table walk implementation for Arm CPUs already meets the requirements for FEAT_BBM level 2. When we cache an entry in our TCG TLB, we do so only for the specific (non-huge) page that the address is in, and there is no way for the TLB data structure to ever have more than one TLB entry for that page. (We handle huge pages only in that we track what part of the address space is covered by huge pages so that a TLB invalidate operation for an address in a huge page results in an invalidation of the whole TLB.) We ignore the Contiguous bit in page table entries, so we don't have to do anything for the parts of FEAT_BBM that deal with changes to the Contiguous bit.

FEAT_BBM level 2 also requires that the nT bit in block descriptors must be ignored; since commit 39a1fd25287f5dece5 we do this.

It's therefore safe for QEMU to advertise FEAT_BBM level 2 by setting ID_AA64MMFR2_EL1.BBM to 2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220426160422.2353158-3-peter.maydell@linaro.org
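For reference, the traditional break-before-make sequence listed above as a C-level sketch; write_pte(), dsb_ish() and tlbi_broadcast() are editorial placeholders standing in for the real store, barrier and TLB-invalidate operations, not kernel or QEMU APIs.

    #include <stdint.h>

    static void write_pte(uint64_t *pte, uint64_t desc) { *pte = desc; }
    static void dsb_ish(void)                 { /* DSB barrier */ }
    static void tlbi_broadcast(uint64_t va)   { (void)va; /* broadcast TLBI */ }

    /* Classic break-before-make.  With FEAT_BBM level 2, when the only
     * change is the block size, the "break" steps can be dropped: the new
     * entry is written directly, followed by DSB, TLBI, DSB. */
    static void break_before_make(uint64_t *pte, uint64_t new_desc, uint64_t va)
    {
        write_pte(pte, 0);          /* replace old entry with an invalid entry */
        dsb_ish();
        tlbi_broadcast(va);         /* broadcast TLB invalidate */
        dsb_ish();
        write_pte(pte, new_desc);   /* write the new translation table entry */
        dsb_ish();
    }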
2022-04-28  target/arm: Advertise support for FEAT_TTL  (Peter Maydell)
The Arm FEAT_TTL architectural feature allows the guest to provide an optional hint in an AArch64 TLB invalidate operation about which translation table level holds the leaf entry for the address being invalidated. QEMU's TLB implementation doesn't need that hint, and we correctly ignore the (previously RES0) bits in TLB invalidate operation values that are now used for the TTL field. So we can simply advertise support for it in our 'max' CPU. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220426160422.2353158-2-peter.maydell@linaro.org
2022-03-07  target/arm: Provide cpu property for controlling FEAT_LPA2  (Richard Henderson)
There is a Linux kernel bug present until v5.12 that prevents booting with FEAT_LPA2 enabled. As a workaround for TCG, allow the feature to be disabled from -cpu max. Since this kernel bug is present in the Fedora 31 image that we test in avocado, disable lpa2 on the command-line. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02  target/arm: Advertise all page sizes for -cpu max  (Richard Henderson)
We support 16k pages, but do not advertise that in ID_AA64MMFR0. The value 0 in the TGRAN*_2 fields indicates that stage2 lookups defer to the same support as stage1 lookups. This setting is deprecated, so indicate support for all stage2 page sizes directly. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20220301215958.157011-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02  target/arm: Implement FEAT_LPA  (Richard Henderson)
This feature widens physical addresses (and intermediate physical addresses for 2-stage translation) from 48 to 52 bits, when using 64k pages. The only thing left at this point is to handle the extra bits in the TTBR and in the table descriptors. Note that PAR_EL1 and HPFAR_EL2 are nominally extended, but we don't mask out the high bits when writing to those registers, so no changes are required there. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-03-02  target/arm: Implement FEAT_LVA  (Richard Henderson)
This feature is relatively small, as it applies only to 64k pages and thus requires no additional changes to the table descriptor walking algorithm, only a change to the minimum TSZ (which is the inverse of the maximum virtual address space size). Note that this feature widens VBAR_ELx, but we already treat the register as being 64 bits wide. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220301215958.157011-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-02-21  target/arm: Support PAuth extension for hvf  (Peter Maydell)
Currently we don't allow guests under hvf to use the PAuth extension, because we didn't have any special code to handle that, and therefore in arm_cpu_pauth_finalize() we will sanitize the ID_AA64ISAR1 value the guest sees to clear the PAuth related fields. Add support for this in the same way that KVM does it, by defaulting to "PAuth enabled" if the host CPU has it and allowing the user to disable it via '-cpu pauth=no' on the command line. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-7-peter.maydell@linaro.org
2022-02-21  target/arm: Fix '-cpu max' for HVF  (Peter Maydell)
Currently when using hvf we mishandle '-cpu max': we fall through to the TCG version of its initfn, which then sets a lot of feature bits that the real host CPU doesn't have. The hvf accelerator code then exposes these bogus ID register values to the guest because it doesn't check that the host really has the features. Make '-cpu host' be like '-cpu max' for hvf, as we do with kvm. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-6-peter.maydell@linaro.org
2022-02-21  target/arm: Unindent unnecessary else-clause  (Peter Maydell)
Now that the if() branch of the condition in aarch64_max_initfn() returns early, we don't need to keep the rest of the code in the function inside an else block. Remove the else, unindenting that code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-5-peter.maydell@linaro.org
2022-02-21  target/arm: Make KVM -cpu max exactly like -cpu host  (Peter Maydell)
Currently for KVM the intention is that '-cpu max' and '-cpu host' are the same thing, but because we did this with two separate pieces of code they have got a little bit out of sync. Specifically, 'max' has a 'sve-max-vq' property, and 'host' does not. Bring the two together by having the initfn for 'max' actually call the initfn for 'host'. This will result in 'max' no longer exposing the 'sve-max-vq' property when using KVM. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-4-peter.maydell@linaro.org
2022-02-21  target/arm: Use aarch64_cpu_register() for 'host' CPU type  (Peter Maydell)
Use the aarch64_cpu_register() machinery to register the 'host' CPU type. This doesn't gain us anything functionally, but it does mean that the code for initializing it looks more like that for the other CPU types, in that its initfn then doesn't need to call arm_cpu_post_init() (because aarch64_cpu_instance_init() does that for it). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-3-peter.maydell@linaro.org
2022-02-21  target/arm: Move '-cpu host' code to cpu64.c  (Peter Maydell)
Now that KVM has dropped AArch32 host support, the 'host' CPU type is always AArch64, and we can move it to cpu64.c. This move will allow us to share code between it and '-cpu max', which should behave the same as '-cpu host' when using KVM or HVF. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220204165506.2846058-2-peter.maydell@linaro.org
2022-01-20  hw/arm/virt: KVM: Enable PAuth when supported by the host  (Marc Zyngier)
Add basic support for Pointer Authentication when running a KVM guest on a host that supports it, loosely based on the SVE support. Although the feature is enabled by default when the host advertises it, it is possible to disable it by setting the 'pauth=off' CPU property. The 'pauth' comment is removed from cpu-features.rst, as it is now common to both TCG and KVM. Tested on an Apple M1 running 5.16-rc6. Cc: Eric Auger <eric.auger@redhat.com> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20220107150154.2490308-1-maz@kernel.org [PMM: fixed indentation] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01  target-arm: Add support for Fujitsu A64FX  (Shuuichirou Ishii)
Add a definition for the Fujitsu A64FX processor. The A64FX processor does not implement the AArch32 Execution state, so there are no associated AArch32 Identification registers. For SVE, the A64FX processor supports only 128, 256 and 512 bit vector lengths. The Identification register values are defined based on the FX700, and have been tested and confirmed. Signed-off-by: Shuuichirou Ishii <ishii.shuuichir@fujitsu.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  target/arm/cpu64: Validate sve vector lengths are supported  (Andrew Jones)
Future CPU types may specify which vector lengths are supported. We can apply nearly the same logic to validate those lengths as we do for KVM's supported vector lengths. We merge the code where we can, but unfortunately can't completely merge it because KVM requires all vector lengths, power-of-two or not, smaller than the maximum enabled length to also be enabled. The architecture only requires all the power-of-two lengths, though, so TCG will only enforce that. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-5-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
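A sketch contrasting the two rules described above (illustrative only; the real validation also has to respect which lengths the CPU actually supports): KVM requires every smaller length to be enabled, while the architecture, and hence TCG, only requires the smaller power-of-two lengths.

    #include <stdbool.h>
    #include <stdint.h>

    /* vq_map: bit (vq - 1) set when a length of vq * 128 bits is enabled. */
    static bool vq_map_satisfies_rule(uint32_t vq_map, uint32_t max_vq, bool kvm)
    {
        for (uint32_t vq = 1; vq < max_vq; vq++) {
            bool required = kvm || (vq & (vq - 1)) == 0;  /* power of two? */
            if (required && !(vq_map & (1u << (vq - 1)))) {
                return false;
            }
        }
        return true;
    }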
2021-08-26  target/arm/cpu64: Replace kvm_supported with sve_vq_supported  (Andrew Jones)
Now that we have an ARMCPU member sve_vq_supported we no longer need the local kvm_supported bitmap for KVM's supported vector lengths. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-4-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  target/arm/cpu: Introduce sve_vq_supported bitmap  (Andrew Jones)
Allow CPUs that support SVE to specify which SVE vector lengths they support by setting them in this bitmap. Currently only the 'max' and 'host' CPU types support SVE and 'host' requires KVM which obtains its supported bitmap from the host. So, we only need to initialize the bitmap for 'max' with TCG. And, since 'max' should support all SVE vector lengths we simply fill the bitmap. Future CPU types may have less trivial maps though. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-2-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27  target/arm: Add sve-default-vector-length cpu property  (Richard Henderson)
Mirror the behaviour of /proc/sys/abi/sve_default_vector_length under the real linux kernel. We have no way of passing along a real default across exec like the kernel can, but this is a decent way of adjusting the startup vector length of a process. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20210723203344.968563-4-richard.henderson@linaro.org [PMM: tweaked docs formatting, document -1 special-case, added fixup patch from RTH mentioning QEMU's maximum veclen.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
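A possible invocation, assuming (as the mirroring of /proc/sys/abi/sve_default_vector_length suggests) that the property value is in bytes, with -1 selecting QEMU's maximum; see cpu-features.rst for the authoritative syntax and the contexts in which the property is exposed:

    qemu-aarch64 -cpu max,sve-default-vector-length=64 ./sve-binary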
2021-06-24  target/arm: Implement MTE3  (Peter Collingbourne)
MTE3 introduces an asymmetric tag checking mode, in which loads are checked synchronously and stores are checked asynchronously. Add support for it. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210616195614.11785-1-pcc@google.com [PMM: Add line to emulation.rst] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>