author    Julien Grall <julien.grall@arm.com>  2018-02-21 13:46:27 +0000
committer Julien Grall <julien.grall@arm.com>  2018-04-06 17:08:59 +0100
commit    f46b6197344fca91db7e1d7bd6df0c4a2703ed6f (patch)
tree      38a4ecd8d18769741e9c78170ef389b4d40e93f7 /xen/arch/x86/numa.c
parent    115fb8e345b9377b400b9e2e9bca1750362d284b (diff)
xen: Convert page_to_mfn and mfn_to_page to use typesafe MFN
Most of the users of page_to_mfn and mfn_to_page either override the macros to make them work with mfn_t, or use mfn_x/_mfn because the rest of the function uses mfn_t. So make page_to_mfn and mfn_to_page return mfn_t by default.

The __* versions are now dropped, as this patch converts all the remaining non-typesafe callers. Only reasonable clean-ups are done in this patch; the rest will use _mfn/mfn_x for the time being.

Lastly, domain_page_to_mfn is also converted to use mfn_t, given that most of its callers have now switched to _mfn(domain_page_to_mfn(...)).

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
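For context, the typesafe conversion works by wrapping the raw frame number in a single-member struct, so the compiler rejects accidental mixing of MFNs with plain integers. A minimal sketch of the pattern follows; the real definitions are generated by Xen's TYPE_SAFE macro (and, depending on build configuration, may reduce to a plain unsigned long), so the struct below is illustrative only:

    /*
     * Illustrative sketch only -- not the actual Xen definitions,
     * which come from the TYPE_SAFE macro.
     */
    typedef struct { unsigned long mfn; } mfn_t;

    /* Wrap a raw frame number into the typesafe type. */
    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ .mfn = m }; }

    /* Unwrap a typesafe MFN back to a raw frame number. */
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }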
Diffstat (limited to 'xen/arch/x86/numa.c')
-rw-r--r--  xen/arch/x86/numa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4fc967f893..a87987da6f 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
spin_lock(&d->page_alloc_lock);
page_list_for_each(page, &d->page_list)
{
- i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
+ i = phys_to_nid(page_to_maddr(page));
page_num_node[i]++;
}
spin_unlock(&d->page_alloc_lock);
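The replacement works because page_to_maddr() already performs the MFN-to-address shift internally, so the open-coded cast-and-shift becomes redundant (and would need an explicit mfn_x() unwrap once page_to_mfn() returns mfn_t). Roughly, assuming the usual Xen helpers:

    /* Before (post-conversion spelling): unwrap the MFN by hand,
     * then shift it into a machine address. */
    i = phys_to_nid((paddr_t)mfn_x(page_to_mfn(page)) << PAGE_SHIFT);

    /* After: let the existing helper do the same conversion. */
    i = phys_to_nid(page_to_maddr(page));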