From 24726275612140af6b1c0afc7c6611ad66233207 Mon Sep 17 00:00:00 2001
From: Eric DeVolder
Date: Mon, 14 Aug 2023 17:44:40 -0400
Subject: crash: add generic infrastructure for crash hotplug support
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

To support crash hotplug, a mechanism is needed to update the crash
elfcorehdr upon CPU or memory changes (eg. hot un/plug or off/
onlining). The crash elfcorehdr describes the CPUs and memory to be
written into the vmcore.

To track CPU changes, callbacks are registered with the cpuhp mechanism
via cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN). The crash hotplug
elfcorehdr update has no explicit ordering requirement (relative to
other cpuhp states), so it meets the criteria for utilizing
CPUHP_BP_PREPARE_DYN. CPUHP_BP_PREPARE_DYN is a dynamic state and
avoids the need to introduce a new state for crash hotplug. Also,
CPUHP_BP_PREPARE_DYN is the last state in the PREPARE group, just prior
to the STARTING group, which is very close to the CPU starting up in a
plug/online situation, or stopping in an unplug/offline situation. This
minimizes the window of time during an actual plug/online or
unplug/offline situation in which the elfcorehdr would be inaccurate.

Note that for a CPU being unplugged or offlined, the CPU will still be
present in the list of CPUs generated by crash_prepare_elf64_headers().
However, there is no need to explicitly omit the CPU; see the
justification in 'crash: change crash_prepare_elf64_headers() to
for_each_possible_cpu()'.

To track memory changes, a notifier is registered to capture the
memblock MEM_ONLINE and MEM_OFFLINE events via
register_memory_notifier().

The CPU callbacks and memory notifiers invoke
crash_handle_hotplug_event(), which performs the needed tasks and then
dispatches the event to the architecture-specific
arch_crash_handle_hotplug_event() to update the elfcorehdr with the
current state of CPUs and memory. During the process, the kexec_lock
is held.

Link: https://lkml.kernel.org/r/20230814214446.6659-3-eric.devolder@oracle.com
Signed-off-by: Eric DeVolder
Reviewed-by: Sourabh Jain
Acked-by: Hari Bathini
Acked-by: Baoquan He
Cc: Akhil Raj
Cc: Bjorn Helgaas
Cc: Borislav Petkov (AMD)
Cc: Boris Ostrovsky
Cc: Dave Hansen
Cc: Dave Young
Cc: David Hildenbrand
Cc: Eric W. Biederman
Cc: Greg Kroah-Hartman
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jonathan Corbet
Cc: Konrad Rzeszutek Wilk
Cc: Mimi Zohar
Cc: Naveen N. Rao
Cc: Oscar Salvador
Cc: "Rafael J. Wysocki"
Cc: Sean Christopherson
Cc: Takashi Iwai
Cc: Thomas Gleixner
Cc: Thomas Weißschuh
Cc: Valentin Schneider
Cc: Vivek Goyal
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---
 kernel/Kconfig.kexec | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

(limited to 'kernel/Kconfig.kexec')

diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 701cd5336f4f..727190571544 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -112,4 +112,35 @@ config CRASH_DUMP
 	  For s390, this option also enables zfcpdump.
 	  See also
 
+config CRASH_HOTPLUG
+	bool "Update the crash elfcorehdr on system configuration changes"
+	default y
+	depends on CRASH_DUMP && (HOTPLUG_CPU || MEMORY_HOTPLUG)
+	depends on ARCH_SUPPORTS_CRASH_HOTPLUG
+	help
+	  Enable direct update to the crash elfcorehdr (which contains
+	  the list of CPUs and memory regions to be dumped upon a crash)
+	  in response to hot plug/unplug or online/offline of CPUs or
+	  memory. This is a much more advanced approach than userspace
+	  attempting that.
+
+	  If unsure, say Y.
+
+config CRASH_MAX_MEMORY_RANGES
+	int "Specify the maximum number of memory regions for the elfcorehdr"
+	default 8192
+	depends on CRASH_HOTPLUG
+	help
+	  For the kexec_file_load() syscall path, specify the maximum number of
+	  memory regions that the elfcorehdr buffer/segment can accommodate.
+	  These regions are obtained via walk_system_ram_res(); eg. the
+	  'System RAM' entries in /proc/iomem.
+	  This value is combined with NR_CPUS_DEFAULT and multiplied by
+	  sizeof(Elf64_Phdr) to determine the final elfcorehdr memory buffer/
+	  segment size.
+	  The value 8192, for example, covers a (sparsely populated) 1TiB system
+	  consisting of 128MiB memblocks, while resulting in an elfcorehdr
+	  memory buffer/segment size under 1MiB. This represents a sane choice
+	  to accommodate both baremetal and virtual machine configurations.
+
 endmenu
--
cgit v1.2.3
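
For illustration only, the registration flow described in the log above could
be sketched roughly as follows. This is a minimal sketch, not the patch's
actual code: the callback names crash_cpuhp_online(), crash_cpuhp_offline()
and crash_memhp_notifier(), the crash_hp_action values, and the assumed
signature of crash_handle_hotplug_event() are hypothetical placeholders. Only
cpuhp_setup_state_nocalls(), CPUHP_BP_PREPARE_DYN, register_memory_notifier(),
MEM_ONLINE and MEM_OFFLINE are taken from the commit message.

#include <linux/cpuhotplug.h>
#include <linux/memory.h>
#include <linux/notifier.h>
#include <linux/init.h>
#include <linux/printk.h>

/* Hypothetical event codes, local to this sketch. */
enum crash_hp_action {
	CRASH_HP_ADD_CPU,
	CRASH_HP_REMOVE_CPU,
	CRASH_HP_ADD_MEMORY,
	CRASH_HP_REMOVE_MEMORY,
};

/*
 * Stub standing in for the dispatcher named in the log; the signature is
 * assumed. Per the commit message, the real function takes the kexec lock,
 * performs the needed tasks and hands the event to the architecture-specific
 * arch_crash_handle_hotplug_event() to regenerate the elfcorehdr.
 */
static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu)
{
	pr_debug("crash hp: action %u cpu %u\n", hp_action, cpu);
}

/* cpuhp "startup" callback: a CPU is being plugged/onlined. */
static int crash_cpuhp_online(unsigned int cpu)
{
	crash_handle_hotplug_event(CRASH_HP_ADD_CPU, cpu);
	return 0;
}

/* cpuhp "teardown" callback: a CPU is being unplugged/offlined. */
static int crash_cpuhp_offline(unsigned int cpu)
{
	crash_handle_hotplug_event(CRASH_HP_REMOVE_CPU, cpu);
	return 0;
}

/* Memory notifier: react to memblock online/offline events. */
static int crash_memhp_notifier(struct notifier_block *nb, unsigned long val,
				void *arg)
{
	switch (val) {
	case MEM_ONLINE:
		crash_handle_hotplug_event(CRASH_HP_ADD_MEMORY, 0);
		break;
	case MEM_OFFLINE:
		crash_handle_hotplug_event(CRASH_HP_REMOVE_MEMORY, 0);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block crash_memhp_nb = {
	.notifier_call = crash_memhp_notifier,
};

static int __init crash_hotplug_init(void)
{
	int ret = 0;

	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG))
		register_memory_notifier(&crash_memhp_nb);

	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
		ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
						"crash/cpuhp",
						crash_cpuhp_online,
						crash_cpuhp_offline);

	return ret;
}
subsys_initcall(crash_hotplug_init);

On the sizing side of the Kconfig help text: sizeof(Elf64_Phdr) is 56 bytes,
so the default CRASH_MAX_MEMORY_RANGES of 8192 contributes 8192 * 56 bytes,
about 448 KiB, to the elfcorehdr buffer/segment, and 8192 ranges of 128MiB
memblocks indeed span 1TiB. Even if NR_CPUS_DEFAULT were also 8192, the
buffer would come to roughly 896 KiB, consistent with the "under 1MiB" claim.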