Age | Commit message | Author |
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
get_online_cpus() is a heavyweight function which involves a global
mutex. migrate_disable() wants a simpler construct which prevents only
a CPU from going down while a task is in a migrate-disabled section.
Implement a per-CPU lockless mechanism, which serializes only in the
real unplug case on a global mutex. That serialization affects only
tasks on the CPU which should be brought down.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Needs thread context (pgd_lock) -> ifdeffed. workqueues won't work with
RT
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
posix-cpu-timer code takes non-rt-safe locks in hard irq
context. Move it to a thread.
[ 3.0 fixes from Peter Zijlstra <peterz@infradead.org> ]
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
In preempt-rt we cannot call the callbacks which take sleeping locks
from the timer interrupt context.
Bring back the softirq split for now, until we have fixed the signal
delivery problem for real.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Make cancellation of a running callback in softirq context safe
against preemption.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
When softirqs can be preempted we need to make sure that cancelling
the timer from the active thread cannot deadlock vs. a running timer
callback. Add a waitqueue to resolve that.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
The below is a boot-tested hack to shrink the page frame size back to
normal.
Should be a net win since there should be many fewer PTE pages than
page frames.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
bit_spin_locks break under RT.
Based on a previous patch from Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--
include/linux/buffer_head.h | 10 ++++++++++
include/linux/jbd_common.h | 24 ++++++++++++++++++++++++
2 files changed, 34 insertions(+)
|
|
Wrap the bit_spin_lock calls into a separate inline and add the RT
replacements with a real spinlock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Bit spinlocks are not working on RT. Replace them.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
RT needs a few preempt_disable/enable points which are not necessary
otherwise. Implement variants to avoid #ifdeffery.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Add local_irq_*_(no)rt variants which are mainly used to break
interrupt-disabled sections on PREEMPT_RT or to explicitly disable
interrupts on PREEMPT_RT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
This patch provides a recording mechanism to store data of potential
sources of system latencies. The recordings separately determine the
latency caused by a delayed timer expiration, by a delayed wakeup of the
related user space program and by the sum of both. The histograms can be
enabled and reset individually. The data are accessible via the debug
filesystem. For details please consult Documentation/trace/histograms.txt.
Signed-off-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
For code which protects the waitqueue itself with another lock it
makes no sense to acquire the waitqueue lock for a wake-up-all.
Provide __wake_up_all_locked.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
timekeeping suspend/resume calls read_persistent_clock() which takes
rtc_lock. That results in might-sleep warnings because at that point
we run with interrupts disabled.
We cannot convert rtc_lock to a raw spinlock as that would trigger
other might-sleep warnings.
As a temporary workaround we disable the might-sleep warnings by
setting system_state to SYSTEM_SUSPEND before calling sysdev_suspend()
and restoring it to SYSTEM_RUNNING after sysdev_resume().
Needs to be revisited.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Adding migrate_disable() to pagefault_disable() to preserve the
per-CPU thing for kmap_atomic might not have been the best of choices.
But short of adding preempt_disable/migrate_disable foo all over the
kmap code it still seems the best way.
It does however yield the below borkage as well as wrecking !-rt
builds, since !-rt relies on pagefault_disable() not preempting. So
fix all that up by adding raw_pagefault_disable().
<NMI> [<ffffffff81076d5c>] warn_slowpath_common+0x85/0x9d
[<ffffffff81076e17>] warn_slowpath_fmt+0x46/0x48
[<ffffffff814f7fca>] ? _raw_spin_lock+0x6c/0x73
[<ffffffff810cac87>] ? watchdog_overflow_callback+0x9b/0xd0
[<ffffffff810caca3>] watchdog_overflow_callback+0xb7/0xd0
[<ffffffff810f51bb>] __perf_event_overflow+0x11c/0x1fe
[<ffffffff810f298f>] ? perf_event_update_userpage+0x149/0x151
[<ffffffff810f2846>] ? perf_event_task_disable+0x7c/0x7c
[<ffffffff810f5b7c>] perf_event_overflow+0x14/0x16
[<ffffffff81046e02>] x86_pmu_handle_irq+0xcb/0x108
[<ffffffff814f9a6b>] perf_event_nmi_handler+0x46/0x91
[<ffffffff814fb2ba>] notifier_call_chain+0x79/0xa6
[<ffffffff814fb34d>] __atomic_notifier_call_chain+0x66/0x98
[<ffffffff814fb2e7>] ? notifier_call_chain+0xa6/0xa6
[<ffffffff814fb393>] atomic_notifier_call_chain+0x14/0x16
[<ffffffff814fb3c3>] notify_die+0x2e/0x30
[<ffffffff814f8f75>] do_nmi+0x7e/0x22b
[<ffffffff814f8bca>] nmi+0x1a/0x2c
[<ffffffff814fb130>] ? sub_preempt_count+0x4b/0xaa
<<EOE>> <IRQ> [<ffffffff812d44cc>] delay_tsc+0xac/0xd1
[<ffffffff812d4399>] __delay+0xf/0x11
[<ffffffff812d95d9>] do_raw_spin_lock+0xd2/0x13c
[<ffffffff814f813e>] _raw_spin_lock_irqsave+0x6b/0x85
[<ffffffff8106772a>] ? task_rq_lock+0x35/0x8d
[<ffffffff8106772a>] task_rq_lock+0x35/0x8d
[<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
[<ffffffff81114e69>] pagefault_disable+0xe/0x1f
[<ffffffff81039c73>] dump_trace+0x21f/0x2e2
[<ffffffff8103ad79>] show_trace_log_lvl+0x54/0x5d
[<ffffffff8103ad97>] show_trace+0x15/0x17
[<ffffffff814f4f5f>] dump_stack+0x77/0x80
[<ffffffff812d94b0>] spin_bug+0x9c/0xa3
[<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
[<ffffffff812d954e>] do_raw_spin_lock+0x47/0x13c
[<ffffffff814f7fbe>] _raw_spin_lock+0x60/0x73
[<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
[<ffffffff81067745>] task_rq_lock+0x50/0x8d
[<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
[<ffffffff81114e69>] pagefault_disable+0xe/0x1f
[<ffffffff81039c73>] dump_trace+0x21f/0x2e2
[<ffffffff8104369b>] save_stack_trace+0x2f/0x4c
[<ffffffff810a7848>] save_trace+0x3f/0xaf
[<ffffffff810aa2bd>] mark_lock+0x228/0x530
[<ffffffff810aac27>] __lock_acquire+0x662/0x1812
[<ffffffff8103dad4>] ? native_sched_clock+0x37/0x6d
[<ffffffff810a790e>] ? trace_hardirqs_off_caller+0x1f/0x99
[<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
[<ffffffff810ac403>] lock_acquire+0x145/0x18a
[<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
[<ffffffff814f7f9e>] _raw_spin_lock+0x40/0x73
[<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
[<ffffffff810693f6>] sched_rt_period_timer+0xbd/0x218
[<ffffffff8109aa39>] __run_hrtimer+0x1e4/0x347
[<ffffffff81069339>] ? can_migrate_task.clone.82+0x14a/0x14a
[<ffffffff8109b97c>] hrtimer_interrupt+0xee/0x1d6
[<ffffffff814fb23d>] ? add_preempt_count+0xae/0xb2
[<ffffffff814ffb38>] smp_apic_timer_interrupt+0x85/0x98
[<ffffffff814fef13>] apic_timer_interrupt+0x13/0x20
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-31keae8mkjiv8esq4rl76cib@git.kernel.org
|
|
Wrap the test for pagefault_disabled() into a helper; this allows us
to remove the need for current->pagefault_disabled on !-rt kernels.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org
|
|
Add a pagefault_disabled variable to task_struct to allow decoupling
the pagefault-disabled logic from the preempt count.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
No point in tracing those.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
On x86_64 we must disable preemption before we enable interrupts
for stack faults, int3 and debugging, because the current task is using
a per-CPU debug stack defined by the IST. If we schedule out, another task
can come in and use the same stack and cause the stack to be corrupted
and crash the kernel on return.
When CONFIG_PREEMPT_RT_FULL is enabled, spinlocks become mutexes, and
one of these is the spinlock used in signal handling.
Some of the debug code (int3) causes do_trap() to send a signal.
This function takes a spinlock that has been converted to a mutex
and has the possibility to sleep. If this happens, the above stack
corruption is possible.
Instead of sending the signal right away, for PREEMPT_RT and x86_64
the signal information is stored in the task's task_struct and
TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume
code will send the signal when preemption is enabled.
[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT_FULL to
ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
Cc: stable-rt@vger.kernel.org
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
To avoid allocation, allow RT tasks to cache one sigqueue struct in
task_struct.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Get rid of the ever-repeating:
preempt_enable_no_resched();
schedule();
preempt_disable();
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
No point in having different implementations for the same thing.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
[ Upstream commit 124dff01afbdbff251f0385beca84ba1b9adda68 ]
Commit 130549fe ("netfilter: reset nf_trace in nf_reset") added code
to reset nf_trace in nf_reset(). This is wrong and unnecessary.
nf_reset() is used in the following cases:
- when passing packets up to the socket layer, at which point we want to
release all netfilter references that might keep modules pinned while
the packet is queued. nf_trace doesn't matter anymore at this point.
- when encapsulating or decapsulating IPsec packets. We want to continue
tracing these packets after IPsec processing.
- when passing packets through virtual network devices. Only devices
that encapsulate in IPv4/v6 matter since otherwise nf_trace is not
used anymore. It's not entirely clear whether those packets should
be traced after that, however we've always done that.
- when passing packets through virtual network devices that make the
packet cross network namespace boundaries. This is the only case
where we clearly want to reset nf_trace and is also what the
original patch intended to fix.
Add a new function nf_reset_trace() and use it in dev_forward_skb() to
fix this properly.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
|