author     Changwoo Min <changwoo@igalia.com>    2025-01-08 16:08:06 +0100
committer  Tejun Heo <tj@kernel.org>             2025-01-08 17:48:53 +0100
commit     6268d5bc10354fc2ab8d44a0cd3b042d49a0417e
tree       c1d83d066b362faff2bd532122743a480e2cf120 /kernel
parent     sched_ext: keep running prev when prev->scx.slice != 0
sched_ext: Replace rq_lock() with raw_spin_rq_lock() in scx_ops_bypass()
scx_ops_bypass() iterates over all CPUs to re-enqueue all the scx tasks.
For each CPU, it acquires the run-queue lock with rq_lock(), regardless of
whether the CPU is offline or is currently running a task in a higher
scheduler class (e.g., deadline). However, rq_lock() is meant for online
CPUs, and using it here can trigger an unnecessary warning in
rq_pin_lock(). Therefore, replace rq_lock() with raw_spin_rq_lock() in
scx_ops_bypass().
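For context, rq_lock() is more than a spinlock acquisition: it takes the
raw run-queue lock and then pins the rq, and the pinning step is where the
debug assertions live. The following is a simplified sketch of the helpers
in kernel/sched/sched.h (bookkeeping details elided), not a verbatim copy:

static inline void rq_lock(struct rq *rq, struct rq_flags *rf)
{
	raw_spin_rq_lock(rq);   /* take the raw run-queue spinlock */
	rq_pin_lock(rq, rf);    /* pin the rq; runs debug assertions */
}

static inline void rq_unlock(struct rq *rq, struct rq_flags *rf)
{
	rq_unpin_lock(rq, rf);
	raw_spin_rq_unlock(rq);
}

Calling raw_spin_rq_lock() directly keeps the same mutual exclusion while
skipping the pin/unpin bookkeeping, whose assumptions do not hold for every
possible CPU.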
Without this change, we observe the following warning:
===== START =====
[ 6.615205] rq->balance_callback && rq->balance_callback != &balance_push_callback
[ 6.615208] WARNING: CPU: 2 PID: 0 at kernel/sched/sched.h:1730 __schedule+0x1130/0x1c90
===== END =====
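The message in the splat is the condition of an assertion in rq_pin_lock()
(kernel/sched/sched.h), which on SMP debug builds warns when a pinned rq
still has a balance callback queued other than balance_push_callback. In
simplified form, approximately:

static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
{
	rf->cookie = lockdep_pin_lock(__rq_lockp(rq));
	/* ... clock-update flag bookkeeping elided ... */
	SCHED_WARN_ON(rq->balance_callback &&
		      rq->balance_callback != &balance_push_callback);
}

Taking the raw lock in scx_ops_bypass() sidesteps rq_pin_lock() and
therefore this check.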
Fixes: 0e7ffff1b811 ("scx: Fix raciness in scx_ops_bypass()")
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched/ext.c  12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 335371cc2cbd..11a0e1a9d86e 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4747,10 +4747,9 @@ static void scx_ops_bypass(bool bypass)
 	 */
 	for_each_possible_cpu(cpu) {
 		struct rq *rq = cpu_rq(cpu);
-		struct rq_flags rf;
 		struct task_struct *p, *n;

-		rq_lock(rq, &rf);
+		raw_spin_rq_lock(rq);

 		if (bypass) {
 			WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
@@ -4766,7 +4765,7 @@ static void scx_ops_bypass(bool bypass)
 		 * sees scx_rq_bypassing() before moving tasks to SCX.
 		 */
 		if (!scx_enabled()) {
-			rq_unlock(rq, &rf);
+			raw_spin_rq_unlock(rq);
 			continue;
 		}
@@ -4786,10 +4785,11 @@ static void scx_ops_bypass(bool bypass)
 			sched_enq_and_set_task(&ctx);
 		}

-		rq_unlock(rq, &rf);
-
 		/* resched to restore ticks and idle state */
-		resched_cpu(cpu);
+		if (cpu_online(cpu) || cpu == smp_processor_id())
+			resched_curr(rq);
+
+		raw_spin_rq_unlock(rq);
 	}

 	atomic_dec(&scx_ops_breather_depth);
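A note on the resched change in the last hunk: resched_cpu() takes the
run-queue lock itself, so it previously had to run after rq_unlock(). With
the raw lock now held across the loop body, the patch instead open-codes
the equivalent logic and calls resched_curr() directly, guarded so that
only online CPUs (or the local CPU) are asked to reschedule. For reference,
resched_cpu() in kernel/sched/core.c looks approximately like this:

void resched_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long flags;

	raw_spin_rq_lock_irqsave(rq, flags);
	if (cpu_online(cpu) || cpu == smp_processor_id())
		resched_curr(rq);       /* mark the running task for resched */
	raw_spin_rq_unlock_irqrestore(rq, flags);
}

This matches the guard added by the patch, which is why the same condition
appears verbatim in the new code.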