path: root/sys/kern
author		Andriy Gapon <avg@FreeBSD.org>	2017-01-19 18:46:41 +0000
committer	Andriy Gapon <avg@FreeBSD.org>	2017-01-19 18:46:41 +0000
commit		ad9dadc437ff470f794ce2fd0cd1714d69d725d3 (patch)
tree		f7bc2af646648bba4dd9de30d47ac88d537c4783 /sys/kern
parent		1c07d69bc22d7176d38cdb71201f282578fc7391 (diff)
fix a thread preemption regression in schedulers introduced in r270423
Commit r270423 fixed a regression in sched_yield() that was introduced
in earlier changes.  Unfortunately, at the same time it introduced a
new regression.  The problem is that SWT_RELINQUISH (6), like all other
SWT_* constants and unlike the SW_* flags, is not a bit flag.  So, the
expression (flags & SWT_RELINQUISH) is true in cases where that was not
really intended, for example, with SWT_OWEPREEMPT (2) and
SWT_REMOTEPREEMPT (11).

A straightforward fix would be to use (flags & SW_TYPE_MASK) ==
SWT_RELINQUISH, but my impression is that the switch types are designed
mostly for gathering statistics, not for influencing scheduling
decisions.  So, I decided that it would be better to check for the
SW_PREEMPT flag instead.  That's also the same flag that was checked
before r239157.  I double-checked how that flag is used and I am
confident that it is set only in the places where we really have
preemption:

- critical_exit + td_owepreempt
- sched_preempt in the ULE scheduler
- sched_preempt in the 4BSD scheduler

Reviewed by:	kib, mav
MFC after:	4 days
Sponsored by:	Panzura
Differential Revision:	https://reviews.freebsd.org/D9230
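For illustration, the aliasing described above can be reproduced outside
the kernel.  The sketch below is not part of the change; the SW_*/SWT_*
values are copied from sys/sys/proc.h as of this revision, and the flags
combination mimics the owepreempt case listed above.

/*
 * Standalone sketch of the SWT_* aliasing bug, assuming the values
 * below match sys/sys/proc.h at this revision: the low byte of the
 * mi_switch() flags carries an enumerated switch *type*, while
 * SW_VOL/SW_INVOL/SW_PREEMPT above it are genuine bit flags.
 */
#include <stdio.h>

#define	SW_TYPE_MASK	0x00ff	/* low 8 bits hold the switch type */
#define	SW_VOL		0x0100	/* voluntary switch */
#define	SW_INVOL	0x0200	/* involuntary switch */
#define	SW_PREEMPT	0x0400	/* the invol switch is a preemption */

#define	SWT_OWEPREEMPT		2	/* switching due to owepreempt */
#define	SWT_RELINQUISH		6	/* yield call */
#define	SWT_REMOTEPREEMPT	11	/* remote processor preempted */

int
main(void)
{
	/* An owepreempt switch, as critical_exit() requests it. */
	int flags = SW_INVOL | SW_PREEMPT | SWT_OWEPREEMPT;

	/*
	 * The r270423 test: SWT_OWEPREEMPT (binary 0010) shares bit 1
	 * with SWT_RELINQUISH (binary 0110), so the "yield" test fires
	 * even though no sched_yield() happened.  SWT_REMOTEPREEMPT
	 * (binary 1011) aliases the same way.  Prints 1.
	 */
	printf("buggy yield test:  %d\n", (flags & SWT_RELINQUISH) != 0);

	/* Correct type comparison over the whole type field; prints 0. */
	printf("masked yield test: %d\n",
	    (flags & SW_TYPE_MASK) == SWT_RELINQUISH);

	/* The fix actually taken: test the SW_PREEMPT bit; prints 1. */
	printf("preemption test:   %d\n", (flags & SW_PREEMPT) != 0);
	return (0);
}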
Notes:
svn path=/head/; revision=312426
Diffstat (limited to 'sys/kern')
-rw-r--r--	sys/kern/sched_4bsd.c	4
-rw-r--r--	sys/kern/sched_ule.c	4
2 files changed, 4 insertions, 4 deletions
diff --git a/sys/kern/sched_4bsd.c b/sys/kern/sched_4bsd.c
index f97da1714681..534f59cd5a42 100644
--- a/sys/kern/sched_4bsd.c
+++ b/sys/kern/sched_4bsd.c
@@ -968,8 +968,8 @@ sched_switch(struct thread *td, struct thread *newtd, int flags)
 		sched_load_rem();
 
 	td->td_lastcpu = td->td_oncpu;
-	preempted = !((td->td_flags & TDF_SLICEEND) ||
-	    (flags & SWT_RELINQUISH));
+	preempted = (td->td_flags & TDF_SLICEEND) == 0 &&
+	    (flags & SW_PREEMPT) != 0;
 	td->td_flags &= ~(TDF_NEEDRESCHED | TDF_SLICEEND);
 	td->td_owepreempt = 0;
 	td->td_oncpu = NOCPU;
diff --git a/sys/kern/sched_ule.c b/sys/kern/sched_ule.c
index 8bbd70952751..b650f24db9e5 100644
--- a/sys/kern/sched_ule.c
+++ b/sys/kern/sched_ule.c
@@ -1898,8 +1898,8 @@ sched_switch(struct thread *td, struct thread *newtd, int flags)
 	ts->ts_rltick = ticks;
 	td->td_lastcpu = td->td_oncpu;
 	td->td_oncpu = NOCPU;
-	preempted = !((td->td_flags & TDF_SLICEEND) ||
-	    (flags & SWT_RELINQUISH));
+	preempted = (td->td_flags & TDF_SLICEEND) == 0 &&
+	    (flags & SW_PREEMPT) != 0;
 	td->td_flags &= ~(TDF_NEEDRESCHED | TDF_SLICEEND);
 	td->td_owepreempt = 0;
 	if (!TD_IS_IDLETHREAD(td))