author     Peter Wemm <peter@FreeBSD.org>    2004-08-28 00:49:22 +0000
committer  Peter Wemm <peter@FreeBSD.org>    2004-08-28 00:49:22 +0000
commit     91c1172a5ab8e3de8d693774f2252e5cec94c449 (patch)
tree       d6e56573c8d95164b2b0aa52837597204c753850
parent     31c4f57542cb39e7c64c3ca2eb73d54b3ab854ff (diff)
download   src-91c1172a5ab8e3de8d693774f2252e5cec94c449.tar.gz
           src-91c1172a5ab8e3de8d693774f2252e5cec94c449.zip
Commit Jeff's suggested changes for avoiding a bug that is exposed by
preemption and/or the rev 1.79 kern_switch.c change that was backed out.
The thread was being assigned to a runq without adding in the load, which
would cause the counter to hit -1.
Notes:
svn path=/head/; revision=134415
-rw-r--r-- | sys/kern/sched_ule.c | 6 |
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/sys/kern/sched_ule.c b/sys/kern/sched_ule.c
index b2b9861c6490..0e88c7bba932 100644
--- a/sys/kern/sched_ule.c
+++ b/sys/kern/sched_ule.c
@@ -1197,11 +1197,9 @@ sched_switch(struct thread *td, struct thread *newtd)
 			kse_reassign(ke);
 		}
 	}
-	if (newtd != NULL) {
+	if (newtd != NULL)
 		kseq_load_add(KSEQ_SELF(), newtd->td_kse);
-		ke->ke_cpu = PCPU_GET(cpuid);
-		ke->ke_runq = KSEQ_SELF()->ksq_curr;
-	} else
+	else
 		newtd = choosethread();
 	if (td != newtd)
 		cpu_switch(td, newtd);