author | John Baldwin <jhb@FreeBSD.org> | 2004-07-02 20:21:44 +0000
---|---|---
committer | John Baldwin <jhb@FreeBSD.org> | 2004-07-02 20:21:44 +0000
commit | 0c0b25ae91328c6b388ef5faa77ec9089f2950a7 (patch) | |
tree | 2a5d6a91ba98f5b9e075eecc1a9ca724b8a9110a /sys/kern/kern_shutdown.c | |
parent | 5a66986defa715403bf55b0c3534040cf1b87027 (diff) | |
Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
determine if a thread about to be added to a run queue should be
switched to directly, preempting the current thread. If it is not safe
to preempt or if the new thread does not have a high enough priority,
then the function returns false and sched_add() adds the thread to the
run queue. If the new thread should be switched to but the current
thread is in a nested critical section, then the flag TDF_OWEPREEMPT is
set and the thread is added to the run queue. Otherwise, mi_switch() is
called immediately and the thread is never added to the run queue since
it is switched to directly. When exiting an outermost critical section,
if TDF_OWEPREEMPT is set, clear it and call mi_switch() to perform the
deferred preemption. (A hedged C sketch of this flow follows the list.)
- Remove explicit preemption from ithread_schedule() as calling
setrunqueue() now does all the correct work. This also removes the
do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
chance to run if the architecture supports native preemption since
the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
preemption, namely alpha, i386, and amd64.
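
To make the first item concrete, here is a minimal C sketch of the
decision flow, not the committed implementation. Only maybe_preempt(),
sched_add(), mi_switch(), and TDF_OWEPREEMPT are names from this
commit; the flag value, the struct fields, and the helpers runq_add()
and preemption_safe() are illustrative stand-ins, and all locking,
per-CPU state, and SMP detail is omitted.

```c
#include <stdbool.h>

#define TDF_OWEPREEMPT	0x00000004	/* illustrative flag value */

struct thread {
	int	td_priority;	/* lower value == higher priority */
	int	td_critnest;	/* critical section nesting depth */
	int	td_flags;
};

extern struct thread *curthread;	/* stand-in for the per-CPU pointer */
extern void mi_switch(void);		/* stand-in: context switch now */
extern void runq_add(struct thread *);	/* stand-in: enqueue on a run queue */
extern bool preemption_safe(void);	/* stand-in: e.g. not in an interrupt */

/*
 * Return true if we switched directly to td (it never touches a run
 * queue); false if the caller must enqueue it instead.
 */
static bool
maybe_preempt(struct thread *td)
{
	/* Not safe, or td not higher priority: let sched_add() enqueue it. */
	if (!preemption_safe() || td->td_priority >= curthread->td_priority)
		return (false);
	/* Inside a critical section: defer, remember a preemption is owed. */
	if (curthread->td_critnest > 0) {
		curthread->td_flags |= TDF_OWEPREEMPT;
		return (false);
	}
	mi_switch();			/* preempt: switch to td immediately */
	return (true);
}

static void
sched_add(struct thread *td)
{
	if (!maybe_preempt(td))
		runq_add(td);
}

/* On leaving the outermost critical section, pay off the owed preemption. */
static void
critical_exit_sketch(void)
{
	if (--curthread->td_critnest == 0 &&
	    (curthread->td_flags & TDF_OWEPREEMPT) != 0) {
		curthread->td_flags &= ~TDF_OWEPREEMPT;
		mi_switch();
	}
}
```

The design point worth noting is the TDF_OWEPREEMPT deferral: a
preemption requested inside a critical section is not lost, it is paid
off once at the outermost critical-section exit.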
This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.
Approved by: scottl (with his re@ hat)
Notes:
svn path=/head/; revision=131481
Diffstat (limited to 'sys/kern/kern_shutdown.c')
-rw-r--r--  sys/kern/kern_shutdown.c | 37 ++++++++++++++++++++++++-------------
1 file changed, 24 insertions(+), 13 deletions(-)
```diff
diff --git a/sys/kern/kern_shutdown.c b/sys/kern/kern_shutdown.c
index b9bfd393a1d6..fbf466075d33 100644
--- a/sys/kern/kern_shutdown.c
+++ b/sys/kern/kern_shutdown.c
@@ -269,7 +269,9 @@ boot(int howto)
 	if (!cold && (howto & RB_NOSYNC) == 0 && waittime < 0) {
 		register struct buf *bp;
 		int iter, nbusy, pbusy;
+#ifndef PREEMPTION
 		int subiter;
+#endif
 
 		waittime = 0;
 		printf("\nsyncing disks, buffers remaining... ");
@@ -300,20 +302,29 @@ boot(int howto)
 				iter = 0;
 			pbusy = nbusy;
 			sync(&thread0, NULL);
-			if (curthread != NULL) {
-				DROP_GIANT();
-				for (subiter = 0; subiter < 50 * iter; subiter++) {
-					mtx_lock_spin(&sched_lock);
-					/*
-					 * Allow interrupt threads to run
-					 */
-					mi_switch(SW_VOL, NULL);
-					mtx_unlock_spin(&sched_lock);
-					DELAY(1000);
-				}
-				PICKUP_GIANT();
-			} else
+
+#ifdef PREEMPTION
+			/*
+			 * Drop Giant and spin for a while to allow
+			 * interrupt threads to run.
+			 */
+			DROP_GIANT();
 				DELAY(50000 * iter);
+			PICKUP_GIANT();
+#else
+			/*
+			 * Drop Giant and context switch several times to
+			 * allow interrupt threads to run.
+			 */
+			DROP_GIANT();
+			for (subiter = 0; subiter < 50 * iter; subiter++) {
+				mtx_lock_spin(&sched_lock);
+				mi_switch(SW_VOL, NULL);
+				mtx_unlock_spin(&sched_lock);
+				DELAY(1000);
+			}
+			PICKUP_GIANT();
+#endif
 		}
 		printf("\n");
 		/*
```
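
The #ifdef PREEMPTION blocks above imply a build-time knob. Assuming it
is wired up as a kernel option of the same name (the options file is not
part of this diff), a kernel config would enable it like so:

```
# Assumed option name, matching the PREEMPTION ifdef in the diff above.
options 	PREEMPTION
```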