author     John Baldwin <jhb@FreeBSD.org>  2004-07-02 20:21:44 +0000
committer  John Baldwin <jhb@FreeBSD.org>  2004-07-02 20:21:44 +0000
commit     0c0b25ae91328c6b388ef5faa77ec9089f2950a7 (patch)
tree       2a5d6a91ba98f5b9e075eecc1a9ca724b8a9110a /sys/amd64/include
parent     5a66986defa715403bf55b0c3534040cf1b87027 (diff)
Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
determine if a thread about to be added to a run queue should preempt
the currently running thread and be switched to directly. If it is not
safe to preempt or if the new thread does not have a high enough
priority, then the function returns false and sched_add() adds the
thread to the run queue. If the new thread should preempt but the
current thread is in a nested critical section, then the flag
TDF_OWEPREEMPT is set and the thread is added to the run queue.
Otherwise, mi_switch() is called immediately and the new thread is
never added to the run queue since it is switched to directly. When
exiting an outermost critical section, if TDF_OWEPREEMPT is set, it is
cleared and mi_switch() is called to perform the deferred preemption.
(See the sketch after the summary below.)
- Remove explicit preemption from ithread_schedule() as calling
setrunqueue() now does all the correct work. This also removes the
do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
chance to run if the architecture supports native preemption since
the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
preemption, namely alpha, i386, and amd64.
This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.
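
The decision flow described in the first item above can be summarized in a
small stand-alone C sketch. This is not the FreeBSD implementation: the
thread structure, the runq_add() and mi_switch_to() stubs, and the
simplified priority comparison are assumptions made only to illustrate the
switch-immediately / defer / enqueue logic that maybe_preempt(), sched_add(),
mi_switch(), and critical_exit() implement in the kernel.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified thread state for illustration only. */
struct thread {
	const char *name;
	int	priority;	/* lower value = higher priority */
	int	critnest;	/* nesting depth of critical sections */
	bool	owepreempt;	/* models the TDF_OWEPREEMPT flag */
};

static struct thread *curthread;

/* Stand-in for mi_switch(): just report the context switch. */
static void
mi_switch_to(struct thread *td)
{
	printf("switching from %s to %s\n", curthread->name, td->name);
	curthread = td;
}

/* Stand-in for placing a thread on the run queue. */
static void
runq_add(struct thread *td)
{
	printf("queueing %s on the run queue\n", td->name);
}

/*
 * Sketch of the decision made when a thread becomes runnable:
 * switch to it immediately, defer the preemption, or just enqueue it.
 */
static bool
maybe_preempt(struct thread *td)
{
	/* Not enough priority: the caller will enqueue the thread. */
	if (td->priority >= curthread->priority)
		return (false);

	/* Inside a critical section: remember that a preemption is owed. */
	if (curthread->critnest > 0) {
		curthread->owepreempt = true;
		return (false);
	}

	/* Safe to preempt: switch directly, skipping the run queue. */
	mi_switch_to(td);
	return (true);
}

static void
sched_add(struct thread *td)
{
	if (!maybe_preempt(td))
		runq_add(td);
}

/* Leaving the outermost critical section performs any deferred preemption. */
static void
critical_exit(void)
{
	if (--curthread->critnest == 0 && curthread->owepreempt) {
		curthread->owepreempt = false;
		printf("deferred preemption of %s\n", curthread->name);
		/* The kernel would call mi_switch() here. */
	}
}

int
main(void)
{
	struct thread idle = { "idle", 100, 0, false };
	struct thread ithread = { "ithread", 10, 0, false };

	curthread = &idle;
	sched_add(&ithread);	/* preempts and switches immediately */

	curthread = &idle;
	idle.critnest = 1;	/* now inside a critical section */
	sched_add(&ithread);	/* preemption is deferred, thread is queued */
	critical_exit();	/* deferred preemption happens here */
	return (0);
}

Running the sketch shows the immediate switch in the first case and the
deferred preemption at critical_exit() in the second, which is the behavior
the commit message describes.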
Approved by: scottl (with his re@ hat)
Notes:
svn path=/head/; revision=131481
Diffstat (limited to 'sys/amd64/include')
-rw-r--r--  sys/amd64/include/param.h  2
1 file changed, 2 insertions, 0 deletions
diff --git a/sys/amd64/include/param.h b/sys/amd64/include/param.h
index 5216c55a28dc..2f468374f873 100644
--- a/sys/amd64/include/param.h
+++ b/sys/amd64/include/param.h
@@ -119,6 +119,8 @@
 #define NBPML4		(1ul<<PML4SHIFT)/* bytes/page map lev4 table */
 #define PML4MASK	(NBPML4-1)
 
+#define	PREEMPTION
+
 #define IOPAGES	2		/* pages of i/o permission bitmap */
 
 #ifndef KSTACK_PAGES
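
The diff only adds the PREEMPTION option macro for amd64; how the rest of the
kernel consumes it is not part of this diffstat. Option macros of this kind
are typically tested with the preprocessor, as in the minimal stand-alone
illustration below. This is not the actual kernel code; the printed messages
are placeholders.

#include <stdio.h>

#define	PREEMPTION		/* as the diff above adds to param.h */

int
main(void)
{
#ifdef PREEMPTION
	/* Native preemption available: the scheduler handles it. */
	printf("native preemption enabled\n");
#else
	/* Fall back to explicit switching in the unlock/ithread paths. */
	printf("native preemption not available\n");
#endif
	return (0);
}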