author     Julian Elischer <julian@FreeBSD.org>  2004-09-05 02:09:54 +0000
committer  Julian Elischer <julian@FreeBSD.org>  2004-09-05 02:09:54 +0000
commit     ed062c8d66c2b1c20522ff55798e6a07d95b017e (patch)
tree       18da20638d66699090b682ef6c65384dc44ef3e3 /sys/kern/kern_fork.c
parent     057f1760a8171825b260dad27502f74ed5f69faf (diff)
download   src-ed062c8d66c2b1c20522ff55798e6a07d95b017e.tar.gz
           src-ed062c8d66c2b1c20522ff55798e6a07d95b017e.zip
Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.
The KSE structure has become the same as the "per thread scheduler
private data" structure. To keep the diffs manageable, one is
#defined as the other for now.
The KSE (or td_sched) structure is now allocated per thread and has no
allocation code of its own.
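
As a rough illustration of the aliasing and per-thread embedding
described above (the member names, sizes, and the sched_sizeof_thread()
helper here are sketched for this note, not taken from the tree):

#include <stddef.h>

/* Sketch: per-thread scheduler-private data, owned entirely by the scheduler. */
struct td_sched {
	int	ts_flags;	/* illustrative scheduler-private state */
	int	ts_slice;	/* e.g. remaining time slice */
};

/* Keep the old "kse" spelling compiling while the diffs stay small. */
#define	kse	td_sched

/* Stripped-down stand-in for the real struct thread. */
struct thread {
	struct td_sched	*td_sched;	/* lives in the same allocation as the thread */
};

/* No separate allocator: the thread allocation is simply sized to hold both. */
static size_t
sched_sizeof_thread(void)
{
	return (sizeof(struct thread) + sizeof(struct td_sched));
}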
Concurrency for a KSEGRP is now tracked via a simple pair of counters
rather than by using KSE structures as tokens.
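
A minimal sketch of what that counter pair amounts to; the field and
function names below are illustrative, not the exact ones committed:

/* Sketch: two counters replace the old "KSE as a token" accounting. */
struct kg_sched {
	int	skg_concurrency;	/* how many threads may be scheduled at once */
	int	skg_avail_opennings;	/* how many of those slots are free right now */
};

/* Claim a slot before a thread may go on a run queue. */
static int
slot_take(struct kg_sched *skg)
{
	if (skg->skg_avail_opennings <= 0)
		return (0);		/* the ksegrp is already at its limit */
	skg->skg_avail_opennings--;
	return (1);
}

/* Return the slot when the thread comes off the run queue / CPU. */
static void
slot_release(struct kg_sched *skg)
{
	skg->skg_avail_opennings++;
}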
Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.
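
In file-layout terms the arrangement looks roughly like this; the file
name and the exact include spelling are illustrative, the point is only
the ordering:

/* sched_foo.c -- one particular scheduler */
struct td_sched {		/* layout private to this scheduler */
	int	ts_priority;
};
/* ... this scheduler's sched_*() entry points ... */

/*
 * Textually pull in the generic switching/queueing code last, so it is
 * compiled with this scheduler's private td_sched definition in scope
 * while everything outside still treats td_sched as opaque.
 */
#include "kern_switch.c"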
The fields in the ksegrp structure that relate to the scheduler's
queueing mechanisms have been moved into the kg_sched structure
(the per-ksegrp scheduler private data structure). In other words,
how the scheduler queues and keeps track of threads is no-one's
business except the scheduler's. This should allow people to write
experimental schedulers with completely different internal structuring.
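
Sketching how that looks from the outside (member names illustrative):

/* Sketch: the ksegrp itself only carries an opaque pointer; the queueing
 * fields (run-queue links, the slot counters above, and so on) live in
 * the scheduler-private kg_sched and are never touched outside it. */
struct kg_sched;			/* defined inside the scheduler only */

struct ksegrp {
	/* ... machine-independent ksegrp fields ... */
	struct kg_sched	*kg_sched;	/* scheduler's private per-group data */
};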
A scheduler call, sched_set_concurrency(kg, N), has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be allowed to be scheduled concurrently. At this time it is
also used to enforce 'fairness', so that a ksegrp with 10000 threads
cannot swamp the run queue and force out a process with 1 thread:
the current code will not set the concurrency above NCPU, and neither
scheduler will allow more than that many of its threads onto the
system run queue at a time. Each scheduler should eventually develop
its own methods to do this now that they are effectively separated.
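
Very roughly, that fairness clamp amounts to something like the
following, reusing the sketched kg_sched counters from above; mp_ncpus
stands in for the CPU count, and the clamp's exact location differs
between the two schedulers:

extern int mp_ncpus;			/* number of CPUs in the system */

struct kg_sched {
	int	skg_concurrency;
	int	skg_avail_opennings;
};

struct ksegrp {
	struct kg_sched	*kg_sched;
};

/* Sketch: never hand one ksegrp more run-queue slots than there are CPUs. */
void
sched_set_concurrency(struct ksegrp *kg, int concurrency)
{
	struct kg_sched *skg = kg->kg_sched;

	if (concurrency > mp_ncpus)
		concurrency = mp_ncpus;
	skg->skg_avail_opennings += concurrency - skg->skg_concurrency;
	skg->skg_concurrency = concurrency;
}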
Rejig libthr's kernel interface to follow the same code paths as
libkse for scope system threads. This has slightly hurt libthr's
performance, but I will work to recover as much of it as I can.
Thread exit code has been cleaned up greatly.
The exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.
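
For the exit/exec part, the shape of the change is: force the process
back to a single thread first, then run the ordinary single-threaded
code. A hedged sketch, naming FreeBSD's thread_single() but with the
surrounding helper and error handling invented for illustration:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/errno.h>

/* Sketch only: collapse back to one thread before exec/exit proceeds. */
static int
proc_unthread_first(int exiting)
{
	/*
	 * thread_single() suspends (or, for exit, reaps) every other
	 * thread; once it returns 0, only the calling thread is left and
	 * the rest of the exec/exit path sees a non-threaded process.
	 */
	if (thread_single(exiting ? SINGLE_EXIT : SINGLE_NO_EXIT) != 0)
		return (ERESTART);	/* lost a race with another single-threader */

	/* ... continue with the normal single-threaded exec/exit path ... */
	return (0);
}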
Reviewed by: scottl, peter
MFC after: 1 week
Notes:
svn path=/head/; revision=134791
Diffstat (limited to 'sys/kern/kern_fork.c')
-rw-r--r--	sys/kern/kern_fork.c	13
1 file changed, 2 insertions, 11 deletions
diff --git a/sys/kern/kern_fork.c b/sys/kern/kern_fork.c
index b5459d8651aa..eeef44428e72 100644
--- a/sys/kern/kern_fork.c
+++ b/sys/kern/kern_fork.c
@@ -203,7 +203,6 @@ fork1(td, flags, pages, procp)
 	struct filedesc *fd;
 	struct filedesc_to_leader *fdtol;
 	struct thread *td2;
-	struct kse *ke2;
 	struct ksegrp *kg2;
 	struct sigacts *newsigacts;
 	int error;
@@ -466,7 +465,6 @@ again:
 	 */
 	td2 = FIRST_THREAD_IN_PROC(p2);
 	kg2 = FIRST_KSEGRP_IN_PROC(p2);
-	ke2 = FIRST_KSE_IN_KSEGRP(kg2);
 
 	/* Allocate and switch to an alternate kstack if specified. */
 	if (pages != 0)
@@ -479,8 +477,6 @@ again:
 
 	bzero(&p2->p_startzero,
 	    (unsigned) RANGEOF(struct proc, p_startzero, p_endzero));
-	bzero(&ke2->ke_startzero,
-	    (unsigned) RANGEOF(struct kse, ke_startzero, ke_endzero));
 	bzero(&td2->td_startzero,
 	    (unsigned) RANGEOF(struct thread, td_startzero, td_endzero));
 	bzero(&kg2->kg_startzero,
@@ -496,11 +492,6 @@ again:
 
 	td2->td_sigstk = td->td_sigstk;
 
-	/* Set up the thread as an active thread (as if runnable). */
-	ke2->ke_state = KES_THREAD;
-	ke2->ke_thread = td2;
-	td2->td_kse = ke2;
-
 	/*
 	 * Duplicate sub-structures as needed.
 	 * Increase reference counts on shared objects.
@@ -515,7 +506,7 @@ again:
 	 * Allow the scheduler to adjust the priority of the child and
 	 * parent while we hold the sched_lock.
 	 */
-	sched_fork(td, p2);
+	sched_fork(td, td2);
 	mtx_unlock_spin(&sched_lock);
 
 	p2->p_ucred = crhold(td->td_ucred);
@@ -792,7 +783,7 @@ fork_exit(callout, arg, frame)
 	mtx_assert(&sched_lock, MA_OWNED | MA_NOTRECURSED);
 	cpu_critical_fork_exit();
 	CTR4(KTR_PROC, "fork_exit: new thread %p (kse %p, pid %d, %s)",
-	    td, td->td_kse, p->p_pid, p->p_comm);
+	    td, td->td_sched, p->p_pid, p->p_comm);
 
 	/*
 	 * Processes normally resume in mi_switch() after being