path: root/sys/vm/vm_glue.c
Commit log (newest first); each entry gives the commit message, author, date, and files/lines changed.
* use 'void *' instead of 'caddr_t' for useracc, kernacc, vslock and vsunlock.
  (Alfred Perlstein, 2003-01-21; 1 file, -4/+4)
  Notes: svn path=/head/; revision=109630
* Close the remaining user address mapping races for physical I/O, CAM, and
  AIO. Still TODO: streamline useracc() checks.
  Reviewed by: alc, tegge
  MFC after: 7 days
  (Matthew Dillon, 2003-01-20; 1 file, -0/+12)
  Notes: svn path=/head/; revision=109572
* - Hold the page queues lock around vm_page_wakeup().
  (Alan Cox, 2002-12-24; 1 file, -0/+2)
  Notes: svn path=/head/; revision=108251
* This is David Schultz's swapoff code which I am finally able to commit.
  This should be considered highly experimental for the moment.
  Submitted by: David Schultz <dschultz@uclink.Berkeley.EDU>
  MFC after: 3 weeks
  (Matthew Dillon, 2002-12-15; 1 file, -0/+40)
  Notes: svn path=/head/; revision=107913
* - Check that a process isn't a new process (p_state == PRS_NEW) before
    trying to acquire its proc lock, since the proc lock may not have been
    constructed yet.
  - Split up the one big comment at the top of the loop and put the pieces
    in the right order above the various checks.
  Reported by: kris (1)
  (John Baldwin, 2002-10-22; 1 file, -20/+23)
  Notes: svn path=/head/; revision=105695
* Remove old useless debugging code.
  (Julian Elischer, 2002-10-14; 1 file, -5/+0)
  Notes: svn path=/head/; revision=105126
* Be consistent about "static" functions: if the function is marked static
  in its prototype, mark it static at the definition too.
  Inspired by: FlexeLint warning #512
  (Poul-Henning Kamp, 2002-09-28; 1 file, -2/+2)
  Notes: svn path=/head/; revision=104094
* Use the fields in the sysentvec and in the vm map header in place of the
  constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
  This is mainly so that they can be variable even for the native abi, based
  on different machine types. Get stack protections from the sysentvec too.
  This makes it trivial to map the stack non-executable for certain abis, on
  machines that support it.
  (Jake Burkholder, 2002-09-21; 1 file, -14/+6)
  Notes: svn path=/head/; revision=103767
* Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
  (Julian Elischer, 2002-09-11; 1 file, -10/+15)
  Notes: svn path=/head/; revision=103216
* - Do not swap out a process if it is in creation. The process may have no
    address space yet.
  - Check whether a process is a system process prior to dereferencing its
    p_vmspace. Aio assumes that only the curthread switches the address
    space of a system process.
  (Seigo Tanimura, 2002-09-09; 1 file, -0/+24)
  Notes: svn path=/head/; revision=103123
* Use UMA as a complex object allocator.
  The process allocator now caches and hands out complete process structures
  *including substructures*, i.e., it gets the process structure with the
  first thread (and soon KSE) already allocated and attached, all in one hit.
  For the average non-threaded program (non-KSE, that is) the allocated
  thread and its stack remain attached to the process, even when the process
  is unused and sitting in the process cache. This saves having to allocate
  and attach them later, effectively bringing us (hopefully) close to the
  efficiency of pre-KSE systems where these were a single structure.
  Reviewed by: davidxu@freebsd.org, peter@freebsd.org
  (Julian Elischer, 2002-09-06; 1 file, -5/+0)
  Notes: svn path=/head/; revision=103002
* s/SGNL/SIG/
  s/SNGL/SINGLE/
  s/SNGLE/SINGLE/
  Fix the abbreviations for the P_STOPPED_* etc. flags; in the original code
  they were inconsistent and difficult to distinguish from one another.
  Approved by: julian (mentor)
  (David Xu, 2002-09-05; 1 file, -1/+2)
  Notes: svn path=/head/; revision=102950
* o Setting PG_MAPPED and PG_WRITEABLE on pages that are mapped and unmapped
    by pmap_qenter() and pmap_qremove() is pointless. In fact, it probably
    leads to unnecessary pmap_page_protect() calls if one of these pages is
    paged out after unwiring.
    Note: setting PG_MAPPED asserts that the page's pv list may be non-empty.
    Since checking the status of the page's pv list isn't any harder than
    checking this flag, the flag should probably be eliminated.
    Alternatively, PG_MAPPED could be set by pmap_enter() exclusively rather
    than in various places throughout the kernel.
  (Alan Cox, 2002-07-31; 1 file, -2/+0)
  Notes: svn path=/head/; revision=101105
* - Optimize wakeup() and its friends; if a thread being woken up is being
    swapped in, we do not have to ask the scheduler thread to do that.
  - Assert that a process is not swapped out in the runq functions and in
    swapout().
  - Introduce thread_safetoswapout() for readability.
  - In swapout_procs(), perform a test that may block (the check of a thread
    working on its vm map) first. This lets us call swapout() with sched_lock
    held, providing better atomicity.
  (Seigo Tanimura, 2002-07-30; 1 file, -64/+65)
  Notes: svn path=/head/; revision=100913
* Remove an XXXKSE comment; the code is no longer a problem.
  (Julian Elischer, 2002-07-29; 1 file, -1/+1)
  Notes: svn path=/head/; revision=100885
* Create a new thread state to describe threads that would be ready to run
  except for the fact that they are presently swapped out. Also add a process
  flag to indicate that the process has started the struggle to swap back in.
  This will be needed for the case where multiple threads start the swapin
  action, to avoid a collision. Also add code to stop a process from being
  swapped out if one of the threads in this process is actually off running
  on another CPU... that might hurt.
  Submitted by: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
  (Julian Elischer, 2002-07-29; 1 file, -16/+66)
  Notes: svn path=/head/; revision=100884
* o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling vm_page_wire()
    in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
  o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
  (Alan Cox, 2002-07-29; 1 file, -7/+2)
  Notes: svn path=/head/; revision=100862
* Do not pass a thread with the state TDS_RUNQ to setrunqueue(); otherwise
  the assertion in setrunqueue() fails.
  (Seigo Tanimura, 2002-07-21; 1 file, -1/+4)
  Notes: svn path=/head/; revision=100438
* o Lock page queue accesses by vm_page_wire().
  (Alan Cox, 2002-07-14; 1 file, -0/+2)
  Notes: svn path=/head/; revision=99985
* o Lock some page queue accesses, in particular those by vm_page_unwire().
  (Alan Cox, 2002-07-13; 1 file, -0/+4)
  Notes: svn path=/head/; revision=99920
* Avoid a vm_page_lookup() - that uses a spinlock-protected hash. We can
  just use the object's memq for our nefarious purposes.
  (Peter Wemm, 2002-07-12; 1 file, -2/+5)
  Notes: svn path=/head/; revision=99851
* Avoid vm_page_lookup() [grabs a spinlock] and just process the upage
  object memq instead.
  Suggested by: alc
  (Peter Wemm, 2002-07-08; 1 file, -14/+9)
  Notes: svn path=/head/; revision=99563
* Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/
  pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code and use
  a single equivalent MI version. There are still other cleanups needed.
  While here, use the UMA zone hooks to keep a cache of preinitialized proc
  structures handy, just like the thread system does. This eliminates one
  dependency on 'struct proc' being persistent even after being freed. There
  are some comments about things that could be factored out into ctor/dtor
  functions if it is worth it. For now they are mostly just doing statistics
  to get a feel for how it is working.
  (Peter Wemm, 2002-07-07; 1 file, -7/+152)
  Notes: svn path=/head/; revision=99559
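As an illustration of the zone-with-hooks pattern this entry refers to (not
the committed vm_glue.c code), here is a minimal sketch using the UMA
interface as it exists in later FreeBSD versions; the zone name, item type,
and hook bodies are placeholders:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    /* Hypothetical cached object standing in for 'struct proc'. */
    struct cached_item {
            int     initialized;
    };

    static uma_zone_t cached_zone;

    /* ctor/dtor hooks run on every allocation from / free to the zone. */
    static int
    cached_ctor(void *mem, int size, void *arg, int flags)
    {
            struct cached_item *it = mem;

            it->initialized = 1;    /* per-allocation setup, stats, etc. */
            return (0);
    }

    static void
    cached_dtor(void *mem, int size, void *arg)
    {
            /* per-free teardown or statistics would go here */
    }

    static void
    cached_zone_setup(void)
    {
            cached_zone = uma_zcreate("cached items",
                sizeof(struct cached_item), cached_ctor, cached_dtor,
                NULL, NULL, UMA_ALIGN_PTR, 0);
    }

    /* Usage: items come back preinitialized from the zone's cache. */
    static struct cached_item *
    cached_alloc(void)
    {
            return (uma_zalloc(cached_zone, M_WAITOK));
    }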
* A small cleanup.
  (Julian Elischer, 2002-07-04; 1 file, -1/+0)
  Notes: svn path=/head/; revision=99408
* Don't call the thread setup routines from here; they are already called
  when UMA calls thread_init().
  (Julian Elischer, 2002-07-04; 1 file, -1/+0)
  Notes: svn path=/head/; revision=99407
* Part 1 of KSE-III
  The ability to schedule multiple threads per process (on one CPU) by making
  ALL system calls optionally asynchronous.
  To come: ia64 and PowerPC patches, patches for gdb, test program (in tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still beta code and contains lots of debugging stuff; expect
  slight instability in signals.
  (Julian Elischer, 2002-06-29; 1 file, -17/+31)
  Notes: svn path=/head/; revision=99072
* o Remove GIANT_REQUIRED from vslock().
  o Annotate kernacc(), useracc(), and vslock() as MPSAFE.
  Motivated by: alfred
  (Alan Cox, 2002-06-22; 1 file, -1/+10)
  Notes: svn path=/head/; revision=98600
* o Remove GIANT_REQUIRED from useracc() and vsunlock(). Neither
    vm_map_check_protection() nor vm_map_unwire() expects Giant to be held.
  (Alan Cox, 2002-06-15; 1 file, -3/+4)
  Notes: svn path=/head/; revision=98263
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
    vm_map_user_pageable().
  o Remove vm_map_pageable() and vm_map_user_pageable().
  o Remove vm_map_clear_recursive() and vm_map_set_recursive(). (They were
    only used by vm_map_pageable() and vm_map_user_pageable().)
  Reviewed by: tegge
  (Alan Cox, 2002-06-14; 1 file, -4/+3)
  Notes: svn path=/head/; revision=98226
* o Introduce and use vm_map_trylock() to replace several direct uses of
    lockmgr().
  o Add missing synchronization to vmspace_swap_count(): obtain a read lock
    on the vm_map before traversing it.
  (Alan Cox, 2002-04-28; 1 file, -3/+1)
  Notes: svn path=/head/; revision=95610
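The second item describes a lock-the-map-before-walking-it pattern; a minimal
sketch of that pattern follows (not the committed vmspace_swap_count()
itself). It assumes the classic vm_map entry list layout of that era
(header/next fields), which is an assumption rather than a quote from the
tree:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <vm/vm.h>
    #include <vm/vm_map.h>

    /* Count map entries under a shared (read) map lock. */
    static long
    count_map_entries(vm_map_t map)
    {
            vm_map_entry_t entry;
            long count = 0;

            vm_map_lock_read(map);          /* readers only exclude writers */
            for (entry = map->header.next; entry != &map->header;
                entry = entry->next)
                    count++;                /* examine the entry here */
            vm_map_unlock_read(map);
            return (count);
    }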
* Remove __P.
  (Alfred Perlstein, 2002-03-19; 1 file, -3/+3)
  Notes: svn path=/head/; revision=92727
* Fix a gcc-3.1+ warning:
      warning: deprecated use of label at end of compound statement
  i.e., you cannot do this anymore:
      switch(foo) {
      ....
      default:
      }
  (Peter Wemm, 2002-03-19; 1 file, -0/+1)
  Notes: svn path=/head/; revision=92666
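For reference, the usual way to silence that warning is simply to make sure
every label, including default:, is followed by a statement; a minimal
sketch with placeholder names (not the vm_glue.c change itself):

    void
    handle(int foo)
    {
            switch (foo) {
            case 0:
                    /* ... the interesting case ... */
                    break;
            default:
                    break;  /* gcc 3.1+ wants a statement after the label */
            }
    }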
* Back out the modification of vm_map locks from lockmgr to sx locks. The
  best path forward now is likely to change the lockmgr locks to simple
  sleep mutexes, then see whether any extra contention this generates is
  greater than the removed overhead of managing local locking state
  information, the cost of extra calls into lockmgr, etc. Additionally,
  making the vm_map lock a mutex and respecting it properly will put us much
  closer to not needing Giant magic in vm.
  (Brian Feldman, 2002-03-18; 1 file, -1/+3)
  Notes: svn path=/head/; revision=92588
* Undo part of revision 1.57: Now that (o)sendsig() doesn't call useracc(),
  the motivation for saving and restoring the map->hint in useracc() is
  gone. (The same tests that motivated this change in revision 1.57 now show
  that there is no performance loss from removing it.) This was really a
  hack and some day we would have had to add new synchronization here on
  map->hint to maintain it.
  (Alan Cox, 2002-03-17; 1 file, -13/+3)
  Notes: svn path=/head/; revision=92475
* Acquire a read lock on the map inside of vm_map_check_protection() rather
  than expecting the caller to do so. This (1) eliminates duplicated code in
  kernacc() and useracc() and (2) fixes missing synchronization in munmap().
  (Alan Cox, 2002-03-17; 1 file, -4/+1)
  Notes: svn path=/head/; revision=92466
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
  While doing this, move it earlier in the sysinit boot process so that the
  VM system can use it.
  After that, the system is now able to use sx locks instead of lockmgr
  locks in the VM system. To accomplish this, some of the more questionable
  uses of the locks (such as testing whether they are owned or not, as well
  as allowing shared+exclusive recursion) are removed, and simpler logic is
  used throughout, so the locks should also be easier to understand.
  This has been tested on my laptop for months and has not shown any
  problems on SMP systems either, so it appears quite safe. One more user of
  lockmgr down, many more to go :)
  (Brian Feldman, 2002-03-13; 1 file, -3/+1)
  Notes: svn path=/head/; revision=92246
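A minimal sketch of the sx(9) shared/exclusive locking pattern this entry
refers to; this is generic API usage with placeholder names, not the vm_map
locking code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    static struct sx example_lock;

    static void
    example_init(void)
    {
            sx_init(&example_lock, "example sx");
    }

    static void
    example_read(void)
    {
            sx_slock(&example_lock);        /* shared: many readers at once */
            /* ... read the protected data ... */
            sx_sunlock(&example_lock);
    }

    static void
    example_write(void)
    {
            sx_xlock(&example_lock);        /* exclusive: a single writer */
            /* ... modify the protected data ... */
            sx_xunlock(&example_lock);
    }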
* - Remove a number of extra newlines that do not belong here according to
    style(9).
  - Minor space adjustment in cases where we have "( ", " )", if(),
    return(), while(), for(), etc.
  - Add /* SYMBOL */ after a few #endifs.
  Reviewed by: alc
  (Eivind Eklund, 2002-03-10; 1 file, -2/+0)
  Notes: svn path=/head/; revision=92029
* Remove unused variable (td).
  (Peter Wemm, 2002-02-26; 1 file, -1/+0)
  Notes: svn path=/head/; revision=91263
* In a threaded world, different priorities become properties of different
  entities. Make it so.
  Reviewed by: jhb@freebsd.org (John Baldwin)
  (Julian Elischer, 2002-02-11; 1 file, -2/+3)
  Notes: svn path=/head/; revision=90538
* Pre-KSE/M3 commit.
  This is a low-functionality change that changes the kernel to access the
  main thread of a process via the linked list of threads rather than
  assuming that it is embedded in the process. It IS still embedded there,
  but all the code that assumes so is removed, in preparation for the next
  commit, which will actually move it out.
  Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, Benno Rice
  (Julian Elischer, 2002-02-07; 1 file, -4/+5)
  Notes: svn path=/head/; revision=90361
* Fix a race with freeing vmspaces at process exit when vmspaces are shared.
  Also introduce vm_endcopy instead of using pointer tricks when
  initializing new vmspaces.
  The race occurred because of how the reference was utilized:
      test vmspace reference, possibly block, decrement reference
  When sharing a vmspace between multiple processes it was possible for two
  processes exiting at the same time to test the reference count, possibly
  block, and have neither one free the vmspace because they wouldn't see the
  other's update.
  Submitted by: green
  (Alfred Perlstein, 2002-02-05; 1 file, -1/+1)
  Notes: svn path=/head/; revision=90263
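The race described above is the classic check-then-decrement problem; a
minimal sketch of the usual cure, letting the decrement itself decide who
frees, follows. The names are placeholders and atomic_fetchadd_int() is the
atomic(9) primitive from later FreeBSD versions, not the code of this commit:

    #include <sys/types.h>
    #include <machine/atomic.h>

    struct refobj {
            volatile u_int  ref_count;
            /* ... shared payload ... */
    };

    static void
    refobj_release(struct refobj *obj, void (*destroy)(struct refobj *))
    {
            /*
             * fetchadd returns the old value, so exactly one releaser can
             * observe 1 here, no matter how the callers interleave.
             */
            if (atomic_fetchadd_int(&obj->ref_count, -1) == 1)
                    destroy(obj);
    }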
* Don't declare vm_swapout() in the NO_SWAPPING case when it is not defined.
  Fixed some style bugs.
  (Bruce Evans, 2002-01-17; 1 file, -6/+4)
  Notes: svn path=/head/; revision=89464
* Change the preemption code for software interrupt thread schedules and
  mutex releases to not require flags for the cases when preemption is not
  allowed:
  The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
  switching to a higher-priority thread on mutex release and swi schedule,
  respectively, when that switch is not safe. Now that the critical section
  API maintains a per-thread nesting count, the kernel can easily check
  whether or not it should switch without relying on flags from the
  programmer. This fixes a few bugs in that all current callers of
  swi_sched() used SWI_NOSWITCH, when in fact only the ones called from fast
  interrupt handlers and the swi_sched of softclock needed this flag. Note
  that to ensure that swi_sched()'s in clock and fast interrupt handlers do
  not switch, these handlers have to be explicitly wrapped in
  critical_enter/exit pairs. Presently, just wrapping the handlers is
  sufficient, but in the future with the fully preemptive kernel, the
  interrupt must be EOI'd before critical_exit() is called. (critical_exit()
  can switch due to a deferred preemption in a fully preemptive kernel.)
  I've tested the changes to the interrupt code on i386 and alpha. I have
  not tested ia64, but the interrupt code is almost identical to the alpha
  code, so I expect it will work fine. PowerPC and ARM do not yet have
  interrupt code in the tree, so they shouldn't be broken. Sparc64 is
  broken, but that's been ok'd by jake and tmm, who will be fixing the
  interrupt code for sparc64 shortly.
  Reviewed by: peter
  Tested on: i386, alpha
  (John Baldwin, 2002-01-05; 1 file, -1/+1)
  Notes: svn path=/head/; revision=88900
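A minimal sketch of the wrapping described above, with a placeholder handler
and cookie rather than an in-tree driver: a fast interrupt handler defers
its work to a software interrupt thread and brackets the swi_sched() call
with a critical section so that no switch happens inside the handler:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/interrupt.h>

    static void *example_swi_cookie;   /* registered elsewhere via swi_add() */

    static void
    example_fast_intr(void *arg)
    {
            critical_enter();               /* defer any preemption */
            /* ... acknowledge the hardware, capture the event ... */
            swi_sched(example_swi_cookie, 0);
            critical_exit();                /* a deferred switch may occur here */
    }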
* Make MAXTSIZ, DFLDSIZ, MAXDSIZ, DFLSSIZ, MAXSSIZ, SGROWSIZ loader tunable.
  Reviewed by: peter
  MFC after: 2 weeks
  (Paul Saab, 2001-10-10; 1 file, -5/+4)
  Notes: svn path=/head/; revision=84783
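For context, once these limits are tunable they can be set from
/boot/loader.conf; the tunable names below (kern.maxdsiz and friends) are
the later documented loader tunables and the values are only examples,
neither is quoted from this commit:

    # /boot/loader.conf (illustrative values)
    kern.maxtsiz="134217728"      # maximum text segment size, bytes
    kern.dfldsiz="134217728"      # default data segment size
    kern.maxdsiz="1073741824"     # maximum data segment size
    kern.dflssiz="8388608"        # default stack size
    kern.maxssiz="134217728"      # maximum stack size
    kern.sgrowsiz="131072"        # stack growth increment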
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than the
  process (but only allow one thread per process at this time). This is
  functionally equivalent to the previous -current except that there is a
  thread associated with each process.
  Sorry John! (Your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  (Julian Elischer, 2001-09-12; 1 file, -61/+94)
  Notes: svn path=/head/; revision=83366
* Rip some well-duplicated code out of cpu_wait() and cpu_exit() and move it
  to the MI area. KSE touched cpu_wait(), which had the same change
  replicated five ways, once for each platform. Now it can just do it once.
  The only MD parts seemed to be dealing with FPU state cleanup and things
  like vm86 cleanup on x86. The rest was identical.
  XXX: ia64 and powerpc did not have cpu_throw(), so I've put a functional
  stub in place.
  Reviewed by: jake, tmm, dillon
  (Peter Wemm, 2001-09-10; 1 file, -1/+17)
  Notes: svn path=/head/; revision=83276
* whitespace / register cleanup
  (Matthew Dillon, 2001-07-04; 1 file, -7/+7)
  Notes: svn path=/head/; revision=79242
* With Alfred's permission, remove vm_mtx in favor of a fine-grained
  approach (this commit is just the first stage). Also add various GIANT_
  macros to formalize the removal of Giant, making it easy to test in a more
  piecemeal fashion. These macros will allow us to test fine-grained locks
  to a degree before removing Giant, and also after, and to remove Giant in
  a piecemeal fashion via sysctls on those subsystems which the authors
  believe can operate without Giant.
  (Matthew Dillon, 2001-07-04; 1 file, -35/+10)
  Notes: svn path=/head/; revision=79224
* Put the scheduler, vmdaemon, and pagedaemon kthreads back under Giant for
  now. The proc locking isn't actually safe yet and won't be until the proc
  locking is finished.
  (John Baldwin, 2001-06-20; 1 file, -3/+0)
  Notes: svn path=/head/; revision=78481
* - Lock the VM around the pmap_swapin_proc() call in faultin().
  - Don't lock Giant in the scheduler() function except when calling
    faultin().
  - In swapout_procs(), lock the VM before the process to avoid a lock order
    violation.
  - In swapout_procs(), release the allproc lock before calling swapout().
    We restart the process scan after swapping out a process.
  - In swapout_procs(), un-#if 0 the code to bump the vmspace reference
    count and lock the process' vm structures. This bug was introduced by me
    and could result in the vmspace being freed out from under a running
    process.
  - Fix an old bug where the vmspace reference was not dropped if we failed
    the swap_idle_threshold2 test.
  (John Baldwin, 2001-05-23; 1 file, -15/+16)
  Notes: svn path=/head/; revision=77089