path: root/sys/vm
Commit message | Author | Age | Files | Lines
* Tidy up some loose ends.  (Peter Wemm, 2002-04-29, 2 files, -3/+0)
    i386/ia64/alpha - catch up to sparc64/ppc:
    - replace pmap_kernel() with refs to kernel_pmap
    - change kernel_pmap pointer to (&kernel_pmap_store)
      (this is a speedup since ld can set these at compile/link time)
    all platforms (as suggested by jake):
    - gc unused pmap_reference
    - gc unused pmap_destroy
    - gc unused struct pmap.pm_count
      (we never used pm_count - we track address space sharing at the vmspace)
    Notes: svn path=/head/; revision=95710
* Document three synchronization issues in vm_fault().  (Alan Cox, 2002-04-29, 1 file, -0/+8)
    Notes: svn path=/head/; revision=95701
* Pass the caller's file name and line number to the vm_map locking functions.  (Alan Cox, 2002-04-28, 2 files, -20/+35)
    Notes: svn path=/head/; revision=95686
* o Introduce and use vm_map_trylock() to replace several direct uses of lockmgr().  (Alan Cox, 2002-04-28, 5 files, -8/+14)
    o Add missing synchronization to vmspace_swap_count(): Obtain a read
      lock on the vm_map before traversing it.
    Notes: svn path=/head/; revision=95610
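The trylock idiom this commit introduces, taking the map lock only if it is immediately available so the caller can back off instead of sleeping, can be sketched with a userland analogue. This is a hypothetical illustration using POSIX mutexes, not the kernel's lockmgr-based implementation; all `_sketch` names are invented.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Try to take the map lock without sleeping: returns 1 on success,
 * 0 if the lock is already held.  Wrapping this in one function
 * replaces what the direct lockmgr() calls previously open-coded.
 */
static int
vm_map_trylock_sketch(void)
{
	return (pthread_mutex_trylock(&map_lock) == 0);
}

static void
vm_map_unlock_sketch(void)
{
	pthread_mutex_unlock(&map_lock);
}
```

A caller that cannot afford to block (e.g. a pageout path) checks the return value and simply skips the map when the lock is contended.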
* We do not necessarily need to map/unmap pages to zero parts of them.  (Peter Wemm, 2002-04-28, 3 files, -4/+14)
    On systems where physical memory is also direct mapped (alpha, sparc,
    ia64 etc) this is slightly harmful.
    Notes: svn path=/head/; revision=95598
* o Begin documenting the (existing) locking protocol on the vm_map in the same style as sys/proc.h.  (Alan Cox, 2002-04-27, 2 files, -25/+26)
    o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
      (Contrary to the commit message for vm_map.h revision 1.66 and
      vm_map.c revision 1.206, de-inlining these methods increased the
      kernel's size.)
    Notes: svn path=/head/; revision=95589
* o Control access to the vm_page_buckets with a mutex.  (Alan Cox, 2002-04-26, 1 file, -33/+17)
    o Fix some style(9) bugs.
    Notes: svn path=/head/; revision=95532
* - Fix a round down bogon in uma_zone_set_max().  (Andrew R. Reiter, 2002-04-25, 1 file, -0/+2)
    Submitted by: jeff@
    Notes: svn path=/head/; revision=95432
* Reintroduce locking on accesses to vm_object_list.  (Alan Cox, 2002-04-20, 3 files, -1/+10)
    Notes: svn path=/head/; revision=95112
* o Move the acquisition of Giant from vm_fault() to the point after initialization in vm_fault1().  (Alan Cox, 2002-04-19, 1 file, -12/+8)
    o Fix some style problems in vm_fault1().
    Notes: svn path=/head/; revision=95021
* Add a comment documenting a race condition in vm_fault(): Specifically, a modification is made to the vm_map while only a read lock is held.  (Alan Cox, 2002-04-18, 1 file, -0/+3)
    Notes: svn path=/head/; revision=94981
* o Call vm_map_growstack() from vm_fault() if vm_map_lookup() has failed due to conditions that suggest the possible need for stack growth.  (Alan Cox, 2002-04-18, 1 file, -1/+10)
    This has two beneficial effects: (1) we can now remove calls to
    vm_map_growstack() from the MD trap handlers and (2) simple page faults
    are faster because we no longer unnecessarily perform
    vm_map_growstack() on every page fault.
    o Remove vm_map_growstack() from the i386's trap_pfault().
    o Remove the acquisition and release of Giant from i386's
      trap_pfault(). (vm_fault() still acquires it.)
    Notes: svn path=/head/; revision=94977
* Do not free the vmspace until p->p_vmspace is set to null.  (Peter Wemm, 2002-04-17, 1 file, -3/+7)
    Otherwise statclock can access it in the tail end of
    statclock_process() at an unfortunate time. This bit me several times
    on an SMP alpha (UP2000) and the problem went away with this change.
    I'm not sure why it doesn't break x86 as well. Maybe it's because the
    clocks are much faster on alpha (HZ=1024 by default).
    Notes: svn path=/head/; revision=94921
* Remove an unused option, VM_FAULT_HOLD, to vm_fault().  (Alan Cox, 2002-04-17, 2 files, -3/+0)
    Notes: svn path=/head/; revision=94912
* Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]() and pmap_copy_page().  (Peter Wemm, 2002-04-15, 4 files, -28/+14)
    This gets rid of a couple more physical addresses in upper layers, with
    the eventual aim of supporting PAE and dealing with the physical
    addressing mostly within pmap. (We will need either 64 bit physical
    addresses or page indexes, possibly both depending on the
    circumstances. Leaving this to pmap itself gives more flexibility.)
    Reviewed by: jake
    Tested on: i386, ia64 and (I believe) sparc64. (my alpha was hosed)
    Notes: svn path=/head/; revision=94777
* Fix a witness warning when expanding a hash table.  (Jeff Roberson, 2002-04-14, 1 file, -38/+79)
    We were allocating the new hash while holding the lock on a zone. Fix
    this by doing the allocation separately from the actual hash expansion.
    The lock is dropped before the allocation and reacquired before the
    expansion. The expansion code checks to see if we lost the race and
    frees the new hash if we do. We really never will lose this race
    because the hash expansion is single threaded via the timeout
    mechanism.
    Notes: svn path=/head/; revision=94653
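The drop-allocate-reacquire-recheck sequence described in that commit is a general pattern for allocating while a lock is held. A minimal userland sketch follows; every name is invented, the rehashing of existing items is elided, and the real code lives in the UMA allocator, not here.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct zone_sketch {
	pthread_mutex_t lock;      /* stands in for the zone lock */
	size_t          hash_size;
	void          **hash;
};

static int
hash_expand_sketch(struct zone_sketch *z)
{
	size_t oldsize, newsize;
	void **newhash;

	pthread_mutex_lock(&z->lock);
	oldsize = z->hash_size;
	newsize = oldsize * 2;
	pthread_mutex_unlock(&z->lock);   /* drop before allocating */

	/* Allocation may block; the zone lock is not held here. */
	newhash = calloc(newsize, sizeof(*newhash));
	if (newhash == NULL)
		return (0);

	pthread_mutex_lock(&z->lock);
	if (z->hash_size != oldsize) {    /* someone else expanded: */
		pthread_mutex_unlock(&z->lock);
		free(newhash);            /* lost the race, discard */
		return (1);
	}
	free(z->hash);                    /* rehash of old items elided */
	z->hash = newhash;
	z->hash_size = newsize;
	pthread_mutex_unlock(&z->lock);
	return (1);
}
```

The recheck of `hash_size` after reacquiring the lock is what makes dropping the lock safe: if a concurrent expansion won, the freshly allocated table is simply freed.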
* Protect the initial list traversal in sysctl_vm_zone() with the uma_mtx.  (Jeff Roberson, 2002-04-14, 1 file, -0/+2)
    Notes: svn path=/head/; revision=94651
* Fix the calculation that determines uz_maxpages.  (Jeff Roberson, 2002-04-14, 2 files, -28/+53)
    It was off for large zones. Fortunately we have no large zones with
    maximums specified yet, so it wasn't breaking anything.
    Implement blocking when a zone exceeds the maximum and M_WAITOK is
    specified. Previously this just failed like the old zone allocator did.
    The old zone allocator didn't support WAITOK/NOWAIT though, so we
    should do what we advertise.
    While I was in there I cleaned up some more zalloc logic to further
    simplify that code path and reduce redundant code. This was needed to
    make the blocking work properly anyway.
    Notes: svn path=/head/; revision=94631
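The WAITOK-versus-NOWAIT behavior that commit implements can be sketched with a condition variable: at the zone's maximum, a NOWAIT caller fails immediately while a WAITOK caller sleeps until an item is freed. This is a hypothetical userland illustration; the flag names carry a `_SKETCH` suffix to mark them as invented, and the real interface is uma(9)'s `uma_zalloc()`.

```c
#include <assert.h>
#include <pthread.h>

#define M_NOWAIT_SKETCH 0
#define M_WAITOK_SKETCH 1

struct limited_zone {
	pthread_mutex_t lock;
	pthread_cond_t  freed;   /* signaled when an item is freed */
	int             count;   /* items currently allocated */
	int             max;     /* enforced limit */
};

static int
zalloc_sketch(struct limited_zone *z, int flags)
{
	pthread_mutex_lock(&z->lock);
	while (z->count >= z->max) {
		if (flags == M_NOWAIT_SKETCH) {
			pthread_mutex_unlock(&z->lock);
			return (0);   /* fail, like the old allocator */
		}
		/* M_WAITOK: sleep until zfree_sketch() signals. */
		pthread_cond_wait(&z->freed, &z->lock);
	}
	z->count++;
	pthread_mutex_unlock(&z->lock);
	return (1);
}

static void
zfree_sketch(struct limited_zone *z)
{
	pthread_mutex_lock(&z->lock);
	z->count--;
	pthread_cond_signal(&z->freed);
	pthread_mutex_unlock(&z->lock);
}
```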
* Remember to unlock the zone if the fill count is too high.  (Jeff Roberson, 2002-04-10, 1 file, -3/+4)
    Pointed out by: pete, jake, jhb
    Notes: svn path=/head/; revision=94329
* Quiet witness warnings about acquiring several zone locks.  (Jeff Roberson, 2002-04-08, 1 file, -1/+2)
    In the case that this happens it is OK.
    Notes: svn path=/head/; revision=94240
* Add a mechanism to disable buckets when the v_free_count drops below v_free_min.  (Jeff Roberson, 2002-04-08, 1 file, -6/+29)
    This should help performance in memory starved situations.
    Notes: svn path=/head/; revision=94165
* Don't release the zone lock until after the dtor has been called.  (Jeff Roberson, 2002-04-08, 1 file, -3/+3)
    As far as I can tell this could not have caused any problems yet
    because UMA is still called with Giant.
    Pointy hat to: jeff
    Noticed by: jake
    Notes: svn path=/head/; revision=94163
* Implement uma_zdestroy().  (Jeff Roberson, 2002-04-08, 2 files, -30/+78)
    Its prototype changed slightly. I decided that I didn't like the wait
    argument and that if you were removing a zone it had better be empty.
    Also, I broke out part of hash_expand and made a separate hash_free()
    for use in uma_zdestroy.
    Notes: svn path=/head/; revision=94161
* Rework most of the bucket allocation and free code so that per cpu locks are never held across blocking operations.  (Jeff Roberson, 2002-04-08, 2 files, -215/+193)
    Also, fix two other lock order reversals that were exposed by jhb's
    witness change.
    The free path previously had a bug that would cause it to skip the free
    bucket list in some cases and go straight to allocating a new bucket.
    This has been fixed as well.
    These changes made the bucket handling code much cleaner and removed
    quite a few lock operations. This should be marginally faster now.
    It is now possible to call malloc w/o Giant and avoid any witness
    warnings. This still isn't entirely safe though because malloc_type
    statistics are not protected by any lock.
    Notes: svn path=/head/; revision=94159
* Spelling correction; s/seperate/separate/g  (Jeff Roberson, 2002-04-07, 2 files, -2/+2)
    Submitted by: eric
    Notes: svn path=/head/; revision=94157
* There should be no remaining references to these two files in the tree.  (Jeff Roberson, 2002-04-07, 2 files, -631/+0)
    If there are, it is an error. vm_zone has been superseded by uma.
    Notes: svn path=/head/; revision=94156
* This fixes a bug where isitem never got set to 1 if a certain chain of events relating to extreme low memory situations occurred.  (Jeff Roberson, 2002-04-07, 1 file, -0/+2)
    This was only ever seen on the port build cluster, so many thanks to
    kris for helping me debug this.
    Tested by: kris
    Notes: svn path=/head/; revision=94155
* o Eliminate the use of grow_stack() and useracc() from sendsig(), osendsig(), and osf1_sendsig().  (Alan Cox, 2002-04-05, 1 file, -1/+0)
    o Eliminate the prototype for the MD grow_stack() now that it has been
      removed from all platforms.
    Notes: svn path=/head/; revision=93847
* Embed a struct vmmeter in the per-cpu structure and add a macro, PCPU_LAZY_INC(), which increments elements in it for cases where we can afford the occasional inaccuracy.  (Matthew Dillon, 2002-04-04, 1 file, -96/+129)
    Use of per-cpu stats counters avoids significant cache stalls in
    various critical paths that would otherwise severely limit our cpu
    scalability.
    Adjust all sysctls accessing cnt.* elements to now use a procedure
    which aggregates the requested field for all cpus and for the global
    vmmeter.
    The global vmmeter is retained, since some stats counters, like
    v_free_min, cannot be made per-cpu. Also, this allows us to convert
    counters from the global vmmeter to the per-cpu vmmeter in a piecemeal
    fashion, so have at it!
    Notes: svn path=/head/; revision=93823
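The per-cpu counter scheme that commit describes, lock-free lazy increments on each CPU's private slot with a reader that sums all slots plus the retained global structure, can be sketched like this. All names here (`NCPU_SKETCH`, the `_sketch` identifiers) are invented for illustration; the field name `v_vnodein` is one real member of the kernel's `struct vmmeter`.

```c
#include <assert.h>
#include <stddef.h>

#define NCPU_SKETCH 4   /* pretend cpu count, invented for the sketch */

struct vmmeter_sketch {
	unsigned long v_vnodein;
};

static struct vmmeter_sketch pcpu_cnt[NCPU_SKETCH]; /* one per cpu */
static struct vmmeter_sketch global_cnt;            /* retained global */

/*
 * The "lazy" increment: no lock and no atomic op, so a preemption at
 * the wrong moment can lose an update -- acceptable for statistics.
 */
#define PCPU_LAZY_INC_SKETCH(cpu, field) (pcpu_cnt[(cpu)].field++)

/* The sysctl-style reader aggregates every per-cpu slot. */
static unsigned long
sum_vnodein(void)
{
	unsigned long total = global_cnt.v_vnodein;

	for (int i = 0; i < NCPU_SKETCH; i++)
		total += pcpu_cnt[i].v_vnodein;
	return (total);
}
```

The design trade is explicit: writers in hot paths touch only their own cache line, and the (rare) readers pay the cost of walking all CPUs.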
* Change callers of mtx_init() to pass in an appropriate lock type name.  (John Baldwin, 2002-04-04, 10 files, -13/+16)
    In most cases NULL is passed, but in some cases such as network driver
    locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name
    is used.
    Tested on: i386, alpha, sparc64
    Notes: svn path=/head/; revision=93818
* Fix a long standing 32bit-ism.  (Jake Burkholder, 2002-04-03, 1 file, -1/+1)
    Don't assume that the size of a chunk of memory in phys_avail will fit
    in 'int'; use vm_size_t. This fixes booting on sparc64 machines with
    more than 2 gigs of ram.
    Thanks to Jan Chrillesen for providing me with access to a 4 gig
    machine.
    Notes: svn path=/head/; revision=93716
* fix comment typo, s/neccisary/necessary/g  (Alfred Perlstein, 2002-04-02, 1 file, -2/+2)
    Notes: svn path=/head/; revision=93697
* Change the suser() API to take advantage of td_ucred as well as do a general cleanup of the API.  (John Baldwin, 2002-04-01, 2 files, -4/+4)
    The entire API now consists of two functions similar to the pre-KSE
    API. The suser() function takes a thread pointer as its only argument.
    The td_ucred member of this thread must be valid, so the only valid
    thread pointers are curthread and a few kernel threads such as
    thread0. The suser_cred() function takes a pointer to a struct ucred
    as its first argument and an integer flag as its second argument. The
    flag is currently only used for the PRISON_ROOT flag.
    Discussed on: smp@
    Notes: svn path=/head/; revision=93593
* Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks with this flag.  (Jeff Roberson, 2002-03-27, 1 file, -1/+1)
    Remove the dup_list and dup_ok code from subr_witness. Now we just
    check for the flag instead of doing string compares.
    Also, switch the process lock, process group lock, and uma per cpu
    locks over to this interface. The original mechanism did not work well
    for uma because per cpu lock names are unique to each zone.
    Approved by: jhb
    Notes: svn path=/head/; revision=93273
* Remove an unused prototype.  (Alan Cox, 2002-03-26, 1 file, -1/+0)
    Notes: svn path=/head/; revision=93194
* Reset the cachefree statistics after draining the cache.  (Jeff Roberson, 2002-03-24, 1 file, -0/+4)
    This fixes a bug where a sysctl within 20 seconds of a cache_drain
    could yield negative "USED" counts.
    Also, grab the uma_mtx while in the sysctl handler. This hadn't caused
    problems yet because Giant is held all the time.
    Reported by: kkenn
    Notes: svn path=/head/; revision=93089
* Add uma_zone_set_max() to add enforced limits to non vm obj backed zones.  (Jeff Roberson, 2002-03-20, 2 files, -10/+25)
    Notes: svn path=/head/; revision=92758
* Remove references to vm_zone.h and switch over to the new uma API.  (Jeff Roberson, 2002-03-20, 10 files, -39/+30)
    Notes: svn path=/head/; revision=92748
* Remove __P.  (Alfred Perlstein, 2002-03-19, 17 files, -213/+209)
    Notes: svn path=/head/; revision=92727
* Quiet a warning introduced by UMA.  (Jeff Roberson, 2002-03-19, 1 file, -1/+1)
    This only occurs on machines where vm_size_t != unsigned long.
    Reviewed by: phk
    Notes: svn path=/head/; revision=92692
* Fix a gcc-3.1+ warning.  (Peter Wemm, 2002-03-19, 1 file, -0/+1)
    warning: deprecated use of label at end of compound statement
    ie: you cannot do this anymore:
        switch(foo) {
        ....
        default:
        }
    Notes: svn path=/head/; revision=92666
* This is the first part of the new kernel memory allocator.  (Jeff Roberson, 2002-03-19, 12 files, -94/+2865)
    This replaces malloc(9) and vm_zone with a slab like allocator.
    Reviewed by: arch@
    Notes: svn path=/head/; revision=92654
* Back out the modification of vm_map locks from lockmgr to sx locks.  (Brian Feldman, 2002-03-18, 6 files, -104/+89)
    The best path forward now is likely to change the lockmgr locks to
    simple sleep mutexes, then see if any extra contention it generates is
    greater than removed overhead of managing local locking state
    information, cost of extra calls into lockmgr, etc.
    Additionally, making the vm_map lock a mutex and respecting it
    properly will put us much closer to not needing Giant magic in vm.
    Notes: svn path=/head/; revision=92588
* Remove vm_object_count: It's unused, incorrectly maintained and duplicates information maintained by the zone allocator.  (Alan Cox, 2002-03-17, 1 file, -4/+1)
    Notes: svn path=/head/; revision=92511
* Undo part of revision 1.57: Now that (o)sendsig() doesn't call useracc(), the motivation for saving and restoring the map->hint in useracc() is gone.  (Alan Cox, 2002-03-17, 1 file, -13/+3)
    (The same tests that motivated this change in revision 1.57 now show
    that there is no performance loss from removing it.) This was really a
    hack and some day we would have had to add new synchronization here on
    map->hint to maintain it.
    Notes: svn path=/head/; revision=92475
* Acquire a read lock on the map inside of vm_map_check_protection() rather than expecting the caller to do so.  (Alan Cox, 2002-03-17, 2 files, -4/+7)
    This (1) eliminates duplicated code in kernacc() and useracc() and
    (2) fixes missing synchronization in munmap().
    Notes: svn path=/head/; revision=92466
* Convert all pmap_kenter/pmap_kremove pairs in MI code to use pmap_qenter/pmap_qremove.  (Jake Burkholder, 2002-03-17, 2 files, -3/+4)
    pmap_kenter is not safe to use in MI code because it is not guaranteed
    to flush the mapping from the tlb on all cpus. If the process in
    question is preempted and migrates cpus between the call to
    pmap_kenter and pmap_kremove, the original cpu will be left with stale
    mappings in its tlb. This is currently not a problem for i386 because
    we do not use PG_G on SMP, and thus all mappings are flushed from the
    tlb on context switches, not just user mappings. This is not the case
    on all architectures, and if PG_G is to be used with SMP on i386 it
    will be a problem. This was committed by peter earlier as part of his
    fine grained tlb shootdown work for i386, which was backed out for
    other reasons.
    Reviewed by: peter
    Notes: svn path=/head/; revision=92461
* Introduce the new 64-bit size disk block, daddr64_t.  (Kirk McKusick, 2002-03-15, 1 file, -2/+2)
    Change the bio and buffer structures to have daddr64_t bio_pblkno,
    b_blkno, and b_lblkno fields, which allows access to disks larger than
    a Terabyte in size. This change also requires that the VOP_BMAP vnode
    operation accept and return daddr64_t blocks. This delta should not
    affect system operation in any way. It merely sets up the necessary
    interfaces to allow the development of disk drivers that work with
    these larger disk block addresses. It also allows for the development
    of UFS2 which will use 64-bit block addresses.
    Notes: svn path=/head/; revision=92363
* Document faultstate.lookup_still_valid more than none.  (Brian Feldman, 2002-03-14, 1 file, -10/+14)
    Requested by: alfred
    Notes: svn path=/head/; revision=92256
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.  (Brian Feldman, 2002-03-13, 6 files, -84/+95)
    While doing this, move it earlier in the sysinit boot process so that
    the VM system can use it. After that, the system is now able to use sx
    locks instead of lockmgr locks in the VM system. To accomplish this,
    some of the more questionable uses of the locks (such as testing
    whether they are owned or not, as well as allowing shared+exclusive
    recursion) are removed, and simpler logic throughout is used so locks
    should also be easier to understand.
    This has been tested on my laptop for months, and has not shown any
    problems on SMP systems, either, so appears quite safe. One more user
    of lockmgr down, many more to go :)
    Notes: svn path=/head/; revision=92246