path: root/sys/vm/phys_pager.c
Commit history (newest first):
* Remove unneeded includes of <sys/linker_set.h>. Other headers that use it
  internally contain nested includes.
  (John Baldwin, 2011-01-11; 1 file; -1/+0)
  Reviewed by: bde
  Notes: svn path=/head/; revision=217265
* Handle busy status of the page in a way expected for pager_getpage().
  Flush the requested page, unbusy other pages, do not clear m->busy.
  (Konstantin Belousov, 2010-05-03; 1 file; -4/+4)
  Reviewed by: alc
  MFC after: 1 week
  Notes: svn path=/head/; revision=207580
* Implement global and per-uid accounting of the anonymous memory. Add
  rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
  for the uid.
  (Konstantin Belousov, 2009-06-23; 1 file; -1/+1)

  The accounting information (charge) is associated with either the map
  entry or the vm object backing the entry, assuming the object is the
  first one in the shadow chain and the entry does not require COW. The
  charge is moved from entry to object on allocation of the object, e.g.
  during mmap, assuming the object is allocated, or on the first page
  fault on the entry. It moves back to the entry on forks due to the COW
  setup. The per-entry granularity of accounting makes the charge process
  fair for processes that change uid during their lifetime, and decrements
  the charge for the proper uid when a region is unmapped.

  The interface of vm_pager_allocate(9) is extended by adding a struct
  ucred *, which is used to charge the appropriate uid when the allocation
  is performed by the kernel, e.g. by md(4). Several syscalls, among them
  fork(2), may now return ENOMEM when global or per-uid limits are
  enforced.

  In collaboration with: pho
  Reviewed by: alc
  Approved by: re (kensmith)
  Notes: svn path=/head/; revision=194766
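The reserve/limit logic this commit describes can be sketched as a small userland model. This is a hypothetical illustration only, not the kernel code: the real implementation lives in the FreeBSD VM and uses struct ucred, atomics, and locking; every name here (swap_reserve, swap_release, struct uid_charge) is illustrative.

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative model of per-uid swap reservation accounting with a
 * global pool and an optional RLIMIT_SWAP-style per-uid cap. */

static long swap_total = 1L << 20;  /* total reservable swap, in pages */
static long swap_reserved;          /* global charge currently out */

struct uid_charge {
	long reserved;              /* pages charged to this uid */
	long limit;                 /* per-uid cap, 0 = unlimited */
};

/* Try to charge 'pages' to a uid; fail with ENOMEM if either the
 * global pool or the per-uid limit would be exceeded. */
int
swap_reserve(struct uid_charge *uc, long pages)
{
	if (swap_reserved + pages > swap_total)
		return (ENOMEM);
	if (uc->limit != 0 && uc->reserved + pages > uc->limit)
		return (ENOMEM);
	swap_reserved += pages;
	uc->reserved += pages;
	return (0);
}

/* Release a previous charge, e.g. when the region is unmapped. */
void
swap_release(struct uid_charge *uc, long pages)
{
	swap_reserved -= pages;
	uc->reserved -= pages;
}
```

The charge is released against the same uid that paid it, which is what makes the per-entry granularity fair for processes that change uid during their lifetime.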
* Eliminate an unnecessary clearing of a page's dirty bits in
  phys_pager_getpages().
  (Alan Cox, 2009-06-13; 1 file; -1/+2)
  Notes: svn path=/head/; revision=194126
* Correct a copy-and-paste'o in phys_pager.c: we are talking about phys
  here, not about devices.
  (Remko Lodder, 2007-10-30; 1 file; -1/+1)
  PR: 93755
  Approved by: imp (mentor, implicit when re-assigning the ticket to me)
  Notes: svn path=/head/; revision=173180
* Fix the phys_pager in a way similar to rev. 1.83 of
  sys/vm/device_pager.c: protect the creation of a phys pager with a
  non-NULL handle with phys_pager_mtx. Lookup of a phys pager in the
  pagers list by handle is now synchronized with its removal from the
  list, and phys_pager_mtx is put before the vm object lock in the lock
  order. Dispose of the phys_pager_alloc_lock and tsleep calls, together
  with the acquisition of Giant, since phys_pager_mtx now covers the same
  block.
  (Konstantin Belousov, 2007-08-18; 1 file; -22/+25)
  Reviewed by: alc
  Approved by: re (kensmith)
  Notes: svn path=/head/; revision=171887
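The lookup-or-create pattern this commit converges on can be modeled in userland: once a single mutex covers both the list lookup and the insert, the separate allocation lock and tsleep() handshake become unnecessary. A hedged sketch, assuming a pthread mutex stands in for phys_pager_mtx; all names and types are illustrative, not the kernel's.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

struct object {
	void *handle;
	struct object *next;        /* pager object list linkage */
};

static struct object *pager_list;
static pthread_mutex_t pager_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Return the existing object for 'handle', or publish a new one.
 * Lookup and insert happen under one mutex, so two racing callers
 * cannot both publish an object for the same handle. */
struct object *
pager_lookup_or_create(void *handle)
{
	struct object *obj, *nobj;

	/* Allocate before taking the mutex, since allocation may sleep. */
	nobj = calloc(1, sizeof(*nobj));
	nobj->handle = handle;

	pthread_mutex_lock(&pager_mtx);
	for (obj = pager_list; obj != NULL; obj = obj->next)
		if (obj->handle == handle)
			break;
	if (obj == NULL) {          /* not found: publish ours atomically */
		nobj->next = pager_list;
		pager_list = nobj;
		obj = nobj;
		nobj = NULL;
	}
	pthread_mutex_unlock(&pager_mtx);
	free(nobj);                 /* an existing object won the race */
	return (obj);
}
```

Allocating outside the mutex and discarding the spare on a lost race is the usual way to keep a sleeping allocator out of a non-sleepable lock's critical section.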
* Consider a scenario in which one processor, call it Pt, is performing
  vm_object_terminate() on a device-backed object at the same time that
  another processor, call it Pa, is performing dev_pager_alloc() on the
  same device. The problem is that vm_pager_object_lookup() should not be
  allowed to return a doomed object, i.e., an object with OBJ_DEAD set,
  but it does.
  (Alan Cox, 2007-08-05; 1 file; -4/+0)

  In detail, the unfortunate sequence of events is:

  1. Pt in vm_object_terminate() holds the doomed object's lock and sets
     OBJ_DEAD on the object.
  2. Pa in dev_pager_alloc() holds dev_pager_sx and calls
     vm_pager_object_lookup(), which returns the doomed object. Next, Pa
     calls vm_object_reference(), which requires the doomed object's
     lock, so Pa waits for Pt to release the doomed object's lock.
  3. Pt proceeds to the point in vm_object_terminate() where it releases
     the doomed object's lock.
  4. Pa is now able to complete vm_object_reference() because it can now
     complete the acquisition of the doomed object's lock. So, now the
     doomed object has a reference count of one! Pa releases dev_pager_sx
     and returns the doomed object from dev_pager_alloc().
  5. Pt now acquires dev_pager_mtx, removes the doomed object from
     dev_pager_object_list, releases dev_pager_mtx, and finally calls
     uma_zfree with the doomed object. However, the doomed object is
     still in use by Pa.

  Repeating my key point, vm_pager_object_lookup() must not return a
  doomed object. Moreover, the test for the object's state, i.e., doomed
  or not, and the increment of the object's reference count should be
  carried out atomically.

  Reviewed by: kib
  Approved by: re (kensmith)
  MFC after: 3 weeks
  Notes: svn path=/head/; revision=171737
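The key point above — check the doomed flag and take the reference under one lock, atomically — can be sketched as a userland model. This is a hypothetical illustration with a pthread mutex; the flag name mirrors OBJ_DEAD but the struct, list, and function names are all invented for the sketch.

```c
#include <pthread.h>
#include <stddef.h>

#define OBJ_DEAD 0x1

struct object {
	int flags;
	int ref_count;
	struct object *next;        /* pager object list linkage */
	void *handle;
};

static struct object *obj_list;
static pthread_mutex_t list_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Look up an object by handle.  A doomed object (OBJ_DEAD) is treated
 * as absent, and the reference is taken while the list mutex is still
 * held, so the state test and the ref increment are atomic: a doomed
 * object can never escape with a new reference. */
struct object *
pager_object_lookup(void *handle)
{
	struct object *obj;

	pthread_mutex_lock(&list_mtx);
	for (obj = obj_list; obj != NULL; obj = obj->next) {
		if (obj->handle != handle)
			continue;
		if (obj->flags & OBJ_DEAD) {
			obj = NULL; /* doomed: pretend it is not there */
			break;
		}
		obj->ref_count++;   /* ref taken under the same lock */
		break;
	}
	pthread_mutex_unlock(&list_mtx);
	return (obj);
}
```

The bug described in the entry existed precisely because the lookup and vm_object_reference() were two separate steps with a window between them.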
* Minor typo fix, noticed while I was going through *_pager.c files.
  (Giorgos Keramidas, 2007-04-10; 1 file; -1/+1)
  Notes: svn path=/head/; revision=168581
* Change the way that unmanaged pages are created. Specifically,
  immediately flag any page that is allocated to an OBJT_PHYS object as
  unmanaged in vm_page_alloc() rather than waiting for a later call to
  vm_page_unmanage(). This allows for the elimination of some uses of the
  page queues lock.
  (Alan Cox, 2007-02-25; 1 file; -6/+0)

  Change the type of the kernel and kmem objects from OBJT_DEFAULT to
  OBJT_PHYS. This allows us to take advantage of the above change to
  simplify the allocation of unmanaged pages in kmem_alloc() and
  kmem_malloc().

  Remove vm_page_unmanage(). It is no longer used.

  Notes: svn path=/head/; revision=166964
* Replace PG_BUSY with VPO_BUSY. In other words, changes to the page's
  busy flag, i.e., VPO_BUSY, are now synchronized by the per-vm-object
  lock instead of the global page queues lock.
  (Alan Cox, 2006-10-22; 1 file; -1/+1)
  Notes: svn path=/head/; revision=163604
* /* -> /*- for license, minor formatting changes.
  (Warner Losh, 2005-01-07; 1 file; -1/+1)
  Notes: svn path=/head/; revision=139825
* Zero the physical page only if it is invalid and not prezeroed.
  (Alan Cox, 2004-04-25; 1 file; -7/+9)
  Notes: svn path=/head/; revision=128633
* Add a VM_OBJECT_LOCK_ASSERT() call. Remove splvm() and splx() calls.
  Move a comment.
  (Alan Cox, 2004-04-24; 1 file; -7/+5)
  Notes: svn path=/head/; revision=128620
* Simplify the various pager allocation routines by computing the desired
  object size once and assigning that value to a local variable.
  (Alan Cox, 2004-01-04; 1 file; -7/+6)
  Notes: svn path=/head/; revision=124133
* Use sparse struct initializations for struct pagerops. This makes
  grepping for which pagers implement which methods easier.
  (Poul-Henning Kamp, 2003-08-05; 1 file; -7/+6)
  Notes: svn path=/head/; revision=118466
* Use __FBSDID().
  (David E. O'Brien, 2003-06-11; 1 file; -2/+3)
  Notes: svn path=/head/; revision=116226
* Increase the scope of the page queues lock in phys_pager_getpages().
  (Alan Cox, 2002-12-27; 1 file; -4/+7)
  Notes: svn path=/head/; revision=108306
* Hold the page queues lock when performing vm_page_flag_set().
  (Alan Cox, 2002-12-17; 1 file; -1/+1)
  Notes: svn path=/head/; revision=107989
* Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
  pmap_zero_page() and pmap_zero_page_area() were modified to accept a
  struct vm_page * instead of a physical address, vm_page_zero_fill() and
  vm_page_zero_fill_area() have served no purpose.
  (Alan Cox, 2002-08-25; 1 file; -1/+1)
  Notes: svn path=/head/; revision=102382
* o Lock page queue accesses by vm_page_unmanage().
  o Assert that the page queues lock is held in vm_page_unmanage().
  (Alan Cox, 2002-07-13; 1 file; -0/+2)
  Notes: svn path=/head/; revision=99934
* o Remove GIANT_REQUIRED from phys_pager_alloc(). If handle isn't NULL,
    acquire and release Giant. If handle is NULL, Giant isn't needed.
  o Annotate phys_pager_alloc() and phys_pager_dealloc() as MPSAFE.
  (Alan Cox, 2002-06-22; 1 file; -3/+8)
  Notes: svn path=/head/; revision=98605
* Change callers of mtx_init() to pass in an appropriate lock type name.
  In most cases NULL is passed, but in some cases such as network driver
  locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name
  is used.
  (John Baldwin, 2002-04-04; 1 file; -1/+1)
  Tested on: i386, alpha, sparc64
  Notes: svn path=/head/; revision=93818
* Remove references to vm_zone.h and switch over to the new uma API.
  (Jeff Roberson, 2002-03-20; 1 file; -1/+0)
  Notes: svn path=/head/; revision=92748
* With Alfred's permission, remove vm_mtx in favor of a fine-grained
  approach (this commit is just the first stage). Also add various GIANT_
  macros to formalize the removal of Giant, making it easy to test in a
  more piecemeal fashion. These macros will allow us to test fine-grained
  locks to a degree before removing Giant, and also after, and to remove
  Giant in a piecemeal fashion via sysctls on those subsystems which the
  authors believe can operate without Giant.
  (Matthew Dillon, 2001-07-04; 1 file; -1/+4)
  Notes: svn path=/head/; revision=79224
* Set the phys_pager_alloc_lock to 1 when it is acquired so that it is
  actually locked.
  (John Baldwin, 2001-05-23; 1 file; -1/+2)
  Notes: svn path=/head/; revision=77062
* Introduce a global lock for the vm subsystem (vm_mtx).
  (Alfred Perlstein, 2001-05-19; 1 file; -6/+10)

  vm_mtx does not recurse and is required for most low-level vm
  operations. Faults cannot be taken without holding Giant. Memory
  subsystems can now call the base page allocators safely. Almost all
  atomic ops were removed, as they are covered under the vm mutex. Alpha
  and ia64 now need to catch up to i386's trap handlers.

  FFS and NFS have been tested; other filesystems will need minor changes
  (grabbing the vm lock when twiddling page properties).

  Reviewed (partially) by: jake, jhb
  Notes: svn path=/head/; revision=76827
* Undo part of the tangle of having sys/lock.h and sys/mutex.h included
  in other "system" header files. Also help the deprecation of lockmgr.h
  by making it a sub-include of sys/lock.h and removing sys/lockmgr.h
  from kernel .c files. Sort sys/*.h includes where possible in affected
  files.
  (Mark Murray, 2001-05-01; 1 file; -0/+2)
  OK'ed by: bde (with reservations)
  Notes: svn path=/head/; revision=76166
* Protect pager object creation with sx locks. Protect pager object list
  manipulation with a mutex. It doesn't look possible to combine them
  under a single sx lock because creation may block, and we can't have
  the object list manipulation block on anything other than a mutex
  because of interrupt requests.
  (Alfred Perlstein, 2001-04-18; 1 file; -12/+16)
  Notes: svn path=/head/; revision=75675
* Really fix phys_pager: back out the previous delta (rev 1.4), it didn't
  make any difference. If the requested handle is NULL, then don't add
  the object to the list of objects findable by handle.
  (Alfred Perlstein, 2000-12-06; 1 file; -31/+36)

  The problem is that when asking for a NULL handle you are implying you
  want a new object. Because objects with NULL handles were being added
  to the list, any further requests for phys-backed objects with NULL
  handles would return a reference to the initial NULL-handle object
  after finding it on the list. Basically, one couldn't have more than
  one phys-backed object without a handle in the entire system without
  this fix. If you did more than one shared memory allocation using the
  phys pager, it would give you your initial allocation again.

  Notes: svn path=/head/; revision=69687
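The fix described above amounts to keeping anonymous (NULL-handle) objects off the by-handle list. A hedged userland sketch of that logic, assuming invented names (phys_alloc, struct object); the real function is phys_pager_alloc() and looks nothing like this in detail.

```c
#include <stddef.h>
#include <stdlib.h>

struct object {
	void *handle;
	struct object *next;
};

static struct object *obj_list;     /* only objects findable by handle */

/* With a non-NULL handle, reuse a matching listed object or create and
 * list a new one.  With a NULL handle, always create a fresh object and
 * keep it OFF the list -- before the fix, listed NULL-handle objects
 * made every later NULL-handle request find the first one again. */
struct object *
phys_alloc(void *handle)
{
	struct object *obj;

	if (handle != NULL) {
		for (obj = obj_list; obj != NULL; obj = obj->next)
			if (obj->handle == handle)
				return (obj);
	}
	obj = calloc(1, sizeof(*obj)); /* error handling elided for brevity */
	obj->handle = handle;
	if (handle != NULL) {
		obj->next = obj_list;
		obj_list = obj;
	}
	return (obj);
}
```

With this shape, two NULL-handle allocations yield two distinct objects, which is exactly what the shared-memory case in the entry needed.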
* Adjust the allocation size to properly deal with non-PAGE_SIZE
  allocations; without this, the code does not work properly for
  allocations smaller than PAGE_SIZE.
  (Alfred Perlstein, 2000-12-05; 1 file; -1/+1)
  Notes: svn path=/head/; revision=69641
* Minor cleanups:
  - remove unused variables (fix warnings)
  - use a more consistent ANSI style rather than a mixture
  - remove dead #if 0 code and declarations
  (Peter Wemm, 2000-07-28; 1 file; -44/+19)
  Notes: svn path=/head/; revision=63973
* This is a cleanup patch to Peter's new OBJT_PHYS VM object type and the
  sysv shared memory support for it. It implements a new PG_UNMANAGED
  flag that has slightly different characteristics from PG_FICTITIOUS.
  (Matthew Dillon, 2000-05-29; 1 file; -18/+3)

  A new sysctl, kern.ipc.shm_use_phys, has been added to enable the use
  of physically-backed sysv shared memory rather than swap-backed.
  Physically backed shm segments are not tracked with PV entries,
  allowing programs which use a large shm segment as a rendezvous point
  to operate without eating an insane amount of KVM in the PV entry
  management. Read: Oracle.

  Peter's OBJT_PHYS object will also allow us to eventually implement
  page-table sharing and/or 4MB physical page support for such segments.
  We're half way there.

  Notes: svn path=/head/; revision=61081
* Checkpoint of a new physical-memory-backed object type that does not
  have pv_entries. This is intended for very special circumstances, e.g.
  a certain database that has a 1GB shm segment mapped into 300
  processes. That would consume 2GB of kvm just to hold the pv_entries
  alone. This would not be used on systems unless the physical ram was
  available, as it's not pageable.
  (Peter Wemm, 2000-05-21; 1 file; -0/+222)

  This is a work-in-progress, but is a useful and functional checkpoint.
  Matt has got some more fixes for it that will be committed soon.

  Reviewed by: dillon
  Notes: svn path=/head/; revision=60757