path: root/sys/vm
* Simplify entry to vm_pageout_clean(). Expect the page to be locked. (Alan Cox, 2010-06-30; 1 file, -8/+4)

  Previously, the caller unlocked the page, and vm_pageout_clean() immediately reacquired the page lock. Also, assert rather than test that the page is neither busy nor held. Since vm_pageout_clean() is called with the object and page locked, the page can't have changed state since the caller verified that the page is neither busy nor held.

  Notes: svn path=/head/; revision=209610
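  For illustration, a minimal sketch of the assert-rather-than-test pattern this entry describes; the exact assertion text is an assumption, though busy, oflags/VPO_BUSY, and hold_count match the era's struct vm_page:

      /*
       * Sketch: with the object and page locks held on entry, the
       * state the caller already verified cannot change, so runtime
       * tests can become assertions.
       */
      KASSERT(m->busy == 0 && (m->oflags & VPO_BUSY) == 0,
          ("vm_pageout_clean: page %p is busy", m));
      KASSERT(m->hold_count == 0,
          ("vm_pageout_clean: page %p is held", m));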
* Introduce vm_page_next() and vm_page_prev(), and use them in vm_pageout_clean(). (Alan Cox, 2010-06-21; 3 files, -13/+46)

  When iterating over a range of pages, these functions can be cheaper than vm_page_lookup() because their implementation takes advantage of the vm_object's memq being ordered.

  Reviewed by: kib@
  MFC after: 3 weeks
  Notes: svn path=/head/; revision=209407
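  A hedged sketch of how such a helper can exploit the memq ordering; listq and pindex are the era's field names, but treat the body as an illustration rather than the committed code:

      /*
       * Return the page that follows "m" in its object, or NULL if
       * none is resident.  Because the object's memq is kept sorted
       * by pindex, the successor, when resident, is simply the next
       * list element; no vm_page_lookup() hash walk is needed.
       */
      vm_page_t
      vm_page_next(vm_page_t m)
      {
          vm_page_t next;

          VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED);
          if ((next = TAILQ_NEXT(m, listq)) != NULL &&
              next->pindex != m->pindex + 1)
              next = NULL;    /* the next pindex is not resident */
          return (next);
      }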
* Add a new column to the output of vmstat -z to indicate the number of times the system was forced to sleep when requesting a new allocation. (Sean Bruno, 2010-06-15; 3 files, -10/+19)

  Expand the debugger hook, db_show_uma, to display these results as well. This has proven to be very useful in out-of-memory situations when it is not known why systems have become sluggish or fail in odd ways.

  Reviewed by: rwatson alc
  Approved by: scottl (mentor) peter
  Obtained from: Yahoo Inc.
  Notes: svn path=/head/; revision=209215
* Eliminate checks for a page having a NULL object in vm_pageout_scan() and vm_pageout_page_stats(). (Alan Cox, 2010-06-14; 2 files, -42/+53)

  These checks were recently introduced by the first page locking commit, r207410, but they are not needed. At the same time, eliminate some redundant accesses to the page's object field. (These accesses should have been eliminated by r207410.)

  Make the assertion in vm_page_flag_set() stricter. Specifically, only managed pages should have PG_WRITEABLE set.

  Add a comment documenting an assertion to vm_page_flag_clear().

  It has long been the case that fictitious pages have their wire count permanently set to one. Add comments to vm_page_wire() and vm_page_unwire() documenting this. Add assertions to these functions as well.

  Update the comment describing vm_page_unwire(). Much of the old comment had little to do with vm_page_unwire(), but a lot to do with _vm_page_deactivate(). Move relevant parts of the old comment to _vm_page_deactivate().

  Only pages that belong to an object can be paged out. Therefore, it is pointless for vm_page_unwire() to acquire the page queues lock and enqueue such pages in one of the paging queues. Generally speaking, such pages are immediately freed after the call to vm_page_unwire(). Previously, it was the call to vm_page_free() that reacquired the page queues lock and removed these pages from the paging queues. Now, we will never acquire the page queues lock for this case. (It is also worth noting that since both vm_page_unwire() and vm_page_free() occurred with the page locked, the page daemon never saw the page with its object field set to NULL.)

  Change the panic in vm_page_unwire() to provide a more precise message.

  Reviewed by: kib@
  Notes: svn path=/head/; revision=209173
* Update several places that iterate over CPUs to use CPU_FOREACH(). (John Baldwin, 2010-06-11; 1 file, -9/+3)

  Notes: svn path=/head/; revision=209059
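  For reference, the shape of such a conversion, as a hedged sketch; the summed per-CPU array and function name are invented for illustration:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/smp.h>

      /*
       * Before, each call site open-coded the walk and had to skip
       * absent CPU slots itself:
       *
       *     for (i = 0; i <= mp_maxid; i++) {
       *         if (CPU_ABSENT(i))
       *             continue;
       *         total += percpu_count[i];
       *     }
       */
      static u_long
      sum_counters(const u_long *percpu_count)    /* hypothetical array */
      {
          u_long total;
          u_int i;

          total = 0;
          CPU_FOREACH(i)        /* same walk, one macro */
              total += percpu_count[i];
          return (total);
      }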
* Reduce the scope of the page queues lock and the number of PG_REFERENCED changes in vm_pageout_object_deactivate_pages(). (Alan Cox, 2010-06-10; 3 files, -45/+42)

  Simplify this function's inner loop using TAILQ_FOREACH(), and shorten some of its overly long lines. Update a stale comment.

  Assert that PG_REFERENCED may be cleared only if the object containing the page is locked. Add a comment documenting this.

  Assert that a caller to vm_page_requeue() holds the page queues lock, and assert that the page is on a page queue.

  Push down the page queues lock into pmap_ts_referenced() and pmap_page_exists_quick(). (As of now, there are no longer any pmap functions that expect to be called with the page queues lock held.)

  Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever be passed an unmanaged page. Assert this rather than returning "0" and "FALSE" respectively.

  ARM: Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().

  Push down the page queues lock inside of pmap_clearbit(), simplifying pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write(). Additionally, this allows for avoiding the acquisition of the page queues lock in some cases.

  PowerPC/AIM: moea*_page_exists_quick() and moea*_page_wired_mappings() will never be called before pmap initialization is complete. Therefore, the check for moea_initialized can be eliminated.

  Push down the page queues lock inside of moea*_clear_bit(), simplifying moea*_clear_modify() and moea*_clear_reference().

  The last parameter to moea*_clear_bit() is never used. Eliminate it.

  PowerPC/BookE: Simplify mmu_booke_page_exists_quick()'s control flow.

  Reviewed by: kib@
  Notes: svn path=/head/; revision=208990
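  A hedged sketch of the simplified inner-loop shape in vm_pageout_object_deactivate_pages(); this is a fragment, and the deactivation policy itself is elided:

      /*
       * Walk the object's resident pages in pindex order.  Only the
       * per-page lock is taken inside the loop; the page queues lock
       * is no longer held across the whole scan.
       */
      TAILQ_FOREACH(p, &object->memq, listq) {
          if (pmap_resident_count(pmap) <= desired)
              break;            /* deactivated enough already */
          vm_page_lock(p);
          /* ... test reference bits, deactivate or requeue p ... */
          vm_page_unlock(p);
      }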
* Make vm_contig_grow_cache() extern, and use it when vm_phys_alloc_contig() fails to allocate MIPS page table pages. (Jayachandran C., 2010-06-04; 2 files, -9/+13)

  The current usage of VM_WAIT in case of vm_phys_alloc_contig() failure is not correct, because: "There is no guarantee that any of the available free (or cached) pages after the VM_WAIT will fall within the range of suitable physical addresses. Every time this function sleeps and a single page is freed (or cached) by someone else, this function will be reawakened. With a little bad luck, you could spin indefinitely."

  We also add low and high parameters to vm_contig_grow_cache() and vm_contig_launder() so that we restrict vm_contig_launder() to the range of pages we are interested in.

  Reported by: alc
  Reviewed by: alc
  Approved by: rrs (mentor)
  Notes: svn path=/head/; revision=208794
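  A sketch of the amended interface and a retry loop of the kind the MIPS call site needs; the vm_phys_alloc_contig() signature (npages, low, high, alignment, boundary) and the retry count are assumptions for illustration:

      /* The cache-growing path is now range-aware. */
      void vm_contig_grow_cache(int tries, vm_paddr_t low, vm_paddr_t high);

      /* Sketch of a page-table page allocation: */
      for (tries = 0; tries < 3; tries++) {
          m = vm_phys_alloc_contig(1, low, high, PAGE_SIZE, 0);
          if (m != NULL)
              break;
          /*
           * Launder/cache only pages inside [low, high), instead of
           * a blind VM_WAIT that may never free a usable page.
           */
          vm_contig_grow_cache(tries, low, high);
      }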
* Do not leak the vm page lock in vm_contig_launder(); vm_pageout_page_lock() always returns with the page locked. (Konstantin Belousov, 2010-06-03; 1 file, -1/+3)

  Submitted by: alc
  Pointy hat to: kib
  Notes: svn path=/head/; revision=208791
* Add assertion and comment in vm_page_flag_set() describing the expectations when the PG_WRITEABLE flag is set. (Konstantin Belousov, 2010-06-03; 1 file, -0/+8)

  Reviewed by: alc
  Notes: svn path=/head/; revision=208772
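  A hedged sketch of what such an assertion can look like inside vm_page_flag_set(); the precise committed conditions may differ, and the message string is illustrative:

      /*
       * Sketch: PG_WRITEABLE should only ever be set on a managed,
       * non-fictitious page, and only while the page is busied, so
       * that pmap_remove_write() can trust the state it observes.
       */
      KASSERT((bits & PG_WRITEABLE) == 0 ||
          ((m->flags & (PG_UNMANAGED | PG_FICTITIOUS)) == 0 &&
           (m->oflags & VPO_BUSY) != 0),
          ("vm_page_flag_set: PG_WRITEABLE on unmanaged or unbusied page %p",
          m));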
* Maintain the pretense that we support 32KB pages for the sake of the ia64 LINT build. (Alan Cox, 2010-06-03; 1 file, -1/+1)

  Notes: svn path=/head/; revision=208764
* Minimize the use of the page queues lock for synchronizing access to the page's dirty field. (Alan Cox, 2010-06-02; 2 files, -12/+47)

  With the exception of one case, access to this field is now synchronized by the object lock.

  Notes: svn path=/head/; revision=208745
* When I pushed down the page queues lock into pmap_is_modified(), I created an ordering dependence: a pmap operation that clears PG_WRITEABLE and calls vm_page_dirty() must perform the call first. (Alan Cox, 2010-05-29; 1 file, -2/+2)

  Otherwise, pmap_is_modified() could return FALSE without acquiring the page queues lock because the page is not (currently) writeable, and the caller to pmap_is_modified() might believe that the page's dirty field is clear because it has not seen the effect of the vm_page_dirty() call.

  When I pushed down the page queues lock into pmap_is_modified(), I overlooked one place where this ordering dependence is violated: pmap_enter(). In a rare situation pmap_enter() can be called to replace a dirty mapping to one page with a mapping to another page. (I say rare because replacements generally occur as a result of a copy-on-write fault, and so the old page is not dirty.) This change delays clearing PG_WRITEABLE until after vm_page_dirty() has been called.

  Fixing the ordering dependency also makes it easy to introduce a small optimization: when pmap_enter() used to replace a mapping to one page with a mapping to another page, it freed the pv entry for the first mapping and later called the pv entry allocator for the new mapping. Now, pmap_enter() attempts to recycle the old pv entry, saving two calls to the pv entry allocator.

  There is no point in setting PG_WRITEABLE on unmanaged pages, so don't. Update a comment to reflect this.

  Tidy up the variable declarations at the start of pmap_enter().

  Notes: svn path=/head/; revision=208645
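  The required ordering, as a minimal sketch; pte_was_dirty() is a hypothetical stand-in for the machine-dependent dirty-bit test, and the flag-clearing step is simplified (a real pmap clears PG_WRITEABLE only when the last writeable mapping goes away):

      /*
       * When tearing down a writeable mapping, publish the hardware
       * dirty bit to the page before the page stops looking
       * writeable; otherwise pmap_is_modified() can take its
       * lock-free early return and miss the modification.
       */
      if (pte_was_dirty(oldpte))              /* hypothetical MD test */
          vm_page_dirty(om);                  /* step 1: record dirt */
      vm_page_flag_clear(om, PG_WRITEABLE);   /* step 2: only then clear */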
* Push down page queues lock acquisition in pmap_enter_object() and pmap_is_referenced(). (Alan Cox, 2010-05-26; 4 files, -38/+45)

  Eliminate the corresponding page queues lock acquisitions from vm_map_pmap_enter() and mincore(), respectively. In mincore(), this allows some additional cases to complete without ever acquiring the page queues lock.

  Assert that the page is managed in pmap_is_referenced().

  On powerpc/aim, push down the page queues lock acquisition from moea*_is_modified() and moea*_is_referenced() into moea*_query_bit(). Again, this will allow some additional cases to complete without ever acquiring the page queues lock.

  Reorder a few statements in vm_page_dontneed() so that a race can't lead to an old reference persisting. This scenario is described in detail by a comment.

  Correct a spelling error in vm_page_dontneed().

  Assert that the object is locked in vm_page_clear_dirty(), and restrict the page queues lock assertion to just those cases in which the page is currently writeable.

  Add object locking to vnode_pager_generic_putpages(). This was the one and only place where vm_page_clear_dirty() was being called without the object being locked.

  Eliminate an unnecessary vm_page_lock() around vnode_pager_setsize()'s call to vm_page_clear_dirty().

  Change vnode_pager_generic_putpages() to the modern style of function definition. Also, change the name of one of the parameters to follow virtual memory system naming conventions.

  Reviewed by: kib
  Notes: svn path=/head/; revision=208574
* Eliminate the acquisition and release of the page queues lock from vfs_busy_pages(). It is no longer needed. (Alan Cox, 2010-05-25; 1 file, -1/+0)

  Submitted by: kib
  Notes: svn path=/head/; revision=208524
* Roughly half of a typical pmap_mincore() implementation is machine-independent code. Move this code into mincore(), and eliminate the page queues lock from pmap_mincore(). (Alan Cox, 2010-05-24; 6 files, -67/+83)

  Push down the page queues lock into pmap_clear_modify(), pmap_clear_reference(), and pmap_is_modified(). Assert that these functions are never passed an unmanaged page.

  Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: contrary to what the comment says, pmap_mincore() is not simply an optimization. Without a complete pmap_mincore() implementation, mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can provide this information.

  Eliminate the page queues lock from vfs_setdirty_locked_object(), vm_pageout_clean(), vm_object_page_collect_flush(), and vm_object_page_clean(). Generally speaking, these are all accesses to the page's dirty field, which are synchronized by the containing vm object's lock.

  Reduce the scope of the page queues lock in vm_object_madvise() and vm_page_dontneed().

  Reviewed by: kib (an earlier version)
  Notes: svn path=/head/; revision=208504
* When waiting for the busy page, do not unlock the object unless the unlock cannot be avoided. (Konstantin Belousov, 2010-05-20; 1 file, -3/+6)

  Reviewed by: alc
  MFC after: 1 week
  Notes: svn path=/head/; revision=208340
* The page queues lock is no longer required by vm_page_set_invalid(), so eliminate it. (Alan Cox, 2010-05-18; 1 file, -3/+7)

  Assert that the object containing the page is locked in vm_page_test_dirty(). Perform some style clean up while I'm here.

  Reviewed by: kib
  Notes: svn path=/head/; revision=208264
* On entry to pmap_enter(), assert that the page is busy. (Alan Cox, 2010-05-16; 2 files, -3/+5)

  While I'm here, make the style of assertion used by pmap_enter() consistent across all architectures.

  On entry to pmap_remove_write(), assert that the page is neither unmanaged nor fictitious, since we cannot remove write access to either kind of page.

  With the push down of the page queues lock, pmap_remove_write() cannot condition its behavior on the state of the PG_WRITEABLE flag if the page is busy. Assert that the object containing the page is locked. This allows us to know that the page will neither become busy nor will PG_WRITEABLE be set on it while pmap_remove_write() is running.

  Correct a long-standing bug in vm_page_cowsetup(). We cannot possibly do copy-on-write-based zero-copy transmit on unmanaged or fictitious pages, so don't even try. Previously, the call to pmap_remove_write() would have failed silently.

  Notes: svn path=/head/; revision=208175
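  A hedged sketch of the two entry assertions; VPO_BUSY is the era's object-lock-synchronized busy flag, and the message strings are illustrative:

      /* On entry to pmap_enter(): the page must be busied. */
      KASSERT((m->oflags & VPO_BUSY) != 0,
          ("pmap_enter: page %p is not busy", m));

      /*
       * On entry to pmap_remove_write(): write access can only be
       * removed from managed, real pages, and the object lock keeps
       * both the busy state and PG_WRITEABLE stable meanwhile.
       */
      KASSERT((m->flags & (PG_UNMANAGED | PG_FICTITIOUS)) == 0,
          ("pmap_remove_write: page %p is unmanaged or fictitious", m));
      VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED);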
* Correct an error of omission in r202897: now that amd64 uses the direct map to access the message buffer, we must explicitly request that the underlying physical pages are included in a crash dump. (Alan Cox, 2010-05-16; 1 file, -0/+14)

  Reported by: Benjamin Kaduk
  Notes: svn path=/head/; revision=208164
* Add a comment about the proper use of vm_object_page_remove(). (Alan Cox, 2010-05-16; 1 file, -1/+2)

  MFC after: 1 week
  Notes: svn path=/head/; revision=208159
* Update synchronization annotations for struct vm_page. (Alan Cox, 2010-05-11; 1 file, -5/+8)

  Add a comment explaining how the setting of PG_WRITEABLE is synchronized.

  Notes: svn path=/head/; revision=207905
* Continue cleaning the queue instead of moving to the next queue or bailing out if acquisition of the page lock caused the page's position in the queue to change. (Konstantin Belousov, 2010-05-10; 1 file, -4/+2)

  Pointed out by: alc
  Notes: svn path=/head/; revision=207846
* Push down the acquisition of the page queues lock into vm_pageq_remove(). (Alan Cox, 2010-05-09; 2 files, -27/+41)

  (This eliminates a surprising number of page queues lock acquisitions by vm_fault() because the page's queue is PQ_NONE and thus the page queues lock is not needed to remove the page from a queue.)

  Notes: svn path=/head/; revision=207823
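  A sketch of the pushed-down locking, assuming the era's queue structures; the per-queue counter bookkeeping is elided:

      /*
       * With the page lock held, a page whose queue is PQ_NONE --
       * the common case in vm_fault() -- needs no global lock at all
       * to be "removed" from the paging queues.
       */
      void
      vm_pageq_remove(vm_page_t m)
      {
          int queue;

          vm_page_lock_assert(m, MA_OWNED);
          if ((queue = m->queue) != PQ_NONE) {
              vm_page_lock_queues();
              m->queue = PQ_NONE;
              TAILQ_REMOVE(&vm_page_queues[queue].pl, m, pageq);
              /* ... decrement the queue's page count ... */
              vm_page_unlock_queues();
          }
      }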
* Call vm_page_deactivate() rather than vm_page_dontneed() in swp_pager_force_pagein(). (Alan Cox, 2010-05-09; 1 file, -4/+2)

  By dirtying the page, swp_pager_force_pagein() forces vm_page_dontneed() to insert the page at the head of the inactive queue, just like vm_page_deactivate() does. Moreover, because the page was invalid, it can't have been mapped, and thus the other effect of vm_page_dontneed(), clearing the page's reference bits, has no effect. In summary, there is no reason to call vm_page_dontneed() since its effect will be identical to calling the simpler vm_page_deactivate().

  Notes: svn path=/head/; revision=207822
* Remove the page queues lock around a call to vm_page_activate(). Make the page dirty before adding it to the active queue. (Alan Cox, 2010-05-09; 1 file, -3/+1)

  Notes: svn path=/head/; revision=207806
* Minimize the scope of the page queues lock in vm_fault(). (Alan Cox, 2010-05-08; 2 files, -5/+3)

  Notes: svn path=/head/; revision=207798
* Push down the page queues lock into vm_page_cache(), vm_page_try_to_cache(), and vm_page_try_to_free(). (Alan Cox, 2010-05-08; 5 files, -66/+34)

  Consequently, push down the page queues lock into pmap_enter_quick(), pmap_page_wired_mappings(), pmap_remove_all(), and pmap_remove_write().

  Push down the page queues lock into Xen's pmap_page_is_mapped(). (I overlooked the Xen pmap in r207702.)

  Switch to a per-processor counter for the total number of pages cached.

  Notes: svn path=/head/; revision=207796
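  For illustration, the per-processor counter idiom the last paragraph refers to; PCPU_INC() on a vmmeter field is the standard kernel pattern, though the exact field and wrapper function here are assumptions:

      #include <sys/param.h>
      #include <sys/pcpu.h>
      #include <sys/vmmeter.h>

      /*
       * Instead of a global count guarded by the page queues lock,
       * each CPU increments its own vmmeter slot; readers sum the
       * per-CPU values when a total is needed.
       */
      static void
      count_cached_page(void)
      {

          PCPU_INC(cnt.v_tcached);    /* lock-free on the local CPU */
      }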
* Fix a typo in the previous commit. (Jung-uk Kim, 2010-05-07; 1 file, -1/+1)

  Notes: svn path=/head/; revision=207759
* One more use for vm_pageout_init_marker(). (Konstantin Belousov, 2010-05-07; 1 file, -8/+1)

  Reviewed by: alc
  Notes: svn path=/head/; revision=207752
* Eliminate unnecessary page queues locking. (Alan Cox, 2010-05-07; 1 file, -4/+0)

  Notes: svn path=/head/; revision=207747
* Push down the page queues lock into vm_page_activate(). (Alan Cox, 2010-05-07; 3 files, -16/+17)

  Notes: svn path=/head/; revision=207746
* Update the synchronization requirements for the page usage count. (Alan Cox, 2010-05-07; 1 file, -1/+1)

  Notes: svn path=/head/; revision=207740
* Eliminate acquisitions of the page queues lock that are no longer needed. (Alan Cox, 2010-05-07; 1 file, -9/+2)

  Switch to a per-processor counter for the number of pages freed during process termination.

  Notes: svn path=/head/; revision=207739
* Push down the page queues lock into vm_page_deactivate(). Eliminate an incorrect comment. (Alan Cox, 2010-05-07; 2 files, -7/+10)

  Notes: svn path=/head/; revision=207738
* Eliminate page queues locking around most calls to vm_page_free(). (Alan Cox, 2010-05-06; 6 files, -41/+1)

  Notes: svn path=/head/; revision=207728
* Update a comment to say that access to a page's wire count is now synchronized by the page lock. (Alan Cox, 2010-05-06; 1 file, -1/+1)

  Notes: svn path=/head/; revision=207706
* Push down the page queues lock inside of vm_page_free_toq() and pmap_page_is_mapped() in preparation for removing page queues locking around calls to vm_page_free(). (Alan Cox, 2010-05-06; 2 files, -11/+14)

  Setting aside the assertion that calls pmap_page_is_mapped(), vm_page_free_toq() now acquires and holds the page queues lock just long enough to actually add or remove the page from the paging queues.

  Update vm_page_unhold() to reflect the above change.

  Notes: svn path=/head/; revision=207702
* Add a helper function, vm_pageout_page_lock(), similar to tegge's vm_pageout_fallback_object_lock(), to obtain the page lock while the page queues lock is held, and still maintain the page's position in its queue. (Konstantin Belousov, 2010-05-06; 3 files, -14/+65)

  Use the helper to lock the page in the pageout daemon and contig launder iterators instead of skipping the page if its lock is contested. Skipping locked pages easily causes the pagedaemon or launder to fail to make progress with page cleaning.

  Proposed and reviewed by: alc
  Notes: svn path=/head/; revision=207694
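  A hedged sketch of the marker technique such a helper can use; the shape follows this commit and r207752's vm_pageout_init_marker(), but the details and the _sketch suffix are illustrative:

      /*
       * Try to lock "m" without sleeping.  On contention, leave a
       * marker page after "m", drop the page queues lock, sleep for
       * the page lock, and reacquire the queues lock.  The marker
       * preserves the scan position even if "m" moved meanwhile.
       * Returns TRUE if "m" kept its place in the queue.
       */
      static boolean_t
      vm_pageout_page_lock_sketch(vm_page_t m, vm_page_t *next)
      {
          struct vm_page marker;
          boolean_t unchanged;
          u_short queue;

          vm_page_lock_assert(m, MA_NOTOWNED);
          if (vm_page_trylock(m))
              return (TRUE);        /* fast path: no queue motion */

          queue = m->queue;
          vm_pageout_init_marker(&marker, queue);
          TAILQ_INSERT_AFTER(&vm_page_queues[queue].pl, m, &marker, pageq);

          vm_page_unlock_queues();
          vm_page_lock(m);          /* may sleep */
          vm_page_lock_queues();

          /* "m" kept its place iff it still precedes the marker. */
          unchanged = (m->queue == queue &&
              m == TAILQ_PREV(&marker, pglist, pageq));
          *next = TAILQ_NEXT(&marker, pageq);
          TAILQ_REMOVE(&vm_page_queues[queue].pl, &marker, pageq);
          return (unchanged);
      }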
* Acquire the page lock around all remaining calls to vm_page_free() on managed pages that didn't already have that lock held. (Alan Cox, 2010-05-05; 4 files, -11/+9)

  (Freeing an unmanaged page, such as the various pmaps use, doesn't require the page lock.)

  This allows a change in vm_page_remove()'s locking requirements. It now expects the page lock to be held instead of the page queues lock. Consequently, the page queues lock is no longer required at all by callers to vm_page_rename().

  Discussed with: kib
  Notes: svn path=/head/; revision=207669
* Push down the acquisition of the page queues lock into vm_page_unwire(). (Alan Cox, 2010-05-05; 3 files, -19/+13)

  Update the comment describing which lock should be held on entry to vm_page_wire().

  Reviewed by: kib
  Notes: svn path=/head/; revision=207644
* Add page locking to the vm_page_cow* functions. (Alan Cox, 2010-05-04; 2 files, -13/+13)

  Push down the acquisition and release of the page queues lock into vm_page_wire().

  Reviewed by: kib
  Notes: svn path=/head/; revision=207617
* Add lock assertions. (Alan Cox, 2010-05-04; 1 file, -1/+7)

  Notes: svn path=/head/; revision=207601
* Handle the busy status of the page in the way expected for pager_getpage(). (Konstantin Belousov, 2010-05-03; 1 file, -4/+4)

  Flush the requested page, unbusy the other pages, and do not clear m->busy.

  Reviewed by: alc
  MFC after: 1 week
  Notes: svn path=/head/; revision=207580
* Acquire the page lock around vm_page_wire() in vm_page_grab(). (Alan Cox, 2010-05-03; 1 file, -0/+3)

  Assert that the page lock is held in vm_page_wire().

  Notes: svn path=/head/; revision=207577
* It makes more sense for the object-based backend allocator to use OBJT_PHYS objects instead of OBJT_DEFAULT objects, because we never reclaim or page out the allocated pages. (Alan Cox, 2010-05-03; 1 file, -10/+4)

  Moreover, they are mapped with pmap_qenter(), which creates unmanaged mappings.

  Reviewed by: kib
  Notes: svn path=/head/; revision=207576
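  A sketch of the distinction; vm_object_allocate() and OFF_TO_IDX() follow sys/vm, but treat the snippet as illustrative of the reasoning rather than the committed change:

      /*
       * OBJT_DEFAULT objects are pageable: their pages may be
       * laundered to swap.  OBJT_PHYS objects hold wired,
       * never-paged memory -- exactly what the uma backend
       * allocator needs, since its pages are mapped with
       * pmap_qenter(), which creates unmanaged mappings.
       */
      object = vm_object_allocate(OBJT_PHYS, OFF_TO_IDX(size));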
* The pages allocated by kmem_alloc_attr() and kmem_malloc() are unmanaged. (Alan Cox, 2010-05-03; 2 files, -8/+0)

  Consequently, neither the page lock nor the page queues lock is needed to unwire and free them.

  Notes: svn path=/head/; revision=207552
* Assert that the page queues lock is held in vm_page_remove() and vm_page_unwire() only if the page is managed, i.e., pageable. (Alan Cox, 2010-05-03; 1 file, -2/+4)

  Notes: svn path=/head/; revision=207551
* Add page lock assertions where we access the page's hold_count. (Alan Cox, 2010-05-02; 1 file, -0/+3)

  Notes: svn path=/head/; revision=207544
* Eliminate an assignment that was made redundant by r207410. (Alan Cox, 2010-05-02; 1 file, -2/+0)

  Notes: svn path=/head/; revision=207541
* Defer the acquisition of the page and page queues locks in vm_pageout_object_deactivate_pages(). (Alan Cox, 2010-05-02; 1 file, -8/+8)

  Notes: svn path=/head/; revision=207540