path: root/sys/vm/vm_page.h
* Refactor the code that performs physically contiguous memory allocation, yielding a new public interface, vm_page_alloc_contig(). (Alan Cox, 2011-11-16; 1 file, -1/+3)

  This new function addresses some of the limitations of the current interfaces, contigmalloc() and kmem_alloc_contig(). For example, the physically contiguous memory that is allocated with those interfaces can only be allocated to the kernel vm object and must be mapped into the kernel virtual address space. It also provides functionality that vm_phys_alloc_contig() doesn't, such as wiring the returned pages. Moreover, unlike that function, it respects the low water marks on the paging queues and wakes up the page daemon when necessary. That said, at present, this new function can't be applied to all types of vm objects. However, that restriction will be eliminated in the coming weeks.

  From a design standpoint, this change also addresses an inconsistency between vm_phys_alloc_contig() and the other vm_phys_alloc*() functions. Specifically, vm_phys_alloc_contig() manipulated vm_page fields that other functions in vm/vm_phys.c didn't. Moreover, vm_phys_alloc_contig() knew about vnodes and reservations. Now, vm_page_alloc_contig() is responsible for these things.

  Reviewed by: kib
  Discussed with: jhb
  Notes: svn path=/head/; revision=227568
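  As a concrete illustration, here is a minimal sketch of how a caller might use the new interface to obtain wired, physically contiguous pages below 4 GB. The argument order follows the vm_page_alloc_contig() prototype as best it can be reconstructed from this entry; treat the exact parameter list, and the helper name, as assumptions to verify against sys/vm/vm_page.h.

      #include <vm/vm.h>
      #include <vm/vm_param.h>
      #include <vm/vm_object.h>
      #include <vm/vm_page.h>

      /*
       * Sketch: allocate 4 wired, physically contiguous pages below
       * 4 GB, resident in "obj" starting at index "pidx".  The object
       * lock is expected to be held by the caller.  Parameter order
       * is assumed from the r227568 prototype.
       */
      static vm_page_t
      alloc_dma_run(vm_object_t obj, vm_pindex_t pidx)
      {
              return (vm_page_alloc_contig(obj, pidx,
                  VM_ALLOC_NORMAL | VM_ALLOC_WIRED,
                  4,                        /* npages */
                  0,                        /* low: no lower bound */
                  0xffffffffUL,             /* high: stay below 4 GB */
                  PAGE_SIZE,                /* alignment of first page */
                  0,                        /* boundary: no crossing limit */
                  VM_MEMATTR_DEFAULT));     /* default cache attributes */
      }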
* Remove redundant definitions. The chunk was missed from r227102. (Konstantin Belousov, 2011-11-05; 1 file, -10/+0)

  MFC after: 2 weeks
  Notes: svn path=/head/; revision=227103
* Provide typedefs for the type of bit mask for the page bits. (Konstantin Belousov, 2011-11-05; 1 file, -15/+17)

  Use the defined types instead of int when manipulating masks. Supposedly, it could fix support for 32KB page size in the machine-independent VM layer.

  Reviewed by: alc
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=227102
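  For context, the dirty and valid masks carry one bit per DEV_BSIZE (512-byte) chunk of the page, so the mask type must scale with PAGE_SIZE; a 32KB page needs a 64-bit mask, which is why a plain int was insufficient. A sketch of the sizing, assuming the type and macro names used by this revision:

      /* One dirty/valid bit per 512-byte (DEV_BSIZE) chunk of a page. */
      #if PAGE_SIZE == 4096
      typedef uint8_t  vm_page_bits_t;        /* 8 chunks  */
      #define VM_PAGE_BITS_ALL 0xffu
      #elif PAGE_SIZE == 8192
      typedef uint16_t vm_page_bits_t;        /* 16 chunks */
      #define VM_PAGE_BITS_ALL 0xffffu
      #elif PAGE_SIZE == 16384
      typedef uint32_t vm_page_bits_t;        /* 32 chunks */
      #define VM_PAGE_BITS_ALL 0xffffffffu
      #elif PAGE_SIZE == 32768
      typedef uint64_t vm_page_bits_t;        /* 64 chunks */
      #define VM_PAGE_BITS_ALL 0xffffffffffffffffu
      #endif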
* Fix grammar. (Konstantin Belousov, 2011-09-28; 1 file, -1/+1)

  Submitted by: bf
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=225843
* Use the trick of performing the atomic operation on the containing aligned word to handle the dirty mask updates in vm_page_clear_dirty_mask(). (Konstantin Belousov, 2011-09-28; 1 file, -13/+13)

  Remove the vm page queue lock around the vm_page_dirty() call in vm_fault_hold(), the sole purpose of which was to protect dirty on architectures that do not provide short or byte-wide atomics.

  Reviewed by: alc, attilio
  Tested by: flo (sparc64)
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=225840
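  The trick itself is simple: find the naturally aligned 32-bit word that contains the narrow field and apply the atomic op to that word with a shifted mask. A simplified, hypothetical illustration (not the literal vm_page_clear_dirty_mask() body; big-endian layouts need an adjusted shift):

      #include <sys/param.h>
      #include <machine/atomic.h>

      /*
       * Hypothetical helper: clear "mask" bits in a 16-bit field
       * using only a 32-bit atomic, by operating on the aligned word
       * that contains the field.  Assumes little-endian byte order.
       */
      static void
      clear_short_bits_atomically(volatile uint16_t *fieldp, uint16_t mask)
      {
              volatile uint32_t *wordp;
              uintptr_t addr;

              addr = (uintptr_t)fieldp;
              wordp = (volatile uint32_t *)(addr & ~(uintptr_t)3);
              atomic_clear_32(wordp,
                  (uint32_t)mask << ((addr & 3) * NBBY));
      }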
* Use the explicitly-sized types for the dirty and valid masks. (Konstantin Belousov, 2011-09-28; 1 file, -8/+8)

  Requested by: attilio
  Reviewed by: alc
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=225838
* Split the vm_page flags PG_WRITEABLE and PG_REFERENCED into an atomic flags field. (Konstantin Belousov, 2011-09-06; 1 file, -15/+25)

  Updates to the atomic flags are performed using the atomic ops on the containing word, do not require any vm lock to be held, and are non-blocking. The vm_page_aflag_set(9) and vm_page_aflag_clear(9) functions are provided to modify aflags.

  Document the changes to the flags field to only require the page lock.

  Introduce the vm_page_reference(9) function to provide a stable KPI and KBI for filesystems like tmpfs and zfs which need to mark a page as referenced.

  Reviewed by: alc, attilio
  Tested by: marius, flo (sparc64); andreast (powerpc, powerpc64)
  Approved by: re (bz)
  Notes: svn path=/head/; revision=225418
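  For example, a filesystem that wants to note a reference no longer needs any VM lock. A hypothetical caller, assuming PGA_REFERENCED is the post-split flag name:

      #include <vm/vm_page.h>

      /*
       * Hypothetical caller: record that "m" was referenced.  The
       * update is a lock-free atomic op on the word containing the
       * page's atomic flags field.
       */
      static void
      fs_note_reference(vm_page_t m)
      {
              /* Stable KPI/KBI wrapper intended for filesystems: */
              vm_page_reference(m);

              /* The underlying primitive, shown for comparison: */
              /* vm_page_aflag_set(m, PGA_REFERENCED); */
      }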
* Move the PG_UNMANAGED flag from m->flags to m->oflags, renaming the flag to VPO_UNMANAGED (and also making the flag protected by the vm object lock, instead of the vm page queue lock). (Konstantin Belousov, 2011-08-09; 1 file, -8/+9)

  Mark the fake pages with both PG_FICTITIOUS (as it is now) and VPO_UNMANAGED. As a consequence, pmap code now can use just VPO_UNMANAGED to decide whether the page is unmanaged.

  Reviewed by: alc
  Tested by: pho (x86, previous version), marius (sparc64), marcel (arm, ia64, powerpc), ray (mips)
  Sponsored by: The FreeBSD Foundation
  Approved by: re (bz)
  Notes: svn path=/head/; revision=224746
* Precisely document the synchronization rules for the page's dirty field. (Alan Cox, 2011-06-19; 1 file, -8/+25)

  (Saying that the lock on the object that the page belongs to must be held only represents one aspect of the rules.)

  Eliminate the use of the page queues lock for atomically performing read-modify-write operations on the dirty field when the underlying architecture supports atomic operations on char and short types.

  Document the fact that 32KB pages aren't really supported.

  Reviewed by: attilio, kib
  Notes: svn path=/head/; revision=223307
* Assert that the page is VPO_BUSY or the page's owner object is locked in vm_page_undirty(). (Konstantin Belousov, 2011-06-11; 1 file, -0/+9)

  The assertion is not precise because the VPO_BUSY owner is not tracked, so it does not catch the case when VPO_BUSY is owned by another thread.

  Reviewed by: alc
  Notes: svn path=/head/; revision=222992
* Eliminate duplication of the fake page code and zone by the device and sg pagers. (Alan Cox, 2011-03-11; 1 file, -0/+3)

  Reviewed by: jhb
  Notes: svn path=/head/; revision=219476
* Explicitly initialize the page's queue field to PQ_NONE instead of relying on PQ_NONE being zero. (Alan Cox, 2011-01-17; 1 file, -5/+5)

  Redefine PQ_NONE and PQ_COUNT so that a page queue isn't allocated for PQ_NONE.

  Reviewed by: kib@
  Notes: svn path=/head/; revision=217508
* Update a lock annotation on the page structure. (Alan Cox, 2011-01-16; 1 file, -1/+1)

  Notes: svn path=/head/; revision=217479
* Shift responsibility for synchronizing access to the page's act_count field to the object's lock. (Alan Cox, 2011-01-16; 1 file, -1/+1)

  Reviewed by: kib@
  Notes: svn path=/head/; revision=217478
* Implement and use a single optimized function for unholding a set of pages. (Alan Cox, 2010-12-17; 1 file, -0/+1)

  Reviewed by: kib@
  Notes: svn path=/head/; revision=216511
* Fix an issue noted by alc while reviewing r215938. (Jayachandran C., 2010-11-28; 1 file, -1/+1)

  The current implementation of vm_page_alloc_freelist() does not handle order > 0 correctly. Remove the order parameter from the function and use it only for order 0 pages.

  Submitted by: alc
  Notes: svn path=/head/; revision=215973
* Redo the page table page allocation on MIPS, as suggested by alc@. (Jayachandran C., 2010-07-21; 1 file, -0/+3)

  The UMA zone based allocation is replaced by a scheme that creates a new free page list for the KSEG0 region, and a new function in sys/vm that allocates pages from a specific free page list.

  This also fixes a race condition introduced by the UMA based page table page allocation code. Dropping the page queue and pmap locks before the call to uma_zfree, and re-acquiring them afterwards, will introduce a race condition (noted by alc@).

  The changes are:
  - Revert the earlier changes in MIPS pmap.c that added a UMA zone for page table pages.
  - Add a new freelist VM_FREELIST_HIGHMEM to MIPS vmparam.h for memory that is not directly mapped (in the 32bit kernel). Normal page allocations will first try the HIGHMEM freelist and then the default (direct mapped) freelist.
  - Add a new function 'vm_page_t vm_page_alloc_freelist(int flind, int order, int req)' to vm/vm_page.c to allocate a page from a specified freelist. The MIPS page table pages will be allocated using this function from the freelist containing direct mapped pages.
  - Move the page initialization code from vm_phys_alloc_contig() to a new function vm_page_alloc_init(), and use this function to initialize pages in vm_page_alloc_freelist() too.
  - Split the function vm_phys_alloc_pages(int pool, int order) to create vm_phys_alloc_freelist_pages(int flind, int pool, int order), and use this function from both vm_page_alloc_freelist() and vm_phys_alloc_pages().

  Reviewed by: alc
  Notes: svn path=/head/; revision=210327
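  A sketch of the pattern this enables, using the signature quoted in the list above (note that r215973, listed earlier, later dropped the order parameter; the flag combination shown is an assumption):

      #include <vm/vm_page.h>
      #include <machine/vmparam.h>

      /*
       * Sketch: allocate an order-0 page table page from the default
       * (direct-mapped) freelist, so it is usable without creating a
       * mapping.  Signature as quoted in this commit message.
       */
      static vm_page_t
      mips_ptpage_alloc(void)
      {
              return (vm_page_alloc_freelist(VM_FREELIST_DEFAULT, 0,
                  VM_ALLOC_NORMAL));
      }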
* Add support for the VM_ALLOC_COUNT() hint to vm_page_alloc(). (Alan Cox, 2010-07-09; 1 file, -1/+0)

  Consequently, the maintenance of vm_pageout_deficit can be localized to just two places: vm_page_alloc() and vm_pageout_scan().

  This change also corrects an off-by-one error in the maintenance of vm_pageout_deficit. Historically, the buffer cache functions, allocbuf() and vm_hold_load_pages(), have not taken into account that vm_page_alloc() already increments vm_pageout_deficit by one.

  Reviewed by: kib
  Notes: svn path=/head/; revision=209861
* Make the VM_ALLOC_RETRY flag mandatory for vm_page_grab(). (Konstantin Belousov, 2010-07-08; 1 file, -1/+1)

  Assert that the flag is always provided, and unconditionally retry after sleeping for the busy page or a failed allocation. The intent is to remove VM_ALLOC_RETRY eventually.

  Proposed and reviewed by: alc
  Notes: svn path=/head/; revision=209792
* Add the ability for the allocflag argument of vm_page_grab() to specify the increment of vm_pageout_deficit when sleeping due to page shortage. (Konstantin Belousov, 2010-07-05; 1 file, -0/+5)

  Then, in allocbuf(), the code to allocate pages when extending a vmio buffer can be replaced by a call to vm_page_grab().

  Suggested and reviewed by: alc
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=209713
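  Combined with the VM_ALLOC_COUNT() hint described in the r209861 entry above, a hypothetical vmio-style caller might look like this (the exact flag encoding is an assumption drawn from these entries):

      #include <vm/vm_page.h>

      /*
       * Hypothetical: grab page "pidx" of a buffer's object while
       * hinting that "needed" further pages are coming, so a sleep on
       * page shortage credits vm_pageout_deficit accordingly.
       */
      static vm_page_t
      grab_for_vmio(vm_object_t obj, vm_pindex_t pidx, int needed)
      {
              return (vm_page_grab(obj, pidx,
                  VM_ALLOC_NORMAL | VM_ALLOC_RETRY |
                  VM_ALLOC_COUNT(needed)));
      }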
* Reimplement vm_object_page_clean(), using the fact that the vm object memq is ordered by page index. (Konstantin Belousov, 2010-07-04; 1 file, -1/+0)

  This greatly simplifies the implementation, since we no longer need to mark the pages with VPO_CLEANCHK to denote the progress. It is enough to remember the current position by index before dropping the object lock.

  Remove VPO_CLEANCHK and VM_PAGER_IGNORE_CLEANCHK as unused.

  Garbage-collect the vm.msync_flush_flags sysctl.

  Suggested and reviewed by: alc
  Tested by: pho
  Notes: svn path=/head/; revision=209686
* Introduce a helper function, vm_page_find_least(). (Konstantin Belousov, 2010-07-04; 1 file, -0/+1)

  Use it in several places that previously inlined the equivalent lookup.

  Reviewed by: alc
  Tested by: pho
  MFC after: 1 week
  Notes: svn path=/head/; revision=209685
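  The helper returns the resident page with the least pindex greater than or equal to the one given, which makes range walks cheap because the object's memq is kept sorted by pindex. A minimal sketch of the idiom (the walk body is hypothetical):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <vm/vm.h>
      #include <vm/vm_object.h>
      #include <vm/vm_page.h>

      /*
       * Sketch: visit every resident page of "object" whose pindex
       * lies in [start, end), in index order, with one tree lookup.
       */
      static void
      walk_range(vm_object_t object, vm_pindex_t start, vm_pindex_t end)
      {
              vm_page_t m;

              VM_OBJECT_LOCK_ASSERT(object, MA_OWNED);
              for (m = vm_page_find_least(object, start);
                  m != NULL && m->pindex < end;
                  m = TAILQ_NEXT(m, listq)) {
                      /* ... operate on m ... */
              }
      }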
* With the demise of page coloring, the page queue macros no longer serve any useful purpose. Eliminate them. (Alan Cox, 2010-07-02; 1 file, -12/+0)

  Reviewed by: kib
  Notes: svn path=/head/; revision=209647
* Introduce vm_page_next() and vm_page_prev(), and use them in vm_pageout_clean(). (Alan Cox, 2010-06-21; 1 file, -0/+2)

  When iterating over a range of pages, these functions can be cheaper than vm_page_lookup() because their implementation takes advantage of the vm_object's memq being ordered.

  Reviewed by: kib@
  MFC after: 3 weeks
  Notes: svn path=/head/; revision=209407
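  A sketch of the kind of clustering loop this enables; vm_page_next() is assumed to return the resident page at m->pindex + 1, or NULL, without a per-step lookup (the helper shown is hypothetical):

      #include <vm/vm_page.h>

      /*
       * Sketch: gather up to "max" consecutive resident pages
       * following "m", as a write-clustering pass might, stepping
       * along the ordered memq instead of calling vm_page_lookup()
       * once per index.
       */
      static int
      gather_forward(vm_page_t m, vm_page_t *run, int max)
      {
              vm_page_t p;
              int n;

              n = 0;
              for (p = vm_page_next(m); p != NULL && n < max;
                  p = vm_page_next(p))
                      run[n++] = p;
              return (n);
      }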
* Reduce the scope of the page queues lock and the number of PG_REFERENCED changes in vm_pageout_object_deactivate_pages(). (Alan Cox, 2010-06-10; 1 file, -0/+3)

  Simplify this function's inner loop using TAILQ_FOREACH(), and shorten some of its overly long lines. Update a stale comment.

  Assert that PG_REFERENCED may be cleared only if the object containing the page is locked. Add a comment documenting this.

  Assert that a caller to vm_page_requeue() holds the page queues lock, and assert that the page is on a page queue.

  Push down the page queues lock into pmap_ts_referenced() and pmap_page_exists_quick(). (As of now, there are no longer any pmap functions that expect to be called with the page queues lock held.)

  Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever be passed an unmanaged page. Assert this rather than returning "0" and "FALSE" respectively.

  ARM:
  Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().
  Push down the page queues lock inside of pmap_clearbit(), simplifying pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write(). Additionally, this allows for avoiding the acquisition of the page queues lock in some cases.

  PowerPC/AIM:
  moea*_page_exists_quick() and moea*_page_wired_mappings() will never be called before pmap initialization is complete. Therefore, the check for moea_initialized can be eliminated.
  Push down the page queues lock inside of moea*_clear_bit(), simplifying moea*_clear_modify() and moea*_clear_reference().
  The last parameter to moea*_clear_bit() is never used. Eliminate it.

  PowerPC/BookE:
  Simplify mmu_booke_page_exists_quick()'s control flow.

  Reviewed by: kib@
  Notes: svn path=/head/; revision=208990
* When I pushed down the page queues lock into pmap_is_modified(), I created an ordering dependence: a pmap operation that clears PG_WRITEABLE and calls vm_page_dirty() must perform the call first. (Alan Cox, 2010-05-29; 1 file, -2/+2)

  Otherwise, pmap_is_modified() could return FALSE without acquiring the page queues lock because the page is not (currently) writeable, and the caller to pmap_is_modified() might believe that the page's dirty field is clear because it has not seen the effect of the vm_page_dirty() call.

  When I pushed down the page queues lock into pmap_is_modified(), I overlooked one place where this ordering dependence is violated: pmap_enter(). In a rare situation pmap_enter() can be called to replace a dirty mapping to one page with a mapping to another page. (I say rare because replacements generally occur as a result of a copy-on-write fault, and so the old page is not dirty.) This change delays clearing PG_WRITEABLE until after vm_page_dirty() has been called.

  Fixing the ordering dependency also makes it easy to introduce a small optimization: When pmap_enter() used to replace a mapping to one page with a mapping to another page, it freed the pv entry for the first mapping and later called the pv entry allocator for the new mapping. Now, pmap_enter() attempts to recycle the old pv entry, saving two calls to the pv entry allocator.

  There is no point in setting PG_WRITEABLE on unmanaged pages, so don't. Update a comment to reflect this.

  Tidy up the variable declarations at the start of pmap_enter().

  Notes: svn path=/head/; revision=208645
* Roughly half of a typical pmap_mincore() implementation is machine-independent code. Move this code into mincore(), and eliminate the page queues lock from pmap_mincore(). (Alan Cox, 2010-05-24; 1 file, -2/+4)

  Push down the page queues lock into pmap_clear_modify(), pmap_clear_reference(), and pmap_is_modified(). Assert that these functions are never passed an unmanaged page.

  Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: Contrary to what the comment says, pmap_mincore() is not simply an optimization. Without a complete pmap_mincore() implementation, mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can provide this information.

  Eliminate the page queues lock from vfs_setdirty_locked_object(), vm_pageout_clean(), vm_object_page_collect_flush(), and vm_object_page_clean(). Generally speaking, these are all accesses to the page's dirty field, which are synchronized by the containing vm object's lock.

  Reduce the scope of the page queues lock in vm_object_madvise() and vm_page_dontneed().

  Reviewed by: kib (an earlier version)
  Notes: svn path=/head/; revision=208504
* On entry to pmap_enter(), assert that the page is busy. (Alan Cox, 2010-05-16; 1 file, -2/+2)

  While I'm here, make the style of assertion used by pmap_enter() consistent across all architectures.

  On entry to pmap_remove_write(), assert that the page is neither unmanaged nor fictitious, since we cannot remove write access to either kind of page.

  With the push down of the page queues lock, pmap_remove_write() cannot condition its behavior on the state of the PG_WRITEABLE flag if the page is busy. Assert that the object containing the page is locked. This allows us to know that the page will neither become busy nor will PG_WRITEABLE be set on it while pmap_remove_write() is running.

  Correct a long-standing bug in vm_page_cowsetup(). We cannot possibly do copy-on-write-based zero-copy transmit on unmanaged or fictitious pages, so don't even try. Previously, the call to pmap_remove_write() would have failed silently.

  Notes: svn path=/head/; revision=208175
* Update synchronization annotations for struct vm_page. (Alan Cox, 2010-05-11; 1 file, -5/+8)

  Add a comment explaining how the setting of PG_WRITEABLE is synchronized.

  Notes: svn path=/head/; revision=207905
* Update the synchronization requirements for the page usage count. (Alan Cox, 2010-05-07; 1 file, -1/+1)

  Notes: svn path=/head/; revision=207740
* Update a comment to say that access to a page's wire count is now synchronized by the page lock. (Alan Cox, 2010-05-06; 1 file, -1/+1)

  Notes: svn path=/head/; revision=207706
* Push down the page queues lock inside of vm_page_free_toq() and pmap_page_is_mapped() in preparation for removing page queues locking around calls to vm_page_free(). (Alan Cox, 2010-05-06; 1 file, -1/+1)

  Setting aside the assertion that calls pmap_page_is_mapped(), vm_page_free_toq() now acquires and holds the page queues lock just long enough to actually add or remove the page from the paging queues.

  Update vm_page_unhold() to reflect the above change.

  Notes: svn path=/head/; revision=207702
* Acquire the page lock around all remaining calls to vm_page_free() on managed pages that didn't already have that lock held. (Alan Cox, 2010-05-05; 1 file, -1/+1)

  (Freeing an unmanaged page, such as the various pmaps use, doesn't require the page lock.)

  This allows a change in vm_page_remove()'s locking requirements. It now expects the page lock to be held instead of the page queues lock. Consequently, the page queues lock is no longer required at all by callers to vm_page_rename().

  Discussed with: kib
  Notes: svn path=/head/; revision=207669
* Update the locking comment above vm_page: (Kip Macy, 2010-05-01; 1 file, -9/+10)

  - re-assign page queue lock "Q"
  - assign page lock "P"
  - update several uncommented fields
  - observe that hold_count is now protected by the page lock "P"

  Notes: svn path=/head/; revision=207460
* On Alan's advice, rather than do a wholesale conversion on a single architecture from the page queue lock to a hashed array of page locks (based on a patch by Jeff Roberson), I've implemented page lock support in the MI code and have only moved vm_page's hold_count out from under the page queue mutex to the page lock. (Kip Macy, 2010-04-30; 1 file, -1/+28)

  This changes pmap_extract_and_hold on all pmaps.

  Supported by: Bitgravity Inc.
  Discussed with: alc, jeffr, and kib
  Notes: svn path=/head/; revision=207410
* Align and pad the page queue and free page queue locks so that the linker can't possibly place them together within the same cache line. (Alan Cox, 2009-10-04; 1 file, -2/+12)

  MFC after: 3 weeks
  Notes: svn path=/head/; revision=197750
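  The general technique, sketched with hypothetical names (CACHE_LINE_SIZE comes from the per-architecture headers; this is an illustration of the idea, not the literal change):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      /*
       * Illustration: __aligned() on the containing struct both
       * aligns each instance to a cache-line boundary and rounds
       * sizeof() up to a multiple of the line size, so two adjacent
       * locks can never share a line and ping-pong between CPUs.
       */
      struct padded_mtx {
              struct mtx      pm_mtx;
      } __aligned(CACHE_LINE_SIZE);

      static struct padded_mtx page_queue_lock;   /* illustrative */
      static struct padded_mtx free_queue_lock;   /* illustrative */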
* Eliminate a stale comment and the two remaining uses of the "register" keyword in this file. (Alan Cox, 2009-05-30; 1 file, -6/+2)

  Notes: svn path=/head/; revision=193126
* Eliminate page queues locking from bufdone_finish() through the following changes: (Alan Cox, 2009-05-13; 1 file, -0/+1)

  Rename vfs_page_set_valid() to vfs_page_set_validclean() to reflect what this function actually does. (Suggested by: tegge)

  Introduce a new version of vfs_page_set_valid() that does no more than what the function's name implies. Specifically, it does not update the page's dirty mask, and thus it does not require the page queues lock to be held.

  Update two of the three callers to the old vfs_page_set_valid() to call vfs_page_set_validclean() instead because they actually require the page's dirty mask to be cleared.

  Introduce vm_page_set_valid().

  Reviewed by: tegge
  Notes: svn path=/head/; revision=192034
* Extend the struct vm_page wire_count to u_int to avoid overflow of the counter, which may happen when too many sendfile(2) calls are being executed with this vnode [1]. (Konstantin Belousov, 2009-01-03; 1 file, -4/+4)

  To keep the size of the struct vm_page and the offsets of the fields accessed by out-of-tree modules, swap the types and locations of the wire_count and cow fields. Add safety checks to detect cow overflow and force fallback to the normal copy code for zero-copy sockets. [2]

  Reported by: Anton Yuzhaninov <citrin citrin ru> [1]
  Suggested by: alc [2]
  Reviewed by: alc
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=186719
* Move CTASSERT from header file to source file, per the implementation note now in the CTASSERT man page. (Ed Maste, 2008-09-26; 1 file, -7/+0)

  Notes: svn path=/head/; revision=183389
* Rename vm_pageq_requeue() to vm_page_requeue() on account of its recent migration to vm/vm_page.c. (Alan Cox, 2008-03-19; 1 file, -1/+1)

  Notes: svn path=/head/; revision=177414
* Almost seven years ago, vm/vm_page.c was split into three parts: vm/vm_contig.c, vm/vm_page.c, and vm/vm_pageq.c. (Alan Cox, 2008-03-18; 1 file, -4/+1)

  Today, vm/vm_pageq.c has withered to the point that it contains only four short functions, two of which are only used by vm/vm_page.c. Since I can't foresee any reason for vm/vm_pageq.c to grow, it is time to fold the remaining contents of vm/vm_pageq.c back into vm/vm_page.c.

  Add some comments. Rename one of the functions, vm_pageq_enqueue(), that is now static within vm/vm_page.c to vm_page_enqueue().

  Eliminate PQ_MAXCOUNT as it no longer serves any purpose.

  Notes: svn path=/head/; revision=177342
* Correct an error of omission in the reimplementation of the page cache: vm_object_page_remove() should convert any cached pages that fall within the specified range to free pages. (Alan Cox, 2007-09-27; 1 file, -1/+1)

  Otherwise, there could be a problem if a file is first truncated and then regrown. Specifically, some old data from prior to the truncation might reappear.

  Generalize vm_page_cache_free() to support the conversion of either a subset or the entirety of an object's cached pages.

  Reported by: tegge
  Reviewed by: tegge
  Approved by: re (kensmith)
  Notes: svn path=/head/; revision=172341
* Change the management of cached pages (PQ_CACHE) in two fundamental ways: (Alan Cox, 2007-09-25; 1 file, -15/+11)

  (1) Cached pages are no longer kept in the object's resident page splay tree and memq. Instead, they are kept in a separate per-object splay tree of cached pages. However, access to this new per-object splay tree is synchronized by the _free_ page queues lock, not to be confused with the heavily contended page queues lock. Consequently, a cached page can be reclaimed by vm_page_alloc(9) without acquiring the object's lock or the page queues lock.

  This solves a problem independently reported by tegge@ and Isilon. Specifically, they observed the page daemon consuming a great deal of CPU time because of pages bouncing back and forth between the cache queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of this problem turned out to be a deadlock avoidance strategy employed when selecting a cached page to reclaim in vm_page_select_cache(). However, the root cause was really that reclaiming a cached page required the acquisition of an object lock while the page queues lock was already held. Thus, this change addresses the problem at its root, by eliminating the need to acquire the object's lock.

  Moreover, keeping cached pages in the object's primary splay tree and memq was, in effect, optimizing for the uncommon case. Cached pages are reclaimed far, far more often than they are reactivated. Instead, this change makes reclamation cheaper, especially in terms of synchronization overhead, and reactivation more expensive, because reactivated pages will have to be reentered into the object's primary splay tree and memq.

  (2) Cached pages are now stored alongside free pages in the physical memory allocator's buddy queues, increasing the likelihood that large allocations of contiguous physical memory (i.e., superpages) will succeed.

  Finally, as a result of this change, long-standing restrictions on when and where a cached page can be reclaimed and returned by vm_page_alloc(9) are eliminated. Specifically, calls to vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and return a formerly cached page. Consequently, a call to malloc(9) specifying M_NOWAIT is less likely to fail.

  Discussed with: many over the course of the summer, including jeff@, Justin Husted @ Isilon, peter@, tegge@
  Tested by: an earlier version by kris@
  Approved by: re (kensmith)
  Notes: svn path=/head/; revision=172317
* Update a comment describing the page queues. (Alan Cox, 2007-07-13; 1 file, -6/+7)

  Approved by: re (hrs)
  Notes: svn path=/head/; revision=171420
* Enable the new physical memory allocator. (Alan Cox, 2007-06-16; 1 file, -45/+21)

  This allocator uses a binary buddy system with a twist. First and foremost, this allocator is required to support the implementation of superpages. As a side effect, it enables a more robust implementation of contigmalloc(9). Moreover, this reimplementation of contigmalloc(9) eliminates the acquisition of Giant by contigmalloc(..., M_NOWAIT, ...).

  The twist is that this allocator tries to reduce the number of TLB misses incurred by accesses through a direct map to small, UMA-managed objects and page table pages. Roughly speaking, the physical pages that are allocated for such purposes are clustered together in the physical address space. The performance benefits vary. In the most extreme case, a uniprocessor kernel running on an Opteron, I measured an 18% reduction in system time during a buildworld.

  This allocator does not implement page coloring. The reason is that superpages have much the same effect. The contiguous physical memory allocation necessary for a superpage is inherently colored.

  Finally, the one caveat is that this allocator does not effectively support prezeroed pages. I hope this is temporary. On i386, this is a slight pessimization. However, on amd64, the beneficial effects of the direct-map optimization outweigh the ill effects. I speculate that this is true in general of machines with a direct map.

  Approved by: re
  Notes: svn path=/head/; revision=170816
* Define every architecture as either VM_PHYSSEG_DENSE or VM_PHYSSEG_SPARSE depending on whether the physical address space is densely or sparsely populated with memory. (Alan Cox, 2007-05-05; 1 file, -2/+20)

  The effect of this definition is to determine which of two implementations of vm_page_array and PHYS_TO_VM_PAGE() is used. The legacy implementation is obtained by defining VM_PHYSSEG_DENSE, and a new implementation that trades off time for space is obtained by defining VM_PHYSSEG_SPARSE. For now, all architectures except for ia64 and sparc64 define VM_PHYSSEG_DENSE. Defining VM_PHYSSEG_SPARSE on ia64 allows the entirety of my Itanium 2's memory to be used. Previously, only the first 1 GB could be used. Defining VM_PHYSSEG_SPARSE on sparc64 allows USIIIi-based systems to boot without crashing.

  This change is a combination of Nathan Whitehorn's patch and my own work in perforce.

  Discussed with: kmacy, marius, Nathan Whitehorn
  PR: 112194
  Notes: svn path=/head/; revision=169291
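  The dense variant is essentially a constant-time array index, sketched below in simplified form (the real definitions live in vm/vm_page.c; treat this as a reconstruction):

      /*
       * Simplified sketch of the dense lookup: every physical page
       * frame from first_page onward has a vm_page, so translation
       * is a single array index.  The sparse variant instead searches
       * the table of populated physical segments, spending time to
       * avoid allocating vm_page structures for (possibly huge) holes
       * in the physical address space.
       */
      #ifdef VM_PHYSSEG_DENSE
      #define PHYS_TO_VM_PAGE(pa)  (&vm_page_array[atop(pa) - first_page])
      #endif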
* Change the way that unmanaged pages are created. (Alan Cox, 2007-02-25; 1 file, -1/+0)

  Specifically, immediately flag any page that is allocated to an OBJT_PHYS object as unmanaged in vm_page_alloc() rather than waiting for a later call to vm_page_unmanage(). This allows for the elimination of some uses of the page queues lock.

  Change the type of the kernel and kmem objects from OBJT_DEFAULT to OBJT_PHYS. This allows us to take advantage of the above change to simplify the allocation of unmanaged pages in kmem_alloc() and kmem_malloc().

  Remove vm_page_unmanage(). It is no longer used.

  Notes: svn path=/head/; revision=166964
* Change the page's CLEANCHK flag from being a page queue mutex synchronized flag to a vm object mutex synchronized flag. (Alan Cox, 2007-02-22; 1 file, -1/+1)

  Notes: svn path=/head/; revision=166882
* Replace PG_BUSY with VPO_BUSY. (Alan Cox, 2006-10-22; 1 file, -3/+3)

  In other words, changes to the page's busy flag, i.e., VPO_BUSY, are now synchronized by the per-vm object lock instead of the global page queues lock.

  Notes: svn path=/head/; revision=163604