path: root/sys/vm/vm_kern.c
author    Peter Wemm <peter@FreeBSD.org> 2000-05-21 12:50:18 +0000
committer Peter Wemm <peter@FreeBSD.org> 2000-05-21 12:50:18 +0000
commit   0385347c1aa38a1402192d53fc91313dadc37cec (patch)
tree     6184d41f83be8a3984813b76e0b83b7606d5322b /sys/vm/vm_kern.c
parent   4f91f96d9057ed01674b8d37366903e85af27306 (diff)
download src-0385347c1aa38a1402192d53fc91313dadc37cec.tar.gz
         src-0385347c1aa38a1402192d53fc91313dadc37cec.zip
Implement an optimization of the VM<->pmap API. Pass vm_page_t's directly
to various pmap_*() functions instead of looking up the physical address
and passing that. In many cases, the first thing the pmap code was doing
was going to a lot of trouble to get back the original vm_page_t, or its
shadow pv_table entry.

Inspired by: John Dyson's 1998 patches.

Also:

Eliminate pv_table as a separate thing and build it into a machine-dependent
part of vm_page_t. This eliminates having a separate set of structures that
shadow each other in a 1:1 fashion and that we often went to a lot of trouble
to translate between. (See above.) This happens to save 4 bytes of physical
memory for each page in the system. (8 bytes on the Alpha.)

Eliminate the use of the phys_avail[] array to determine if a page is
managed (i.e., it has pv_entries etc). Store this information in a flag.
Things like device_pager set it because they create vm_page_t's on the fly
that do not have pv_entries. This makes it easier to "unmanage" a page of
physical memory (this will be taken advantage of in subsequent commits).

Add a function to add a new page to the freelist. This could be used for
reclaiming the previously wasted pages left over from preloaded loader(8)
files.

Reviewed by: dillon
Notes: svn path=/head/; revision=60755
Diffstat (limited to 'sys/vm/vm_kern.c')
-rw-r--r-- sys/vm/vm_kern.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/sys/vm/vm_kern.c b/sys/vm/vm_kern.c
index 9b8584cb3d98..ee9e7e424295 100644
--- a/sys/vm/vm_kern.c
+++ b/sys/vm/vm_kern.c
@@ -399,8 +399,7 @@ retry:
/*
* Because this is kernel_pmap, this call will not block.
*/
- pmap_enter(kernel_pmap, addr + i, VM_PAGE_TO_PHYS(m),
- VM_PROT_ALL, 1);
+ pmap_enter(kernel_pmap, addr + i, m, VM_PROT_ALL, 1);
vm_page_flag_set(m, PG_MAPPED | PG_WRITEABLE | PG_REFERENCED);
}
vm_map_unlock(map);