author	Mark Johnston <markj@FreeBSD.org>	2021-08-10 20:52:36 +0000
committer	Mark Johnston <markj@FreeBSD.org>	2021-08-11 01:27:53 +0000
commit	8978608832c28572bbf5adadb9cfb077e8f15255 (patch)
tree	3f085afcbbee03e9f73ae749eba50c36bdbbfbf3 /sys/vm
parent	5dda15adbcf7b650fb69b5259090b16c66d1cf1a (diff)
amd64: Populate the KMSAN shadow maps and integrate with the VM
- During boot, allocate PDP pages for the shadow maps. The region above
  KERNBASE is currently not shadowed.
- Create a dummy shadow for the vm page array. For now, this array is not
  protected by the shadow map to help reduce kernel memory usage.
- Grow shadows when growing the kernel map (a sketch of this hook follows
  below).
- Increase the default kernel stack size when KMSAN is enabled. As with
  KASAN, sanitizer instrumentation appears to create stack frames large
  enough that the default value is not sufficient.
- Disable UMA's use of the direct map when KMSAN is configured. KMSAN
  cannot validate the direct map.
- Disable unmapped I/O when KMSAN is configured.
- Lower the limit on paging buffers when KMSAN is configured. Each buffer
  has a static MAXPHYS-sized allocation of KVA, which in turn eats
  2*MAXPHYS of space in the shadow map.

Reviewed by:	alc, kib
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D31295
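A note on the "grow shadows" item: the hook below is a hedged sketch, not
code from this commit (the view here is limited to sys/vm, so the amd64
pmap changes are not shown). Both the kmsan_shadow_map(addr, size) shape
and the simplified stand-in for pmap_growkernel() are assumptions.

/*
 * Hedged sketch: extending the KMSAN shadow alongside new kernel KVA.
 * kmsan_shadow_map() and the simplified grow logic are assumptions,
 * not code taken from this commit.
 */
#include <sys/param.h>
#include <sys/systm.h>
#ifdef KMSAN
#include <sys/msan.h>
#endif
#include <vm/vm.h>

static void
growkernel_sketch(vm_offset_t kernel_vm_end, vm_offset_t new_end)
{
	/* ... allocate and enter page table pages for the new range ... */

#ifdef KMSAN
	/*
	 * Every byte of freshly mapped kernel KVA needs backing in the
	 * shadow (and origin) maps, or instrumented accesses to it
	 * cannot be checked.  Assumed hook shape: (base, length).
	 */
	kmsan_shadow_map(kernel_vm_end, new_end - kernel_vm_end);
#endif
}

The point is only the ordering: the shadow must cover a KVA range before
instrumented code is allowed to touch it.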
Diffstat (limited to 'sys/vm')
-rw-r--r--	sys/vm/vm_pager.c	9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/sys/vm/vm_pager.c b/sys/vm/vm_pager.c
index 640e3d977e99..69f0a2dc2bbb 100644
--- a/sys/vm/vm_pager.c
+++ b/sys/vm/vm_pager.c
@@ -217,6 +217,15 @@ pbuf_zsecond_create(const char *name, int max)
 
 	zone = uma_zsecond_create(name, pbuf_ctor, pbuf_dtor, NULL, NULL,
 	    pbuf_zone);
+
+#ifdef KMSAN
+	/*
+	 * Shrink the size of the pbuf pools if KMSAN is enabled, otherwise the
+	 * shadows of the large KVA allocations eat up too much memory.
+	 */
+	max /= 3;
+#endif
+
 	/*
 	 * uma_prealloc() rounds up to items per slab. If we would prealloc
 	 * immediately on every pbuf_zsecond_create(), we may accumulate too
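
Why the divisor is 3: per the commit message, each paging buffer pins
MAXPHYS bytes of KVA, and KMSAN's metadata (shadow plus origin maps) adds
another 2*MAXPHYS, tripling the per-buffer cost. Below is a minimal
userland sketch of that accounting, with MAXPHYS hard-coded to an assumed
1 MB default purely for illustration.

/*
 * Back-of-the-envelope check of the "max /= 3" divisor.  MAXPHYS is
 * hard-coded to an assumed amd64 default here, for illustration only.
 */
#include <stdio.h>

#define	MAXPHYS	(1024UL * 1024UL)	/* assumed: 1 MB default */

int
main(void)
{
	unsigned long kva = MAXPHYS;		/* static KVA per pbuf */
	unsigned long meta = 2UL * MAXPHYS;	/* KMSAN shadow + origin */

	/* With KMSAN, each pbuf costs 3*MAXPHYS instead of MAXPHYS. */
	printf("per-pbuf: %lu bytes = %lu x MAXPHYS\n",
	    kva + meta, (kva + meta) / MAXPHYS);
	return (0);
}

Cutting max to a third therefore keeps the pbuf pools' aggregate footprint
close to what an unsanitized kernel would use.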