- Remove kick_proc0().
- Make the return type of sleepq_broadcast(), sleepq_signal(), etc.,
void.
- Fix up callers.
Tested by: pho
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D46128
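
An illustrative sketch of the callers' side of the change, modeled on
wakeup(9) (the real declarations live in sys/sleepqueue.h):

    /*
     * The int return of sleepq_broadcast()/sleepq_signal() existed to
     * tell callers whether kick_proc0() had to wake the swapper.  With
     * swap-out gone there is nothing to report, so they return void.
     */
    void
    wakeup(const void *ident)
    {
            sleepq_lock(ident);
            sleepq_broadcast(ident, SLEEPQ_SLEEP, 0, 0);
            sleepq_release(ident);
    }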
- The kernel stack objects do not need to be pageable, so use OBJT_PHYS
objects instead. The main difference is that mappings do not require
PV entries.
- Make some externally visible functions, relating to kernel thread
stack internals, private to vm_glue.c, as their external consumers are
now gone.
Tested by: pho
Reviewed by: alc, kib
Differential Revision: https://reviews.freebsd.org/D46119
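
An illustrative sketch of the object-type switch (vm_object_allocate()
and the OBJT_* constants are real; the call site is condensed):

    /* Before: a pageable object, whose mappings need PV entries. */
    kstack_object = vm_object_allocate(OBJT_SWAP,
        atop(VM_MAX_KERNEL_ADDRESS));

    /* After: an OBJT_PHYS object.  Its pages are unmanaged, so the
     * mappings require no PV entries; kernel stacks must stay
     * resident anyway, so pageability buys nothing. */
    kstack_object = vm_object_allocate(OBJT_PHYS,
        atop(VM_MAX_KERNEL_ADDRESS));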
After mi_startup() finishes, thread0 becomes the "swapper", whose
responsibility is to swap threads back in on demand. Now that threads
can't be swapped out, there is no use for this thread. Just sleep
forever once sysinits are finished; thread_exit() doesn't work because
thread0 is allocated statically. The thread could be repurposed if that
would be useful.
Tested by: pho
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D46113
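
A sketch of the resulting "park forever" tail, assuming a hypothetical
shape for the function (tsleep(9) with a zero timeout never returns
unless awakened, and nothing wakes thread0 again):

    static void
    swapper(void)
    {
            /* mi_startup() has finished; there is nothing to swap in. */
            for (;;)
                    tsleep(&thread0, PVM, "parked", 0);
    }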
Update pagesizes[] to include the L3 ATTR_CONTIGUOUS (L3C) page size,
which is 64KB when the base page size is 4KB and 2MB when the base page
size is 16KB.
Add support for L3C pages to shm_create_largepage().
Add support for creating L3C page mappings to pmap_enter(psind=1).
Add support for reporting L3C page mappings to mincore(2) and
procstat(8).
Update vm_fault_soft_fast() and vm_fault_populate() to handle multiple
superpage sizes.
Declare arm64 as supporting two superpage reservation sizes, and
simulate two superpage reservation sizes, updating the vm_page's psind
field to reflect the correct page size from pagesizes[]. (The next
patch in this series will replace this simulation. This patch is
already big enough.)
Co-authored-by: Eliot Solomon <ehs3@rice.edu>
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D45766
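
A userland sketch of how the new pagesizes[] entry surfaces through
getpagesizes(3); the expected sizes are for a 4KB-base arm64 kernel:

    #include <sys/param.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
            size_t ps[MAXPAGESIZES];
            int i, n;

            /* Expect 4K (base), 64K (L3C), and 2M (L2). */
            n = getpagesizes(ps, MAXPAGESIZES);
            for (i = 0; i < n; i++)
                    printf("pagesize[%d] = %zu\n", i, ps[i]);
            return (0);
    }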
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D45156
vm_object_page_remove() wants to busy the page, but that won't work
here. (Kernel stack pages are always busy.)
Make the error handling path look more like vm_thread_stack_dispose().
Reported by: pho
Reviewed by: kib, bnovkov
Fixes: 7a79d0669761 ("vm: improve kstack_object pindex calculation to avoid pindex holes")
Differential Revision: https://reviews.freebsd.org/D45019
fork() may obtain a new thread in one of two ways: freshly allocated
from UMA, or cached in a freed proc that was itself just allocated from
UMA. In either case, KASAN
and KMSAN need to initialize some state; in particular they need to
initialize the shadow mapping of the new thread's stack.
This is done differently between KASAN and KMSAN, which is confusing.
This patch improves things a bit:
- Add a new thread_recycle() function, which moves all kernel stack
handling out of kern_fork.c, since it doesn't really belong there.
- Then, thread_alloc_stack() has only one local caller, so just inline
it.
- Avoid redundant shadow stack initialization: thread_alloc()
initializes the KMSAN shadow stack (via kmsan_thread_alloc()) even
though vm_thread_new() already did that.
- Add kasan_thread_alloc(), for consistency with kmsan_thread_alloc().
No functional change intended.
Reviewed by: khng
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D44891
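
A sketch of the symmetry this adds; the body shown is hypothetical
(kasan_mark(9) and the td_kstack fields are real):

    /* Mirrors kmsan_thread_alloc(): called when a thread, and hence a
     * kernel stack, is allocated or recycled. */
    void
    kasan_thread_alloc(struct thread *td)
    {
            if (td->td_kstack != 0)
                    kasan_mark((void *)td->td_kstack,
                        ptoa(td->td_kstack_pages),
                        ptoa(td->td_kstack_pages), 0);
    }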
This commit replaces the linear transformation of kernel virtual
addresses to kstack_object pindex values with a non-linear
scheme that circumvents physical memory fragmentation caused by
kernel stack guard pages. The new mapping scheme is used to
effectively "skip" guard pages and assign pindices for
non-guard pages in a contiguous fashion.
The new allocation scheme requires that all default-sized kstack KVAs
come from a separate, specially aligned region of the KVA space.
To make this work, this commit introduces a dedicated per-domain
kstack KVA arena used to allocate kernel stacks of the default size.
The behaviour on 32-bit platforms remains unchanged due to their
significantly smaller KVA space.
Aside from fulfilling the requirements imposed by the new scheme, a
separate kstack KVA arena facilitates superpage promotion in the rest
of the kernel and causes most kstacks to have guard pages at both ends.
Reviewed by: alc, kib, markj
Tested by: markj
Approved by: markj (mentor)
Differential Revision: https://reviews.freebsd.org/D38852
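
A sketch of the pindex arithmetic under the new scheme, assuming
fixed-size stack slots with the guard pages at the base of each slot
(kstack_base and the helper are illustrative; KSTACK_PAGES and
KSTACK_GUARD_PAGES are real constants):

    /*
     * Old, linear: pindex = atop(ks - VM_MIN_KERNEL_ADDRESS), which
     * leaves a pindex hole in kstack_object for every guard page.
     * New, non-linear: number the slots, so guard pages consume no
     * pindices and the object stays dense.
     */
    static vm_pindex_t
    vm_kstack_pindex(vm_offset_t ks)
    {
            vm_pindex_t slot;

            /* Which specially aligned stack slot contains ks? */
            slot = (ks - kstack_base) /
                ptoa(KSTACK_PAGES + KSTACK_GUARD_PAGES);
            /* Pindex of the slot's first non-guard page. */
            return (slot * KSTACK_PAGES);
    }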
Remove ancient SCCS tags from the tree via automated scripting, with two
minor fixups to keep things compiling. All the common forms in the tree
were removed with a perl script.
Sponsored by: Netflix
Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
Sponsored by: Rubicon Communications, LLC ("Netgate")
The arch required two-page alignment because a single TLB entry caches
two consecutive mappings.
Reviewed by: imp
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D33763
- s/maxiumum/maximum/
MFC after: 3 days
For now, just hook the allocation path: upon allocation, items are
marked as uninitialized (absent M_ZERO). Some zones are exempted from
this when it would otherwise raise false positives.
Use kmsan_orig() to update the origin map for UMA and malloc(9)
allocations. This allows KMSAN to print the return address when an
uninitialized UMA item is implicated in a report. For example:
panic: MSan: Uninitialized UMA memory from m_getm2+0x7fe
Sponsored by: The FreeBSD Foundation
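
A sketch of the hook placement; kmsan_mark(), kmsan_orig(), and the
constants are the real msan(9) KPI, while the fragment itself is
schematic:

    /* On allocation of a UMA item of `size` bytes with `flags`: */
    kmsan_mark(item, size, (flags & M_ZERO) != 0 ?
        KMSAN_STATE_INITED : KMSAN_STATE_UNINIT);
    /* Record the allocation origin so a later report can say, e.g.,
     * "Uninitialized UMA memory from m_getm2+0x7fe". */
    kmsan_orig(item, size, KMSAN_TYPE_UMA, KMSAN_RET_ADDR);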
Otherwise, a multithreaded parent process may trigger races in
vm_forkproc() if one thread calls rfork() with RFMEM set and another
calls rfork() without RFMEM.
Also simplify vm_forkproc() a bit: vmspace_unshare() already checks to
see if the address space is shared.
Reported by: syzbot+0aa7c2bec74c4066c36f@syzkaller.appspotmail.com
Reported by: syzbot+ea84cb06937afeae609d@syzkaller.appspotmail.com
Reviewed by: kib
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D30220
We allocate kernel stacks using a UMA cache zone. Cache zones have
KASAN disabled by default, but in this case it makes sense to enable it.
Reviewed by: andrew
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D29457
This is mostly mechanical except for vmspace_exit(). There, use the new
refcount_release_if_last() to avoid switching to vmspace0 unless other
processes are sharing the vmspace. In that case, upon switching to
vmspace0 we can unconditionally release the reference.
Remove the volatile qualifier from vm_refcnt now that accesses are
protected using refcount(9) KPIs.
Reviewed by: alc, kib, mmel
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D27057
Notes:
svn path=/head/; revision=367334
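
A condensed sketch of the vmspace_exit() pattern described above, with
p/td the exiting process and thread (refcount_release_if_last() and
refcount_release() are the real refcount(9) KPIs):

    struct vmspace *vm = p->p_vmspace;
    bool released;

    /* Avoid switching to vmspace0 unless the vmspace is shared. */
    released = refcount_release_if_last(&vm->vm_refcnt);
    if (!released) {
            /* Shared: borrow vmspace0, after which the release of
             * our reference can be unconditional. */
            p->p_vmspace = &vmspace0;
            pmap_activate(td);
            released = refcount_release(&vm->vm_refcnt);
    }
    if (released)
            vmspace_dofree(vm);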
Also add msleep flags argument to vm_wait_doms(9).
Reviewed by: markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D24652
Notes:
svn path=/head/; revision=365484
Previously we allocated a separate VM object for each kernel stack.
However, fully constructed kernel stacks are cached by UMA, so there is
no harm in using a single global object for all stacks. This reduces
memory consumption and makes it easier to define a memory allocation
policy for kernel stack pages, with the aim of reducing physical memory
fragmentation.
Add a global kstack_object, and use the stack KVA address to index into
the object like we do with kernel_object.
Reviewed by: kib
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D24473
Notes:
svn path=/head/; revision=360354
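
A sketch of the indexing rule, which follows the same convention as
kernel_object (the helper name is illustrative):

    /* With one global kstack_object, a stack's pages live at the
     * pindex of its KVA rather than at offset 0 of a private object. */
    static vm_pindex_t
    kstack_pindex(vm_offset_t ks)
    {
            return (atop(ks - VM_MIN_KERNEL_ADDRESS));
    }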
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D23847
Notes:
svn path=/head/; revision=358447
be able to guarantee that they can be reacquired without blocking.
Reviewed by: kib
Discussed with: markj
Differential Revision: https://reviews.freebsd.org/D23506
Notes:
svn path=/head/; revision=358098
directly. This improves API compliance, asserts, etc.
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D23283
Notes:
svn path=/head/; revision=357017
This covers all vm/* files.
Notes:
svn path=/head/; revision=356653
more consistent with other NUMA features, as UMA_ZONE_FIRSTTOUCH and
UMA_ZONE_ROUNDROBIN. The system will now select a default depending
on the kernel configuration. API users need only specify one if they want to
override the default.
Remove the UMA_XDOMAIN and UMA_FIRSTTOUCH kernel options and key only off
of NUMA. XDOMAIN is now fast enough in all cases to enable whenever NUMA
is.
Reviewed by: markj
Discussed with: rlibby
Differential Revision: https://reviews.freebsd.org/D22831
Notes:
svn path=/head/; revision=356351
Cache gets resized correctly, but sysctl reports the wrong number:
# sysctl vm.kstack_cache_size=512
vm.kstack_cache_size: 128 -> 128
patched:
vm.kstack_cache_size: 128 -> 512
Reviewed by: markj
Differential Revision: https://reviews.freebsd.org/D22717
Fixes: r355002 "Revise the page cache size policy."
Notes:
svn path=/head/; revision=355495
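
A condensed sketch of the class of fix involved (sysctl_handle_int()
and uma_zone_set_maxcache() are real; the handler body is illustrative):

    static int
    sysctl_kstack_cache_size(SYSCTL_HANDLER_ARGS)
    {
            int error, newsize;

            newsize = kstack_cache_size;
            error = sysctl_handle_int(oidp, &newsize, 0, req);
            if (error == 0 && req->newptr != NULL) {
                    /* The fix: store the new value so later reads
                     * report it, instead of only resizing the cache. */
                    kstack_cache_size = newsize;
                    uma_zone_set_maxcache(kstack_cache, newsize);
            }
            return (error);
    }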
tightening constraints on busy as a precursor to lockless page lookup and
should largely be a NOP for these cases.
Reviewed by: alc, kib, markj
Differential Revision: https://reviews.freebsd.org/D22611
Notes:
svn path=/head/; revision=355314
In r353734 the use of the page caches was limited to systems with a
relatively large amount of RAM per CPU. This was to mitigate some
issues reported with the system not able to keep up with memory pressure
in cases where it had been able to do so prior to the addition of the
direct free pool cache. This change re-enables those caches.
The change modifies uma_zone_set_maxcache(), which was introduced
specifically for the page cache zones. Rather than using it to limit
only the full bucket cache, have it also set uz_count_max to provide an
upper bound on the per-CPU cache size that is consistent with the number
of items requested. Remove its return value since it has no use.
Enable the page cache zones unconditionally, and limit them to 0.1% of
the domain's pages. The limit can be overridden by the
vm.pgcache_zone_max tunable as before.
Change the item size parameter passed to uma_zcache_create() to the
correct size, and stop setting UMA_ZONE_MAXBUCKET. This allows the page
cache buckets to be adaptively sized, like the rest of UMA's caches.
This also causes the initial bucket size to be small, so only systems
which benefit from large caches will get them.
Reviewed by: gallatin, jeff
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D22393
Notes:
svn path=/head/; revision=355002
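
A schematic sketch of the resulting zone setup; uma_zcache_create() and
uma_zone_set_maxcache() are real, while the callback and field names
are assumptions based on the description:

    /* Item size is now sizeof(struct vm_page), and UMA_ZONE_MAXBUCKET
     * is no longer set, so bucket sizes adapt like other zones'. */
    vmd->vmd_pgcache = uma_zcache_create("vm pgcache",
        sizeof(struct vm_page), NULL, NULL, NULL, NULL,
        vm_page_zone_import, vm_page_zone_release, vmd, UMA_ZONE_VM);

    /* Cap the cache at 0.1% of the domain's pages; the
     * vm.pgcache_zone_max tunable can override this. */
    uma_zone_set_maxcache(vmd->vmd_pgcache, vmd->vmd_page_count / 1000);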
Use __func__ to avoid this issue in the future.
Submitted by: Wuyang Chung <wuyang.chung1@gmail.com>
Reviewed by: markj, emaste
Obtained from: https://github.com/freebsd/freebsd/pull/410
Notes:
svn path=/head/; revision=352504
- VM_ALLOC_NOCREAT will grab without creating a page.
- vm_page_grab_valid() will grab and page in if necessary.
- vm_page_busy_acquire() automates some busy acquire loops.
Discussed with: alc, kib, markj
Tested by: pho (part of larger branch)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D21546
Notes:
svn path=/head/; revision=352176
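
A schematic sketch of the new helpers in use, with the object
write-locked by the caller (the KPIs are real):

    vm_page_t m;
    int rv;

    /* VM_ALLOC_NOCREAT: grab the page only if it already exists. */
    m = vm_page_grab(object, pindex, VM_ALLOC_NOCREAT);
    if (m != NULL)
            vm_page_xunbusy(m);

    /* vm_page_grab_valid(): grab, paging in if necessary. */
    rv = vm_page_grab_valid(&m, object, pindex, VM_ALLOC_NORMAL);
    if (rv != VM_PAGER_OK)
            return (EIO);
    vm_page_xunbusy(m);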
There are several mechanisms by which a vm_page reference is held,
preventing the page from being freed back to the page allocator. In
particular, holding the page's object lock is sufficient to prevent the
page from being freed; holding the busy lock or a wiring is sufficient as
well. These references are protected by the page lock, which must
therefore be acquired for many per-page operations. This results in
false sharing since the page locks are external to the vm_page
structures themselves and each lock protects multiple structures.
Transition to using an atomically updated per-page reference counter.
The object's reference is counted using a flag bit in the counter. A
second flag bit is used to atomically block new references via
pmap_extract_and_hold() while removing managed mappings of a page.
Thus, the reference count of a page is guaranteed not to increase if the
page is unbusied, unmapped, and the object's write lock is held. As
a consequence of this, the page lock no longer protects a page's
identity; operations which move pages between objects are now
synchronized solely by the objects' locks.
The vm_page_wire() and vm_page_unwire() KPIs are changed. The former
requires that either the object lock or the busy lock is held. The
latter no longer has a return value and may free the page if it releases
the last reference to that page. vm_page_unwire_noq() behaves the same
as before; the caller is responsible for checking its return value and
freeing or enqueuing the page as appropriate. vm_page_wire_mapped() is
introduced for use in pmap_extract_and_hold(). It fails if the page is
concurrently being unmapped, typically triggering a fallback to the
fault handler. vm_page_wire() no longer requires the page lock and
vm_page_unwire() now internally acquires the page lock when releasing
the last wiring of a page (since the page lock still protects a page's
queue state). In particular, synchronization details are no longer
leaked into the caller.
The change excises the page lock from several frequently executed code
paths. In particular, vm_object_terminate() no longer bounces between
page locks as it releases an object's pages, and direct I/O and
sendfile(SF_NOCACHE) completions no longer require the page lock. In
these latter cases we now get linear scalability in the common scenario
where different threads are operating on different files.
__FreeBSD_version is bumped. The DRM ports have been updated to
accommodate the KPI changes.
Reviewed by: jeff (earlier version)
Tested by: gallatin (earlier version), pho
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D20486
Notes:
svn path=/head/; revision=352110
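
A schematic sketch of the new caller contract (the KPIs are real):

    /* Wiring now requires the object lock or the page busy lock,
     * not the page lock. */
    VM_OBJECT_WLOCK(object);
    vm_page_wire(m);
    VM_OBJECT_WUNLOCK(object);

    /* vm_page_unwire() no longer returns a value and may free the
     * page when the last reference drops: do not touch m afterwards. */
    vm_page_unwire(m, PQ_INACTIVE);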
uiomove_object_page() and exec_map_first_page() would previously wire a
page after having grabbed it. Ask vm_page_grab() to perform the wiring
instead: this removes some redundant code, and is cheaper in the case
where the requested page is not resident since the page allocator can be
asked to initialize the page as wired, whereas a separate vm_page_wire()
call requires the page lock.
In vm_imgact_hold_page(), use vm_page_unwire_noq() instead of
vm_page_unwire(PQ_NONE). The latter ensures that the page is dequeued
before returning, but this is unnecessary since vm_page_free() will
trigger a batched dequeue of the page.
Reviewed by: alc, kib
Tested by: pho (part of a larger patch)
MFC after: 1 week
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D21440
Notes:
svn path=/head/; revision=351569
The kernel thread stack zone performs first-touch allocations by
default, and must handle the case where the local memory domain
is empty. For most UMA zones this is handled in the keg layer,
but cache zones currently must implement a policy for this case.
Simply use a round-robin policy if UMA_ANYDOMAIN is passed.
Reported and tested by: bcran
Reviewed by: kib
Sponsored by: The FreeBSD Foundation
Notes:
svn path=/head/; revision=351496
and more statistics.
Reviewed by: kib, markj
Tested by: pho
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D20931
Notes:
svn path=/head/; revision=350663
The hold_count and wire_count fields of struct vm_page are separate
reference counters with similar semantics. The remaining essential
differences are that holds are not counted as a reference with respect
to LRU, and holds have an implicit free-on-last unhold semantic whereas
vm_page_unwire() callers must explicitly determine whether to free the
page once the last reference to the page is released.
This change removes the KPIs which directly manipulate hold_count.
Functions such as vm_fault_quick_hold_pages() now return wired pages
instead. Since r328977 the overhead of maintaining LRU for wired pages
is lower, and in many cases vm_fault_quick_hold_pages() callers would
swap holds for wirings on the returned pages anyway, so with this change
we remove a number of page lock acquisitions.
No functional change is intended. __FreeBSD_version is bumped.
Reviewed by: alc, kib
Discussed with: jeff
Discussed with: jhb, np (cxgbe)
Tested by: pho (previous version)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D19247
Notes:
svn path=/head/; revision=349846
These calls are not the same in general: the former will dequeue the
page if it is enqueued, while the latter will just leave it alone. But,
all existing uses of the former apply to unmanaged pages, which are
never enqueued in the first place. No functional change intended.
Reviewed by: kib
MFC after: 1 week
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D20470
Notes:
svn path=/head/; revision=348785
Historically we have not distinguished between kernel wirings and user
wirings for accounting purposes. User wirings (via mlock(2)) were
subject to a global limit on the number of wired pages, so if large
swaths of physical memory were wired by the kernel, as happens with
the ZFS ARC among other things, the limit could be exceeded, causing
user wirings to fail.
The change adds a new counter, v_user_wire_count, which counts the
number of virtual pages wired by user processes via mlock(2) and
mlockall(2). Only user-wired pages are subject to the system-wide
limit which helps provide some safety against deadlocks. In
particular, while sources of kernel wirings typically support some
backpressure mechanism, there is no way to reclaim user-wired pages
short of killing the wiring process. The limit is exported as
vm.max_user_wired, renamed from vm.max_wired, and changed from u_int
to u_long.
The choice to count virtual user-wired pages rather than physical
pages was done for simplicity. There are mechanisms that can cause
user-wired mappings to be destroyed while maintaining a wiring of
the backing physical page; these make it difficult to accurately
track user wirings at the physical page layer.
The change also closes some holes which allowed user wirings to succeed
even when they would cause the system limit to be exceeded. For
instance, mmap() may now fail with ENOMEM in a process that has called
mlockall(MCL_FUTURE) if the new mapping would cause the user wiring
limit to be exceeded.
Note that bhyve -S is subject to the user wiring limit, which defaults
to 1/3 of physical RAM. Users that wish to exceed the limit must tune
vm.max_user_wired.
Reviewed by: kib, ngie (mlock() test changes)
Tested by: pho (earlier version)
MFC after: 45 days
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D19908
Notes:
svn path=/head/; revision=347532
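
A userland sketch of inspecting the new accounting.  vm.max_user_wired
is named in the message; exporting the counter as
vm.stats.vm.v_user_wire_count is an assumption:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            u_long maxw, userw;
            size_t len;

            len = sizeof(maxw);
            sysctlbyname("vm.max_user_wired", &maxw, &len, NULL, 0);
            len = sizeof(userw);
            sysctlbyname("vm.stats.vm.v_user_wire_count", &userw, &len,
                NULL, 0);
            printf("user-wired: %lu of %lu pages allowed\n", userw, maxw);
            return (0);
    }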
Before this change we had two flavours of vm_domainset iterators: "page"
and "malloc". The latter was only used for kmem_*() and hard-coded its
behaviour based on kernel_object's policy. Moreover, its use contained
a race similar to that fixed by r338755 since the kernel_object's
iterator was being run without the object lock.
In some cases it is useful to be able to explicitly specify a policy
(domainset) or policy+iterator (domainset_ref) when performing memory
allocations. To that end, refactor the vm_domainset_* KPI to permit
this, and get rid of the "malloc" domainset_iter KPI in the process.
Reviewed by: jeff (previous version)
Tested by: pho (part of a larger patch)
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17417
Notes:
svn path=/head/; revision=339661
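
A sketch of the explicit-policy form this refactoring enables; treat
the exact iterator signatures as assumptions recalled from
vm/vm_domainset.h, and kmem_malloc_domain() as the allocator of that
era:

    struct vm_domainset_iter di;
    vm_offset_t addr = 0;
    int domain, flags = M_NOWAIT;

    /* Iterate domains under an explicit round-robin policy rather
     * than hard-coding kernel_object's policy. */
    vm_domainset_iter_policy_init(&di, DOMAINSET_RR(), &domain, &flags);
    do {
            addr = kmem_malloc_domain(domain, PAGE_SIZE, flags);
            if (addr != 0)
                    break;
    } while (vm_domainset_iter_policy(&di, &domain) == 0);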
I committed some patches out of order and didn't build-test one of them.
Reported by: Jenkins, O. Hartmann <ohartmann@walstatt.org>
X-MFC with: r339601
Notes:
svn path=/head/; revision=339603
On NUMA systems, we would not swap in processes unless all domains
had some free pages. This is too conservative in general. Instead,
permit swapins so long as at least one domain has free pages, and add
a kernel stack NUMA policy which ensures that we will try to allocate
kernel stack pages from any domain.
Reported and tested by: pho, Jan Bramkamp <crest@bultmann.eu>
Reviewed by: alc, kib
Discussed with: jeff
MFC after: 3 days
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17304
Notes:
svn path=/head/; revision=339601
The current cache logic checks the total number of stacks in the kernel,
which even on small boxes significantly exceeds the 128 limit (e.g. an
8-way box with zfs has almost 800 stacks allocated).
Stacks are cached earlier for each main thread.
As a result the code is rarely executed, but when it is (on boxes like
the above) it always fails. Since no provisions are made for NUMA and
release time is approaching, just do a quick check to avoid acquiring the
lock.
Approved by: re (kib)
Notes:
svn path=/head/; revision=338802
other allowed domains if the requested domain is below the minimum paging
threshold. Block in fork only if all domains available to the forking
thread are below the severe threshold rather than any.
Submitted by: jeff
Reported by: mjg
Reviewed by: alc, kib, markj
Approved by: re (rgrimes)
Differential Revision: https://reviews.freebsd.org/D16191
Notes:
svn path=/head/; revision=338507
Exposing max_offset and min_offset defines in public headers is
causing clashes with variable names, for example when building QEMU.
Based on the submission by: royger
Reviewed by: alc, markj (previous version)
Sponsored by: The FreeBSD Foundation (kib)
MFC after: 1 week
Approved by: re (marius)
Differential revision: https://reviews.freebsd.org/D16881
Notes:
svn path=/head/; revision=338370
Assert that all such memory is unwired on return to usermode.
The count of the wired memory will be used to detect the copyout mode.
Tested by: pho (as part of the larger patch)
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Notes:
svn path=/head/; revision=331490
significant source of cache line contention from vm_page_alloc(). Use
accessors and vm_page_unwire_noq() so that the mechanism can be easily
changed in the future.
Reviewed by: markj
Discussed with: kib, glebius
Tested by: pho (earlier version)
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D14273
Notes:
svn path=/head/; revision=329187
global to per-domain state. Protect reservations with the free lock
from the domain that they belong to. Refactor to make vm domains more
of a first class object.
Reviewed by: markj, kib, gallatin
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D14000
Notes:
svn path=/head/; revision=328954
Notes:
svn path=/head/; revision=327860
Interesting cases, most likely from CMU Mach sources.
Notes:
svn path=/head/; revision=326403
Mainly focus on files that use BSD 3-Clause license.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
Special thanks to Wind River for providing access to "The Duke of
Highlander" tool: an older (2014) run over FreeBSD tree was useful as a
starting point.
Notes:
svn path=/head/; revision=326023
There is no NO_SWAPPING #ifdef left in the code.
Requested by: alc
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 3 weeks
Differential revision: https://reviews.freebsd.org/D12663
Notes:
svn path=/head/; revision=324795
This will allow its use in sendfile_swapin().
Reviewed by: alc, kib
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D11942
Notes:
svn path=/head/; revision=322405