From a045941bd2a73aafd6755461e61cd6a6e963f670 Mon Sep 17 00:00:00 2001
From: Mateusz Guzik
Date: Sun, 8 Apr 2018 16:34:10 +0000
Subject: locks: tweak backoff a little bit

Previous limits were chosen when locking primitives had spurious lock
accesses.

Flipping the starting point to 1 (or rather 2, since the first call
shifts it) provides a modest win under mild contention while not
hurting the worse cases. Tested on a bunch of one-, two- and
four-socket systems, both old and new (Westmere, Skylake, Threadripper
and others), by doing concurrent page faults, buildkernel/buildworld
and other workloads (although not all systems got all the tests).

The other change is the upper limit. It is semi-arbitrarily chosen, as
the previous one was getting out of hand on larger systems (e.g. a
128-thread one).

Note that backoff is fundamentally a speculative bandaid and this
change just makes it fit a little better. It remains completely
oblivious to hardware topology and the contention pattern. This is
being experimented with.
---
 sys/kern/subr_lock.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

(limited to 'sys/kern/subr_lock.c')

diff --git a/sys/kern/subr_lock.c b/sys/kern/subr_lock.c
index 01b82fe23d48..4ac4e6bf00d6 100644
--- a/sys/kern/subr_lock.c
+++ b/sys/kern/subr_lock.c
@@ -156,8 +156,10 @@ void
 lock_delay_default_init(struct lock_delay_config *lc)
 {
 
-	lc->base = lock_roundup_2(mp_ncpus) / 4;
-	lc->max = lc->base * 1024;
+	lc->base = 1;
+	lc->max = lock_roundup_2(mp_ncpus) * 256;
+	if (lc->max > 32768)
+		lc->max = 32768;
 }
 
 #ifdef DDB
--
cgit v1.2.3
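
As a rough illustration of how the new defaults behave, here is a minimal
user-space sketch; lock_roundup_2(), the doubling loop and the 64-CPU count
below are simplified stand-ins for the kernel's lock_delay() machinery, not
the actual implementation:

#include <stdio.h>

struct lock_delay_config {
	unsigned int base;
	unsigned int max;
};

/* Round up to the next power of two (stand-in for the kernel helper). */
static unsigned int
lock_roundup_2(unsigned int val)
{
	unsigned int res;

	for (res = 1; res < val; res <<= 1)
		;
	return (res);
}

/* Mirror of the patched defaults: base 1, max capped at 32768. */
static void
lock_delay_default_init(struct lock_delay_config *lc, unsigned int ncpus)
{

	lc->base = 1;
	lc->max = lock_roundup_2(ncpus) * 256;
	if (lc->max > 32768)
		lc->max = 32768;
}

int
main(void)
{
	struct lock_delay_config lc;
	unsigned int delay;
	int attempt;

	lock_delay_default_init(&lc, 64);

	/*
	 * Each failed lock attempt shifts the delay left by one, so the
	 * first call spins 2 iterations (hence "or rather 2" above) and
	 * growth stops once lc.max is reached.
	 */
	delay = lc.base;
	for (attempt = 1; attempt <= 20; attempt++) {
		delay <<= 1;
		if (delay > lc.max)
			delay = lc.max;
		printf("attempt %2d: spin %u iterations\n", attempt, delay);
	}
	return (0);
}

For comparison, the old formula on a 128-thread machine starts at
lock_roundup_2(128) / 4 = 32 (so 64 iterations after the first shift) and
caps at 32 * 1024 = 32768; the new one starts at 2 and reaches the same
32768 cap, but only after many more failed attempts.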