From f183910b97a54940f78db43341b3954ba8a2e32d Mon Sep 17 00:00:00 2001
From: Kip Macy
Date: Tue, 27 Feb 2007 06:42:05 +0000
Subject: Further improvements to LOCK_PROFILING:

- Fix a missing initialization in kern_rwlock.c that caused bogus times
  to be collected.
- Move updates to the lock hash to after the lock is released, for spin
  mutexes, sleep mutexes, and sx locks.
- Add a new kernel build option, LOCK_PROFILE_FAST: only update lock
  profiling statistics when an acquisition is contended. This reduces
  the overhead of LOCK_PROFILING to a 20%-25% increase in system time,
  which on "make -j8 kernel-toolchain" on a dual woodcrest is
  unmeasurable in terms of wall-clock time. By contrast, enabling lock
  profiling without LOCK_PROFILE_FAST yields a 5x-6x slowdown in
  wall-clock time.
---
 sys/kern/kern_rwlock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'sys/kern/kern_rwlock.c')

diff --git a/sys/kern/kern_rwlock.c b/sys/kern/kern_rwlock.c
index 18e9a544153c..9c7d4f8d8875 100644
--- a/sys/kern/kern_rwlock.c
+++ b/sys/kern/kern_rwlock.c
@@ -150,8 +150,8 @@ _rw_rlock(struct rwlock *rw, const char *file, int line)
 #ifdef SMP
 	volatile struct thread *owner;
 #endif
-	uint64_t waitstart;
-	int contested;
+	uint64_t waitstart = 0;
+	int contested = 0;
 	uintptr_t x;

 	KASSERT(rw_wowner(rw) != curthread,
--
cgit v1.2.3