| Commit message | Author | Age | Files | Lines |
| |
existing uses. Rename sysctl_handle_quad() to sysctl_handle_64().
Notes:
svn path=/head/; revision=217616
| |
are unnecessary but I'm leaving them in for the sake of avoiding confusion
(I confuse easily).
Submitted by: bde
Notes:
svn path=/head/; revision=215732
| |
more than 1s earlier. Prior to this commit, the computation of
th_scale * delta (which produces a 64-bit value equal to the time since
the last tc_windup call in units of 2^(-64) seconds) would overflow and
any complete seconds would be lost.
We fix this by repeatedly converting tc_frequency timecounter units into
one second at a time; this is not exactly correct, since it loses the NTP
adjustment, but if we find ourselves going more than 1s between clock
interrupts, losing a few seconds' worth of NTP adjustments is the least of
our problems...
Notes:
svn path=/head/; revision=215665
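The fix described in the entry above boils down to peeling off whole,
unadjusted seconds before the 64-bit th_scale * delta product can overflow.
A minimal C sketch of that idea, with illustrative names rather than the
actual kern_tc.c identifiers:

    #include <stdint.h>

    /*
     * Sketch only: consume complete seconds of counter ticks so that the
     * remaining delta is always less than one second before it is scaled
     * by the 2^-64 scale factor.  The NTP adjustment for the consumed
     * seconds is lost, as the commit message notes.
     */
    static void
    eat_complete_seconds(uint64_t *delta, uint64_t *seconds, uint64_t tc_frequency)
    {
        while (*delta >= tc_frequency) {
            *delta -= tc_frequency;   /* one full second of counter ticks */
            (*seconds)++;             /* advance coarse seconds directly */
        }
        /* the sub-second remainder is now safe to multiply by the scale */
    }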
| |
Notes:
svn path=/head/; revision=215304
| |
MFC after: 1 week
Notes:
svn path=/head/; revision=215283
| |
PR: kern/148710
Tested by: Chip Camden <sterling at camdensoftware.com>
MFC after: 1 week
Notes:
svn path=/head/; revision=215281
| |
is running on the "dummy" time counter. But to function properly in one-shot
mode, the event timer management code requires a working time counter. The
slow-moving "dummy" time counter delays the first hardclock() call by a few
seconds on my systems, even though timer interrupts were correctly kicking
the kernel. That causes a few seconds of delay during boot with one-shot
mode enabled.
To break this loop, explicitly call tc_windup() once during initialization
to let it switch to a real time counter.
Notes:
svn path=/head/; revision=212958
| |
to handle current timecounter wraps. Make kern_clocksource.c honor that
requirement by scheduling sleeps on the first CPU for no more than the
specified period. Allow other CPUs to sleep for up to 1/4 second (just in
case).
Notes:
svn path=/head/; revision=212603
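The requirement referred to above is essentially an upper bound on how long
the CPU responsible for tc_windup() may sleep before the hardware counter
wraps. A hedged sketch of that bound, with illustrative names (the real
kern_clocksource.c code is more involved):

    #include <stdint.h>

    /*
     * Sketch: the windup CPU must wake before the timecounter wraps once;
     * every other CPU is simply capped at 1/4 second.  Assumes a counter
     * mask small enough that the multiplication cannot overflow.
     */
    static uint64_t
    max_sleep_us(uint64_t counter_mask, uint64_t tc_frequency, int is_windup_cpu)
    {
        uint64_t wrap_us = counter_mask * 1000000ULL / tc_frequency;

        return (is_windup_cpu ? wrap_us : 250000ULL);
    }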
| |
The main goal of this is to generate timer interrupts only when there is
some work to do. When the CPU is busy, interrupts are generated at the full
rate of hz + stathz to fulfill scheduler and timekeeping requirements. But
when the CPU is idle, only the minimum set of interrupts needed to handle
scheduled callouts is executed (down to 8 interrupts per second per CPU now).
This significantly increases idle CPU sleep time, improving the effect of
static power-saving technologies. It should also reduce host CPU load on
virtualized systems when the guest system is idle.
There is a set of tunables, also available as writable sysctls, that control
the desired event timer subsystem behavior:
kern.eventtimer.timer - chooses the event timer hardware to use. On x86
there are up to 4 different kinds of timers. Depending on whether the chosen
timer is per-CPU, the behavior of the other options differs slightly.
kern.eventtimer.periodic - chooses between periodic and one-shot operation
mode. In periodic mode, the current timer hardware is taken as the only
source of time for time events. This mode is quite similar to the previous
kernel behavior. One-shot mode instead uses the currently selected time
counter hardware to schedule all needed events one by one and programs the
timer to generate an interrupt at exactly the specified time. The default
value depends on the chosen timer's capabilities, but one-shot mode is
preferred unless another mode is forced by the user or hardware.
kern.eventtimer.singlemul - in periodic mode, specifies how many times
higher the timer frequency should be, so that hardclock() and statclock()
events do not strictly alias. Default values are 2 and 4, but they can be
reduced to 1 if the extra interrupts are unwanted.
kern.eventtimer.idletick - makes each CPU receive every timer interrupt
regardless of whether it is busy or not. By default this option is disabled.
If the chosen timer is per-CPU and runs in periodic mode, this option has no
effect - all interrupts are generated anyway.
Since this patch modifies cpu_idle() on some platforms, I have also
refactored the x86 version. It now makes use of MONITOR/MWAIT instructions
(if supported) under a high sleep/wakeup rate, as a fast alternative to the
other methods. This allows the SMP scheduler to wake up sleeping CPUs much
faster without using an IPI, significantly increasing performance on some
highly task-switching loads.
Tested by: many (on i386, amd64, sparc64 and powerpc)
H/W donated by: Gheorghe Ardelean
Sponsored by: iXsystems, Inc.
Notes:
svn path=/head/; revision=212541
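As background for the one-shot mode described above, the essential scheduling
decision is to program the timer for the nearest pending event instead of a
fixed hz + stathz stream. A hedged sketch of that decision, with made-up
names (the real kern_clocksource.c logic also handles per-CPU timers,
statclock and profiling):

    #include <stdint.h>

    /*
     * Sketch: in one-shot mode the next interrupt is programmed for the
     * earlier of the next scheduled callout and an idle cap (roughly 1/8 s,
     * matching the "down to 8 interrupts per second per CPU" above).
     */
    static uint64_t
    next_timer_deadline(uint64_t now, uint64_t next_callout, uint64_t max_idle)
    {
        uint64_t limit = now + max_idle;

        return (next_callout < limit ? next_callout : limit);
    }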
| |
thing; it's also used to indicate that the comment should not be automatically
rewrapped.
Explained by: cperciva@
Notes:
svn path=/head/; revision=210226
| |
occurrences from sys/sys/ and sys/kern/.
Notes:
svn path=/head/; revision=210225
| |
was needed in a preliminary version of the patch, where the number of CPU
ticks was divided strictly by 16 seconds. The final code instead uses the
real interval duration, so a precise interval should not be important. At
the same time, aliasing issues around the second boundary cause false
positives, periodically logging useless "t_delta ... too long/short"
messages when HZ is set below 256.
Notes:
svn path=/head/; revision=209900
| |
There are only about 100 occurrences of the BSD-specific u_int*_t
datatypes in sys/kern. The ISO C99 integer types are used here more
often.
Notes:
svn path=/head/; revision=209390
| |
- Allow setting the format, resolution and accuracy of BPF time stamps per
listener. Previously, we were only able to use microtime(9). Now we can
set various resolutions and accuracies with the ioctl(2) BIOCSTSTAMP
command. Similarly, we can get the current resolution and accuracy with the
BIOCGTSTAMP command. Document all supported options in bpf(4) and their uses.
- Introduce a new time stamp 'struct bpf_ts' and header 'struct bpf_xhdr'.
The new time stamp has both 64-bit second and fractional parts. bpf_xhdr
has this time stamp instead of 'struct timeval' for bh_tstamp. The new
structures let us use a bh_tstamp of the same size on both 32-bit and 64-bit
platforms without adding additional shims for 32-bit binaries. On 64-bit
platforms, the size of the BPF header does not change compared to bpf_hdr as
its members are already all 64 bits long. On 32-bit platforms, the size may
increase by 8 bytes. For backward compatibility, struct bpf_hdr with
struct timeval is still the default header unless the new time stamp format
is explicitly requested. However, the behaviour may change in the future and
all relevant code is wrapped in "#ifdef BURN_BRIDGES" for now.
- Add experimental support for tagging mbufs with time stamps from a lower
layer, e.g., a device driver. Currently, mbuf_tags(9) is used to tag mbufs.
The time stamps must be uptime in 'struct bintime' format as binuptime(9)
and getbinuptime(9) do.
Reviewed by: net@
Notes:
svn path=/head/; revision=209216
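The shape of the new time stamp described above is simple enough to sketch;
the struct below is an illustrative approximation of the bpf(4) layout, not
a copy of the header:

    #include <stdint.h>

    /*
     * Sketch of the idea behind 'struct bpf_ts': a 64-bit seconds field and
     * a 64-bit fraction make bh_tstamp the same size on 32-bit and 64-bit
     * platforms, so no 32-bit compatibility shim is needed.
     */
    struct bpf_ts_sketch {
        int64_t  bt_sec;   /* seconds */
        uint64_t bt_frac;  /* fraction of a second */
    };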
| |
DTrace, kernel profiling, etc, can provide this information without
the overhead.
MFC after: 3 days
Suggested by: bde
Notes:
svn path=/head/; revision=190947
| |
query functions in the kernel, as these effectively serialize
parallel calls to the gettimeofday(2) system call, as well as
other kernel services that use timestamps.
Use the NetBSD version of the fix (kern_tc.c:1.32 by ad@) as
they have picked up our timecounter code and also ran into the
same problem.
Reported by: kris
Obtained from: NetBSD
MFC after: 3 days
Notes:
svn path=/head/; revision=189545
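For context on why locking hurts here: timecounter reads in kern_tc.c are
designed to be lock-free, retrying on a generation counter when tc_windup()
changes the timehands underneath the reader. A simplified sketch of that
general pattern (illustrative names; memory barriers and the real timehands
layout are omitted), not the specific change made in this revision:

    #include <stdint.h>

    struct th_sketch {
        volatile uint32_t gen;     /* bumped by the updater around writes */
        uint64_t          offset;  /* published time snapshot */
    };

    static uint64_t
    read_time(const struct th_sketch *th)
    {
        uint32_t gen;
        uint64_t t;

        do {
            gen = th->gen;
            t = th->offset;            /* snapshot taken without a lock */
        } while (gen == 0 || gen != th->gen);
        return (t);
    }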
| |
after each SYSINIT() macro invocation. This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.
MFC after: 1 month
Discussed with: imp, rink
Notes:
svn path=/head/; revision=177253
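An illustrative example of the style this commit adopts (the subsystem name
and init function are hypothetical):

    #include <sys/param.h>
    #include <sys/kernel.h>

    static void
    foo_init(void *arg __unused)
    {
        /* hypothetical initialization hook */
    }
    /* Note the explicit trailing semicolon after the macro invocation. */
    SYSINIT(foo_example, SI_SUB_DRIVERS, SI_ORDER_MIDDLE, foo_init, NULL);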
| |
Notes:
svn path=/head/; revision=176351
| |
Notes:
svn path=/head/; revision=175058
| |
sysctl_handle_int is not sizeof the int type you want to export.
The type must always be an int or an unsigned int.
Remove the instances where a sizeof(variable) is passed, to stop people
from accidentally cutting and pasting these examples.
In a few places sysctl_handle_int was being used on 64-bit types, which
would truncate the value to be exported. In these cases use
sysctl_handle_quad to export them and change the format to Q so that
sysctl(1) can still print them.
Notes:
svn path=/head/; revision=170289
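A hedged example of the pattern this commit moves to, with hypothetical
variable and node names: a 64-bit value goes through sysctl_handle_quad()
and is registered with format "Q", rather than being squeezed through
sysctl_handle_int().

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static uint64_t foo_bytes;  /* hypothetical 64-bit counter */

    static int
    sysctl_foo_bytes(SYSCTL_HANDLER_ARGS)
    {
        uint64_t val = foo_bytes;

        /* sysctl_handle_int() here would truncate the value to 32 bits */
        return (sysctl_handle_quad(oidp, &val, 0, req));
    }
    SYSCTL_PROC(_kern, OID_AUTO, foo_bytes, CTLTYPE_QUAD | CTLFLAG_RD,
        NULL, 0, sysctl_foo_bytes, "Q", "hypothetical 64-bit counter");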
| |
This change affects documentation and comments only,
no real code involved.
PR: misc/101245
Submitted by: Darren Pilgrim <darren pilgrim bitfreak org>
Tested by: md5(1)
MFC after: 1 week
Notes:
svn path=/head/; revision=160964
| |
frequency, quality and current value of each available time counter.
At the moment all of these are read-only, but it might make sense to
make some of these read-write in the future.
MFC after: 3 months
Notes:
svn path=/head/; revision=159669
| |
Notes:
svn path=/head/; revision=156753
| |
Notes:
svn path=/head/; revision=156485
| |
when the frequency increases.
Notes:
svn path=/head/; revision=156483
| |
Notes:
svn path=/head/; revision=156413
| |
Notes:
svn path=/head/; revision=156271
| |
Notes:
svn path=/head/; revision=156270
| |
cpu_ticks to the low side of PPM.
Notes:
svn path=/head/; revision=156205
| |
Keep accounting time in (per-cpu) cputicks and the statistics counts in
the thread, and summarize into struct proc at context switch.
Don't reach across CPUs in calcru().
Add code to calibrate the top speed of cpu_tickrate() for variable
cpu_tick hardware (like TSC on power-managed machines).
Don't enforce monotonicity (at least for now) in calcru. While the
calibrated cpu_tickrate ramps up it may not be true.
Use the 27MHz counter on i386/Geode.
Use the TSC on amd64 & i386 if present.
Use the tick counter on sparc64.
Notes:
svn path=/head/; revision=155534
| |
Keep track of time spent by the cpu in various contexts in units of
"cputicks" and scale to real-world microsec^H^H^H^H^H^H^H^Hclock_t
only when somebody wants to inspect the numbers.
For now "cputicks" are still derived from the current timecounter
and therefore things should by definition remain sensible also on
SMP machines. (The main reason for this first milestone commit is
to verify that hypothesis.)
On slower machines, the avoided multiplications to normalize timestamps
at every context switch come out as a 5-7% better score on the
unixbench/context1 microbenchmark. On more modern hardware no change
in performance is seen.
Notes:
svn path=/head/; revision=155444
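The deferred-scaling idea described above fits in a couple of lines; the
function below is an illustrative sketch, not the calcru() code:

    #include <stdint.h>

    /*
     * Sketch: charge threads in raw cpu ticks at context switch and only
     * convert to wall-clock units when somebody asks for the numbers.
     * The naive multiply here can overflow for very large tick counts;
     * the kernel uses a scaled multiplication instead.
     */
    static uint64_t
    cputicks_to_usec(uint64_t ticks, uint64_t cpu_tickrate_hz)
    {
        return (ticks * 1000000ULL / cpu_tickrate_hz);
    }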
| |
Discussed with: phk
Notes:
svn path=/head/; revision=150348
| |
GCC can properly handle forward static declarations; do this properly.
Notes:
svn path=/head/; revision=149848
| |
Notes:
svn path=/head/; revision=144152
| |
sysctl routines and state. Add some code to use it for signalling the need
to downconvert a data structure to 32 bits on a 64-bit OS when requested by
a 32-bit app.
I tried to do this in a generic ABI wrapper that intercepted the sysctl
OIDs, or looked up the format string etc., but it was a real can of worms
that turned into a fragile mess before I even got it partially working.
With this, we can now run 'sysctl -a' with a 32-bit sysctl binary and have
it not abort. Things like netstat, ps, etc. have a long way to go.
This also fixes a bug in the kern.ps_strings and kern.usrstack hacks.
These matter very much because they are used by libc_r and other things.
Notes:
svn path=/head/; revision=136404
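A hedged sketch of how a handler can use such a flag; the structures and
handler name are made up for illustration, and SCTL_MASK32 is assumed to be
the request flag this change introduces for 32-bit callers:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    struct foo   { long    f_value; };  /* hypothetical native layout */
    struct foo32 { int32_t f_value; };  /* hypothetical 32-bit layout */

    static struct foo foo_state;

    static int
    sysctl_foo(SYSCTL_HANDLER_ARGS)
    {
        struct foo32 f32;

        if (req->flags & SCTL_MASK32) {
            /* request came from a 32-bit process: narrow the structure */
            f32.f_value = (int32_t)foo_state.f_value;
            return (SYSCTL_OUT(req, &f32, sizeof(f32)));
        }
        return (SYSCTL_OUT(req, &foo_state, sizeof(foo_state)));
    }
    SYSCTL_PROC(_kern, OID_AUTO, foo, CTLTYPE_STRUCT | CTLFLAG_RD,
        NULL, 0, sysctl_foo, "S,foo", "hypothetical structure");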
| |
Notes:
svn path=/head/; revision=133714
| |
after each other doesn't mean that nothing happened.
Notes:
svn path=/head/; revision=126600
| |
"Always print time_t as %jd, you never know what width it has"
Notes:
svn path=/head/; revision=124842
| |
if the clock is stepped.
Notes:
svn path=/head/; revision=124812
| |
Give the HZ/overflow check a 10% margin.
Eliminate bogus newline.
If timecounters have equal quality, prefer higher frequency.
Some inspiration from: bde
Notes:
svn path=/head/; revision=122610
| |
Notes:
svn path=/head/; revision=119716
| |
represents the purely stylistic changes and should have no net impact
on the rest of the code.
bde's more substantive changes will follow in a separate commit once
we've come to closure on them.
Submitted by: bde
Notes:
svn path=/head/; revision=119183
| |
ntp_update_second twice when we have a large step in case that step
goes across a scheduled leap second. The only way this could happen
would be if we didn't call tc_windup over the end of day on the day of
a leap second, which would only happen if timeouts were delayed for
seconds. While it is an edge case, it is an important one to get
right for my employer.
Sponsored by: Timing Solutions Corporation
Notes:
svn path=/head/; revision=119160
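The bounded-loop idea described above can be sketched briefly; the names and
threshold are illustrative, not the exact tc_windup() code:

    #include <stdint.h>

    #define LARGE_STEP 200  /* illustrative threshold, in seconds */

    /*
     * Sketch: after a large step, run the per-second NTP processing at most
     * twice, enough to cross any single scheduled leap second without
     * iterating over every skipped second.
     */
    static void
    step_seconds(int64_t old_sec, int64_t new_sec,
        void (*update_second)(int64_t *sec))
    {
        int64_t i = new_sec - old_sec;

        if (i > LARGE_STEP)
            i = 2;
        for (; i > 0; i--)
            update_second(&new_sec);
    }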
| |
A timecounter will be selected when registered if its quality is
not negative and no less than the current timecounter's.
Add a sysctl to report all available timecounters and their qualities.
Give the dummy timecounter a solid negative quality of minus a million.
Give the i8254 zero and the ACPI 1000.
The TSC gets 800, unless APM or SMP forces it negative.
Other timecounters default to zero quality and thereby retain current
selection behaviour.
Notes:
svn path=/head/; revision=118987
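The selection rule above is easy to sketch; the types below are illustrative
rather than the actual struct timecounter:

    /*
     * Sketch: a newly registered timecounter becomes active only if its
     * quality is non-negative and at least as good as the current choice
     * (dummy = -1000000, i8254 = 0, ACPI = 1000, TSC = 800 per the entry
     * above).
     */
    struct tc_sketch {
        const char *name;
        int         quality;
    };

    static struct tc_sketch *active_tc;

    static void
    tc_register(struct tc_sketch *tc)
    {
        if (tc->quality >= 0 &&
            (active_tc == NULL || tc->quality >= active_tc->quality))
            active_tc = tc;
    }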
| |
Notes:
svn path=/head/; revision=118842
| |
Notes:
svn path=/head/; revision=117148
| |
Before, we would add/subtract the leap second when the system had been
up for an even multiple of days, rather than at the end of the day, as
a leap second is defined (at least wrt ntp). We do this by
calculating the notion of UTC earlier in the loop, and passing that to
get it adjusted. Any adjustments that ntp_update_second makes to this
time are then transferred to boot time. We can't pass it either the
boot time or the uptime because their sum is what determines when a
leap second is needed. This code adds an extra assignment and two
extra compares in the typical case, which is as cheap as I could make
it.
I have confirmed with this code that the kernel time does the correct thing
for both positive and negative leap seconds. Since the ntp interface
doesn't allow for +2 or -2, those cases can't be tested (and the folks
in the know here say there will never be a +2s or -2s leap event, but
rather two +1s or -1s leap events).
There will very likely be no leap seconds for a while, given how the
earth is speeding up and slowing down, so there will be plenty of time
for this fix to propagate. UT1-UTC is currently at "about -0.4s" and
decrementing by .1s every 8 months or so. 6 * 8 is 48 months, or 4
years.
-stable has different code, but a similar bug that was introduced
about the time of the last leap second, which is why nobody has
noticed until now.
MFC After: 3 weeks
Reviewed by: phk
"Furthermore, leap seconds must die." -- Cato the Elder
Notes:
svn path=/head/; revision=116841
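The accounting trick described above, folding the NTP adjustment back into
the boot time, can be sketched as follows (illustrative names, not the
kern_tc.c code):

    #include <stdint.h>

    /*
     * Sketch: apply the per-second NTP hook to UTC (boot time + uptime);
     * whatever it does to that value - e.g. inserting a leap second at the
     * end of the UTC day - is absorbed into the boot time so uptime stays
     * untouched.
     */
    static void
    apply_second(int64_t *boottime_sec, int64_t uptime_sec,
        void (*ntp_second_hook)(int64_t *utc_sec))
    {
        int64_t utc_sec = *boottime_sec + uptime_sec;

        ntp_second_hook(&utc_sec);
        *boottime_sec = utc_sec - uptime_sec;
    }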
| |
Notes:
svn path=/head/; revision=116756
| |
Notes:
svn path=/head/; revision=116182
| |
%j in printfs, so put a nested include in <sys/systm.h> where the printf
prototype lives and save everybody else the trouble.
Notes:
svn path=/head/; revision=112367