author:    Andre Oppermann <andre@FreeBSD.org>  2009-06-22 23:08:05 +0000
committer: Andre Oppermann <andre@FreeBSD.org>  2009-06-22 23:08:05 +0000
commit:    ef760e6ad2d8ba962f357b41dc1373ad11bc1ffe (patch)
tree:      f56606825e9225b57340aa08d56a8043bfc65081 /sys/netinet/tcp_usrreq.c
parent:    412501cdfd351406631d8428fc5a7819aa873a1a (diff)
download:  src-ef760e6ad2d8ba962f357b41dc1373ad11bc1ffe.tar.gz, src-ef760e6ad2d8ba962f357b41dc1373ad11bc1ffe.zip
Add soreceive_stream(), an optimized version of soreceive() for
stream (TCP) sockets.
It is functionally identical to the generic soreceive() but has a
number of stream-specific optimizations:
o performs only one sockbuf unlock/lock per receive, independent of
  the amount of data moved into the uio, whereas soreceive()
  unlocks/locks per *mbuf*.
o uses m_mbuftouio() instead of its own copyout variant.
o much more compact code flow, as a large number of special
  cases are removed.
o much improved readability.
It offers significantly reduced CPU usage and lock contention
when receiving fast TCP streams. Additional gains are obtained
when the receiving application is using SO_RCVLOWAT to batch up
some data before a read (and wakeup) is done.
This function was written by "reverse engineering" and is not
just a stripped down variant of soreceive().
It is not yet enabled by default on TCP sockets. Instead it is
commented out in the protocol initialization in tcp_usrreq.c
until more widespread testing has been done.
Testers, especially with 10GigE gear, are welcome.
MFP4: r164817 //depot/user/andre/soreceive_stream/
Notes:
svn path=/head/; revision=194672
Diffstat (limited to 'sys/netinet/tcp_usrreq.c')
-rw-r--r--  sys/netinet/tcp_usrreq.c  6
1 file changed, 6 insertions, 0 deletions
diff --git a/sys/netinet/tcp_usrreq.c b/sys/netinet/tcp_usrreq.c
index 1005d51e3988..f9217b5e906a 100644
--- a/sys/netinet/tcp_usrreq.c
+++ b/sys/netinet/tcp_usrreq.c
@@ -1032,6 +1032,9 @@ struct pr_usrreqs tcp_usrreqs = {
 	.pru_send =		tcp_usr_send,
 	.pru_shutdown =		tcp_usr_shutdown,
 	.pru_sockaddr =		in_getsockaddr,
+#if 0
+	.pru_soreceive =	soreceive_stream,
+#endif
 	.pru_sosetlabel =	in_pcbsosetlabel,
 	.pru_close =		tcp_usr_close,
 };
@@ -1053,6 +1056,9 @@ struct pr_usrreqs tcp6_usrreqs = {
 	.pru_send =		tcp_usr_send,
 	.pru_shutdown =		tcp_usr_shutdown,
 	.pru_sockaddr =		in6_mapped_sockaddr,
+#if 0
+	.pru_soreceive =	soreceive_stream,
+#endif
 	.pru_sosetlabel =	in_pcbsosetlabel,
 	.pru_close =		tcp_usr_close,
 };