| field | value | date |
|---|---|---|
| author | Eric Dumazet <edumazet@google.com> | 2016-04-01 08:52:13 -0700 |
| committer | Bruno Martins <bgcngm@gmail.com> | 2022-10-28 15:39:30 +0100 |
| commit | 070f539fb5d7a684701d975e1a9f6645e56ea322 | |
| tree | a55c1dac5c4ca1f1d8807ca5204d5dd33ebab3c2 | |
| parent | a32d2ea857c51b8d3f1c265dbbd4e6de500ef369 | |
udp: no longer use SLAB_DESTROY_BY_RCU
Tom Herbert would like to avoid touching the UDP socket refcnt for encapsulated
traffic. For this to happen, we need to use normal RCU rules, with a grace
period before freeing a socket. UDP sockets are not short lived in the
high usage case, so the added cost of call_rcu() should not be a concern.
This actually removes a lot of complexity from the UDP stack.
Multicast receives no longer need to hold a bucket spinlock.
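As a rough illustration (not the actual kernel patch), the general shift is from SLAB_DESTROY_BY_RCU-style object reuse to deferring the free with call_rcu(), so lockless readers under rcu_read_lock() never see a socket recycled out from under them. The struct and helper names below are made up for this sketch:

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical socket-like object used only for this sketch. */
struct demo_sock {
	struct rcu_head	rcu;
	/* ... protocol state ... */
};

static void demo_sock_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct demo_sock, rcu));
}

static void demo_sock_destroy(struct demo_sock *dsk)
{
	/*
	 * Instead of returning the object to a SLAB_DESTROY_BY_RCU cache
	 * (where it may be reused immediately and every lookup must
	 * re-validate it), wait for a full grace period before freeing,
	 * so concurrent RCU readers can finish with the old object.
	 */
	call_rcu(&dsk->rcu, demo_sock_free_rcu);
}
```

With objects freed only after a grace period, receive paths (including multicast delivery) can walk the hash chains under rcu_read_lock() alone, without taking the bucket spinlock.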
Note that ip early demux still needs to take a reference on the socket.
The same applies to functions used by the xt_socket and xt_TPROXY netfilter
modules, but this might be changed later.
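Where a caller still needs a long-lived reference after a lockless lookup (as early demux and the netfilter socket helpers do), the usual pattern is to try to take the refcount under RCU and give up if it has already dropped to zero. A minimal sketch, with a hypothetical demo_lookup_rcu() helper standing in for the real lookup:

```c
#include <linux/rcupdate.h>
#include <net/sock.h>

/* Hypothetical lockless lookup helper, assumed to run under RCU. */
static struct sock *demo_lookup_rcu(void);

static struct sock *demo_lookup_and_hold(void)
{
	struct sock *sk;

	rcu_read_lock();
	sk = demo_lookup_rcu();
	/*
	 * The socket may already be on its way to being freed; only keep
	 * it if a reference can still be taken (refcnt has not hit zero).
	 */
	if (sk && !atomic_inc_not_zero(&sk->sk_refcnt))
		sk = NULL;
	rcu_read_unlock();

	return sk;	/* caller must sock_put() when done */
}
```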
Performance for a single UDP socket receiving flood traffic from
many RX queues/cpus.
Simple udp_rx test using a recvfrom() loop:
438 kpps instead of 374 kpps: a 17% increase in the peak rate.
v2: Addressed Willem de Bruijn's feedback in multicast handling
- keep early demux break in __udp4_lib_demux_lookup()
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Willem de Bruijn <willemb@google.com>
Tested-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I4a8092b7f3adc34bf6f7303d5d23bb3a3fec7a7f
