author | Daniel Borkmann <daniel@iogearbox.net> | 2016-07-22 01:19:42 +0200 |
---|---|---|
committer | Michael Bestas <mkbestas@lineageos.org> | 2022-04-19 00:51:47 +0300 |
commit | 997b40bc5c5a68830807f2a98c5ce45fb8b363b3 (patch) | |
tree | 39aa18a24f8cc5690b42b9729ab39f735c495471 /include/linux/perf_event.h | |
parent | c3e6fac2b9003aeab05b0e784e1e76df05231828 (diff) | |
bpf, events: fix offset in skb copy handler
This patch fixes the __output_custom() routine we currently use with
bpf_skb_copy(). I missed that when len is larger than the size of the
current handle, we can issue multiple invocations of copy_func, and
__output_custom() advances the destination but also the source buffer by
the number of bytes written. For __output_custom() this is wrong, since
there the source buffer points to a non-linear object, in our case an skb,
which the copy_func helper is supposed to walk. Since the source is
non-linear, we instead need to pass the offset into the helper, so that
copy_func can use it to extract the data from the source object.
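As a minimal, self-contained sketch of that requirement (plain user-space C;
names such as copy_func_t, struct twofrag and output_chunks are invented for
illustration and are not kernel code), a fragmented source has to be addressed
by an absolute offset, because advancing the opaque source pointer would step
into the middle of the descriptor rather than through its payload:

```c
#include <stdio.h>
#include <string.h>

/* Same shape as the new perf_copy_f: dst, opaque src, offset, length. */
typedef unsigned long (*copy_func_t)(void *dst, const void *src,
				     unsigned long off, unsigned long len);

/* A toy "non-linear" source: the payload lives in two fragments. */
struct twofrag {
	const char *frag[2];
	unsigned long fraglen;
};

/* Walks the fragments based on the absolute offset, like an skb walker. */
static unsigned long copy_twofrag(void *dst, const void *src,
				  unsigned long off, unsigned long len)
{
	const struct twofrag *t = src;
	unsigned long done = 0;

	while (done < len) {
		unsigned long pos = off + done;
		unsigned long n = t->fraglen - pos % t->fraglen;

		if (n > len - done)
			n = len - done;
		memcpy((char *)dst + done,
		       t->frag[pos / t->fraglen] + pos % t->fraglen, n);
		done += n;
	}
	return 0;	/* 0 bytes left over, i.e. success */
}

/* Emits the data in small chunks, as the perf output handle does. */
static void output_chunks(const void *src, unsigned long len,
			  unsigned long chunk, copy_func_t copy)
{
	char buf[64];
	unsigned long off = 0;

	while (len) {
		unsigned long n = len < chunk ? len : chunk;

		/* Pass the running offset; advancing src itself would not
		 * point at the next bytes of the fragmented payload. */
		copy(buf, src, off, n);
		fwrite(buf, 1, n, stdout);
		off += n;
		len -= n;
	}
	putchar('\n');
}

int main(void)
{
	struct twofrag t = { { "hello ", "world!" }, 6 };

	output_chunks(&t, 12, 5, copy_twofrag);	/* prints "hello world!" */
	return 0;
}
```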
Therefore, adjust the callback signatures accordingly and pass the offset
into the skb_header_pointer() invoked from the bpf_skb_copy() callback. The
__DEFINE_OUTPUT_COPY_BODY() macro is adjusted to accommodate two things:
i) whether we should advance the source buffer or not, which is a
compile-time constant condition, and ii) the offset for __output_custom(),
which we pass in with the help of __VA_ARGS__, so everything can stay
inlined as it is currently. Both changes allow for adapting the
__output_* fast-path helpers w/o extra overhead.
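The following is a rough, self-contained illustration of that macro shape
(invented names such as COPY_BODY, copy_linear and copy_with_off; this is not
the kernel's __DEFINE_OUTPUT_COPY_BODY()): a constant advance flag plus
__VA_ARGS__ lets one loop body serve both a linear copier and an offset-taking
callback, with the unused branch folded away at compile time:

```c
#include <stdio.h>
#include <string.h>

/* Linear copier: dst/src/len, like the __output_copy()-style paths. */
static unsigned long copy_linear(void *dst, const void *src, unsigned long n)
{
	memcpy(dst, src, n);
	return 0;
}

/* Offset-aware copier, matching the new perf_copy_f-style signature. */
static unsigned long copy_with_off(void *dst, const void *src,
				   unsigned long off, unsigned long n)
{
	memcpy(dst, (const char *)src + off, n);
	return 0;
}

/*
 * One shared loop body: advance_buf is a compile-time constant, and
 * __VA_ARGS__ carries whatever argument list the copier expects, so the
 * offset-taking variant adds no overhead to the linear fast path.
 */
#define COPY_BODY(advance_buf, copy_fn, ...)				\
{									\
	while (len) {							\
		unsigned long chunk = len < 4 ? len : 4;		\
									\
		copy_fn(__VA_ARGS__);					\
		dst += chunk;						\
		if (advance_buf)	/* folded away when 0 */	\
			buf += chunk;					\
		len -= chunk;						\
	}								\
}

static void output_linear(char *dst, const char *buf, unsigned long len)
{
	COPY_BODY(1, copy_linear, dst, buf, chunk)
}

static void output_custom(char *dst, const char *buf, unsigned long len)
{
	unsigned long orig_len = len;

	/* The running offset is simply the bytes copied so far. */
	COPY_BODY(0, copy_with_off, dst, buf, orig_len - len, chunk)
}

int main(void)
{
	char a[16] = { 0 }, b[16] = { 0 };

	output_linear(a, "linear copy", 11);
	output_custom(b, "offset copy", 11);
	printf("%s / %s\n", a, b);	/* both copies arrive intact */
	return 0;
}
```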
Fixes: 555c8a8623a3 ("bpf: avoid stack copy and use skb ctx for event output")
Fixes: 7e3f977edd0b ("perf, events: add non-linear data support for raw records")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/linux/perf_event.h')
-rw-r--r-- | include/linux/perf_event.h | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index ed7da491b13c..a08674a92aff 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -67,7 +67,7 @@ struct perf_callchain_entry_ctx {
 };
 
 typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
-				      unsigned long len);
+				      unsigned long off, unsigned long len);
 
 struct perf_raw_frag {
 	union {
```
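For context, a perf_copy_f callback consuming the new off parameter could look
roughly like the bpf_skb_copy() the commit message refers to. The sketch below
is inferred from the description above rather than taken from this diff, it
assumes kernel context (<linux/skbuff.h>), and it treats the return value as
the number of bytes left uncopied:

```c
/* Sketch of an offset-aware perf_copy_f callback for skbs. */
static unsigned long bpf_skb_copy(void *dst_buff, const void *skb,
				  unsigned long off, unsigned long len)
{
	/* Walk the (possibly non-linear) skb starting at 'off'. */
	void *ptr = skb_header_pointer(skb, off, len, dst_buff);

	if (unlikely(!ptr))
		return len;		/* nothing copied */
	if (ptr != dst_buff)		/* data was linear, copy it over */
		memcpy(dst_buff, ptr, len);

	return 0;			/* all 'len' bytes copied */
}
```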