path: root/tools
* Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski, 2021-12-11; 118 files changed, -1220/+3631)

    Andrii Nakryiko says:

    ====================
    bpf-next 2021-12-10 v2

    We've added 115 non-merge commits during the last 26 day(s) which contain
    a total of 182 files changed, 5747 insertions(+), 2564 deletions(-).

    The main changes are:

    1) Various samples fixes, from Alexander Lobakin.
    2) BPF CO-RE support in kernel and light skeleton, from Alexei Starovoitov.
    3) A batch of new unified APIs for libbpf, logging improvements, version
       querying, etc. Also a batch of old deprecations for old APIs and various
       bug fixes, in preparation for libbpf 1.0, from Andrii Nakryiko.
    4) BPF documentation reorganization and improvements, from Christoph Hellwig
       and Dave Tucker.
    5) Support for declarative initialization of BPF_MAP_TYPE_PROG_ARRAY in
       libbpf, from Hengqi Chen.
    6) Verifier log fixes, from Hou Tao.
    7) Runtime-bounded loops support with the bpf_loop() helper, from Joanne Koong.
    8) Extend branch record capturing to all platforms that support it,
       from Kajol Jain.
    9) Light skeleton codegen improvements, from Kumar Kartikeya Dwivedi.
    10) bpftool doc-generating script improvements, from Quentin Monnet.
    11) Two libbpf v0.6 bug fixes, from Shuyi Cheng and Vincent Minet.
    12) Deprecation warning fix for perf/bpf_counter, from Song Liu.
    13) MAX_TAIL_CALL_CNT unification and MIPS build fix for libbpf,
        from Tiezhu Yang.
    14) BTF_KIND_TYPE_TAG follow-up fixes, from Yonghong Song.
    15) Selftests fixes and improvements, from Ilya Leoshkevich,
        Jean-Philippe Brucker, Jiri Olsa, Maxim Mikityanskiy, Tirthendu Sarkar,
        Yucong Sun, and others.

    * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (115 commits)
      libbpf: Add "bool skipped" to struct bpf_map
      libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition
      bpftool: Switch bpf_object__load_xattr() to bpf_object__load()
      selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr()
      selftests/bpf: Add test for libbpf's custom log_buf behavior
      selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load()
      libbpf: Deprecate bpf_object__load_xattr()
      libbpf: Add per-program log buffer setter and getter
      libbpf: Preserve kernel error code and remove kprobe prog type guessing
      libbpf: Improve logging around BPF program loading
      libbpf: Allow passing user log setting through bpf_object_open_opts
      libbpf: Allow passing preallocated log_buf when loading BTF into kernel
      libbpf: Add OPTS-based bpf_btf_load() API
      libbpf: Fix bpf_prog_load() log_buf logic for log_level 0
      samples/bpf: Remove unneeded variable
      bpf: Remove redundant assignment to pointer t
      selftests/bpf: Fix a compilation warning
      perf/bpf_counter: Use bpf_map_create instead of bpf_create_map
      samples: bpf: Fix 'unknown warning group' build warning on Clang
      samples: bpf: Fix xdp_sample_user.o linking with Clang
      ...
    ====================

    Link: https://lore.kernel.org/r/20211210234746.2100561-1-andrii@kernel.org
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>

| * libbpf: Add "bool skipped" to struct bpf_map  (Shuyi Cheng, 2021-12-11; 1 file changed, -3/+8)

    Fix error: "failed to pin map: Bad file descriptor, path:
    /sys/fs/bpf/_rodata_str1_1."

    On older kernels, the global data map will not be created, see [0]. So we
    should skip pinning the global data map to avoid bpf_object__pin_maps()
    returning an error. Therefore, when the map is not created, we mark
    "map->skipped" as true and then check it during relocation and during
    pinning.

    Fixes: 16e0c35c6f7a ("libbpf: Load global data maps lazily on legacy kernels")
    Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

| * libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition  (Vincent Minet, 2021-12-11; 1 file changed, -1/+1)

    The btf__dedup_deprecated name was misspelled in the definition of the
    compat symbol for btf__dedup. This leads it to be missing from the
    shared library. This fixes it.

    Fixes: 957d350a8b94 ("libbpf: Turn btf_dedup_opts into OPTS-based struct")
    Signed-off-by: Vincent Minet <vincent@vincent-minet.net>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211210063112.80047-1-vincent@vincent-minet.net

| * bpftool: Switch bpf_object__load_xattr() to bpf_object__load()  (Andrii Nakryiko, 2021-12-11; 3 files changed, -29/+21)

    Switch all uses of the to-be-deprecated bpf_object__load_xattr() to
    simple bpf_object__load() calls, with the optional log_level passed
    through open_opts.kernel_log_level if the -d option is specified.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org

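    For illustration, the migration pattern looks roughly like this
    (a minimal sketch; the variable names and the exact log_level value
    are assumptions, not bpftool's verbatim code):

        LIBBPF_OPTS(bpf_object_open_opts, open_opts);
        struct bpf_object *obj;
        int err;

        if (debug_mode)                         /* -d was given */
                open_opts.kernel_log_level = 1 + 2 + 4;

        obj = bpf_object__open_file(objfile, &open_opts);
        if (!obj)
                return -errno;

        err = bpf_object__load(obj);    /* replaces bpf_object__load_xattr() */
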
| * selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr()  (Andrii Nakryiko, 2021-12-11; 1 file changed, -5/+5)

    Switch from bpf_object__load_xattr() to bpf_object__load() and
    kernel_log_level in bpf_object_open_opts.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-12-andrii@kernel.org

| * selftests/bpf: Add test for libbpf's custom log_buf behavior  (Andrii Nakryiko, 2021-12-11; 2 files changed, -0/+300)

    Add a selftest that validates that per-program and per-object log_buf
    overrides work as expected. Also test the same logic for the low-level
    bpf_prog_load() and bpf_btf_load() APIs.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-11-andrii@kernel.org

| * selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load()  (Andrii Nakryiko, 2021-12-11; 3 files changed, -22/+32)

    Switch all selftest uses of the to-be-deprecated bpf_load_btf() to
    equivalent bpf_btf_load() calls.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-10-andrii@kernel.org

| * libbpf: Deprecate bpf_object__load_xattr()  (Andrii Nakryiko, 2021-12-11; 2 files changed, -13/+11)

    Deprecate the non-extensible bpf_object__load_xattr() in v0.8 ([0]).

    With log_level control through bpf_object_open_opts or
    bpf_program__set_log_level(), we are finally at the point where
    bpf_object__load_xattr() doesn't provide any functionality that can't
    be accessed through other (better) APIs. The other feature,
    target_btf_path, is also controllable through bpf_object_open_opts.

    [0] Closes: https://github.com/libbpf/libbpf/issues/289

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-9-andrii@kernel.org

| * libbpf: Add per-program log buffer setter and getter  (Andrii Nakryiko, 2021-12-11; 3 files changed, -17/+84)

    Allow setting a user-provided log buffer on a per-program basis ([0]).
    This gives a great deal of flexibility in terms of which programs are
    loaded with logging enabled and where the corresponding logs go.

    A log buffer set with bpf_program__set_log_buf() overrides the
    kernel_log_buf and kernel_log_size settings set at bpf_object open time
    through bpf_object_open_opts, if any.

    Adjust bpf_object_load_prog_instance() logic to not perform its own log
    buf allocation and load retry if a custom log buffer is provided by the
    user.

    [0] Closes: https://github.com/libbpf/libbpf/issues/418

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-8-andrii@kernel.org

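    A minimal usage sketch (the buffer size and program name are made up
    for illustration):

        static char verifier_log[1024 * 1024];

        struct bpf_program *prog;
        int err;

        prog = bpf_object__find_program_by_name(obj, "handler");
        /* takes precedence over kernel_log_buf/kernel_log_size from open opts */
        err = bpf_program__set_log_buf(prog, verifier_log, sizeof(verifier_log));
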
| * libbpf: Preserve kernel error code and remove kprobe prog type guessing  (Andrii Nakryiko, 2021-12-11; 1 file changed, -17/+2)

    Instead of rewriting the error code returned by the kernel during prog
    load with libbpf-specific variants, pass through the original error.
    There is now also no need for a backup generic -LIBBPF_ERRNO__LOAD
    fallback error, as bpf_prog_load() guarantees that errno will be
    properly set no matter what.

    Also drop the completely outdated and pretty useless
    BPF_PROG_TYPE_KPROBE guess logic. It's not necessary, nor is it helpful
    in modern BPF applications.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org

| * libbpf: Improve logging around BPF program loading  (Andrii Nakryiko, 2021-12-11; 2 files changed, -21/+23)

    Add missing "prog '%s': " prefixes in a few places and use consistent
    markers for the beginning and end of program load logs. Here's an
    example of log output:

        libbpf: prog 'handler': BPF program load failed: Permission denied
        libbpf: -- BEGIN PROG LOAD LOG ---
        arg#0 reference type('UNKNOWN ') size cannot be determined: -22
        ; out1 = in1;
        0: (18) r1 = 0xffffc9000cdcc000
        2: (61) r1 = *(u32 *)(r1 +0)
        ...
        81: (63) *(u32 *)(r4 +0) = r5
         R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0)
        invalid access to map value, value_size=16 off=400 size=4
        R4 min value is outside of the allowed memory range
        processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
        -- END PROG LOAD LOG --
        libbpf: failed to load program 'handler'
        libbpf: failed to load object 'test_skeleton'

    The entire verifier log, including the BEGIN and END markers, is now
    always output during a single print callback call. This should make it
    much easier to post-process or parse, if necessary. It's not an
    explicit API guarantee, but it can be reasonably expected to stay like
    that.

    Also, __bpf_object__open is renamed to bpf_object_open(), as it's
    always an adventure to find the exact function that implements
    bpf_object's open phase, so drop the double underscores and use the
    internal libbpf naming convention.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org

| * libbpf: Allow passing user log setting through bpf_object_open_opts  (Andrii Nakryiko, 2021-12-11; 3 files changed, -3/+65)

    Allow users to provide their own custom log_buf, log_size, and
    log_level at bpf_object level through bpf_object_open_opts. This
    log_buf will be used during BTF loading. A subsequent patch will use
    the same log_buf during BPF program loading, unless overridden at the
    per-bpf_program level. When such a custom log_buf is provided, libbpf
    won't retry loading of BTF with its own allocated log buffer to
    capture the kernel's error log output. The user is responsible for
    providing a big enough buffer, otherwise they run a risk of getting an
    -ENOSPC error from the bpf() syscall.

    See also the comments in bpf_object_open_opts regarding log_level and
    log_buf interactions.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org

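    A sketch of the object-level variant (field names as introduced by
    this series; the buffer size and object file name are illustrative):

        static char log_buf[64 * 1024];

        LIBBPF_OPTS(bpf_object_open_opts, opts,
                .kernel_log_buf = log_buf,
                .kernel_log_size = sizeof(log_buf),
                .kernel_log_level = 1,
        );

        struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);
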
| * libbpf: Allow passing preallocated log_buf when loading BTF into kernel  (Andrii Nakryiko, 2021-12-11; 2 files changed, -23/+56)

    Add a libbpf-internal btf_load_into_kernel() that allows passing a
    preallocated log_buf and a custom log_level into the kernel during the
    BPF_BTF_LOAD call. When a custom log_buf is provided,
    btf_load_into_kernel() won't attempt a retry with an automatically
    allocated internal temporary buffer to capture the BTF validation log.

    It's important to note the relation between log_buf and log_level,
    which slightly deviates from the stricter kernel logic. From the
    kernel's POV, if log_buf is specified, log_level has to be > 0, and
    vice versa. While the kernel has good reasons to request such
    "sanity", this, in practice, is a bit inconvenient and restrictive for
    libbpf's high-level bpf_object APIs. So libbpf will allow setting a
    non-NULL log_buf with log_level == 0. This is fine and means: attempt
    to load BTF without logging requested, but if that fails, retry the
    load with the custom log_buf and log_level 1. Similar logic will be
    implemented for program loading. In practice this means that users can
    provide a custom log buffer just in case an error happens, without
    requesting slower verbose logging all the time. This is also
    consistent with libbpf behavior when a custom log_buf is not set:
    libbpf first tries to load everything with log_level=0, and only if an
    error happens allocates an internal log buffer and retries with
    log_level=1.

    Also, while at it, make the BTF validation log more obvious and follow
    the log pattern libbpf is using for dumping the BPF verifier log during
    BPF_PROG_LOAD. BTF loading resulting in an error will look like this:

        libbpf: BTF loading error: -22
        libbpf: -- BEGIN BTF LOAD LOG ---
        magic: 0xeb9f
        version: 1
        flags: 0x0
        hdr_len: 24
        type_off: 0
        type_len: 1040
        str_off: 1040
        str_len: 2063598257
        btf_total_size: 1753
        Total section length too long
        -- END BTF LOAD LOG --
        libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.

    This makes it much easier to find the relevant parts in libbpf log
    output.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org

| * libbpf: Add OPTS-based bpf_btf_load() API  (Andrii Nakryiko, 2021-12-11; 4 files changed, -12/+69)

    Similar to the previous bpf_prog_load() and bpf_map_create() APIs, add
    a bpf_btf_load() API which takes an optional OPTS struct. Schedule
    bpf_load_btf() for deprecation in v0.8 ([0]).

    This makes naming consistent with the BPF_BTF_LOAD command, sets the
    API up for extensibility in the future, moves option parameters
    (log-related fields) into optional options, and also allows passing
    log_level directly.

    It also removes the log buffer auto-allocation logic from the
    low-level API (consistent with bpf_prog_load() behavior), but
    preserves the special treatment of log_level == 0 with non-NULL
    log_buf, which matches the low-level bpf_prog_load() and high-level
    libbpf APIs for BTF and program loading behaviors.

    [0] Closes: https://github.com/libbpf/libbpf/issues/419

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org

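    A usage sketch under the OPTS conventions described above (the log
    field names are assumed to mirror bpf_prog_load_opts; log_buf,
    btf_data, and btf_size stand in for the caller's own variables):

        LIBBPF_OPTS(bpf_btf_load_opts, opts,
                .log_buf = log_buf,
                .log_size = sizeof(log_buf),
                .log_level = 1,
        );

        int btf_fd = bpf_btf_load(btf_data, btf_size, &opts);
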
| * libbpf: Fix bpf_prog_load() log_buf logic for log_level 0  (Andrii Nakryiko, 2021-12-11; 1 file changed, -13/+16)

    To unify libbpf API behavior w.r.t. log_buf and log_level, fix
    bpf_prog_load() to follow the same logic that bpf_btf_load() and the
    high-level bpf_object__load() API will follow in subsequent patches:

    - if log_level is 0 and a non-NULL log_buf is provided by the user,
      attempt the load operation initially with no log_buf and log_level
      set;
    - if successful, we are done, return the new FD;
    - on error, retry the load operation with log_level bumped to 1 and
      log_buf set; this way verbose logging will be requested only when we
      are sure that there is a failure, but will be fast in the
      common/expected success case.

    Of course, the user can still specify log_level > 0 from the very
    beginning to force log collection.

    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-2-andrii@kernel.org

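    In caller terms, the resulting behavior can be sketched as follows
    (prog type, name, and buffer size are illustrative only):

        static char buf[64 * 1024];

        LIBBPF_OPTS(bpf_prog_load_opts, opts,
                .log_buf = buf,
                .log_size = sizeof(buf),
                .log_level = 0,         /* stay silent unless the load fails */
        );

        int fd = bpf_prog_load(BPF_PROG_TYPE_XDP, "xdp_prog", "GPL",
                               insns, insn_cnt, &opts);
        if (fd < 0)
                fprintf(stderr, "%s\n", buf);   /* filled by the log_level 1 retry */
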
| * selftests/bpf: Fix a compilation warning  (Yonghong Song, 2021-12-09; 1 file changed, -1/+1)

    The following warning is triggered when I used the clang compiler to
    build the selftest.

        /.../prog_tests/btf_dedup_split.c:368:6: warning: variable 'btf2' is used uninitialized
        whenever 'if' condition is true [-Wsometimes-uninitialized]
                if (!ASSERT_OK(err, "btf_dedup"))
                    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
        /.../prog_tests/btf_dedup_split.c:424:12: note: uninitialized use occurs here
                btf__free(btf2);
                          ^~~~
        /.../prog_tests/btf_dedup_split.c:368:2: note: remove the 'if' if its condition is always false
                if (!ASSERT_OK(err, "btf_dedup"))
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        /.../prog_tests/btf_dedup_split.c:343:25: note: initialize the variable 'btf2' to silence this warning
                struct btf *btf1, *btf2;
                                        ^
                                         = NULL

    Initialize the local variable btf2 = NULL and the warning is gone.

    Fixes: 9a49afe6f5a5 ("selftests/bpf: Add btf_dedup case with duplicated structs within CU")
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209050403.1770836-1-yhs@fb.com

| * perf/bpf_counter: Use bpf_map_create instead of bpf_create_map  (Song Liu, 2021-12-08; 1 file changed, -2/+16)

    bpf_create_map() is deprecated. Replace it with bpf_map_create(). Also
    add a __weak bpf_map_create() so that when an older version of libbpf
    is linked as a shared library, it falls back to bpf_create_map().

    Fixes: 992c4225419a ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()")
    Signed-off-by: Song Liu <song@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211207232340.2561471-1-song@kernel.org

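    The fallback pattern described above looks roughly like this (a
    sketch; perf's exact code may differ):

        /* used only when an older shared libbpf lacks bpf_map_create() */
        int __weak bpf_map_create(enum bpf_map_type map_type, const char *map_name,
                                  __u32 key_size, __u32 value_size,
                                  __u32 max_entries,
                                  const struct bpf_map_create_opts *opts)
        {
                return bpf_create_map(map_type, key_size, value_size,
                                      max_entries, 0);
        }
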
| * libbpf: Add doc comments in libbpf.h  (Grant Seltzer, 2021-12-06; 1 file changed, -0/+53)

    This adds comments above functions in libbpf.h which document their
    uses. These comments are of a format that doxygen and sphinx can pick
    up and render. They are rendered by libbpf.readthedocs.org.

    These doc comments are for:
    - bpf_object__open_file()
    - bpf_object__open_mem()
    - bpf_program__attach_uprobe()
    - bpf_program__attach_uprobe_opts()

    Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211206203709.332530-1-grantseltzer@gmail.com

| * libbpf: Fix trivial typo  (huangxuesen, 2021-12-06; 1 file changed, -2/+2)

    Fix typo in comment: from 'bpf_skeleton_map' to 'bpf_map_skeleton' and
    from 'bpf_skeleton_prog' to 'bpf_prog_skeleton'.

    Signed-off-by: huangxuesen <huangxuesen@kuaishou.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/1638755236-3851199-1-git-send-email-hxseverything@gmail.com

| * bpftool: Add debug mode for gen_loader.  (Alexei Starovoitov, 2021-12-05; 1 file changed, -9/+11)

    Make the -d flag functional for gen_loader style program loading.

    For example:
        $ bpftool prog load -L -d test_d_path.o
        ... // will print:
        libbpf: loading ./test_d_path.o
        libbpf: elf: section(3) fentry/security_inode_getattr, size 280, link 0, flags 6, type=1
        ...
        libbpf: prog 'prog_close': found data map 0 (test_d_p.bss, sec 7, off 0) for insn 30
        libbpf: gen: load_btf: size 5376
        libbpf: gen: map_create: test_d_p.bss idx 0 type 2 value_type_id 118
        libbpf: map 'test_d_p.bss': created successfully, fd=0
        libbpf: gen: map_update_elem: idx 0
        libbpf: sec 'fentry/filp_close': found 1 CO-RE relocations
        libbpf: record_relo_core: prog 1 insn[15] struct file 0:1 final insn_idx 15
        libbpf: gen: prog_load: type 26 insns_cnt 35 progi_idx 0
        libbpf: gen: find_attach_tgt security_inode_getattr 12
        libbpf: gen: prog_load: type 26 insns_cnt 37 progi_idx 1
        libbpf: gen: find_attach_tgt filp_close 12
        libbpf: gen: finish 0
        ... // at this point libbpf finished generating loader program
        0: (bf) r6 = r1
        1: (bf) r1 = r10
        2: (07) r1 += -136
        3: (b7) r2 = 136
        4: (b7) r3 = 0
        5: (85) call bpf_probe_read_kernel#113
        6: (05) goto pc+104
        ... // this is the assembly dump of the loader program
        390: (63) *(u32 *)(r6 +44) = r0
        391: (18) r1 = map[idx:0]+5584
        393: (61) r0 = *(u32 *)(r1 +0)
        394: (63) *(u32 *)(r6 +24) = r0
        395: (b7) r0 = 0
        396: (95) exit
        err 0 // the loader program was loaded and executed successfully
        (null) func#0 @0
        ... // CO-RE in the kernel logs:
        CO-RE relocating STRUCT file: found target candidate [500]
        prog '': relo #0: kind <byte_off> (0), spec is [8] STRUCT file.f_path (0:1 @ offset 16)
        prog '': relo #0: matching candidate #0 [500] STRUCT file.f_path (0:1 @ offset 16)
        prog '': relo #0: patched insn #15 (ALU/ALU64) imm 16 -> 16
        vmlinux_cand_cache:[11]file(500), module_cand_cache:
        ... // verifier logs when it was checking test_d_path.o program:
        R1 type=ctx expected=fp
        0: R1=ctx(id=0,off=0,imm=0) R10=fp0
        ; int BPF_PROG(prog_close, struct file *file, void *id)
        0: (79) r6 = *(u64 *)(r1 +0)
        func 'filp_close' arg0 has btf_id 500 type STRUCT 'file'
        1: R1=ctx(id=0,off=0,imm=0) R6_w=ptr_file(id=0,off=0,imm=0) R10=fp0
        ; pid_t pid = bpf_get_current_pid_tgid() >> 32;
        1: (85) call bpf_get_current_pid_tgid#14
        ...

    Note: if there are multiple programs being loaded by the loader
    program, only the last program in the elf file will be printed, since
    the same verifier log_buf is used for all PROG_LOAD commands.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211204194623.27779-1-alexei.starovoitov@gmail.com

| * bpf: Fix the test_task_vma selftest to support output shorter than 1 kB  (Maxim Mikityanskiy, 2021-12-03; 1 file changed, -2/+3)

    The test for bpf_iter_task_vma assumes that the output will be longer
    than 1 kB, as the comment above the loop says. Due to this assumption,
    the loop becomes infinite if the output turns out to be shorter than
    1 kB. The return value of read_fd_into_buffer is 0 when the end of
    file is reached, so len isn't increased any more. This commit adds a
    break on EOF to handle short output correctly.

    For reference, this is the content I get when running test_progs under
    vmtest.sh, and it's shorter than 1 kB:

        00400000-00401000 r--p 00000000 fe:00 25867 /root/bpf/test_progs
        00401000-00674000 r-xp 00001000 fe:00 25867 /root/bpf/test_progs
        00674000-0095f000 r--p 00274000 fe:00 25867 /root/bpf/test_progs
        0095f000-00983000 r--p 0055e000 fe:00 25867 /root/bpf/test_progs
        00983000-00a8a000 rw-p 00582000 fe:00 25867 /root/bpf/test_progs
        00a8a000-0484e000 rw-p 00000000 00:00 0
        7f6c64000000-7f6c64021000 rw-p 00000000 00:00 0
        7f6c64021000-7f6c68000000 ---p 00000000 00:00 0
        7f6c6ac8f000-7f6c6ac90000 r--s 00000000 00:0d 8032 anon_inode:bpf-map
        7f6c6ac90000-7f6c6ac91000 ---p 00000000 00:00 0
        7f6c6ac91000-7f6c6b491000 rw-p 00000000 00:00 0
        7f6c6b491000-7f6c6b492000 r--s 00000000 00:0d 8032 anon_inode:bpf-map
        7f6c6b492000-7f6c6b493000 rw-s 00000000 00:0d 8032 anon_inode:bpf-map
        7ffc1e23d000-7ffc1e25e000 rw-p 00000000 00:00 0
        7ffc1e3b8000-7ffc1e3bc000 r--p 00000000 00:00 0
        7ffc1e3bc000-7ffc1e3bd000 r-xp 00000000 00:00 0
        7fffffffe000-7ffffffff000 --xp 00000000 00:00 0

    Fixes: e8168840e16c ("selftests/bpf: Add test for bpf_iter_task_vma")
    Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211130181811.594220-1-maximmi@nvidia.com

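    The fix amounts to breaking out of the read loop on EOF, roughly like
    this (a sketch; the selftest's exact variable names may differ):

        len = 0;
        for (;;) {
                err = read_fd_into_buffer(proc_maps_fd, buf + len,
                                          sizeof(buf) - len);
                if (err < 0)
                        goto out;
                if (err == 0)   /* EOF: output may be shorter than 1 kB */
                        break;
                len += err;
        }
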
| * libbpf: Reduce bpf_core_apply_relo_insn() stack usage.  (Alexei Starovoitov, 2021-12-03; 3 files changed, -45/+51)

    Reduce bpf_core_apply_relo_insn() stack usage and bump the
    BPF_CORE_SPEC_MAX_LEN limit back to 64.

    Fixes: 29db4bea1d10 ("bpf: Prepare relo_core.c for kernel duty.")
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com

| * perf: Mute libbpf API deprecations temporarily  (Andrii Nakryiko, 2021-12-03; 2 files changed, -0/+7)

    The libbpf development version was bumped to 0.7 in c93faaaf2f67
    ("libbpf: Deprecate bpf_prog_load_xattr() API"), activating a bunch of
    previously scheduled deprecations. Most APIs are pretty
    straightforward to replace with newer APIs, but perf has a complicated
    mixed setup with libbpf used in both static and shared configurations,
    which makes it non-trivial to migrate the APIs.

    Further, bpf_program__set_prep() needs more involved refactoring,
    which will require help from Arnaldo and/or Jiri.

    So for now, mute deprecation warnings and work on migrating perf off
    of deprecated APIs separately with input from the owners of the perf
    tool.

    Fixes: c93faaaf2f67 ("libbpf: Deprecate bpf_prog_load_xattr() API")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211203004640.2455717-1-andrii@kernel.org

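    One conventional way to mute such warnings around the affected call
    sites looks like this (whether perf uses exactly this form is an
    assumption):

        #pragma GCC diagnostic push
        #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
        /* ... calls into to-be-migrated, deprecated libbpf APIs ... */
        #pragma GCC diagnostic pop
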
| * libbpf: Deprecate bpf_prog_load_xattr() API  (Andrii Nakryiko, 2021-12-03; 2 files changed, -0/+6)

    bpf_prog_load_xattr() is a high-level API that's named like a
    low-level BPF_PROG_LOAD wrapper API, but it actually operates on
    struct bpf_object. It's badly and confusingly misnamed, as it will
    load all the progs inside the bpf_object, returning the prog_fd of the
    very first BPF program. It also has a bunch of ad-hoc things like
    log_level override, map_ifindex auto-setting, etc. All this can be
    expressed more explicitly and cleanly through existing libbpf APIs.
    This patch marks bpf_prog_load_xattr() for deprecation in libbpf v0.8
    ([0]).

    [0] Closes: https://github.com/libbpf/libbpf/issues/308

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-10-andrii@kernel.org

| * selftests/bpf: Remove all the uses of deprecated bpf_prog_load_xattr()  (Andrii Nakryiko, 2021-12-03; 9 files changed, -107/+119)

    Migrate all the selftests that were still using bpf_prog_load_xattr().
    A few are converted to skeletons; the others will use the
    bpf_object__open_file() API.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-7-andrii@kernel.org

| * selftests/bpf: Mute xdpxceiver.c's deprecation warnings  (Andrii Nakryiko, 2021-12-03; 1 file changed, -0/+6)

    xdpxceiver.c is using AF_XDP APIs that are deprecated starting from
    libbpf 0.7. Until we migrate the test to libxdp or solve this issue in
    some other way, mute deprecation warnings within xdpxceiver.c.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-6-andrii@kernel.org

| * selftests/bpf: Remove recently reintroduced legacy btf__dedup() use  (Andrii Nakryiko, 2021-12-03; 1 file changed, -2/+2)

    One extra patch added back a use of the legacy btf__dedup() variant.
    Clean that up.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-5-andrii@kernel.org

| * bpftool: Migrate off of deprecated bpf_create_map_xattr() API  (Andrii Nakryiko, 2021-12-03; 1 file changed, -10/+13)

    Switch to the bpf_map_create() API instead.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-4-andrii@kernel.org

| * libbpf: Add API to get/set log_level at per-program level  (Andrii Nakryiko, 2021-12-03; 4 files changed, -1/+23)

    Add bpf_program__set_log_level() and bpf_program__log_level() to fetch
    and adjust the log_level sent during the BPF_PROG_LOAD command. This
    allows selectively requesting more or less verbose output in the BPF
    verifier log.

    Also bump the libbpf version to 0.7 and make these APIs the first in
    v0.7.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-3-andrii@kernel.org

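    A short usage sketch (prog is assumed to come from an opened
    bpf_object):

        __u32 old_level = bpf_program__log_level(prog);

        /* request a more verbose verifier log for just this program */
        err = bpf_program__set_log_level(prog, 2);
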
| * libbpf: Use __u32 fields in bpf_map_create_opts  (Andrii Nakryiko, 2021-12-03; 1 file changed, -4/+4)

    The corresponding Linux UAPI struct uses __u32, not int, so keep it
    consistent.

    Fixes: 992c4225419a ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-2-andrii@kernel.org

| * selftests/bpf: Update test names for xchg and cmpxchg  (Paul E. McKenney, 2021-12-02; 1 file changed, -2/+2)

    The test_cmpxchg() and test_xchg() functions say "test_run add".
    Therefore, make them say "test_run cmpxchg" and "test_run xchg",
    respectively.

    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201005030.GA3071525@paulmck-ThinkPad-P17-Gen-1

| * selftests/bpf: Build testing_helpers.o out of tree  (Jean-Philippe Brucker, 2021-12-02; 1 file changed, -18/+22)

    Add the $(OUTPUT) prefix to testing_helpers.o so it can be built out
    of tree when necessary. At the moment, in addition to being built
    in-tree even when an out-of-tree build is required, testing_helpers.o
    is not built with the right recipe when cross-building.

    For consistency, the other helpers, cgroup_helpers and trace_helpers,
    can also be passed as objects instead of source. Use *_HELPERS
    variables to keep the Makefile readable.

    Fixes: f87c1930ac29 ("selftests/bpf: Merge test_stub.c into testing_helpers.c")
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201145101.823159-1-jean-philippe@linaro.org

| * selftests/bpf: Add CO-RE relocations to verifier scale test.  (Alexei Starovoitov, 2021-12-02; 1 file changed, -2/+2)

    Add 182 CO-RE relocations to the verifier scale test.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-18-alexei.starovoitov@gmail.com

| * selftests/bpf: Revert CO-RE removal in test_ksyms_weak.  (Alexei Starovoitov, 2021-12-02; 1 file changed, -1/+1)

    Commit 087cba799ced ("selftests/bpf: Add weak/typeless ksym test for
    light skeleton") added test_ksyms_weak to light skeleton testing, but
    removed CO-RE access. Revert that part of the commit, since the light
    skeleton can use CO-RE in the kernel.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-17-alexei.starovoitov@gmail.com

| * selftests/bpf: Additional test for CO-RE in the kernel.  (Alexei Starovoitov, 2021-12-02; 3 files changed, -1/+119)

    Add a test where the randmap() function is appended to three different
    bpf programs. That action checks the struct bpf_core_relo replication
    logic and offset adjustment in the gen loader part of libbpf.

    A fourth bpf program has 360 CO-RE relocations from vmlinux,
    bpf_testmod, and a non-existing type. It tests the candidate cache
    logic.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-16-alexei.starovoitov@gmail.com

| * selftests/bpf: Convert map_ptr_kern test to use light skeleton.  (Alexei Starovoitov, 2021-12-02; 2 files changed, -10/+9)

    To exercise CO-RE in the kernel further, convert the map_ptr_kern test
    to the light skeleton.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-15-alexei.starovoitov@gmail.com

| * selftests/bpf: Improve inner_map test coverage.  (Alexei Starovoitov, 2021-12-02; 1 file changed, -2/+14)

    Check that hash and array inner maps are properly initialized.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-14-alexei.starovoitov@gmail.com

| * selftests/bpf: Add lskel version of kfunc test.  (Alexei Starovoitov, 2021-12-02; 2 files changed, -1/+25)

    Add a light skeleton version of the kfunc_call_test_subprog test.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-13-alexei.starovoitov@gmail.com

| * libbpf: Clean gen_loader's attach kind.  (Alexei Starovoitov, 2021-12-02; 1 file changed, -1/+3)

    The gen_loader has to clear attach_kind, otherwise programs without
    attach_btf_id will fail to load if they follow programs with
    attach_btf_id.

    Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.")
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-12-alexei.starovoitov@gmail.com

| * libbpf: Support init of inner maps in light skeleton.  (Alexei Starovoitov, 2021-12-02; 3 files changed, -3/+31)

    Add the ability to initialize inner maps in the light skeleton.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-11-alexei.starovoitov@gmail.com

| * libbpf: Use CO-RE in the kernel in light skeleton.  (Alexei Starovoitov, 2021-12-02; 3 files changed, -33/+120)

    Without lskel, the CO-RE relocations are processed by libbpf before
    any other work is done. Instead, when lskel is needed, remember the
    relocation as RELO_CORE kind. Then, when the loader prog is generated
    for a given bpf program, pass the CO-RE relos of that program to the
    gen loader via bpf_gen__record_relo_core(). The gen loader will
    remember them as-is and later pass them as-is into the kernel.

    The normal libbpf flow is to process CO-RE early, before call relos
    happen. In the case of gen_loader, the CO-RE relos have to be added to
    the other relos so that they are copied together when a bpf static
    function is appended in different places to other main bpf progs.
    During the copy, append_subprog_relos() will adjust insn_idx for
    normal relos and for the RELO_CORE kind too. When that is done, each
    struct reloc_desc has the correct relos for its specific main prog.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-10-alexei.starovoitov@gmail.com

| * libbpf: Cleanup struct bpf_core_cand.  (Andrii Nakryiko, 2021-12-02; 2 files changed, -15/+17)

    Remove two redundant fields from struct bpf_core_cand.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-8-alexei.starovoitov@gmail.com

| * bpf: Pass a set of bpf_core_relo-s to prog_load command.  (Alexei Starovoitov, 2021-12-02; 2 files changed, -54/+58)

    struct bpf_core_relo is generated by llvm and processed by libbpf.
    It's a de-facto uapi. With CO-RE in the kernel, struct bpf_core_relo
    becomes uapi de-jure. Add the ability to pass a set of 'struct
    bpf_core_relo' to the prog_load command and let the kernel perform
    CO-RE relocations.

    Note that struct bpf_line_info and struct bpf_func_info have the same
    layout when passed from LLVM to libbpf and from libbpf to the kernel,
    except that the "insn_off" field means "byte offset" when LLVM
    generates it; libbpf then converts it to "insn index" to pass to the
    kernel. The struct bpf_core_relo's "insn_off" field is always a "byte
    offset".

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-6-alexei.starovoitov@gmail.com

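    For reference, the relocation record passed to the kernel has roughly
    this shape (a sketch; consult include/uapi/linux/bpf.h for the
    authoritative definition):

        struct bpf_core_relo {
                __u32 insn_off;         /* byte offset of the insn to patch */
                __u32 type_id;          /* BTF type ID of the local type */
                __u32 access_str_off;   /* offset of spec string, e.g. "0:1:0:5" */
                enum bpf_core_relo_kind kind;
        };
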
| * bpf: Define enum bpf_core_relo_kind as uapi.  (Alexei Starovoitov, 2021-12-02; 4 files changed, -60/+63)

    enum bpf_core_relo_kind is generated by llvm and processed by libbpf.
    It's a de-facto uapi. With CO-RE in the kernel, the bpf_core_relo_kind
    values become uapi de-jure. Also rename them with a BPF_CORE_ prefix
    to distinguish them from conflicting names in bpf_core_read.h. The
    enums bpf_field_info_kind, bpf_type_id_kind, bpf_type_info_kind, and
    bpf_enum_value_kind pass different values from the bpf program into
    llvm.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-5-alexei.starovoitov@gmail.com

| * bpf: Prepare relo_core.c for kernel duty.  (Alexei Starovoitov, 2021-12-02; 1 file changed, -11/+65)

    Make relo_core.c compile for the kernel and for user space libbpf.

    Note the patch reduces BPF_CORE_SPEC_MAX_LEN from 64 to 32. This is
    the maximum number of nested structs and arrays. For example:

        struct sample {
                int a;
                struct {
                        int b[10];
                };
        };

        struct sample *s = ...;
        int *y = &s->b[5];

    This field access is encoded as "0:1:0:5" and the spec len is 4. A
    follow-up patch might bump it back to 64.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-4-alexei.starovoitov@gmail.com

| * libbpf: Replace btf__type_by_id() with btf_type_by_id().  (Alexei Starovoitov, 2021-12-02; 3 files changed, -13/+10)

    To prepare relo_core.c to be compiled in the kernel and in user space,
    replace btf__type_by_id with btf_type_by_id.

    In libbpf, btf__type_by_id and btf_type_by_id have different behavior.
    bpf_core_apply_relo_insn() needs the behavior of the uapi
    btf__type_by_id vs the internal btf_type_by_id, but the type_id range
    check is already done in bpf_core_apply_relo(), so it's safe to
    replace it everywhere. The kernel's btf_type_by_id() does the check
    anyway; it doesn't hurt.

    Suggested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-2-alexei.starovoitov@gmail.com

| * libbpf: Avoid reload of imm for weak, unresolved, repeating ksym  (Kumar Kartikeya Dwivedi, 2021-12-01; 1 file changed, -3/+2)

    Alexei pointed out that we can use BPF_REG_0, which already contains
    imm from the move_blob2blob computation. Note that we now compare the
    second insn's imm, but this should not matter, since both will be
    zeroed out for the error case for the insn populated earlier.

    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211122235733.634914-4-memxor@gmail.com

| * libbpf: Avoid double stores for success/failure case of ksym relocations  (Kumar Kartikeya Dwivedi, 2021-12-01; 1 file changed, -16/+21)

    Instead, jump directly to the success-case stores when ret >= 0;
    otherwise do the default 0 value store and jump over the success case.
    This is better in terms of readability. Readjust the code for kfunc
    relocation as well to follow a similar pattern, which also leads to
    easier-to-follow code.

    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211122235733.634914-3-memxor@gmail.com

| * selftest/bpf/benchs: Add bpf_loop benchmark  (Joanne Koong, 2021-11-30; 7 files changed, -1/+203)

    Add a benchmark to measure the throughput and latency of the bpf_loop
    call. Testing this on my dev machine on 1 thread, the data is as
    follows:

        nr_loops: 10
        bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
        nr_loops: 100
        bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
        nr_loops: 500
        bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
        nr_loops: 1000
        bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
        nr_loops: 5000
        bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
        nr_loops: 10000
        bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
        nr_loops: 50000
        bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
        nr_loops: 100000
        bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
        nr_loops: 500000
        bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
        nr_loops: 1000000
        bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op

    From this data, we can see that the latency per loop decreases as the
    number of loops increases. On this particular machine, each loop had
    an overhead of about ~4 ns, and we were able to run ~250 million loops
    per second.

    Signed-off-by: Joanne Koong <joannekoong@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com

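    For context, a bpf_loop-based BPF program looks roughly like this (a
    sketch, not the benchmark's exact code; the attach point and the
    nr_loops plumbing are assumptions):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        const volatile __u32 nr_loops;

        static int empty_cb(__u32 index, void *ctx)
        {
                return 0;       /* 0 continues the loop, 1 breaks out */
        }

        SEC("fentry/__x64_sys_getpgid")
        int benchmark(void *ctx)
        {
                /* one helper call replaces nr_loops unrolled iterations */
                bpf_loop(nr_loops, empty_cb, NULL, 0);
                return 0;
        }

        char _license[] SEC("license") = "GPL";
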
| * selftests/bpf: Measure bpf_loop verifier performance  (Joanne Koong, 2021-11-30; 5 files changed, -4/+169)

    This patch tests bpf_loop in pyperf and strobemeta, and measures the
    verifier performance of replacing the traditional for loop with
    bpf_loop. The results are as follows:

    ~strobemeta~

    Baseline:
        verification time 6808200 usec
        stack depth 496
        processed 554252 insns (limit 1000000) max_states_per_insn 16
        total_states 15878 peak_states 13489 mark_read 3110
        #192 verif_scale_strobemeta:OK (unrolled loop)

    Using bpf_loop:
        verification time 31589 usec
        stack depth 96+400
        processed 1513 insns (limit 1000000) max_states_per_insn 2
        total_states 106 peak_states 106 mark_read 60
        #193 verif_scale_strobemeta_bpf_loop:OK

    ~pyperf600~

    Baseline:
        verification time 29702486 usec
        stack depth 368
        processed 626838 insns (limit 1000000) max_states_per_insn 7
        total_states 30368 peak_states 30279 mark_read 748
        #182 verif_scale_pyperf600:OK (unrolled loop)

    Using bpf_loop:
        verification time 148488 usec
        stack depth 320+40
        processed 10518 insns (limit 1000000) max_states_per_insn 10
        total_states 705 peak_states 517 mark_read 38
        #183 verif_scale_pyperf600_bpf_loop:OK

    Using the bpf_loop helper led to approximately a 99% decrease in the
    verification time and in the number of instructions.

    Signed-off-by: Joanne Koong <joannekoong@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211130030622.4131246-4-joannekoong@fb.com