Wip kasan#11302

Closed
Tropicao wants to merge 11 commits into kernel-patches:bpf-next_base from Tropicao:wip_kasan

Conversation


@Tropicao Tropicao commented Mar 5, 2026

CI test run

@kernel-patches-daemon-bpf kernel-patches-daemon-bpf Bot force-pushed the bpf-next_base branch 3 times, most recently from 59120bd to 94aca0b Compare March 9, 2026 01:04
Xu Kuohai and others added 11 commits March 9, 2026 16:33
During the JIT stage, constants blinding rewrites instructions but only
rewrites the private instruction copy of the JITed subprog, leaving the
global instructions and insn_aux_data unchanged. This causes a mismatch
between subprog instructions and the global state, making it difficult
to look up the global insn_aux_data in the JIT.

To avoid this mismatch, and given that all arch-specific JITs already
support constants blinding, move it to the generic verifier code, and
switch to rewriting the global env->insnsi with the global state
adjusted, as other rewrites in the verifier do.

This removes the constant blinding calls in each JIT, which are largely
duplicated code across architectures.

The prog clone functions and the insn_array adjustment for JIT constant
blinding are no longer needed, so remove them too.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Pass bpf_verifier_env to bpf_int_jit_compile(). The follow-up patch will
use env->insn_aux_data in the JIT stage to detect indirect jump targets.

Since bpf_prog_select_runtime() can be called by cbpf and test code
without the verifier, introduce the helper function
__bpf_prog_select_runtime() that accepts the env parameter.

Remove the call to bpf_prog_select_runtime() in bpf_prog_load(), and
switch to calling __bpf_prog_select_runtime() in the verifier, with the
env variable passed. The original bpf_prog_select_runtime() is preserved
for cbpf and test code, where env is NULL.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Introduce helper bpf_insn_is_indirect_target to check whether a BPF
instruction is an indirect jump target.

Since the verifier knows which instructions are indirect jump targets,
add a new flag indirect_target to struct bpf_insn_aux_data to mark
them. The verifier sets this flag when verifying an indirect jump target
instruction, and the helper checks it to determine whether an
instruction is an indirect jump target.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
On CPUs that support CET/IBT, the indirect jump selftest triggers
a kernel panic because the indirect jump targets lack ENDBR
instructions.

To fix it, emit an ENDBR instruction at each indirect jump target. Since
the ENDBR instruction shifts the positions of the original JITed
instructions, fix the instruction address calculation wherever the
addresses are used.

For reference, below is a sample panic log.

 Missing ENDBR: bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x97/0xe1
 ------------[ cut here ]------------
 kernel BUG at arch/x86/kernel/cet.c:133!
 Oops: invalid opcode: 0000 [#1] SMP NOPTI

 ...

  ? 0xffffffffc00fb258
  ? bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x97/0xe1
  bpf_prog_test_run_syscall+0x110/0x2f0
  ? fdget+0xba/0xe0
  __sys_bpf+0xe4b/0x2590
  ? __kmalloc_node_track_caller_noprof+0x1c7/0x680
  ? bpf_prog_test_run_syscall+0x215/0x2f0
  __x64_sys_bpf+0x21/0x30
  do_syscall_64+0x85/0x620
  ? bpf_prog_test_run_syscall+0x1e2/0x2f0

Fixes: 493d9e0 ("bpf, x86: add support for indirect jumps")
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
On CPUs that support BTI, the indirect jump selftest triggers a kernel
panic because there are no BTI instructions at the indirect jump targets.

Fix it by emitting a BTI instruction for each indirect jump target.

For reference, below is a sample panic log.

Internal error: Oops - BTI: 0000000036000003 [#1]  SMP
...
Call trace:
 bpf_prog_2e5f1c71c13ac3e0_big_jump_table+0x54/0xf8 (P)
 bpf_prog_run_pin_on_cpu+0x140/0x468
 bpf_prog_test_run_syscall+0x280/0x3b8
 bpf_prog_test_run+0x22c/0x2c0

Fixes: f4a66cf ("bpf: arm64: Add support for indirect jumps")
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
# Describe the purpose of this series. The information you put here
# will be used by the project maintainer to make a decision whether
# your patches should be reviewed, and in what priority order. Please be
# very detailed and link to any relevant discussions or sites that the
# maintainer can review to better understand your proposed changes. If you
# only have a single patch in your series, the contents of the cover
# letter will be appended to the "under-the-cut" portion of the patch.

# Lines starting with # will be removed from the cover letter. You can
# use them to add notes or reminders to yourself. If you want to use
# markdown headers in your cover letter, start the line with ">#".

# You can add trailers to the cover letter. Any email addresses found in
# these trailers will be added to the addresses specified/generated
# during the b4 send stage. You can also run "b4 prep --auto-to-cc" to
# auto-populate the To: and Cc: trailers based on the code being
# modified.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>

--- b4-submit-tracking ---
# This section is used internally by b4 prep for tracking purposes.
{
  "series": {
    "revision": 1,
    "change-id": "20260126-kasan-fcd68f64cd7b",
    "prefixes": [
      "bpf-next"
    ]
  }
}
In order to prepare to emit KASAN checks, JIT compilers need to be aware
of whether some load/store instructions target the BPF program stack, as
those should not be monitored (we already have guard pages for that, and
it is likely not possible to correctly monitor all kinds of data passed
on the stack).

To support this need, make the BPF verifier mark the instructions that
access program stack:
- add setters that toggle a boolean flag if the instruction is accessing
  the program stack
- add getters that the JIT compiler can use to know whether an
  instruction being JITed is accessing the stack

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Add a new Kconfig option CONFIG_BPF_JIT_KASAN that automatically enables
KASAN (Kernel Address Sanitizer) memory access checks for JIT-compiled
BPF programs when both KASAN and the JIT compiler are enabled. When
enabled, the JIT compiler will emit shadow memory checks before memory
loads and stores to detect use-after-free, out-of-bounds, and other
memory safety bugs at runtime. The option is gated behind
HAVE_EBPF_JIT_KASAN, as it needs a proper arch-specific implementation.
For now, only x86 is marked compatible with KASAN for eBPF.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
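A plausible shape for the option, assuming only the symbol names stated in the commit message; the exact dependencies, default, and help text below are guesses, not the patch's actual Kconfig:

```
config BPF_JIT_KASAN
	bool "Emit KASAN checks in JIT-compiled BPF programs"
	depends on BPF_JIT && KASAN && HAVE_EBPF_JIT_KASAN
	default y
	help
	  Emit shadow memory checks before memory loads and stores in
	  JIT-compiled BPF programs, so that KASAN can detect
	  use-after-free, out-of-bounds, and other memory safety bugs
	  at runtime.
```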
Add the emit_kasan_check() function that emits KASAN shadow memory
checks before memory accesses in JIT-compiled BPF programs. The
implementation relies on the existing __asan_{load,store}X functions
from the KASAN subsystem. The function:
- saves registers: this includes the caller-saved registers, but also
  any register that was possibly used by the affected program
- computes the accessed address and stores it in %rdi
- calls the relevant function, depending on whether the instruction is
  a load or a store, and on the size of the access.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
Insert KASAN shadow memory checks before memory load and store
operations in JIT-compiled BPF programs. This helps detect
memory safety bugs such as use-after-free and out-of-bounds
accesses at runtime.

The main instructions being targeted are BPF_LDX and BPF_STX, but not
all of them are instrumented:
- if the load/store instruction is in fact accessing the program stack,
  we do not insert the check, as we already have guard pages to monitor
  stack accesses. Stack accesses _could_ be monitored more finely by
  adding KASAN checks, but that would need the JIT compiler to insert
  red zones around any variable on the stack, and we likely do not have
  enough info in the JIT compiler to do so.
- if the load/store instruction is a BPF_PROBE_MEM instruction, we do
  not instrument it, as the address could be complete garbage (hence the
  custom fault management with BPF_PROBE_MEM), and so the corresponding
  KASAN check could fault as well

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
WIP: add a single test triggering a UAF access in an eBPF program, due
to a kfunc behaving badly. This test is far from enough:
- it does not detect the expected KASAN splat programmatically
- there are other cases to test, e.g. OoB accesses
- tests should better stress the code from emit_kasan_check (e.g. more
  register pressure to validate the push/pop strategy)

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
@Tropicao Tropicao closed this Mar 9, 2026
@Tropicao Tropicao deleted the wip_kasan branch March 9, 2026 16:16