
Detecting and Mitigating Copy Fail with Go (CVE-2026-31431)

CVE-2026-31431, also known as Copy Fail, came as a real surprise. It was a fairly serious local privilege escalation, and several servers at my company and at friends' companies could not be mitigated without rebooting. For critical servers where a reboot was difficult or impossible, that was a real problem: whether a system could be mitigated at all depended on the distro, the kernel configuration, the bootloader, reboot availability, and whether algif_aead was available as a loadable module or compiled directly into the kernel.

That decision tree was painful enough on one host. On a fleet of servers, it was a nightmare.

To solve this, we came up with cvecheck, a small Go binary that classifies a Linux host’s exposure and, when needed, installs one of four stop-gap mitigations. The most interesting of those is a pair of kernel-side stop-gaps that intercept socket(AF_ALG, ...) at the syscall boundary and return EAFNOSUPPORT to non-allowlisted callers. One uses an eBPF kprobe; the other uses a tiny native kernel module that does the same thing for kernels that have the eBPF helper gated off. Most of the post will be about those two.

Credits up front: The mitigations would not have been possible without Effie Renard, whose standalone C kprobe is the reference implementation and who co-authored the kmod fallback. Claude AI helped port Effie’s C to Go, write tests, and refactor some of the code. Every change was reviewed by hand.


What the bug actually exposes

The bug lives in the kernel’s algif_aead socket family, which is reached through socket(AF_ALG, ...) and configured with crypto algorithm names like authencesn(hmac(sha256), cbc(aes)). The vulnerable code path requires a sendmsg of crypto data combined with a splice from a page-cache-backed file descriptor. None of that is exotic, and on most distros any unprivileged process can reach the surface.

For a tool that classifies exposure, two things matter:

  1. Is the surface reachable on this host? That answer is socket(AF_ALG, SOCK_SEQPACKET, 0) followed by a bind of an authencesn AEAD. Two syscalls, no exploit.
  2. Does this kernel still contain the bug? That answer comes from uname -r plus a distro changelog grep.

cvecheck runs both, plus a module check (loaded? blacklisted? built into vmlinux?), and combines the signals through a precedence ladder into one verdict: PATCHED, NOT_VULNERABLE, MITIGATED, LIKELY_NOT_EXPLOITABLE, MECHANISM_REACHABLE, VULNERABLE, or INCONCLUSIVE. The exit code maps cleanly so it’s easy to wire into Ansible, Salt, or a shell loop over SSH:

0  safe (PATCHED, NOT_VULNERABLE, MITIGATED, LIKELY_NOT_EXPLOITABLE)
2  exposed (VULNERABLE, MECHANISM_REACHABLE)
3  inconclusive
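The verdict-to-exit-code mapping can be sketched in a few lines of Go. This is an illustration of the documented mapping, not cvecheck's actual source; the type and function names here are assumptions.

```go
package main

import "fmt"

// Verdict mirrors the precedence ladder's outcomes.
type Verdict string

const (
	Patched              Verdict = "PATCHED"
	NotVulnerable        Verdict = "NOT_VULNERABLE"
	Mitigated            Verdict = "MITIGATED"
	LikelyNotExploitable Verdict = "LIKELY_NOT_EXPLOITABLE"
	MechanismReachable   Verdict = "MECHANISM_REACHABLE"
	Vulnerable           Verdict = "VULNERABLE"
	Inconclusive         Verdict = "INCONCLUSIVE"
)

// exitCode maps a verdict to the documented codes:
// 0 safe, 2 exposed, 3 inconclusive.
func exitCode(v Verdict) int {
	switch v {
	case Patched, NotVulnerable, Mitigated, LikelyNotExploitable:
		return 0
	case Vulnerable, MechanismReachable:
		return 2
	default:
		return 3
	}
}

func main() {
	fmt.Println(exitCode(Vulnerable)) // 2
}
```

A shell loop over SSH can then branch on `$?` directly, which is the whole point of keeping the mapping stable.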

It’s a small, statically linked binary with no glibc dependency. You can scp it to a machine, run it, and throw it away.


How the detection signals are built

There are five sub-probes wired together by internal/check/check.go. Four of them are mechanical: parse uname -r against the upstream fix list, read /etc/os-release to pick a package manager (apt, rpm, apk), grep the kernel package’s changelog for the CVE ID (with a five-step disk-first fallback for Debian signed kernels, where apt changelog is famously fragile), and inspect /proc/modules plus /etc/modprobe.d. None are surprising on their own, but each has enough edge cases to make testing more work than you’d expect.

The two that are worth their own subsection are the module-state check, because it splits the world in half, and the AF_ALG mechanism probe, because it’s the part that gets the most security-review attention.

Module state (internal/module/)

Inspect looks at four signals:

  1. /proc/modules to see if algif_aead is currently loaded.
  2. /etc/modprobe.d/*.conf for a recognised blacklist.
  3. /lib/modules/$(uname -r)/ for .ko* files on disk, so we know whether the module could be loaded.
  4. /boot/config-$(uname -r) or /proc/config.gz, scanning for CONFIG_CRYPTO_USER_API_AEAD=y. If that line is present, algif_aead is built directly into vmlinux and no modprobe.d blacklist will help you. This single detail is what splits the world into “loadable module hosts” and “built in hosts,” and it determines which mitigation is even possible.

AF_ALG mechanism probe (internal/mechanism/)

The probe issues exactly two syscalls:

fd, err := unix.Socket(unix.AF_ALG, unix.SOCK_SEQPACKET, 0)
if err != nil {
    return false, err // surface not reachable at all
}
defer unix.Close(fd)
err = unix.Bind(fd, &unix.SockaddrALG{
    Type: "aead",
    Name: "authencesn(hmac(sha256), cbc(aes))",
})
return err == nil, nil

There is no setsockopt for a key, no accept, no sendmsg, no splice, no pipe. The vulnerable code path requires sendmsg of crypto data combined with splice from a page cache backed fd, and we never get anywhere close to it. A successful probe means the surface is reachable. Whether the bug is still there is decided by the kernel version and changelog signals.

One side effect: a successful bind autoloads algif_aead via the kernel module autoloader. The module check runs before the mechanism probe so the loaded-state report is pre-probe. To suppress the autoload entirely, blacklist the module first.


Four mitigations, four audiences

Once we know a host is exposed, the question becomes which stop-gap to install. The shape of algif_aead on the host decides:

Strategy   Type                           Reboot   Use on
modprobe   Passive (file only)            no       Loadable module hosts (Debian, Ubuntu, SUSE, Alpine, Arch)
grubby     Passive (boot arg)             yes      Built-in hosts on the RHEL family where a reboot is acceptable
kprobe     Active (eBPF supervisor)       no       Built-in hosts that cannot reboot; kernel ships full BPF kprobe-override (RHEL 9, Oracle 9, Amazon 2023, CloudLinux 9)
kmod       Active (native kernel module)  no       Built-in hosts that cannot reboot; kernel has FUNCTION_ERROR_INJECTION but BPF_KPROBE_OVERRIDE is off (RHEL/CentOS 8)

If you only read one paragraph: pick modprobe if your distro ships algif_aead as a loadable module, pick grubby if it’s built into vmlinux and you can reboot, pick kprobe if it’s built in and you can’t reboot, and fall back to kmod if kprobe install aborts because the BPF helper bpf_override_return is gated off on your kernel. cvecheck mitigation list will print the comparison if you’d rather have the tool tell you.
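That decision paragraph can be written as a tiny decision tree. The three booleans stand in for cvecheck's real probes; the function itself is an illustration, not the tool's API.

```go
package main

import "fmt"

// pickStrategy encodes the selection rule: module hosts take the blacklist,
// built-in hosts take the boot arg if they can reboot, otherwise one of the
// two active mitigations depending on whether the BPF override helper is
// available.
func pickStrategy(builtIn, canReboot, bpfOverride bool) string {
	switch {
	case !builtIn:
		return "modprobe" // loadable module: a blacklist file is enough
	case canReboot:
		return "grubby" // built in, reboot acceptable: boot arg
	case bpfOverride:
		return "kprobe" // built in, no reboot, full BPF override
	default:
		return "kmod" // built in, no reboot, BPF helper gated off
	}
}

func main() {
	fmt.Println(pickStrategy(false, false, false)) // modprobe
	fmt.Println(pickStrategy(true, true, true))    // grubby
	fmt.Println(pickStrategy(true, false, true))   // kprobe
	fmt.Println(pickStrategy(true, false, false))  // kmod
}
```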

The first two are short. The other two share a primitive and most of the engineering, so we’re going to walk through the kprobe in depth and then show what changes for the kmod fallback.

modprobe blacklist

cvecheck mitigation modprobe install writes /etc/modprobe.d/cve-2026-31431.conf containing a single line:

install algif_aead /bin/false

That install directive (rather than the more familiar blacklist) is important. A plain blacklist line still allows kernel autoloading to bring the module up when something binds an AF_ALG socket. install ... /bin/false replaces the kernel autoloader’s load command with a process that immediately exits non-zero, so the load fails. After writing the conf, the installer calls rmmod algif_aead to clear any current load, and the module stays gone.

This is inert on built-in hosts. If CONFIG_CRYPTO_USER_API_AEAD=y, the symbol is already inside vmlinux. There’s nothing to blacklist.

grubby boot arg

On RHEL family hosts that ship algif_aead built in but can tolerate a reboot, cvecheck mitigation grubby install runs:

grubby --update-kernel=ALL --args="initcall_blacklist=algif_aead_init"

This is Red Hat’s documented mitigation for the CVE. The boot arg tells the kernel to skip the algif_aead_init initcall on the next boot, so the family is never registered. Persistent across reboots, but the running boot is unaffected until the host actually reboots.

Both of these are simple file edits. Reasoning was the hard part, not code. The other two mitigations are where the real systems coding took place.


The kprobe mitigation: a stop-gap that runs in kernel space

This one targets the painful case: built-in algif_aead, no reboot window. Some of the most-used distros build it directly into their kernels (RHEL, Oracle Linux, Amazon Linux). Modprobe won’t work because the module isn’t a module. Grubby is the right answer if there’s no kernel upgrade yet, but you have to schedule a reboot. In between, we needed something that takes effect now.

A kprobe is a kernel debugging hook that lets you run code right before (or after) a chosen kernel function. Pair it with bpf_override_return and you can do more than observe the function: you can skip its body and substitute your own return value. So the trick is to attach an eBPF program at __x64_sys_socket and have it return EAFNOSUPPORT for any non-root caller passing AF_ALG, before the syscall body ever runs.

Why a kprobe and not LSM, or seccomp

A few alternatives were considered:

  • seccomp filters per-process, not host-wide. You’d have to inject it into every existing and future process, which is unreasonable for a stop-gap.
  • LSM hooks would work, but writing a custom LSM module is heavyweight, requires a kernel module build, and is not portable.
  • kprobe with bpf_override_return lets us intercept the syscall in a single attach point, host-wide, with no kernel module, no kernel rebuild, and no reboot.

The catch is that bpf_override_return is gated on CONFIG_FUNCTION_ERROR_INJECTION=y and CONFIG_BPF_KPROBE_OVERRIDE=y, and the target function must carry the ALLOW_ERROR_INJECTION annotation. Lucky for us, __x64_sys_socket carries it. Less lucky for us, Debian and Ubuntu mainline kernels ship with FUNCTION_ERROR_INJECTION=n. That’s one reason this mitigation targets RHEL family kernels: theirs ship it on.

The eBPF program

The whole BPF program is short. The full source is in internal/mitigation/kprobe/bpf/block_alg.bpf.c. Here’s the heart of it:

#define AF_ALG       38
#define EAFNOSUPPORT 97

SEC("kprobe/__x64_sys_socket")
int BPF_KPROBE(block_af_alg, struct pt_regs *regs)
{
    /* x86_64 syscall wrapper: regs is the user's pt_regs; di = arg1 (family) */
    int family = (int)BPF_CORE_READ(regs, di);

    if (family != AF_ALG)
        return 0;

    /* One uid_gid call drives both the allowlist lookup and the event
     * payload below. bpf_get_current_uid_gid is cheap but not free.
     */
    __u64 ug  = bpf_get_current_uid_gid();
    __u32 uid = (__u32)ug;

    if (bpf_map_lookup_elem(&allowed_uids, &uid))
        return 0;

    struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
    if (e) {
        e->pid    = bpf_get_current_pid_tgid() >> 32;
        e->uid    = uid;
        e->gid    = (__u32)(ug >> 32);
        e->family = family;
        bpf_get_current_comm(&e->comm, sizeof(e->comm));
        bpf_ringbuf_submit(e, 0);
    }

    bpf_override_return(ctx, -EAFNOSUPPORT);
    return 0;
}

A few things in this snippet are worth pulling apart, because they cover the most interesting, and most difficult, parts of this tool.

Reading the syscall family argument. On x86_64, the syscall entry wrapper passes a pointer to the user’s pt_regs, and the first syscall argument lives in regs->di. The natural-looking thing to do is use libbpf’s BPF_KPROBE_SYSCALL macro, which unwraps that for you. We don’t. On RHEL, Oracle, Amazon, and CloudLinux kernels, the macro’s inner pt_regs CO-RE reads either fail verification or return garbage for the unwrapped argument, so the family != AF_ALG check silently never matches and the kprobe becomes a no-op. Effie’s standalone C version of this kprobe uses the direct BPF_CORE_READ(regs, di) pattern, and we kept it for the embedded version. This is the kind of detail that doesn’t show up until you test on five distros.

Allowlisting root. UID 0 is in the allowed_uids BPF hash map and is passed through. Without it, you couldn't run cvecheck itself or iptables -m policy, and any future legitimate AF_ALG user would be blocked. The allowlist is a hash map rather than a hardcoded constant so the supervisor can populate it at startup with whatever set is in effect. Today the set is {0} and there is no install flag to extend it; because this is a stop-gap mitigation, we chose not to expose an option for extending the allowlist, though we may add one later.

Logging via ringbuf. Every blocked call gets a fixed-size record on a BPF_MAP_TYPE_RINGBUF map: pid, uid, gid, comm, family. The userspace supervisor pulls events off the ringbuf and writes one line per event to its own stdout, which under systemd is journald. Operators get a real audit trail of who tried what.

The override itself. bpf_override_return(ctx, -EAFNOSUPPORT) is the whole point of the exercise. The kernel reads it, shrugs, returns that value to userspace, and never runs the syscall body. From the caller’s perspective, the kernel forgot how to make AF_ALG sockets.

The Go side: load, attach, supervise

The eBPF object is compiled at build time and embedded into the Go binary. The wonderful bpf2go generates typed Go bindings (internal/mitigation/kprobe/bpf/blockalg_x86_bpfel.go) and a //go:embed of the compiled .o. A clean clone builds without clang installed, because the compiled object is committed.

The supervisor in internal/mitigation/kprobe/runner.go is small enough to summarise in four lines:

loader, _ := Load()                       // load + attach + open ringbuf
loader.AllowUID(0)                        // seed the allowlist
WriteSentinel(DefaultAllowedUIDs)         // /run/cvecheck/mitigation.json
for { writeEvent(logw, loader.Read()) }   // ringbuf poll loop

Four things happen here:

  1. Load() in loader_linux.go removes the memlock rlimit, loads the embedded BPF objects, attaches the kprobe at __x64_sys_socket, and opens a ringbuf reader. The load, attach, and ringbuf steps all go through cilium/ebpf, the maintained Go binding for libbpf.

  2. AllowUID(0) seeds the BPF hash map. The Go side keeps the canonical list (DefaultAllowedUIDs = []uint32{0}), and that list is also written to the sentinel so a separate cvecheck mitigation kprobe status invocation can report exactly which UIDs are passing through.

  3. WriteSentinel writes a JSON liveness file to /run/cvecheck/mitigation.json. The schema captures pid, started timestamp, allowed UIDs, version, and arch. The write is atomic (tmp file plus rename), so a separate cvecheck reading the file mid-update never sees a partial blob. This sentinel is how the main cvecheck detection probe flips its verdict to MITIGATED when the kprobe supervisor is live.

  4. The poll loop reads events from the ringbuf and writes one line per event, formatted to match Effie’s standalone C version’s log line so journalctl output is identical:

    2026-05-01T17:04:11Z pid=12345 uid=1000 gid=1000 comm=python3 family=38 (AF_ALG) -> EAFNOSUPPORT
    

Liveness verification: defending against pid recycling

The sentinel isn’t enough on its own, because pids get recycled. A separate Probe function reads the sentinel and walks /proc/<pid>/comm to verify two things: the pid is still running, and comm is cvecheck. If comm is anything else, the pid was recycled and we treat the sentinel as stale. We also read /proc/<pid>/status and treat zombies as not active. This is the kind of detail that gets you bit on a long-running fleet, where the sentinel sits on disk for months and the original pid number gets re-used by a totally unrelated process.

The probe is also fakeroot-friendly. It takes an fs.FS rather than reading from / directly. That’s what lets us write testing/fstest-based unit tests that cover missing, active, stale, zombie, malformed, and pid-recycled cases without spinning up actual kernels.

Preflight: refuse to install on a kernel that can’t run the probe

Install runs Preflight first. Six gates, each with its own actionable message:

checks = append(checks, checkArch())             // amd64 only
checks = append(checks, checkRoot())             // CAP_BPF + CAP_SYS_ADMIN
checks = append(checks, checkBTF())              // /sys/kernel/btf/vmlinux
checks = append(checks, checkErrorInjection())   // FUNCTION_ERROR_INJECTION=y
checks = append(checks, checkSystemd())          // /run/systemd/system
checks = append(checks, checkKprobeAttach())     // dry-run attach + detach

The most useful one is checkKprobeAttach. We briefly attach the real BPF object and then detach it. That’s the only way to be sure that bpf_override_return plus ALLOW_ERROR_INJECTION actually work on this kernel. The verifier rejection messages from cilium/ebpf are translated into actionable hints rather than raw libbpf errors. If preflight fails, install aborts non-zero. There’s no fallback to a half-working install, because a half-working mitigation is worse than a missing one.

systemd hardening

The supervisor runs as a systemd unit (internal/mitigation/kprobe/service.go). The interesting lines are the ones that aren’t there:

AmbientCapabilities=CAP_BPF CAP_PERFMON CAP_SYS_ADMIN
NoNewPrivileges=yes
ProtectSystem=strict
RuntimeDirectory=cvecheck
# MemoryDenyWriteExecute  -- NOT set: BPF JIT needs W+X pages
# ProtectKernelTunables   -- NOT set: BPF map ops touch kernel state
# ProtectKernelModules    -- NOT set: kprobe attach touches kernel state

Capabilities are scoped to exactly what’s needed to load and attach the kprobe with override. RuntimeDirectory=cvecheck makes systemd create /run/cvecheck with mode 0755 owned by root before the supervisor starts, which is where the sentinel lives. The full unit (Restart, RestartSec, ProtectHome, ProtectKernelLogs, PrivateTmp, RestrictNamespaces, RestrictRealtime, LockPersonality) is in the repo.

Ergonomics: the install is one command

sudo cvecheck mitigation kprobe install

That command runs preflight, copies /proc/self/exe to /usr/local/sbin/cvecheck (skipped when the destination is already byte-identical, verified by SHA-256), writes the unit file unless it is already current, then runs systemctl daemon-reload and systemctl enable --now. It is safe to re-run: unchanged files are skipped, and the enable/start step can be repeated without producing a duplicate install. Killing the supervisor or running cvecheck mitigation kprobe uninstall reverses the install.

After install, re-running cvecheck with no flags now flips the verdict to MITIGATED with reason "AF_ALG kprobe mitigation active (... pid N)".


The kmod mitigation: same primitive, no BPF gate

The kprobe mitigation is great when it works. The catch is that it relies on a BPF helper called bpf_override_return, and that helper is only available when the kernel was built with two config options on:

  • CONFIG_FUNCTION_ERROR_INJECTION=y (the underlying machinery that lets a kprobe substitute a return value).
  • CONFIG_BPF_KPROBE_OVERRIDE=y (the gate that exposes the helper to BPF programs).

RHEL 9, Oracle 9, Amazon 2023, and CloudLinux 9 ship both. RHEL/CentOS 8 and a handful of derivatives ship the first one and not the second. On those kernels, cvecheck mitigation kprobe install fails preflight with a verifier error:

unknown func bpf_override_return

The machinery exists. The BPF entry point doesn’t. We’re looking through a window at exactly the primitive we need, and someone has closed the storm shutters.

The kmod mitigation reopens the shutters. Instead of calling bpf_override_return from a BPF program, we register an ordinary in-kernel kprobe from a tiny native module and perform the override ourselves, the way override_function_with_return does. Same hook point (__x64_sys_socket), same return value (-EAFNOSUPPORT), same allowlist (UID 0 passes through). No BPF helper involved.

The whole module is short. The handler that does the work:

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
    struct pt_regs *user_regs;
    int family;

    /* On x86_64 the syscall wrapper takes a pointer to the saved
     * pt_regs as its single argument (rdi). Dereference to read the
     * user's first syscall arg, the address family.
     */
    user_regs = (struct pt_regs *)regs->di;
    if (!user_regs)
        return 0;
    family = (int)user_regs->di;

    if (family != AF_ALG_FAMILY)
        return 0;

    /* Allowlist: root passes through (parity with the BPF default). */
    if (uid_eq(current_uid(), GLOBAL_ROOT_UID))
        return 0;

    pr_info_ratelimited(
        "cvecheck: blocked AF_ALG socket() pid=%d uid=%u comm=%s\n",
        task_pid_nr(current),
        from_kuid(&init_user_ns, current_uid()),
        current->comm);

    regs_set_return_value(regs, -EAFNOSUPPORT);
    regs->ip = (unsigned long)&cvecheck_just_return;
    return 1; /* skip the original function body */
}

If you read this side by side with the BPF version, the structure is identical: read the syscall family, bail out fast on the common case, allowlist UID 0, log the blocked attempt, set the return value to -EAFNOSUPPORT, and skip the function body. The only difference is how the override happens.

A couple of details worth pulling out:

override_function_with_return is the documented helper that makes a kprobed function return without running its body, and it's exported by the kernel symbol table on most builds. On a subset of RHEL 9 kernels it's compiled in (CONFIG_FUNCTION_ERROR_INJECTION=y) but not exported, so modpost refuses to link a module that uses it. The workaround is cvecheck_just_return, a tiny inline-asm trampoline: a hand-written function that just returns:

asm(
    ".text\n"
    ".type cvecheck_just_return, @function\n"
    "cvecheck_just_return:\n"
    "    endbr64\n"
    "    ret\n"
    ".size cvecheck_just_return, .-cvecheck_just_return\n"
);

We point regs->ip at it and let the kernel’s normal return path do the rest. The endbr64 instruction is a no-op on CPUs without Intel CET-IBT and a valid indirect-branch landing pad on those that have it, so the same module works on both.

The module is built on the host, not embedded as a .ko. Kernel modules are tightly coupled to the exact kernel they’ll load into, so we can’t ship a prebuilt .ko. Instead we embed the C source plus a kbuild Makefile inside the cvecheck binary using //go:embed, stage them under /var/lib/cvecheck/kmod-build/ at install time, and run make against /lib/modules/$(uname -r)/build. Preflight requires make and gcc installed, as well as the matching kernel-devel headers.

Persistence is a one-liner. cvecheck mitigation kmod install writes /etc/modules-load.d/cvecheck-kmod.conf containing the module name. systemd-modules-load.service reads it on boot and modprobes the module before anything has a chance to call socket(AF_ALG, ...). After a kernel upgrade you re-run kmod install to rebuild against the new headers; the auto-load conf doesn’t change.

The whole install flow:

sudo cvecheck mitigation kmod install     # preflight + build .ko + insmod + auto-load conf
sudo cvecheck mitigation kmod status      # module loaded? .ko installed? auto-load conf? effective?
dmesg -wT | grep cvecheck                 # live blocked-call audit log
sudo cvecheck mitigation kmod uninstall   # rmmod + remove .ko + remove auto-load conf

After install, re-running cvecheck flips the verdict to MITIGATED exactly the way the kprobe variant does. The remediation block in the verdict knows about both: if the host has CONFIG_FUNCTION_ERROR_INJECTION=y, it now suggests kmod install alongside kprobe install so you don’t have to discover the BPF gate the hard way.

A note on how the kmod fallback came to exist: it was added after we found that several still-supported kernels ship the underlying error-injection machinery but not the BPF helper that exposes it. The fix was less invasive than expected because Effie’s original C kprobe already showed the shape; once we had a hook that worked from a BPF program, lifting it into a tiny native module was mostly bookkeeping (build system, embedding the source, persistence). If you’re designing a kprobe-based mitigation today, plan for the native fallback from day one. The two delivery shapes share most of the same design and none of the same build pipeline.


Running the tool

The simplest run is no flags at all:

/tmp/cvecheck-linux-x86_64

That executes every check, picks lipgloss-styled output if stdout is a TTY, and exits with a code that maps cleanly to “safe” / “exposed” / “inconclusive”. When run as an SSH remote command, where stdout is not a TTY, it defaults to plain text:

ssh host /tmp/cvecheck-linux-x86_64
ssh host /tmp/cvecheck-linux-x86_64 --format=json   # Syslog/SIEM Friendly

For containers and chroots, mount the host root somewhere readable and pass --root:

docker run --rm -v /:/host:ro alpine /tmp/cvecheck --root /host

--root controls all on-disk lookups. The kernel-version and mechanism probes still touch the running kernel through uname(2) and socket(AF_ALG, ...).

Mitigation install commands are always explicit:

sudo cvecheck mitigation modprobe install     # loadable module hosts
sudo cvecheck mitigation grubby   install     # RHEL family, reboot ok
sudo cvecheck mitigation kprobe   install     # built in, no reboot, full BPF override
sudo cvecheck mitigation kmod     install     # built in, no reboot, BPF override gated off
journalctl -u cvecheck-mitigation -f          # live blocked-call audit log (kprobe)
dmesg -wT | grep cvecheck                     # live blocked-call audit log (kmod)

cvecheck mitigation list prints the side-by-side comparison if you’d rather have the tool tell you which one to pick. cvecheck mitigation status prints an aggregate report covering all four strategies in one shot, which is handy across a fleet.


What I would do differently

If I had to do it again, I would have started with the kprobe path first instead of last. The detection logic looks like the obvious foundation, but the kprobe is the part that decides which kernel features your stop-gap can actually depend on, and that constraint feeds back into how the detection signals matter. Building the detection in isolation produced clean code that almost matched what the kprobe needed but not quite, and a couple of refactor passes happened because of it.

I would also have written or generated more fakeroot tests sooner. The fs.FS plus testing/fstest pattern that the kprobe Probe uses turned out to be the easiest way to cover edge cases like pid recycling and zombie detection. Most of the rest of the code base picked up the same pattern after the fact.


Try it on your own host

The repo lives at https://github.com/pcdoyle/copy-fail-cve-2026-31431. The install one-liner is:

curl -fsSL https://copyfail.pcdoyle.dev/install.sh | sh

The script verifies the binary’s SHA-256 against the published SHA256SUMS and exits non-zero on mismatch. If you don’t want to feed a script from the internet directly into your shell, every release has manual download instructions in the README.

If you find a kernel where the kprobe preflight reports something we’re not catching cleanly, or a distro where the changelog grep misses, please email [email protected] with the --format=json output.


Authors and contributors: Patrick Doyle (author/maintainer), Effie Renard (co-author; original C kprobe and design partner on the kmod fallback), Chris Z. (support, code review, testing).
