WIP: Implement firewall rules #32

Draft
albert wants to merge 20 commits from u/albert/4/firewall into main

Fixes #4

Replaces the Wayland-required shell-based firewall test
(scripts/integration-tests/test-firewall.sh + run-activity.sh) with two
#[ignore] tests in crates/shepherd-e2e/tests/firewall.rs that run in CI
under the existing e2e job:

  - firewall_unsupported_path_runs_activity: points
    SHEPHERD_FIREWALL_HELPER at a nonexistent path to force the probe to
    Unsupported, then verifies activities with [entries.firewall] still
    launch (regression guard against the silent-no-op bug fixed in the
    "Make firewall enforcement failures explicit" commit).
  - firewall_supported_path_invokes_helper_with_expected_argv: drops
    stub pkcheck/pkexec/shepherd-firewall-helper executables in PATH so
    the chain runs unprivileged in CI, then asserts the recorded helper
    argv contains apply-process, --scope-name, --uid/--gid,
    --default deny, --allow 127.0.0.0/8, --allow ::1/128, and the
    activity command after `--`.
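
The stub trick can be sketched as follows. This is a hypothetical
illustration, not the repo's actual harness code: a fake
shepherd-firewall-helper on PATH records its argv so the test can
assert on the exact invocation; the flags shown are examples.

```shell
# Hypothetical sketch: generate a stub helper that records its argv to a
# log file, shadow the real helper via PATH, then assert on the recording.
set -eu
stub_dir=$(mktemp -d)
log="$stub_dir/helper-argv.log"

cat > "$stub_dir/shepherd-firewall-helper" <<EOF
#!/bin/sh
# Record each argument on its own line, then succeed so the chain proceeds.
printf '%s\n' "\$@" >> "$log"
exit 0
EOF
chmod +x "$stub_dir/shepherd-firewall-helper"

export PATH="$stub_dir:$PATH"   # stub shadows any real helper on PATH
shepherd-firewall-helper apply-process --default deny --allow 127.0.0.0/8
grep -qx -- 'apply-process' "$log" && echo "argv recorded"
```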

Adds HarnessBuilder::shepherdd_env so a test can inject env vars
(SHEPHERD_FIREWALL_HELPER + an augmented PATH) past the harness's
env_clear(). No other test changes.

Real BPF cgroup filter enforcement still requires CAP_NET_ADMIN, the
system systemd manager, and a working polkit, none of which exist in
CI. Manual validation continues to use ./scripts/integration-tests/setup-firewall-dev.sh.

For hosts that have run setup-firewall-dev.sh and re-logged in, a new
manual test launches an activity through the real
shepherd-firewall-helper / pkexec / system systemd-run chain, has a
probe script connect to one allowed and one denied TCP target via bash
/dev/tcp, and asserts the BPF address filter is actually attached
(allow=OPEN, deny=BLOCKED).

  - crates/shepherd-e2e/tests/firewall_real.rs is a separate test binary
    so it can run in isolation. It self-skips with [SKIP] on hosts where
    the helper isn't installed or polkit doesn't grant, so CI's
    --include-ignored sweep stays a no-op pass.
  - scripts/integration-tests/run-firewall-probe.sh is the inside-activity
    probe (atomic log write so the orchestrator never reads a partial
    file).
  - scripts/integration-tests/test-firewall.sh pre-checks preconditions,
    builds, then execs the cargo test with --nocapture.

Verified on a configured host: allow=OPEN, deny=BLOCKED, real BPF filter
attached. This is the regression guard against shepherdd ever drifting
back to the silent-no-op `systemd-run --user --scope` path.

Replaces the silent-no-op `systemctl --user --runtime set-property`
path (per-user systemd lacks CAP_NET_ADMIN/CAP_BPF, can't attach
cgroup_skb) with a new `apply-cgroup` helper subcommand that loads a
cgroup_skb/egress program via aya and attaches it directly to the
runtime's scope cgroup using the legacy bpf(BPF_PROG_ATTACH) syscall
(so the program persists past helper exit until cgroup destruction).

  - crates/shepherd-firewall-bpf/ is a new out-of-workspace crate
    targeting bpfel-unknown-none on nightly. ~110 lines: 4 LPM tries
    (v4/v6 × allow/deny), a DEFAULT array, one cgroup_skb program with
    deny-then-allow-then-default match order matching systemd's.
  - shepherd-firewall-helper grows aya as a dep + build.rs that
    compiles the BPF crate via `rustup run nightly cargo build` (env
    scrubbed so the parent cargo doesn't override the BPF crate's
    pinned toolchain). The .o is embedded via include_bytes!.
  - The new apply-cgroup subcommand validates the cgroup path is under
    /sys/fs/cgroup/user.slice/user-<PKEXEC_UID>.slice/user@<UID>.service/,
    loads + populates maps via aya, then BPF_PROG_ATTACH manually so
    we get the legacy "attach lives until cgroup dies" semantics
    instead of aya's link-fd-bound attach.
  - adapter.rs's apply_firewall_to_existing_scope now invokes the new
    subcommand via pkexec.
  - New manual test crates/shepherd-e2e/tests/firewall_real_snap.rs
    + scripts/integration-tests/test-firewall-snap.sh that
    `snap try --classic`-installs a tiny probe snap, launches it
    through the API, and asserts allow=OPEN deny=BLOCKED.
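
The env scrubbing in the build.rs nested build reduces to this shell
shape. It's a sketch: the variable names are the usual cargo/rustup
ones, and `sh -c 'echo …'` stands in for the real
`rustup run nightly cargo build …` so it runs without a nightly
toolchain installed.

```shell
# The parent cargo exports toolchain-pinning env vars; the child build
# must not inherit them or it compiles with the wrong toolchain.
export RUSTUP_TOOLCHAIN=stable CARGO_TARGET_DIR=/tmp/outer-target
child_view=$(env -u RUSTUP_TOOLCHAIN -u CARGO_TARGET_DIR \
  sh -c 'echo "${RUSTUP_TOOLCHAIN:-unset}/${CARGO_TARGET_DIR:-unset}"')
echo "$child_view"
```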

Verified on a configured host: real BPF filter attached to the snap's
runtime-created scope cgroup, loopback reachable, 8.8.8.8:53 dropped.

Caveats documented in
docs/ai/history/2026-05-02 004 firewall snap bpf via aya.md, notably:
CI doesn't yet have the bpf-linker / nightly / LLVM-dev toolchain so
this build won't pass CI as-is; flatpak parity is still pending.

CI: install BPF authoring toolchain so the helper builds
Some checks failed
CI / Build (pull_request) Successful in 8m33s
CI / E2E (pull_request) Successful in 9m19s
CI / Test (pull_request) Successful in 9m35s
CI / ShellCheck (pull_request) Successful in 7s
CI / Clippy (pull_request) Failing after 1m54s
CI / Rustfmt (pull_request) Failing after 3m26s
9bc470feb5
After "Implement firewall enforcement for Snap entries via aya BPF
cgroup attach", `cargo build` for shepherd-firewall-helper triggers
the sibling shepherd-firewall-bpf crate's compile, which needs:

  - LLVM dev libs to compile bpf-linker
  - Rust nightly with rust-src for `-Zbuild-std=core`
  - bpf-linker on PATH

Adds clang/llvm-20-dev/libpolly-20-dev to scripts/deps/build.pkgs and
a new install_bpf_toolchain() in scripts/lib/deps.sh that gates on
existing installs, called automatically for `build` and `dev` sets.
First CI run rebuilds bpf-linker (~1-2 min); the cache change keeps
~/.cargo/bin and ~/.rustup/toolchains across runs so subsequent runs
are fast.
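
The gating shape install_bpf_toolchain() uses can be sketched like
this. fake_install is a stand-in so the sketch runs anywhere; the real
function would call `cargo install bpf-linker` and
`rustup toolchain install nightly` instead.

```shell
# Probe before installing so repeat runs are near-instant (idempotent).
installs=0
fake_install() { echo "installing $1"; }
ensure_tool() {
  if ! command -v "$1" >/dev/null 2>&1; then
    fake_install "$1"
    installs=$((installs + 1))
  fi
}
ensure_tool sh                             # present everywhere: skipped
ensure_tool definitely-missing-bpf-linker  # absent: triggers the install path
echo "installs=$installs"
```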

Implement firewall enforcement for Flatpak entries
Some checks failed
CI / Build (pull_request) Failing after 2m33s
CI / E2E (pull_request) Failing after 3m5s
CI / Test (pull_request) Failing after 9m55s
CI / ShellCheck (pull_request) Successful in 8s
CI / Rustfmt (pull_request) Successful in 9m13s
CI / Clippy (pull_request) Successful in 10m42s
fa1afbe5ea
Wire `[entries.firewall]` end-to-end through the flatpak path of
`apply_firewall_to_existing_scope`. The helper's `apply-cgroup`
subcommand is already generic over cgroup paths, and the scope-prefix
pattern `app-flatpak-<app_id>-` was already in adapter.rs, so the
daemon-side change is just one fix:

`flatpak run` strips most env vars before exec'ing the sandboxed app,
so `[entries.kind.env]` settings never reached flatpak entries. Mirror
the snap path's behavior by emitting `--env=KEY=VAL` flags between
`flatpak run` and `<app-id>`, sorted for determinism.
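
The deterministic flag emission can be sketched as below; the env pairs
and app id are illustrative.

```shell
# Sort KEY=VAL pairs before turning them into --env= flags so the emitted
# command line is stable across runs.
pairs='ZVAR=two
AVAR=one'
flags=$(printf '%s\n' "$pairs" | sort | sed 's/^/--env=/' | xargs)
cmd="flatpak run $flags org.example.App"
echo "$cmd"
```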

Adds a manual #[ignore] e2e test (firewall_real_flatpak.rs +
test-firewall-flatpak.sh). The orchestrator builds a tiny test
flatpak via flatpak-builder against the standard Platform//24.08
runtime, installs --user, runs the cargo test, uninstalls on exit.
The test self-skips on hosts without flatpak/the test app, so a
default `cargo test --include-ignored` stays a no-op.
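
The self-skip convention can be sketched as follows. The probed command
is deliberately nonexistent here so the skip path fires everywhere;
names are illustrative, not the test's actual code.

```shell
# Probe prerequisites and report [SKIP] rather than fail, so a default
# --include-ignored sweep stays a green no-op on unconfigured hosts.
skip_reason() {
  command -v definitely-missing-flatpak-xyz >/dev/null 2>&1 \
    || { echo "required tool not installed"; return; }
}
reason=$(skip_reason)
if [ -n "$reason" ]; then
  echo "[SKIP] $reason"
else
  echo "running test"
fi
```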

Verified on a configured host: real BPF program attached at
app-flatpak-org.shepherd.firewall.Probe-<n>.scope, allow=OPEN,
deny=BLOCKED. Snap and Process tests still pass after the adapter
env-passing change.

Caveats and follow-ups in
docs/ai/history/2026-05-02 005 firewall flatpak.md, notably:
the test sets XDG_DATA_HOME via [entries.kind.env] to override the
harness's tempdir override (so flatpak finds the user-installed app),
which then leaks into the sandboxed probe via --env=. Harmless here;
cleaner long-term to set it only on the outer process.

CI: pre-bake a deps image instead of installing from scratch every job
Some checks failed
CI / CI image (pull_request) Failing after 4s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
CI / ShellCheck (pull_request) Successful in 6s
09d076c01c
Run #32 of this branch (and #31's Clippy/Rustfmt) failed with SIGKILL
during `apt-get install` — never reaching cargo. Five jobs (Build,
Test, E2E, Clippy, Rustfmt) each spin up a fresh ubuntu:25.10
container and install the same dep set in parallel. After the BPF
toolchain (clang + llvm-20-dev + libpolly-20-dev) landed, that set
unpacks to ~1 GB per container; five concurrent unpackings exhaust
runner host memory and the OOM-killer takes apt down.

Pre-bake a CI image with all of that already installed and host it
in Forgejo's container registry, so subsequent jobs skip setup
entirely.

  - .ci/Dockerfile: FROM ubuntu:25.10, runs `./scripts/shepherd deps
    install build/run/test` so the image stays in lockstep with what
    a real host gets, then `rustup component add clippy rustfmt`
    (install_rust uses --profile minimal). PATH exports cargo's bin.

  - .github/workflows/ci.yml: new `image` job runs first. It computes
    `tag = <isoyear>w<isoweek>-<sha256-12>` over the strict input set
    (.ci/Dockerfile, scripts/deps/*.pkgs, scripts/lib/deps.sh), logs
    in to git.armeafamily.com with secrets.GITHUB_TOKEN, and
    `docker manifest inspect`s the tag — only `docker build && push`
    on cache miss. The week prefix forces a weekly rebuild so
    security updates roll in even when no input file changed.

    All Rust jobs declare `needs: image` and use the image via
    `container.image: ${{ needs.image.outputs.ref }}` (with
    `credentials:` for the registry pull). They drop every "Install
    git", "Install build dependencies", "Add Rust to PATH", "Add
    clippy/rustfmt component" step. The cargo cache narrows to the
    bits that vary per branch (~/.cargo/registry, target/) since
    ~/.cargo/bin and ~/.rustup/toolchains are now in the image.

    shellcheck stays on plain ubuntu:25.10 — its install is one
    package (~1 MB), nowhere near OOM territory.

  - docs/ai/history/: design doc with diagnosis, the chaining-based
    approach we considered first, and follow-ups (token scope,
    package visibility, mid-week security refresh).
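
The tag computation can be sketched as below. The input set is reduced
to one temp file so the sketch is self-contained; the real job hashes
.ci/Dockerfile, scripts/deps/*.pkgs, and scripts/lib/deps.sh.

```shell
# <isoyear>w<isoweek>-<sha256-12>: a weekly prefix (forces a weekly
# rebuild for security updates) plus a digest over the strict input set.
tmp=$(mktemp)
echo 'FROM ubuntu:25.10' > "$tmp"
week=$(date -u +%Gw%V)                      # ISO year + ISO week number
digest=$(sha256sum < "$tmp" | cut -c1-12)   # first 12 hex chars
tag="${week}-${digest}"
echo "$tag"
```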

CI: install docker CLI in image job + assert daemon reachable
Some checks failed
CI / ShellCheck (pull_request) Successful in 7s
CI / CI image (pull_request) Failing after 22s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
b9d0ab4771
Run #33 (the first attempt at the pre-baked image flow) failed with
`docker: command not found`. The default `runs-on: ubuntu-latest`
job container is `node:20-bookworm`, which has git but no docker
CLI; act-runner also doesn't expose its host docker socket into job
containers by default.

Install docker.io as the image job's first step so the CLI is on
PATH, and add an explicit `docker info` check that prints a clear
"set container.docker_host: - in act-runner config" message if the
daemon socket isn't mounted. The previous failure mode was a curt
"command not found" that didn't point at the runner config.

Doc updates note the act-runner config requirement explicitly under
a new "Manual setup the runner needs" section.

CI: point the image job at the runner's DinD sidecar
Some checks failed
CI / ShellCheck (pull_request) Successful in 7s
CI / CI image (pull_request) Failing after 48s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
f9f64ac31d
Run #34's `docker info` check fired correctly but pointed at the
wrong remediation: the runner already has docker access configured
(via `container.docker_host: tcp://docker:2375`, the docker-in-docker
layout), it just doesn't auto-inject DOCKER_HOST into job containers.
Set DOCKER_HOST=tcp://docker:2375 explicitly on the image job, and
poll `docker info` for up to 30s while the DinD sidecar warms up
(it takes a few seconds to come up).
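
The warm-up poll can be sketched like this; `true` stands in for
`DOCKER_HOST=tcp://docker:2375 docker info` so the sketch runs without
a daemon.

```shell
# Retry the readiness probe up to 30 times, one second apart, so the job
# survives the few seconds the DinD sidecar takes to come up.
wait_for_daemon() {
  attempts=0
  while [ "$attempts" -lt 30 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts + 1))
    sleep 1
  done
  return 1
}
wait_for_daemon true && echo "daemon ready"
```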

Switching the runner to the host-socket mount (`docker_host: -`)
would have erased the per-workflow isolation that other repos
served by the same runner rely on, so adapting the workflow is the
better fix.

Doc updated to record both iterations and the rationale for not
changing the runner config.

CI: detect DinD daemon at the bridge gateway, not via 'docker' hostname
Some checks failed
CI / ShellCheck (pull_request) Successful in 7s
CI / CI image (pull_request) Failing after 16s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
56c2beba55
Run #35 failed the same way as #34: `docker info` couldn't reach the
daemon. The user's runner is docker-compose with a `runner` service
and a `docker:dind` service on a shared compose network. The runner
reaches dind via that network's DNS name `docker`, but job containers
are spawned by dind itself and live on dind's *internal* bridge
network — where `docker` doesn't resolve.

The dind daemon is still reachable from a job container, just at the
bridge gateway IP (which is the dind container itself, listening on
0.0.0.0:2375 because the compose env sets DOCKER_TLS_CERTDIR=""). Read
the default gateway out of /proc/net/route at job startup and point
DOCKER_HOST at tcp://<gateway>:2375. Parsing /proc/net/route directly
(via awk) avoids depending on iproute2 in the base image.

While we're at it, the doc now explains the original OOM root cause
(config.yml's `container.options: --cpus=2 --memory=2g` per-job
hard cap, blown by the LLVM dev libs unpack step) and notes that
cargo compilation in the consumer jobs is also capped at 2 GB and
may need revisiting if rust-side OOMs reappear.

CI: use `ip route` instead of strtonum to find the DinD gateway
Some checks failed
CI / ShellCheck (pull_request) Successful in 7s
CI / CI image (pull_request) Failing after 5m22s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
9c88a86840
Run #36 died inside the gateway-detection step with
"awk: function strtonum never defined" — the base image has mawk,
not gawk, and strtonum is a gawk extension. set -e then killed the
step before the fallback error path could fire.

`docker.io` pulls in iproute2 as a transitive dep, so just use
`ip route | awk '/default/{print $3}'` and dump `ip route` for
diagnostics if the parse comes up empty. Drops the hand-rolled
hex/little-endian conversion of /proc/net/route entirely.
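
The gateway detection can be sketched as below, using a mawk-safe awk
program. A canned `ip route` output stands in for the real command so
the sketch runs anywhere.

```shell
# Parse the default route's gateway and point DOCKER_HOST at it; dump the
# route table for diagnostics if the parse comes up empty.
fake_ip_route='default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link'
gateway=$(printf '%s\n' "$fake_ip_route" | awk '/^default/ {print $3; exit}')
if [ -z "$gateway" ]; then
  printf '%s\n' "$fake_ip_route" >&2
  exit 1
fi
echo "DOCKER_HOST=tcp://${gateway}:2375"
```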

CI: use a dedicated REGISTRY_TOKEN secret to push to the registry
All checks were successful
CI / ShellCheck (pull_request) Successful in 6s
CI / CI image (pull_request) Successful in 4m40s
CI / Build (pull_request) Successful in 3m46s
CI / E2E (pull_request) Successful in 4m3s
CI / Rustfmt (pull_request) Successful in 6s
CI / Test (pull_request) Successful in 4m16s
CI / Clippy (pull_request) Successful in 1m31s
dd58276a61
Run #37 cleared every other obstacle (the image actually built
end-to-end inside the dind sidecar) and then 401'd on `docker push`:

    unknown: unexpected status from POST request to
    https://git.armeafamily.com/v2/albert/shepherd-launcher-ci/blobs/uploads/:
    401 Unauthorized

The `docker login` step had reported "Login Succeeded" — login just
validates the token, which `secrets.GITHUB_TOKEN` is fine for. Push
needs package *write* scope, and the workflow's
`permissions: packages: write` block isn't elevating GITHUB_TOKEN
that far on this Forgejo instance.

Switch every `secrets.GITHUB_TOKEN` reference (image-job login + the
five heavy jobs' `container.credentials`) over to a new repo secret
`REGISTRY_TOKEN`. The user mints a Forgejo personal access token
with Package read+write scope and stashes it there. The
`permissions:` block stays in place — harmless, and documents
intent if Forgejo starts honoring it later.

Workflow header and the design doc both note the REGISTRY_TOKEN
requirement explicitly.

CI: add the process firewall E2E test to CI as a privileged sidecar job
Some checks failed
CI / ShellCheck (pull_request) Successful in 7s
CI / CI image (pull_request) Failing after 27s
CI / Build (pull_request) Has been skipped
CI / Test (pull_request) Has been skipped
CI / E2E (pull_request) Has been skipped
CI / Clippy (pull_request) Has been skipped
CI / Rustfmt (pull_request) Has been skipped
CI / Firewall E2E (pull_request) Has been skipped
3e24f58047
`firewall_real.rs` exercises the full process-firewall enforcement
chain: shepherdd launches a [entries.kind=process] activity through
the *system* systemd manager, pkexecs the privileged helper, the
helper attaches a cgroup_skb BPF program to the activity's scope,
and a probe inside the scope confirms loopback succeeds while
8.8.8.8:53 is dropped. Until now the test self-skipped in CI
because the prerequisites — systemd as PID1, polkit + dbus running,
the helper installed setuid'd, root in the shepherd-firewall group,
--privileged + --cgroupns=host for cgroup_skb attach — weren't
there.

Two pieces:

  - .ci/Dockerfile: ~70 MB layer for systemd + systemd-sysv + dbus +
    polkit + sudo, plus a `systemctl mask` pass for the units that
    fail noisily inside a container (udev, resolved, networkd,
    NetworkManager, getty, etc.). Other jobs override the
    entrypoint and don't boot systemd, so they're unaffected.

  - .github/workflows/ci.yml: new `firewall` job. Rather than ask
    the user to flip `container.privileged: true` globally on the
    runner (which would erode isolation for every other job), the
    job stays a regular non-privileged Forgejo job and uses the
    dind sidecar it already talks to to launch its *own* private
    privileged container with `--entrypoint /sbin/init`. The
    workspace copies in via `tar | docker exec` (the runner's job
    container and the dind daemon don't share a filesystem, so
    plain --volume mounts the wrong path), then a single
    `docker exec` runs `cargo build` of the helper, the project's
    `scripts/shepherd install firewall --debug` to drop the helper
    + polkit assets + group, `usermod -aG shepherd-firewall root`,
    and `sg shepherd-firewall -c "cargo test ... firewall_real"`.
    The sidecar is torn down on job exit via trap.
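
The copy step's pipe shape can be sketched as below. The receiving side
here is a plain tar instead of `docker exec -i <ctr> tar -x …` so the
sketch runs without a daemon; container name and paths are illustrative.

```shell
# The job container and the dind daemon share no filesystem, so the
# workspace streams over a pipe instead of a --volume mount.
# Real form (illustrative):
#   tar -C "$src" -cf - . | docker exec -i sidecar tar -C /work -xf -
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'fn main() {}' > "$src/main.rs"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
ls "$dst"
```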

Snap and flatpak variants stay manual — snapd doesn't run reliably
in containers, and the snap/flatpak adapter code is a thin wrapper
over the same primitive this test exercises, so a green process
test catches ~all of the same regressions.

Doc records the design choice (privileged sidecar vs. global
runner privileged), the image growth, and a follow-up to add a
target/ cache once things land green.

CI: Ubuntu 25.10 packages it as polkitd, not polkit
Some checks failed
CI / ShellCheck (pull_request) Successful in 6s
CI / CI image (pull_request) Successful in 24s
CI / Test (pull_request) Successful in 1m27s
CI / Build (pull_request) Successful in 1m32s
CI / Rustfmt (pull_request) Successful in 6s
CI / E2E (pull_request) Successful in 1m44s
CI / Firewall E2E (pull_request) Failing after 58s
CI / Clippy (pull_request) Successful in 1m38s
48c9feb5b0
Run #39's image build died at the new apt step:

    E: Unable to locate package polkit

On Ubuntu 25.10 the daemon (and its pkcheck/pkexec binaries) ships
in the `polkitd` package; `polkit` is not a top-level package name.
The deps install layer earlier in the same log even shows it
pulling `polkitd | policykit-1` as alternatives for a transitive
dep, confirming the right name.

CI: build the full workspace in the firewall job, not just the helper
Some checks failed
CI / ShellCheck (pull_request) Successful in 6s
CI / CI image (pull_request) Successful in 17s
CI / Rustfmt (pull_request) Successful in 9s
CI / Test (pull_request) Successful in 1m59s
CI / Build (pull_request) Successful in 2m7s
CI / E2E (pull_request) Successful in 2m15s
CI / Firewall E2E (pull_request) Failing after 2m17s
CI / Clippy (pull_request) Successful in 2m29s
b9a6bb5761
Run #40's firewall job got all the way through the sidecar setup,
the polkit + group install, and the sg re-exec — and then the test
itself failed with:

    Error: shepherdd binary not found at /work/target/debug/shepherdd

The TestHarness spawns shepherdd (and friends), not just the
helper. Switch the build step to `./scripts/shepherd build` (the
same thing the existing e2e job uses), which compiles the whole
workspace + the embedded web UI under target/debug.

CI: add pkexec to the image (split out of polkitd on modern Ubuntu)
Some checks failed
CI / ShellCheck (pull_request) Successful in 6s
CI / CI image (pull_request) Successful in 25s
CI / Rustfmt (pull_request) Successful in 7s
CI / Test (pull_request) Successful in 1m52s
CI / Build (pull_request) Successful in 1m58s
CI / E2E (pull_request) Successful in 2m9s
CI / Clippy (pull_request) Successful in 2m17s
CI / Firewall E2E (pull_request) Failing after 2m41s
4290ccb2db
Run #41's firewall job got past every infrastructure hurdle and
into the actual test, then died at the first launch attempt:

    "result":"denied","reasons":[...
     "Spawn failed: Failed to spawn pkexec: No such file or directory (os error 2)"]

The earlier `pkcheck` call in `skip_reason()` worked, which initially
looked confusing — but pkcheck and pkexec are now in *different*
packages on modern Ubuntu. After the PwnKit class of vulnerabilities
(CVE-2021-4034 et al.), distros split the pkexec SUID binary out of
polkitd into its own `pkexec` package so systems that don't need it
can avoid the suid binary entirely.

Add pkexec to the Dockerfile alongside polkitd.

Make firewall e2e test run as its own user
All checks were successful
CI / ShellCheck (pull_request) Successful in 6s
CI / CI image (pull_request) Successful in 19s
CI / Rustfmt (pull_request) Successful in 10s
CI / Test (pull_request) Successful in 1m59s
CI / Build (pull_request) Successful in 2m11s
CI / E2E (pull_request) Successful in 2m21s
CI / Clippy (pull_request) Successful in 2m27s
CI / Firewall E2E (pull_request) Successful in 3m10s
d70002d5d9

Prior to merge, this looks like it needs some more manual validation and a README mention (though maybe that should wait until the browser integration #10)


Looks like we might be missing something in the Rust setup -- I'm seeing the following when building on a fresh machine:

error: failed to run custom build command for `shepherd-firewall-helper v0.1.0 (/home/aarmea/Code/shepherd-launcher/crates/shepherd-firewall-helper)`

Caused by:
  process didn't exit successfully: `/home/aarmea/Code/shepherd-launcher/target/release/build/shepherd-firewall-helper-30152d5cd6614448/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-changed=/home/aarmea/Code/shepherd-launcher/crates/shepherd-firewall-bpf/src/main.rs
  cargo:rerun-if-changed=/home/aarmea/Code/shepherd-launcher/crates/shepherd-firewall-bpf/Cargo.toml
  cargo:rerun-if-changed=/home/aarmea/Code/shepherd-launcher/crates/shepherd-firewall-bpf/.cargo/config.toml

  --- stderr
  error: toolchain 'nightly-x86_64-unknown-linux-gnu' is not installed
  help: run `rustup toolchain install` to install it

  thread 'main' (11912) panicked at crates/shepherd-firewall-helper/build.rs:48:5:
  shepherd-firewall-bpf build failed
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...

     Compiling shepherd-firewall-bpf v0.1.0 (/home/aarmea/Code/shepherd-launcher/crates/shepherd-firewall-bpf)
  error: linker `bpf-linker` not found
    |
    = note: No such file or directory (os error 2)
This pull request has changes conflicting with the target branch.
  • .github/workflows/ci.yml
  • crates/shepherd-config/src/policy.rs
  • crates/shepherd-config/src/schema.rs
  • crates/shepherd-config/src/validation.rs
  • crates/shepherd-core/src/engine.rs
  • crates/shepherd-host-api/src/traits.rs
  • crates/shepherd-host-linux/src/adapter.rs
  • crates/shepherd-http/src/handlers/sessions.rs
  • crates/shepherd-http/tests/api.rs
  • crates/shepherdd/src/main.rs
  • crates/shepherdd/tests/integration.rs
  • scripts/lib/install.sh