Compare commits: main...u/aarmea/9
1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | 266685628e |  |
.gitattributes (vendored) — 2 changed lines, new file

@@ -0,0 +1,2 @@
+*.webm filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
.github/workflows/ci.yml (vendored) — 40 changed lines

@@ -120,46 +120,6 @@ jobs:
           . "$HOME/.cargo/env"
           cargo clippy --all-targets -- -D warnings
 
-  fmt:
-    name: Rustfmt
-    runs-on: ubuntu-latest
-    container:
-      image: ubuntu:25.10
-    steps:
-      - name: Install git
-        run: |
-          apt-get update
-          apt-get install -y git curl
-
-      - uses: actions/checkout@v4
-
-      - name: Install build dependencies
-        run: ./scripts/shepherd deps install build
-
-      - name: Add rustfmt component
-        run: |
-          . "$HOME/.cargo/env"
-          rustup component add rustfmt
-
-      - name: Add Rust to PATH
-        run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
-
-      - name: Cache cargo registry and build
-        uses: actions/cache@v4
-        with:
-          path: |
-            ~/.cargo/registry
-            ~/.cargo/git
-            target
-          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
-          restore-keys: |
-            ${{ runner.os }}-cargo-
-
-      - name: Check formatting
-        run: |
-          . "$HOME/.cargo/env"
-          cargo fmt --all -- --check
-
   shellcheck:
     name: ShellCheck
     runs-on: ubuntu-latest
AGENTS.md — 13 changed lines, deleted

@@ -1,13 +0,0 @@
-Agents: please use the existing documentation for setup.
-
-<CONTRIBUTING.md> describes environment setup and build, test, and lint, including helper scripts and exact commands.
-
-Please ensure that your changes build and pass tests and lint, and run `cargo fmt --all` to match your changes to the rest of the code.
-
-If you changed the example configuration at <config.example.toml>, make sure that it passes config validation.
-
-Each of the Rust crates in <crates> contains a README.md that describes each at a high level.
-
-<.github/workflows/ci.yml> and <docs/INSTALL.md> describes exact environment setup, especially if coming from Ubuntu 24.04 (shepherd-launcher requires 25.10).
-
-Historical prompts and design docs provided to agents are placed in <docs/ai/history>. Please refer there for history, and if this prompt is substantial, write it along with any relevant context (like the GitHub issue) to that directory as well.
Cargo.lock (generated) — 996 changed lines

@@ -58,8 +58,14 @@ anyhow = "1.0"
 uuid = { version = "1.6", features = ["v4", "serde"] }
 bitflags = "2.4"
 
+# HTTP client (for connectivity checks)
+reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
+
 # Unix-specific
 nix = { version = "0.29", features = ["signal", "process", "user", "socket"] }
+netlink-sys = "0.8"
+netlink-packet-core = "0.7"
+netlink-packet-route = "0.21"
 
 # CLI
 clap = { version = "4.5", features = ["derive", "env"] }
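The `reqwest` dependency added above backs the new HTTP connectivity check. As a dependency-free illustration of the same idea, the `tcp://` style check that the previous `[service.internet]` config supported can be sketched in plain std Rust (the helper names here are hypothetical, not the crate's actual API):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Parse a "tcp://host:port" check URL into a socket address.
/// (Hypothetical helper for illustration only.)
fn parse_tcp_check(url: &str) -> Result<SocketAddr, String> {
    let rest = url
        .strip_prefix("tcp://")
        .ok_or_else(|| format!("unsupported scheme in {url}"))?;
    rest.parse::<SocketAddr>()
        .map_err(|e| format!("bad address {rest}: {e}"))
}

/// Return true if a TCP connection to the check target succeeds in time.
fn tcp_check(url: &str, timeout: Duration) -> Result<bool, String> {
    let addr = parse_tcp_check(url)?;
    Ok(TcpStream::connect_timeout(&addr, timeout).is_ok())
}

fn main() {
    let addr = parse_tcp_check("tcp://1.1.1.1:53").unwrap();
    println!("checking {addr}");
    // Actual result depends on network availability, so it is not asserted.
    let _ = tcp_check("tcp://1.1.1.1:53", Duration::from_millis(300));
}
```

An HTTP-based check would do the same thing with a `reqwest` GET against a generate-204 endpoint instead of a raw TCP connect.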
README.md — 62 changed lines

@@ -25,7 +25,7 @@ or write your own.
 
 The flow of manually opening and closing activities should be familiar.
 
-<video controls src="https://git.armeafamily.com/albert/shepherd-launcher/raw/branch/main/docs/readme/basic-flow.webm" alt="Happy path demo showing home screen --> GCompris --> home screen"></video>
+["Happy path" demo showing home screen --> GCompris --> home screen](https://github.com/user-attachments/assets/1aed2040-b381-4022-8353-5ce076b1eee0)
 
 Activities can be made selectively available at certain times of day.
 

@@ -40,7 +40,7 @@ Activities can have configurable time limits, including:
 * total usage per day
 * cooldown periods before that particular activity can be restarted
 
-<video controls src="https://git.armeafamily.com/albert/shepherd-launcher/raw/branch/main/docs/readme/tuxmath-expiring.webm" alt="TuxMath session shown about to expire, including warnings and automatic termination"></video>
+[TuxMath session shown about to expire, including warnings and automatic termination](https://github.com/user-attachments/assets/541aa456-ef7c-4974-b918-5b143c5304c3)
 
 ### Anything on Linux
 

@@ -64,7 +64,7 @@ If it can run on Linux in *any way, shape, or form*, it can be supervised by
 
 
 > [Minecraft](https://www.minecraft.net/) running via the
-> [Prism Launcher Flatpak](https://flathub.org/en/apps/org.prismlauncher.PrismLauncher)
+> [mc-installer Snap](https://snapcraft.io/mc-installer)
 
 

@@ -84,37 +84,9 @@ If it can run on Linux in *any way, shape, or form*, it can be supervised by
 
 ## Non-goals
 
-1. Modifying or patching third-party applications
-2. Circumventing DRM or platform protections
-3. Replacing parental involvement with automation or third-party content moderation
-4. Remotely monitoring users with telemetry
-5. Collecting, storing, or reporting personally identifying information (PII)
-
-### Regarding age verification
-
-`shepherd-launcher` may be considered "operating system software" under the
-[Digital Age Assurance Act][age-california] and similar legislation,
-and therefore subject to an age verification requirement.
-
-[age-california]: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB1043
-
-As legislated, such requirements are fundamentally incompatible with non-goals 3, 4, and 5.
-
-`shepherd-launcher` will *never* collect telemetry or PII, and as such, it will never implement this type of age verification.
-
-As a result, `shepherd-launcher` is not licensed for use in any region that requires OS-level age verification by law.
-**If you reside in any such region, you may not download, install, or redistribute `shepherd-launcher`.**
-
-This includes, but is not limited to:
-
-* California
-* Louisiana
-* Texas
-* Utah
-
-[Many other states are considering similar legislation.](https://actonline.org/2025/01/14/the-abcs-of-age-verification-in-the-united-states/)
-
-If you disagree with this assessment and you reside in an affected region, **please contact your representatives.**
+* Modifying or patching third-party applications
+* Circumventing DRM or platform protections
+* Replacing parental involvement with automation
 
 ## Installation
 

@@ -132,16 +104,16 @@ All behavior shown above is driven entirely by declarative configuration.
 For the Minecraft example shown above:
 
 ```toml
-# Prism Launcher - Minecraft launcher (Flatpak)
-# Install: flatpak install flathub org.prismlauncher.PrismLauncher
+# Minecraft via mc-installer snap
+# Ubuntu: sudo snap install mc-installer
 [[entries]]
-id = "prism-launcher"
-label = "Prism Launcher"
-icon = "org.prismlauncher.PrismLauncher"
+id = "minecraft"
+label = "Minecraft"
+icon = "minecraft"
 
 [entries.kind]
-type = "flatpak"
-app_id = "org.prismlauncher.PrismLauncher"
+type = "snap"
+snap_name = "mc-installer"
 
 [entries.availability]
 [[entries.availability.windows]]

@@ -177,9 +149,9 @@ See [config.example.toml](./config.example.toml) for more.
 Build instructions and contribution guidelines are described in
 [CONTRIBUTING.md](./CONTRIBUTING.md).
 
-If you'd like to help out, you can find potential work items on
-[the Issues page](https://git.armeafamily.com/albert/shepherd-launcher/issues).
-You may email me patch sets at <shepherd-launcher-patch@albertarmea.com>.
+If you'd like to help out, look on
+[GitHub Issues](https://github.com/aarmea/shepherd-launcher/issues) for
+potential work items.
 
 ## Written in 2025, responsibly
 

@@ -188,7 +160,7 @@ compatibility infrastructure:
 
 * Wayland and Sway
 * Rust
-* Flatpak and Snap
+* Snap
 * Proton and WINE
 
 This project was written with the assistance of generative AI-based coding
config.example.toml

@@ -30,13 +30,15 @@ max_volume = 80 # Maximum volume percentage (0-100)
 allow_mute = true # Whether mute toggle is allowed
 allow_change = true # Whether volume changes are allowed at all
 
-# Internet connectivity check (optional)
-# Entries can require internet and will be hidden when offline.
-# Supported schemes: https://, http://, tcp://
-[service.internet]
-check = "https://connectivitycheck.gstatic.com/generate_204"
-interval_seconds = 300 # Keep this high to avoid excessive network requests
-timeout_ms = 1500
+# Network connectivity settings (optional)
+# Used to check if Internet is available before allowing network-dependent entries
+[service.network]
+# URL to check for global network connectivity (default: Google's connectivity check)
+# check_url = "http://connectivitycheck.gstatic.com/generate_204"
+# How often to perform periodic connectivity checks, in seconds (default: 60)
+# check_interval_seconds = 60
+# Timeout for connectivity checks, in seconds (default: 5)
+# check_timeout_seconds = 5
 
 # Default warning thresholds
 [[service.default_warnings]]

@@ -173,23 +175,16 @@ max_run_seconds = 0 # Unlimited
 daily_quota_seconds = 0 # Unlimited
 cooldown_seconds = 0 # No cooldown
 
-## === Steam games ===
-# Steam can be used via Canonical's Steam snap package:
-# https://snapcraft.io/steam
-# Install it with: sudo snap install steam
-# Steam must be set up and logged in before using these entries.
-# You must have the games installed in your Steam library.
-
-# Celeste via Steam
-# https://store.steampowered.com/app/504230/Celeste
+# Minecraft via mc-installer snap
+# Ubuntu: sudo snap install mc-installer
 [[entries]]
-id = "steam-celeste"
-label = "Celeste"
-icon = "~/Games/Icons/Celeste.png"
+id = "minecraft"
+label = "Minecraft"
+icon = "~/.minecraft/launcher/icons/minecraft256.png"
 
 [entries.kind]
-type = "steam"
-app_id = 504230 # Steam App ID
+type = "snap"
+snap_name = "mc-installer"
 
 [entries.availability]
 [[entries.availability.windows]]

@@ -202,62 +197,6 @@ days = "weekends"
 start = "10:00"
 end = "20:00"
 
-# No [entries.limits] section - uses service defaults
-# Omitting limits entirely uses default_max_run_seconds
-
-# A Short Hike via Steam
-# https://store.steampowered.com/app/1055540/A_Short_Hike/
-[[entries]]
-id = "steam-a-short-hike"
-label = "A Short Hike"
-icon = "~/Games/Icons/A_Short_Hike.png"
-
-[entries.kind]
-type = "steam"
-app_id = 1055540 # Steam App ID
-
-[entries.availability]
-[[entries.availability.windows]]
-days = "weekdays"
-start = "15:00"
-end = "18:00"
-
-[[entries.availability.windows]]
-days = "weekends"
-start = "10:00"
-end = "20:00"
-
-## === Flatpak-based applications ===
-# Flatpak entries use the "flatpak" type for proper process management.
-# Similar to Snap, Flatpak apps run in sandboxed environments and use
-# systemd scopes for process management.
-
-# Prism Launcher - Minecraft launcher (Flatpak)
-# Install: flatpak install flathub org.prismlauncher.PrismLauncher
-[[entries]]
-id = "prism-launcher"
-label = "Prism Launcher"
-icon = "org.prismlauncher.PrismLauncher"
-
-[entries.kind]
-type = "flatpak"
-app_id = "org.prismlauncher.PrismLauncher"
-
-[entries.availability]
-[[entries.availability.windows]]
-days = "weekdays"
-start = "15:00"
-end = "18:00"
-
-[[entries.availability.windows]]
-days = "weekends"
-start = "10:00"
-end = "20:00"
-
-[entries.internet]
-required = true
-check = "http://www.msftconnecttest.com/connecttest.txt" # Use Microsoft's test URL (Minecraft is owned by Microsoft)
-
 [entries.limits]
 max_run_seconds = 1800 # 30 minutes (roughly 3 in-game days)
 daily_quota_seconds = 3600 # 1 hour per day

@@ -282,24 +221,72 @@ message = "30 seconds! Save NOW!"
 [entries.volume]
 max_volume = 60 # Limit volume during gaming sessions
 
-# Krita - digital painting (Flatpak)
-# Install: flatpak install flathub org.kde.krita
+# Network requirements for online games
+[entries.network]
+required = true # Minecraft needs network for authentication and multiplayer
+check_url = "http://www.msftconnecttest.com/connecttest.txt" # Use Microsoft's check (Minecraft is owned by Microsoft)
+
+## === Steam games ===
+# Steam can be used via Canonical's Steam snap package:
+# https://snapcraft.io/steam
+# Install it with: sudo snap install steam
+# Steam must be set up and logged in before using these entries.
+# You must have the games installed in your Steam library.
+
+# Celeste via Steam
+# https://store.steampowered.com/app/504230/Celeste
 [[entries]]
-id = "krita"
-label = "Krita"
-icon = "org.kde.krita"
+id = "steam-celeste"
+label = "Celeste"
+icon = "~/Games/Icons/Celeste.png"
 
 [entries.kind]
-type = "flatpak"
-app_id = "org.kde.krita"
+type = "snap"
+snap_name = "steam"
+args = ["steam://rungameid/504230"] # Steam App ID (passed to 'snap run steam')
 
 [entries.availability]
-always = true
+[[entries.availability.windows]]
+days = "weekdays"
+start = "15:00"
+end = "18:00"
 
-[entries.limits]
-max_run_seconds = 0 # Unlimited
-daily_quota_seconds = 0 # Unlimited
-cooldown_seconds = 0 # No cooldown
+[[entries.availability.windows]]
+days = "weekends"
+start = "10:00"
+end = "20:00"
 
+# No [entries.limits] section - uses service defaults
+# Omitting limits entirely uses default_max_run_seconds
+
+[entries.network]
+required = true # Steam needs network for authentication
+
+# A Short Hike via Steam
+# https://store.steampowered.com/app/1055540/A_Short_Hike/
+[[entries]]
+id = "steam-a-short-hike"
+label = "A Short Hike"
+icon = "~/Games/Icons/A_Short_Hike.png"
+
+[entries.kind]
+type = "snap"
+snap_name = "steam"
+args = ["steam://rungameid/1055540"] # Steam App ID (passed to 'snap run steam')
+
+[entries.availability]
+[[entries.availability.windows]]
+days = "weekdays"
+start = "15:00"
+end = "18:00"
+
+[[entries.availability.windows]]
+days = "weekends"
+start = "10:00"
+end = "20:00"
+
+[entries.network]
+required = true # Steam needs network for authentication
+
 ## === Media ===
 # Just use `mpv` to play media (for now).

@@ -348,6 +335,9 @@ max_run_seconds = 0 # Unlimited: sleep/study aid
 daily_quota_seconds = 0 # Unlimited
 cooldown_seconds = 0 # No cooldown
 
+[entries.network]
+required = true # YouTube streaming needs network
+
 # Terminal for debugging only
 [[entries]]
 id = "terminal"
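The `max_run_seconds` / `daily_quota_seconds` / `cooldown_seconds` fields above (where `0` means unlimited or no cooldown) imply a simple gating decision. A minimal sketch of that logic, assuming hypothetical names rather than the service's actual implementation:

```rust
use std::time::Duration;

/// Decide whether an entry may start, mirroring the limit fields in
/// config.example.toml. Illustrative only; the real logic lives in shepherdd.
fn can_start(
    used_today: Duration,
    daily_quota_seconds: u64,
    since_last_session: Duration,
    cooldown_seconds: u64,
) -> bool {
    // 0 means "unlimited" / "no cooldown", as in the example config.
    let quota_ok =
        daily_quota_seconds == 0 || used_today < Duration::from_secs(daily_quota_seconds);
    let cooldown_ok =
        cooldown_seconds == 0 || since_last_session >= Duration::from_secs(cooldown_seconds);
    quota_ok && cooldown_ok
}

fn main() {
    // 50 minutes used of a 1-hour quota, 10 minutes since last session, 600 s cooldown.
    let ok = can_start(Duration::from_secs(3000), 3600, Duration::from_secs(600), 600);
    println!("can start: {ok}");
}
```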
@@ -104,7 +104,6 @@ if view.enabled {
 ReasonCode::QuotaExhausted { used, quota } => { /* ... */ }
 ReasonCode::CooldownActive { available_at } => { /* ... */ }
 ReasonCode::SessionActive { entry_id, remaining } => { /* ... */ }
-ReasonCode::InternetUnavailable { check } => { /* ... */ }
 // ...
 }
 }
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
 use shepherd_util::{ClientId, EntryId};
 use std::time::Duration;
 
-use crate::{API_VERSION, ClientRole, StopMode};
+use crate::{ClientRole, StopMode, API_VERSION};
 
 /// Request wrapper with metadata
 #[derive(Debug, Clone, Serialize, Deserialize)]

@@ -128,6 +128,7 @@ pub enum Command {
     GetHealth,
 
     // Volume control commands
+
     /// Get current volume status
     GetVolume,
 

@@ -141,6 +142,7 @@ pub enum Command {
     SetMute { muted: bool },
 
     // Admin commands
+
     /// Extend the current session (admin only)
     ExtendCurrent { by: Duration },
 

@@ -232,6 +234,7 @@ mod tests {
             current_session: None,
             entry_count: 5,
             entries: vec![],
+            connectivity: Default::default(),
         }),
     );
 
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
 use shepherd_util::{EntryId, SessionId};
 use std::time::Duration;
 
-use crate::{API_VERSION, ServiceStateSnapshot, SessionEndReason, WarningSeverity};
+use crate::{ServiceStateSnapshot, SessionEndReason, WarningSeverity, API_VERSION};
 
 /// Event envelope
 #[derive(Debug, Clone, Serialize, Deserialize)]

@@ -51,7 +51,9 @@ pub enum EventPayload {
     },
 
     /// Session is expiring (termination initiated)
-    SessionExpiring { session_id: SessionId },
+    SessionExpiring {
+        session_id: SessionId,
+    },
 
     /// Session has ended
     SessionEnded {

@@ -62,13 +64,21 @@ pub enum EventPayload {
     },
 
     /// Policy was reloaded
-    PolicyReloaded { entry_count: usize },
+    PolicyReloaded {
+        entry_count: usize,
+    },
 
     /// Entry availability changed (for UI updates)
-    EntryAvailabilityChanged { entry_id: EntryId, enabled: bool },
+    EntryAvailabilityChanged {
+        entry_id: EntryId,
+        enabled: bool,
+    },
 
     /// Volume status changed
-    VolumeChanged { percent: u8, muted: bool },
+    VolumeChanged {
+        percent: u8,
+        muted: bool,
+    },
 
     /// Service is shutting down
     Shutdown,

@@ -78,6 +88,14 @@ pub enum EventPayload {
         event_type: String,
         details: serde_json::Value,
     },
+
+    /// Network connectivity status changed
+    ConnectivityChanged {
+        /// Whether global connectivity check now passes
+        connected: bool,
+        /// The URL that was checked
+        check_url: String,
+    },
 }
 
 #[cfg(test)]

@@ -97,10 +115,7 @@ mod tests {
     let parsed: Event = serde_json::from_str(&json).unwrap();
 
     assert_eq!(parsed.api_version, API_VERSION);
-    assert!(matches!(
-        parsed.payload,
-        EventPayload::SessionStarted { .. }
-    ));
+    assert!(matches!(parsed.payload, EventPayload::SessionStarted { .. }));
 }
 
 #[test]
@@ -13,8 +13,6 @@ use std::time::Duration;
 pub enum EntryKindTag {
     Process,
     Snap,
-    Steam,
-    Flatpak,
     Vm,
     Media,
     Custom,

@@ -48,28 +46,6 @@ pub enum EntryKind {
         #[serde(default)]
         env: HashMap<String, String>,
     },
-    /// Steam game launched via the Steam snap (Linux)
-    Steam {
-        /// Steam App ID (e.g., 504230 for Celeste)
-        app_id: u32,
-        /// Additional command-line arguments passed to Steam
-        #[serde(default)]
-        args: Vec<String>,
-        /// Additional environment variables
-        #[serde(default)]
-        env: HashMap<String, String>,
-    },
-    /// Flatpak application - uses systemd scope-based process management
-    Flatpak {
-        /// The Flatpak application ID (e.g., "org.prismlauncher.PrismLauncher")
-        app_id: String,
-        /// Additional command-line arguments
-        #[serde(default)]
-        args: Vec<String>,
-        /// Additional environment variables
-        #[serde(default)]
-        env: HashMap<String, String>,
-    },
     Vm {
         driver: String,
         #[serde(default)]

@@ -91,8 +67,6 @@ impl EntryKind {
         match self {
             EntryKind::Process { .. } => EntryKindTag::Process,
             EntryKind::Snap { .. } => EntryKindTag::Snap,
-            EntryKind::Steam { .. } => EntryKindTag::Steam,
-            EntryKind::Flatpak { .. } => EntryKindTag::Flatpak,
             EntryKind::Vm { .. } => EntryKindTag::Vm,
             EntryKind::Media { .. } => EntryKindTag::Media,
             EntryKind::Custom { .. } => EntryKindTag::Custom,

@@ -125,9 +99,14 @@ pub enum ReasonCode {
         next_window_start: Option<DateTime<Local>>,
     },
     /// Daily quota exhausted
-    QuotaExhausted { used: Duration, quota: Duration },
+    QuotaExhausted {
+        used: Duration,
+        quota: Duration,
+    },
     /// Cooldown period active
-    CooldownActive { available_at: DateTime<Local> },
+    CooldownActive {
+        available_at: DateTime<Local>,
+    },
     /// Another session is active
     SessionActive {
         entry_id: EntryId,

@@ -135,11 +114,18 @@ pub enum ReasonCode {
         remaining: Option<Duration>,
     },
     /// Host doesn't support this entry kind
-    UnsupportedKind { kind: EntryKindTag },
+    UnsupportedKind {
+        kind: EntryKindTag,
+    },
     /// Entry is explicitly disabled
-    Disabled { reason: Option<String> },
-    /// Internet connectivity is required but unavailable
-    InternetUnavailable { check: Option<String> },
+    Disabled {
+        reason: Option<String>,
+    },
+    /// Network connectivity check failed
+    NetworkUnavailable {
+        /// The URL that was checked
+        check_url: String,
+    },
 }
 
 /// Warning severity level

@@ -216,6 +202,20 @@ pub struct ServiceStateSnapshot {
     /// Available entries for UI display
     #[serde(default)]
     pub entries: Vec<EntryView>,
+    /// Network connectivity status
+    #[serde(default)]
+    pub connectivity: ConnectivityStatus,
+}
+
+/// Network connectivity status
+#[derive(Debug, Clone, Default, Serialize, Deserialize)]
+pub struct ConnectivityStatus {
+    /// Whether global network connectivity check passed
+    pub connected: bool,
+    /// The URL that was checked for global connectivity
+    pub check_url: Option<String>,
+    /// When the last check was performed
+    pub last_check: Option<DateTime<Local>>,
 }
 
 /// Role for authorization
@@ -23,12 +23,6 @@ socket_path = "/run/shepherdd/shepherdd.sock"
 data_dir = "/var/lib/shepherdd"
 default_max_run_seconds = 1800 # 30 minutes default
 
-# Internet connectivity check (optional)
-[service.internet]
-check = "https://connectivitycheck.gstatic.com/generate_204"
-interval_seconds = 10
-timeout_ms = 1500
-
 # Global volume restrictions
 [service.volume]
 max_volume = 80

@@ -55,9 +49,6 @@ label = "Minecraft"
 icon = "minecraft"
 kind = { type = "snap", snap_name = "mc-installer" }
 
-[entries.internet]
-required = true
-
 [entries.availability]
 [[entries.availability.windows]]
 days = "weekdays"

@@ -119,9 +110,6 @@ kind = { type = "process", command = "/usr/bin/game", args = ["--fullscreen"] }
 # Snap application
 kind = { type = "snap", snap_name = "mc-installer" }
 
-# Steam game (via Steam snap)
-kind = { type = "steam", app_id = 504230 }
-
 # Virtual machine (future)
 kind = { type = "vm", driver = "qemu", args = { disk = "game.qcow2" } }
 

@@ -160,22 +148,6 @@ daily_quota_seconds = 7200 # Total daily limit
 cooldown_seconds = 600 # Wait time between sessions
 ```
 
-### Internet Requirements
-
-Entries can require internet connectivity. When the device is offline, those entries are hidden.
-
-```toml
-[service.internet]
-check = "https://connectivitycheck.gstatic.com/generate_204"
-interval_seconds = 300
-timeout_ms = 1500
-
-[entries.internet]
-required = true
-# Optional per-entry override:
-# check = "tcp://1.1.1.1:53"
-```
-
 ## Validation
 
 The configuration is validated at load time. Validation catches:
|
|
@@ -29,10 +29,7 @@ fn main() -> ExitCode {
     // Check file exists
     if !config_path.exists() {
-        eprintln!(
-            "Error: Configuration file not found: {}",
-            config_path.display()
-        );
+        eprintln!("Error: Configuration file not found: {}", config_path.display());
         return ExitCode::from(1);
     }
 
 
@@ -42,10 +39,7 @@ fn main() -> ExitCode {
     println!("✓ Configuration is valid");
     println!();
     println!("Summary:");
-    println!(
-        " Config version: {}",
-        shepherd_config::CURRENT_CONFIG_VERSION
-    );
+    println!(" Config version: {}", shepherd_config::CURRENT_CONFIG_VERSION);
     println!(" Entries: {}", policy.entries.len());
 
     // Show entry summary
@@ -60,12 +54,6 @@ fn main() -> ExitCode {
             EntryKind::Snap { snap_name, .. } => {
                 format!("snap ({})", snap_name)
             }
-            EntryKind::Steam { app_id, .. } => {
-                format!("steam ({})", app_id)
-            }
-            EntryKind::Flatpak { app_id, .. } => {
-                format!("flatpak ({})", app_id)
-            }
             EntryKind::Vm { driver, .. } => {
                 format!("vm ({})", driver)
             }
@@ -1,152 +0,0 @@
-//! Internet connectivity configuration and parsing.
-
-use std::time::Duration;
-
-/// Default interval between connectivity checks.
-pub const DEFAULT_INTERNET_CHECK_INTERVAL: Duration = Duration::from_secs(10);
-/// Default timeout for a single connectivity check.
-pub const DEFAULT_INTERNET_CHECK_TIMEOUT: Duration = Duration::from_millis(1500);
-
-/// Supported connectivity check schemes.
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
-pub enum InternetCheckScheme {
-    Tcp,
-    Http,
-    Https,
-}
-
-impl InternetCheckScheme {
-    fn from_str(value: &str) -> Result<Self, String> {
-        match value.to_lowercase().as_str() {
-            "tcp" => Ok(Self::Tcp),
-            "http" => Ok(Self::Http),
-            "https" => Ok(Self::Https),
-            other => Err(format!("unsupported scheme '{}'", other)),
-        }
-    }
-
-    fn default_port(self) -> u16 {
-        match self {
-            Self::Tcp => 0,
-            Self::Http => 80,
-            Self::Https => 443,
-        }
-    }
-}
-
-/// Connectivity check target.
-#[derive(Debug, Clone, PartialEq, Eq, Hash)]
-pub struct InternetCheckTarget {
-    pub scheme: InternetCheckScheme,
-    pub host: String,
-    pub port: u16,
-    pub original: String,
-}
-
-impl InternetCheckTarget {
-    /// Parse a connectivity check string (e.g., "https://example.com" or "tcp://1.1.1.1:53").
-    pub fn parse(value: &str) -> Result<Self, String> {
-        let trimmed = value.trim();
-        if trimmed.is_empty() {
-            return Err("check target cannot be empty".into());
-        }
-
-        let (scheme_raw, rest) = trimmed
-            .split_once("://")
-            .ok_or_else(|| "missing scheme (expected scheme://)".to_string())?;
-
-        let scheme = InternetCheckScheme::from_str(scheme_raw)?;
-        let rest = rest.trim();
-        if rest.is_empty() {
-            return Err("missing host".into());
-        }
-
-        let host_port = rest.split('/').next().unwrap_or(rest);
-        let (host, port_opt) = parse_host_port(host_port)?;
-
-        let port = match scheme {
-            InternetCheckScheme::Tcp => {
-                port_opt.ok_or_else(|| "tcp check requires explicit port".to_string())?
-            }
-            _ => port_opt.unwrap_or_else(|| scheme.default_port()),
-        };
-
-        if port == 0 {
-            return Err("invalid port".into());
-        }
-
-        Ok(Self {
-            scheme,
-            host,
-            port,
-            original: trimmed.to_string(),
-        })
-    }
-}
-
-fn parse_host_port(value: &str) -> Result<(String, Option<u16>), String> {
-    let trimmed = value.trim();
-    if trimmed.is_empty() {
-        return Err("missing host".into());
-    }
-
-    if trimmed.starts_with('[') {
-        let end = trimmed
-            .find(']')
-            .ok_or_else(|| "invalid IPv6 host".to_string())?;
-        let host = trimmed[1..end].trim();
-        if host.is_empty() {
-            return Err("missing host".into());
-        }
-        let port = if let Some(port_str) = trimmed[end + 1..].strip_prefix(':') {
-            Some(parse_port(port_str)?)
-        } else {
-            None
-        };
-        return Ok((host.to_string(), port));
-    }
-
-    let mut parts = trimmed.splitn(2, ':');
-    let host = parts.next().unwrap_or("").trim();
-    if host.is_empty() {
-        return Err("missing host".into());
-    }
-    let port = parts.next().map(parse_port).transpose()?;
-    Ok((host.to_string(), port))
-}
-
-fn parse_port(value: &str) -> Result<u16, String> {
-    let port: u16 = value
-        .trim()
-        .parse()
-        .map_err(|_| "invalid port".to_string())?;
-    if port == 0 {
-        return Err("invalid port".into());
-    }
-    Ok(port)
-}
-
-/// Service-level internet connectivity configuration.
-#[derive(Debug, Clone)]
-pub struct InternetConfig {
-    pub check: Option<InternetCheckTarget>,
-    pub interval: Duration,
-    pub timeout: Duration,
-}
-
-impl InternetConfig {
-    pub fn new(check: Option<InternetCheckTarget>, interval: Duration, timeout: Duration) -> Self {
-        Self {
-            check,
-            interval,
-            timeout,
-        }
-    }
-}
-
-/// Entry-level internet requirement.
-#[derive(Debug, Clone, Default)]
-pub struct EntryInternetPolicy {
-    pub required: bool,
-    pub check: Option<InternetCheckTarget>,
-}
@@ -6,12 +6,10 @@
 //! - Time windows, limits, and warnings
 //! - Validation with clear error messages
 
-mod internet;
 mod policy;
 mod schema;
 mod validation;
 
-pub use internet::*;
 pub use policy::*;
 pub use schema::*;
 pub use validation::*;
@@ -1,19 +1,9 @@
 //! Validated policy structures
 
-use crate::internet::{
-    DEFAULT_INTERNET_CHECK_INTERVAL, DEFAULT_INTERNET_CHECK_TIMEOUT, EntryInternetPolicy,
-    InternetCheckTarget, InternetConfig,
-};
-use crate::schema::{
-    RawConfig, RawEntry, RawEntryKind, RawInternetConfig, RawServiceConfig, RawVolumeConfig,
-    RawWarningThreshold,
-};
+use crate::schema::{RawConfig, RawEntry, RawEntryKind, RawNetworkConfig, RawEntryNetwork, RawVolumeConfig, RawServiceConfig, RawWarningThreshold};
 use crate::validation::{parse_days, parse_time};
 use shepherd_api::{EntryKind, WarningSeverity, WarningThreshold};
-use shepherd_util::{
-    DaysOfWeek, EntryId, TimeWindow, WallClock, default_data_dir, default_log_dir,
-    socket_path_without_env,
-};
+use shepherd_util::{DaysOfWeek, EntryId, TimeWindow, WallClock, default_data_dir, default_log_dir, socket_path_without_env};
 use std::path::PathBuf;
 use std::time::Duration;
 
@@ -34,6 +24,9 @@ pub struct Policy {
 
     /// Global volume restrictions
     pub volume: VolumePolicy,
+
+    /// Network connectivity policy
+    pub network: NetworkPolicy,
 }
 
 impl Policy {
@@ -60,6 +53,13 @@ impl Policy {
             .map(convert_volume_config)
             .unwrap_or_default();
 
+        let network = raw
+            .service
+            .network
+            .as_ref()
+            .map(convert_network_config)
+            .unwrap_or_default();
+
         let entries = raw
             .entries
             .into_iter()
@@ -72,6 +72,7 @@ impl Policy {
             default_warnings,
             default_max_run,
             volume: global_volume,
+            network,
         }
     }
 
@@ -91,24 +92,27 @@ pub struct ServiceConfig {
     pub capture_child_output: bool,
     /// Directory for child application logs
    pub child_log_dir: PathBuf,
-    /// Internet connectivity configuration
-    pub internet: InternetConfig,
 }
 
 impl ServiceConfig {
     fn from_raw(raw: RawServiceConfig) -> Self {
-        let log_dir = raw.log_dir.clone().unwrap_or_else(default_log_dir);
+        let log_dir = raw
+            .log_dir
+            .clone()
+            .unwrap_or_else(default_log_dir);
         let child_log_dir = raw
             .child_log_dir
             .unwrap_or_else(|| log_dir.join("sessions"));
-        let internet = convert_internet_config(raw.internet.as_ref());
         Self {
-            socket_path: raw.socket_path.unwrap_or_else(socket_path_without_env),
+            socket_path: raw
+                .socket_path
+                .unwrap_or_else(socket_path_without_env),
             log_dir,
             capture_child_output: raw.capture_child_output,
             child_log_dir,
-            data_dir: raw.data_dir.unwrap_or_else(default_data_dir),
-            internet,
+            data_dir: raw
+                .data_dir
+                .unwrap_or_else(default_data_dir),
         }
     }
 }
|
||||||
log_dir,
|
log_dir,
|
||||||
data_dir: default_data_dir(),
|
data_dir: default_data_dir(),
|
||||||
capture_child_output: false,
|
capture_child_output: false,
|
||||||
internet: InternetConfig::new(
|
|
||||||
None,
|
|
||||||
DEFAULT_INTERNET_CHECK_INTERVAL,
|
|
||||||
DEFAULT_INTERNET_CHECK_TIMEOUT,
|
|
||||||
),
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
@@ -142,9 +141,9 @@ pub struct Entry {
     pub limits: LimitsPolicy,
     pub warnings: Vec<WarningThreshold>,
     pub volume: Option<VolumePolicy>,
+    pub network: NetworkRequirement,
     pub disabled: bool,
     pub disabled_reason: Option<String>,
-    pub internet: EntryInternetPolicy,
 }
 
 impl Entry {
|
||||||
.map(|w| w.into_iter().map(convert_warning).collect())
|
.map(|w| w.into_iter().map(convert_warning).collect())
|
||||||
.unwrap_or_else(|| default_warnings.to_vec());
|
.unwrap_or_else(|| default_warnings.to_vec());
|
||||||
let volume = raw.volume.as_ref().map(convert_volume_config);
|
let volume = raw.volume.as_ref().map(convert_volume_config);
|
||||||
let internet = convert_entry_internet(raw.internet.as_ref());
|
let network = raw
|
||||||
|
.network
|
||||||
|
.as_ref()
|
||||||
|
.map(convert_entry_network)
|
||||||
|
.unwrap_or_default();
|
||||||
|
|
||||||
Self {
|
Self {
|
||||||
id: EntryId::new(raw.id),
|
id: EntryId::new(raw.id),
|
||||||
|
|
@ -183,9 +186,9 @@ impl Entry {
|
||||||
limits,
|
limits,
|
||||||
warnings,
|
warnings,
|
||||||
volume,
|
volume,
|
||||||
|
network,
|
||||||
disabled: raw.disabled,
|
disabled: raw.disabled,
|
||||||
disabled_reason: raw.disabled_reason,
|
disabled_reason: raw.disabled_reason,
|
||||||
internet,
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
@ -212,7 +215,10 @@ impl AvailabilityPolicy {
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Get remaining time in current window
|
/// Get remaining time in current window
|
||||||
pub fn remaining_in_window(&self, dt: &chrono::DateTime<chrono::Local>) -> Option<Duration> {
|
pub fn remaining_in_window(
|
||||||
|
&self,
|
||||||
|
dt: &chrono::DateTime<chrono::Local>,
|
||||||
|
) -> Option<Duration> {
|
||||||
if self.always {
|
if self.always {
|
||||||
return None; // No limit from windows
|
return None; // No limit from windows
|
||||||
}
|
}
|
||||||
|
|
@ -262,34 +268,58 @@ impl VolumePolicy {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Default connectivity check URL (Google's connectivity check service)
|
||||||
|
pub const DEFAULT_CHECK_URL: &str = "http://connectivitycheck.gstatic.com/generate_204";
|
||||||
|
|
||||||
|
/// Default interval for periodic connectivity checks (60 seconds)
|
||||||
|
pub const DEFAULT_CHECK_INTERVAL_SECS: u64 = 60;
|
||||||
|
|
||||||
|
/// Default timeout for connectivity checks (5 seconds)
|
||||||
|
pub const DEFAULT_CHECK_TIMEOUT_SECS: u64 = 5;
|
||||||
|
|
||||||
|
/// Network connectivity policy
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct NetworkPolicy {
|
||||||
|
/// URL to check for global network connectivity
|
||||||
|
pub check_url: String,
|
||||||
|
/// How often to perform periodic connectivity checks
|
||||||
|
pub check_interval: Duration,
|
||||||
|
/// Timeout for connectivity checks
|
||||||
|
pub check_timeout: Duration,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Default for NetworkPolicy {
|
||||||
|
fn default() -> Self {
|
||||||
|
Self {
|
||||||
|
check_url: DEFAULT_CHECK_URL.to_string(),
|
||||||
|
check_interval: Duration::from_secs(DEFAULT_CHECK_INTERVAL_SECS),
|
||||||
|
check_timeout: Duration::from_secs(DEFAULT_CHECK_TIMEOUT_SECS),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Network requirements for a specific entry
|
||||||
|
#[derive(Debug, Clone, Default)]
|
||||||
|
pub struct NetworkRequirement {
|
||||||
|
/// Whether this entry requires network connectivity to launch
|
||||||
|
pub required: bool,
|
||||||
|
/// Override check URL for this entry (uses global if None)
|
||||||
|
pub check_url_override: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl NetworkRequirement {
|
||||||
|
/// Get the check URL to use for this entry, given the global policy
|
||||||
|
pub fn effective_check_url<'a>(&'a self, global: &'a NetworkPolicy) -> &'a str {
|
||||||
|
self.check_url_override.as_deref().unwrap_or(&global.check_url)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Conversion helpers
|
// Conversion helpers
|
||||||
|
|
||||||
fn convert_entry_kind(raw: RawEntryKind) -> EntryKind {
|
fn convert_entry_kind(raw: RawEntryKind) -> EntryKind {
|
||||||
match raw {
|
match raw {
|
||||||
RawEntryKind::Process {
|
RawEntryKind::Process { command, args, env, cwd } => EntryKind::Process { command, args, env, cwd },
|
||||||
command,
|
RawEntryKind::Snap { snap_name, command, args, env } => EntryKind::Snap { snap_name, command, args, env },
|
||||||
args,
|
|
||||||
env,
|
|
||||||
cwd,
|
|
||||||
} => EntryKind::Process {
|
|
||||||
command,
|
|
||||||
args,
|
|
||||||
env,
|
|
||||||
cwd,
|
|
||||||
},
|
|
||||||
RawEntryKind::Snap {
|
|
||||||
snap_name,
|
|
||||||
command,
|
|
||||||
args,
|
|
||||||
env,
|
|
||||||
} => EntryKind::Snap {
|
|
||||||
snap_name,
|
|
||||||
command,
|
|
||||||
args,
|
|
||||||
env,
|
|
||||||
},
|
|
||||||
RawEntryKind::Steam { app_id, args, env } => EntryKind::Steam { app_id, args, env },
|
|
||||||
RawEntryKind::Flatpak { app_id, args, env } => EntryKind::Flatpak { app_id, args, env },
|
|
||||||
RawEntryKind::Vm { driver, args } => EntryKind::Vm { driver, args },
|
RawEntryKind::Vm { driver, args } => EntryKind::Vm { driver, args },
|
||||||
RawEntryKind::Media { library_id, args } => EntryKind::Media { library_id, args },
|
RawEntryKind::Media { library_id, args } => EntryKind::Media { library_id, args },
|
||||||
RawEntryKind::Custom { type_name, payload } => EntryKind::Custom {
|
RawEntryKind::Custom { type_name, payload } => EntryKind::Custom {
|
||||||
|
|
@@ -316,31 +346,19 @@ fn convert_volume_config(raw: &RawVolumeConfig) -> VolumePolicy {
     }
 }
 
-fn convert_internet_config(raw: Option<&RawInternetConfig>) -> InternetConfig {
-    let check = raw
-        .and_then(|cfg| cfg.check.as_ref())
-        .and_then(|value| InternetCheckTarget::parse(value).ok());
-
-    let interval = raw
-        .and_then(|cfg| cfg.interval_seconds)
-        .map(Duration::from_secs)
-        .unwrap_or(DEFAULT_INTERNET_CHECK_INTERVAL);
-
-    let timeout = raw
-        .and_then(|cfg| cfg.timeout_ms)
-        .map(Duration::from_millis)
-        .unwrap_or(DEFAULT_INTERNET_CHECK_TIMEOUT);
-
-    InternetConfig::new(check, interval, timeout)
+fn convert_network_config(raw: &RawNetworkConfig) -> NetworkPolicy {
+    NetworkPolicy {
+        check_url: raw.check_url.clone().unwrap_or_else(|| DEFAULT_CHECK_URL.to_string()),
+        check_interval: Duration::from_secs(raw.check_interval_seconds.unwrap_or(DEFAULT_CHECK_INTERVAL_SECS)),
+        check_timeout: Duration::from_secs(raw.check_timeout_seconds.unwrap_or(DEFAULT_CHECK_TIMEOUT_SECS)),
+    }
 }
 
-fn convert_entry_internet(raw: Option<&crate::schema::RawEntryInternet>) -> EntryInternetPolicy {
-    let required = raw.map(|cfg| cfg.required).unwrap_or(false);
-    let check = raw
-        .and_then(|cfg| cfg.check.as_ref())
-        .and_then(|value| InternetCheckTarget::parse(value).ok());
-
-    EntryInternetPolicy { required, check }
+fn convert_entry_network(raw: &RawEntryNetwork) -> NetworkRequirement {
+    NetworkRequirement {
+        required: raw.required,
+        check_url_override: raw.check_url.clone(),
+    }
 }
 
 fn convert_time_window(raw: crate::schema::RawTimeWindow) -> TimeWindow {
@@ -364,10 +382,7 @@ fn seconds_to_duration_or_unlimited(secs: u64) -> Option<Duration> {
     }
 }
 
-fn convert_limits(
-    raw: crate::schema::RawLimits,
-    default_max_run: Option<Duration>,
-) -> LimitsPolicy {
+fn convert_limits(raw: crate::schema::RawLimits, default_max_run: Option<Duration>) -> LimitsPolicy {
     LimitsPolicy {
         max_run: raw
             .max_run_seconds
@@ -49,9 +49,25 @@ pub struct RawServiceConfig {
     #[serde(default)]
     pub volume: Option<RawVolumeConfig>,
 
-    /// Internet connectivity check settings
+    /// Network connectivity settings
     #[serde(default)]
-    pub internet: Option<RawInternetConfig>,
+    pub network: Option<RawNetworkConfig>,
+}
+
+/// Network connectivity configuration
+#[derive(Debug, Clone, Default, Deserialize, Serialize)]
+pub struct RawNetworkConfig {
+    /// URL to check for global network connectivity
+    /// Default: "http://connectivitycheck.gstatic.com/generate_204"
+    pub check_url: Option<String>,
+
+    /// How often to perform periodic connectivity checks (in seconds)
+    /// Default: 30
+    pub check_interval_seconds: Option<u64>,
+
+    /// Timeout for connectivity checks (in seconds)
+    /// Default: 5
+    pub check_timeout_seconds: Option<u64>,
 }
 
 /// Raw entry definition
@@ -85,16 +101,30 @@ pub struct RawEntry {
     #[serde(default)]
     pub volume: Option<RawVolumeConfig>,
 
+    /// Network requirements for this entry
+    #[serde(default)]
+    pub network: Option<RawEntryNetwork>,
+
     /// Explicitly disabled
     #[serde(default)]
     pub disabled: bool,
 
     /// Reason for disabling
     pub disabled_reason: Option<String>,
-    /// Internet requirement for this entry
+}
+
+/// Network requirements for an entry
+#[derive(Debug, Clone, Default, Deserialize, Serialize)]
+pub struct RawEntryNetwork {
+    /// Whether this entry requires network connectivity to launch
+    /// If true, the entry will not be available if the network check fails
     #[serde(default)]
-    pub internet: Option<RawEntryInternet>,
+    pub required: bool,
+
+    /// Override check URL for this entry
+    /// If specified, this URL will be checked instead of the global check_url
+    /// This is useful for entries that need specific services (e.g., Google, Microsoft)
+    pub check_url: Option<String>,
 }
 
 /// Raw entry kind
@@ -124,28 +154,6 @@ pub enum RawEntryKind {
         #[serde(default)]
         env: HashMap<String, String>,
     },
-    /// Steam game launched via the Steam snap (Linux)
-    Steam {
-        /// Steam App ID (e.g., 504230 for Celeste)
-        app_id: u32,
-        /// Additional command-line arguments passed to Steam
-        #[serde(default)]
-        args: Vec<String>,
-        /// Additional environment variables
-        #[serde(default)]
-        env: HashMap<String, String>,
-    },
-    /// Flatpak application - uses systemd scope-based process management
-    Flatpak {
-        /// The Flatpak application ID (e.g., "org.prismlauncher.PrismLauncher")
-        app_id: String,
-        /// Additional command-line arguments
-        #[serde(default)]
-        args: Vec<String>,
-        /// Additional environment variables
-        #[serde(default)]
-        env: HashMap<String, String>,
-    },
     Vm {
         driver: String,
         #[serde(default)]
@@ -223,30 +231,6 @@ pub struct RawWarningThreshold {
     pub message: Option<String>,
 }
 
-/// Internet connectivity check configuration
-#[derive(Debug, Clone, Default, Deserialize, Serialize)]
-pub struct RawInternetConfig {
-    /// Connectivity check target (e.g., "https://example.com" or "tcp://1.1.1.1:53")
-    pub check: Option<String>,
-
-    /// Interval between checks (seconds)
-    pub interval_seconds: Option<u64>,
-
-    /// Timeout per check (milliseconds)
-    pub timeout_ms: Option<u64>,
-}
-
-/// Per-entry internet requirement
-#[derive(Debug, Clone, Default, Deserialize, Serialize)]
-pub struct RawEntryInternet {
-    /// Whether this entry requires internet connectivity
-    #[serde(default)]
-    pub required: bool,
-
-    /// Override connectivity check target for this entry
-    pub check: Option<String>,
-}
-
 fn default_severity() -> String {
     "warn".to_string()
 }
@@ -1,6 +1,5 @@
 //! Configuration validation
 
-use crate::internet::InternetCheckTarget;
 use crate::schema::{RawConfig, RawDays, RawEntry, RawEntryKind, RawTimeWindow};
 use std::collections::HashSet;
 use thiserror::Error;
@@ -35,34 +34,6 @@ pub enum ValidationError {
 pub fn validate_config(config: &RawConfig) -> Vec<ValidationError> {
     let mut errors = Vec::new();
 
-    // Validate global internet check (if set)
-    if let Some(internet) = &config.service.internet
-        && let Some(check) = &internet.check
-        && let Err(e) = InternetCheckTarget::parse(check)
-    {
-        errors.push(ValidationError::GlobalError(format!(
-            "Invalid internet check '{}': {}",
-            check, e
-        )));
-    }
-
-    if let Some(internet) = &config.service.internet {
-        if let Some(interval) = internet.interval_seconds
-            && interval == 0
-        {
-            errors.push(ValidationError::GlobalError(
-                "Internet check interval_seconds must be > 0".into(),
-            ));
-        }
-        if let Some(timeout) = internet.timeout_ms
-            && timeout == 0
-        {
-            errors.push(ValidationError::GlobalError(
-                "Internet check timeout_ms must be > 0".into(),
-            ));
-        }
-    }
-
     // Check for duplicate entry IDs
     let mut seen_ids = HashSet::new();
     for entry in &config.entries {
@@ -100,22 +71,6 @@ fn validate_entry(entry: &RawEntry, config: &RawConfig) -> Vec<ValidationError>
                 });
             }
         }
-        RawEntryKind::Steam { app_id, .. } => {
-            if *app_id == 0 {
-                errors.push(ValidationError::EntryError {
-                    entry_id: entry.id.clone(),
-                    message: "app_id must be > 0".into(),
-                });
-            }
-        }
-        RawEntryKind::Flatpak { app_id, .. } => {
-            if app_id.is_empty() {
-                errors.push(ValidationError::EntryError {
-                    entry_id: entry.id.clone(),
-                    message: "app_id cannot be empty".into(),
-                });
-            }
-        }
         RawEntryKind::Vm { driver, .. } => {
             if driver.is_empty() {
                 errors.push(ValidationError::EntryError {
@@ -159,48 +114,19 @@ fn validate_entry(entry: &RawEntry, config: &RawConfig) -> Vec<ValidationError>
 
     // Only validate warnings if max_run is Some and not 0 (unlimited)
     if let (Some(warnings), Some(max_run)) = (&entry.warnings, max_run)
-        && max_run > 0
-    {
+        && max_run > 0 {
         for warning in warnings {
             if warning.seconds_before >= max_run {
                 errors.push(ValidationError::WarningExceedsMaxRun {
                     entry_id: entry.id.clone(),
                     seconds: warning.seconds_before,
                     max_run,
                 });
            }
         }
         // Note: warnings are ignored for unlimited entries (max_run = 0)
     }
 
-    // Validate internet requirements
-    if let Some(internet) = &entry.internet {
-        if let Some(check) = &internet.check
-            && let Err(e) = InternetCheckTarget::parse(check)
-        {
-            errors.push(ValidationError::EntryError {
-                entry_id: entry.id.clone(),
-                message: format!("Invalid internet check '{}': {}", check, e),
-            });
-        }
-
-        if internet.required {
-            let has_check = internet.check.is_some()
-                || config
-                    .service
-                    .internet
-                    .as_ref()
-                    .and_then(|cfg| cfg.check.as_ref())
-                    .is_some();
-            if !has_check {
-                errors.push(ValidationError::EntryError {
-                    entry_id: entry.id.clone(),
-                    message: "internet is required but no check is configured (set service.internet.check or entries.internet.check)".into(),
-                });
-            }
-        }
-    }
-
     errors
 }
 
@@ -241,8 +167,12 @@ pub fn parse_time(s: &str) -> Result<(u8, u8), String> {
         return Err("Expected HH:MM format".into());
     }
 
-    let hour: u8 = parts[0].parse().map_err(|_| "Invalid hour".to_string())?;
-    let minute: u8 = parts[1].parse().map_err(|_| "Invalid minute".to_string())?;
+    let hour: u8 = parts[0]
+        .parse()
+        .map_err(|_| "Invalid hour".to_string())?;
+    let minute: u8 = parts[1]
+        .parse()
+        .map_err(|_| "Invalid minute".to_string())?;
 
     if hour >= 24 {
         return Err("Hour must be 0-23".into());
@@ -300,23 +230,12 @@ mod tests {
 
     #[test]
     fn test_parse_days() {
-        assert_eq!(
-            parse_days(&RawDays::Preset("weekdays".into())).unwrap(),
-            0x1F
-        );
-        assert_eq!(
-            parse_days(&RawDays::Preset("weekends".into())).unwrap(),
-            0x60
-        );
+        assert_eq!(parse_days(&RawDays::Preset("weekdays".into())).unwrap(), 0x1F);
+        assert_eq!(parse_days(&RawDays::Preset("weekends".into())).unwrap(), 0x60);
         assert_eq!(parse_days(&RawDays::Preset("all".into())).unwrap(), 0x7F);
 
         assert_eq!(
-            parse_days(&RawDays::List(vec![
-                "mon".into(),
-                "wed".into(),
-                "fri".into()
-            ]))
-            .unwrap(),
+            parse_days(&RawDays::List(vec!["mon".into(), "wed".into(), "fri".into()])).unwrap(),
             0b10101
         );
     }
@@ -341,9 +260,9 @@ mod tests {
                     limits: None,
                     warnings: None,
                     volume: None,
+                    network: None,
                     disabled: false,
                     disabled_reason: None,
-                    internet: None,
                 },
                 RawEntry {
                     id: "game".into(),
@ -359,18 +278,14 @@ mod tests {
|
||||||
limits: None,
|
limits: None,
|
||||||
warnings: None,
|
warnings: None,
|
||||||
volume: None,
|
volume: None,
|
||||||
|
network: None,
|
||||||
disabled: false,
|
disabled: false,
|
||||||
disabled_reason: None,
|
disabled_reason: None,
|
||||||
internet: None,
|
|
||||||
},
|
},
|
||||||
],
|
],
|
||||||
};
|
};
|
||||||
|
|
||||||
let errors = validate_config(&config);
|
let errors = validate_config(&config);
|
||||||
assert!(
|
assert!(errors.iter().any(|e| matches!(e, ValidationError::DuplicateEntryId(_))));
|
||||||
errors
|
|
||||||
.iter()
|
|
||||||
.any(|e| matches!(e, ValidationError::DuplicateEntryId(_)))
|
|
||||||
);
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
||||||
@@ -2,13 +2,14 @@

 use chrono::{DateTime, Local};
 use shepherd_api::{
-    API_VERSION, EntryView, ReasonCode, ServiceStateSnapshot, SessionEndReason, WarningSeverity,
+    ServiceStateSnapshot, EntryView, ReasonCode, SessionEndReason,
+    WarningSeverity, API_VERSION,
 };
-use shepherd_config::{Entry, InternetCheckTarget, Policy};
+use shepherd_config::{Entry, Policy};
 use shepherd_host_api::{HostCapabilities, HostSessionHandle};
 use shepherd_store::{AuditEvent, AuditEventType, Store};
 use shepherd_util::{EntryId, MonotonicInstant, SessionId};
-use std::collections::{HashMap, HashSet};
+use std::collections::HashSet;
 use std::sync::Arc;
 use std::time::Duration;
 use tracing::{debug, info};
@@ -37,13 +38,15 @@ pub struct CoreEngine {
     current_session: Option<ActiveSession>,
     /// Tracks which entries were enabled on the last tick, to detect availability changes
     last_availability_set: HashSet<EntryId>,
-    /// Latest known internet connectivity status per check target
-    internet_status: HashMap<InternetCheckTarget, bool>,
 }

 impl CoreEngine {
     /// Create a new core engine
-    pub fn new(policy: Policy, store: Arc<dyn Store>, capabilities: HostCapabilities) -> Self {
+    pub fn new(
+        policy: Policy,
+        store: Arc<dyn Store>,
+        capabilities: HostCapabilities,
+    ) -> Self {
         info!(
             entry_count = policy.entries.len(),
             "Core engine initialized"

@@ -60,7 +63,6 @@ impl CoreEngine {
             capabilities,
             current_session: None,
             last_availability_set: HashSet::new(),
-            internet_status: HashMap::new(),
         }
     }
@@ -74,27 +76,15 @@ impl CoreEngine {
         let entry_count = policy.entries.len();
         self.policy = policy;

-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::PolicyLoaded {
-                entry_count,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::PolicyLoaded {
+            entry_count,
+        }));

         info!(entry_count, "Policy reloaded");

         CoreEvent::PolicyReloaded { entry_count }
     }

-    /// Update internet connectivity status for a check target.
-    pub fn set_internet_status(&mut self, target: InternetCheckTarget, available: bool) -> bool {
-        let previous = self.internet_status.insert(target, available);
-        previous != Some(available)
-    }
-
-    fn internet_available(&self, target: &InternetCheckTarget) -> bool {
-        self.internet_status.get(target).copied().unwrap_or(false)
-    }
-
     /// List all entries with availability status
     pub fn list_entries(&self, now: DateTime<Local>) -> Vec<EntryView> {
         self.policy

@@ -132,26 +122,6 @@ impl CoreEngine {
             });
         }

-        // Check internet requirement
-        if entry.internet.required {
-            let check = entry.internet.check.as_ref().or(self
-                .policy
-                .service
-                .internet
-                .check
-                .as_ref());
-            let available = check
-                .map(|target| self.internet_available(target))
-                .unwrap_or(false);
-
-            if !available {
-                enabled = false;
-                reasons.push(ReasonCode::InternetUnavailable {
-                    check: check.map(|target| target.original.clone()),
-                });
-            }
-        }
-
         // Check if another session is active
         if let Some(session) = &self.current_session {
             enabled = false;
@@ -163,23 +133,19 @@ impl CoreEngine {

         // Check cooldown
         if let Ok(Some(until)) = self.store.get_cooldown_until(&entry.id)
-            && until > now
-        {
-            enabled = false;
-            reasons.push(ReasonCode::CooldownActive {
-                available_at: until,
-            });
-        }
+            && until > now {
+            enabled = false;
+            reasons.push(ReasonCode::CooldownActive { available_at: until });
+        }

         // Check daily quota
         if let Some(quota) = entry.limits.daily_quota {
             let today = now.date_naive();
             if let Ok(used) = self.store.get_usage(&entry.id, today)
-                && used >= quota
-            {
-                enabled = false;
-                reasons.push(ReasonCode::QuotaExhausted { used, quota });
-            }
+                && used >= quota {
+                enabled = false;
+                reasons.push(ReasonCode::QuotaExhausted { used, quota });
+            }
         }

         // Calculate max run if enabled (None when disabled, Some(None) flattened for unlimited)
@@ -229,7 +195,11 @@ impl CoreEngine {
     }

     /// Request to launch an entry
-    pub fn request_launch(&self, entry_id: &EntryId, now: DateTime<Local>) -> LaunchDecision {
+    pub fn request_launch(
+        &self,
+        entry_id: &EntryId,
+        now: DateTime<Local>,
+    ) -> LaunchDecision {
         // Find entry
         let entry = match self.policy.get_entry(entry_id) {
             Some(e) => e,

@@ -247,12 +217,10 @@ impl CoreEngine {

         if !view.enabled {
             // Log denial
-            let _ = self
-                .store
-                .append_audit(AuditEvent::new(AuditEventType::LaunchDenied {
-                    entry_id: entry_id.clone(),
-                    reasons: view.reasons.iter().map(|r| format!("{:?}", r)).collect(),
-                }));
+            let _ = self.store.append_audit(AuditEvent::new(AuditEventType::LaunchDenied {
+                entry_id: entry_id.clone(),
+                reasons: view.reasons.iter().map(|r| format!("{:?}", r)).collect(),
+            }));

             return LaunchDecision::Denied {
                 reasons: view.reasons,

@@ -302,14 +270,12 @@ impl CoreEngine {
         };

         // Log to audit
-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::SessionStarted {
-                session_id: session.plan.session_id.clone(),
-                entry_id: session.plan.entry_id.clone(),
-                label: session.plan.label.clone(),
-                deadline: session.deadline,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::SessionStarted {
+            session_id: session.plan.session_id.clone(),
+            entry_id: session.plan.entry_id.clone(),
+            label: session.plan.label.clone(),
+            deadline: session.deadline,
+        }));

         if let Some(deadline) = session.deadline {
             info!(
@@ -386,12 +352,10 @@ impl CoreEngine {
         session.mark_warning_issued(threshold);

         // Log to audit
-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::WarningIssued {
-                session_id: session.plan.session_id.clone(),
-                threshold_seconds: threshold,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::WarningIssued {
+            session_id: session.plan.session_id.clone(),
+            threshold_seconds: threshold,
+        }));

         info!(
             session_id = %session.plan.session_id,

@@ -447,27 +411,22 @@ impl CoreEngine {

         // Update usage accounting
         let today = now.date_naive();
-        let _ = self
-            .store
-            .add_usage(&session.plan.entry_id, today, duration);
+        let _ = self.store.add_usage(&session.plan.entry_id, today, duration);

         // Set cooldown if configured
         if let Some(entry) = self.policy.get_entry(&session.plan.entry_id)
-            && let Some(cooldown) = entry.limits.cooldown
-        {
-            let until = now + chrono::Duration::from_std(cooldown).unwrap();
-            let _ = self.store.set_cooldown_until(&session.plan.entry_id, until);
-        }
+            && let Some(cooldown) = entry.limits.cooldown {
+            let until = now + chrono::Duration::from_std(cooldown).unwrap();
+            let _ = self.store.set_cooldown_until(&session.plan.entry_id, until);
+        }

         // Log to audit
-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::SessionEnded {
-                session_id: session.plan.session_id.clone(),
-                entry_id: session.plan.entry_id.clone(),
-                reason: reason.clone(),
-                duration,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::SessionEnded {
+            session_id: session.plan.session_id.clone(),
+            entry_id: session.plan.entry_id.clone(),
+            reason: reason.clone(),
+            duration,
+        }));

         info!(
             session_id = %session.plan.session_id,

@@ -501,27 +460,22 @@ impl CoreEngine {

         // Update usage accounting
         let today = now.date_naive();
-        let _ = self
-            .store
-            .add_usage(&session.plan.entry_id, today, duration);
+        let _ = self.store.add_usage(&session.plan.entry_id, today, duration);

         // Set cooldown if configured
         if let Some(entry) = self.policy.get_entry(&session.plan.entry_id)
-            && let Some(cooldown) = entry.limits.cooldown
-        {
-            let until = now + chrono::Duration::from_std(cooldown).unwrap();
-            let _ = self.store.set_cooldown_until(&session.plan.entry_id, until);
-        }
+            && let Some(cooldown) = entry.limits.cooldown {
+            let until = now + chrono::Duration::from_std(cooldown).unwrap();
+            let _ = self.store.set_cooldown_until(&session.plan.entry_id, until);
+        }

         // Log to audit
-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::SessionEnded {
-                session_id: session.plan.session_id.clone(),
-                entry_id: session.plan.entry_id.clone(),
-                reason: reason.clone(),
-                duration,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::SessionEnded {
+            session_id: session.plan.session_id.clone(),
+            entry_id: session.plan.entry_id.clone(),
+            reason: reason.clone(),
+            duration,
+        }));

         info!(
             session_id = %session.plan.session_id,
@@ -539,10 +493,9 @@ impl CoreEngine {

     /// Get current service state snapshot
     pub fn get_state(&self) -> ServiceStateSnapshot {
-        let current_session = self
-            .current_session
-            .as_ref()
-            .map(|s| s.to_session_info(MonotonicInstant::now()));
+        let current_session = self.current_session.as_ref().map(|s| {
+            s.to_session_info(MonotonicInstant::now())
+        });

         // Build entry views for the snapshot
         let entries = self.list_entries(shepherd_util::now());

@@ -553,6 +506,8 @@ impl CoreEngine {
             current_session,
             entry_count: self.policy.entries.len(),
             entries,
+            // Connectivity is populated by the daemon, not the core engine
+            connectivity: Default::default(),
         }
     }
@@ -592,13 +547,11 @@ impl CoreEngine {
         session.deadline = Some(new_deadline);

         // Log to audit
-        let _ = self
-            .store
-            .append_audit(AuditEvent::new(AuditEventType::SessionExtended {
-                session_id: session.plan.session_id.clone(),
-                extended_by: by,
-                new_deadline,
-            }));
+        let _ = self.store.append_audit(AuditEvent::new(AuditEventType::SessionExtended {
+            session_id: session.plan.session_id.clone(),
+            extended_by: by,
+            new_deadline,
+        }));

         info!(
             session_id = %session.plan.session_id,

@@ -614,8 +567,8 @@ impl CoreEngine {
 #[cfg(test)]
 mod tests {
     use super::*;
+    use shepherd_config::{AvailabilityPolicy, Entry, LimitsPolicy, NetworkRequirement};
     use shepherd_api::EntryKind;
-    use shepherd_config::{AvailabilityPolicy, Entry, LimitsPolicy};
     use shepherd_store::SqliteStore;
     use std::collections::HashMap;
@@ -643,13 +596,14 @@ mod tests {
            },
            warnings: vec![],
            volume: None,
+           network: NetworkRequirement::default(),
            disabled: false,
            disabled_reason: None,
-           internet: Default::default(),
        }],
        default_warnings: vec![],
        default_max_run: Some(Duration::from_secs(3600)),
        volume: Default::default(),
+       network: Default::default(),
    }
 }

@@ -727,14 +681,15 @@ mod tests {
                message_template: Some("1 minute left".into()),
            }],
            volume: None,
+           network: NetworkRequirement::default(),
            disabled: false,
            disabled_reason: None,
-           internet: Default::default(),
        }],
        service: Default::default(),
        default_warnings: vec![],
        default_max_run: Some(Duration::from_secs(3600)),
        volume: Default::default(),
+       network: Default::default(),
    };

        let store = Arc::new(SqliteStore::in_memory().unwrap());
@@ -753,34 +708,19 @@ mod tests {
        // No warnings initially (first tick may emit AvailabilitySetChanged)
        let events = engine.tick(now_mono, now);
        // Filter to just warning events for this test
-       let warning_events: Vec<_> = events
-           .iter()
-           .filter(|e| matches!(e, CoreEvent::Warning { .. }))
-           .collect();
+       let warning_events: Vec<_> = events.iter().filter(|e| matches!(e, CoreEvent::Warning { .. })).collect();
        assert!(warning_events.is_empty());

        // At 70 seconds (10 seconds past warning threshold), warning should fire
        let later = now_mono + Duration::from_secs(70);
        let events = engine.tick(later, now);
-       let warning_events: Vec<_> = events
-           .iter()
-           .filter(|e| matches!(e, CoreEvent::Warning { .. }))
-           .collect();
+       let warning_events: Vec<_> = events.iter().filter(|e| matches!(e, CoreEvent::Warning { .. })).collect();
        assert_eq!(warning_events.len(), 1);
-       assert!(matches!(
-           warning_events[0],
-           CoreEvent::Warning {
-               threshold_seconds: 60,
-               ..
-           }
-       ));
+       assert!(matches!(warning_events[0], CoreEvent::Warning { threshold_seconds: 60, .. }));

        // Warning shouldn't fire twice
        let events = engine.tick(later, now);
-       let warning_events: Vec<_> = events
-           .iter()
-           .filter(|e| matches!(e, CoreEvent::Warning { .. }))
-           .collect();
+       let warning_events: Vec<_> = events.iter().filter(|e| matches!(e, CoreEvent::Warning { .. })).collect();
        assert!(warning_events.is_empty());
    }

@@ -808,14 +748,15 @@ mod tests {
            },
            warnings: vec![],
            volume: None,
+           network: NetworkRequirement::default(),
            disabled: false,
            disabled_reason: None,
-           internet: Default::default(),
        }],
        service: Default::default(),
        default_warnings: vec![],
        default_max_run: Some(Duration::from_secs(3600)),
        volume: Default::default(),
+       network: Default::default(),
    };

        let store = Arc::new(SqliteStore::in_memory().unwrap());

@@ -835,10 +776,7 @@ mod tests {
        let later = now_mono + Duration::from_secs(61);
        let events = engine.tick(later, now);
        // Filter to just expiry events for this test
-       let expiry_events: Vec<_> = events
-           .iter()
-           .filter(|e| matches!(e, CoreEvent::ExpireDue { .. }))
-           .collect();
+       let expiry_events: Vec<_> = events.iter().filter(|e| matches!(e, CoreEvent::ExpireDue { .. })).collect();
        assert_eq!(expiry_events.len(), 1);
        assert!(matches!(expiry_events[0], CoreEvent::ExpireDue { .. }));
    }
@@ -30,7 +30,9 @@ pub enum CoreEvent {
     },

     /// Session is expiring (termination initiated)
-    ExpireDue { session_id: SessionId },
+    ExpireDue {
+        session_id: SessionId,
+    },

     /// Session has ended
     SessionEnded {

@@ -41,8 +43,13 @@ pub enum CoreEvent {
     },

     /// Entry availability changed
-    EntryAvailabilityChanged { entry_id: EntryId, enabled: bool },
+    EntryAvailabilityChanged {
+        entry_id: EntryId,
+        enabled: bool,
+    },

     /// Policy was reloaded
-    PolicyReloaded { entry_count: usize },
+    PolicyReloaded {
+        entry_count: usize,
+    },
 }
@@ -29,7 +29,8 @@ impl SessionPlan {
             .iter()
             .filter(|w| Duration::from_secs(w.seconds_before) < max_duration)
             .map(|w| {
-                let trigger_after = max_duration - Duration::from_secs(w.seconds_before);
+                let trigger_after =
+                    max_duration - Duration::from_secs(w.seconds_before);
                 (w.seconds_before, trigger_after)
             })
             .collect()

@@ -66,7 +67,11 @@ pub struct ActiveSession {

 impl ActiveSession {
     /// Create a new session from an approved plan
-    pub fn new(plan: SessionPlan, now: DateTime<Local>, now_mono: MonotonicInstant) -> Self {
+    pub fn new(
+        plan: SessionPlan,
+        now: DateTime<Local>,
+        now_mono: MonotonicInstant,
+    ) -> Self {
         let (deadline, deadline_mono) = match plan.max_duration {
             Some(max_dur) => {
                 let deadline = now + chrono::Duration::from_std(max_dur).unwrap();

@@ -96,8 +101,7 @@ impl ActiveSession {

     /// Get time remaining using monotonic time. None means unlimited.
     pub fn time_remaining(&self, now_mono: MonotonicInstant) -> Option<Duration> {
-        self.deadline_mono
-            .map(|deadline| deadline.saturating_duration_until(now_mono))
+        self.deadline_mono.map(|deadline| deadline.saturating_duration_until(now_mono))
     }

     /// Check if session is expired (never true for unlimited sessions)

@@ -216,10 +220,7 @@ mod tests {

         assert_eq!(session.state, SessionState::Launching);
         assert!(session.warnings_issued.is_empty());
-        assert_eq!(
-            session.time_remaining(now_mono),
-            Some(Duration::from_secs(300))
-        );
+        assert_eq!(session.time_remaining(now_mono), Some(Duration::from_secs(300)));
     }

     #[test]
@@ -30,7 +30,6 @@ let caps = host.capabilities();
 // Check supported entry kinds
 if caps.supports_kind(EntryKindTag::Process) { /* ... */ }
 if caps.supports_kind(EntryKindTag::Snap) { /* ... */ }
-if caps.supports_kind(EntryKindTag::Steam) { /* ... */ }

 // Check enforcement capabilities
 if caps.can_kill_forcefully { /* Can use SIGKILL/TerminateProcess */ }

@@ -59,8 +59,6 @@ impl HostCapabilities {
         let mut spawn_kinds = HashSet::new();
         spawn_kinds.insert(EntryKindTag::Process);
         spawn_kinds.insert(EntryKindTag::Snap);
-        spawn_kinds.insert(EntryKindTag::Steam);
-        spawn_kinds.insert(EntryKindTag::Flatpak);
         spawn_kinds.insert(EntryKindTag::Vm);
         spawn_kinds.insert(EntryKindTag::Media);
@@ -18,10 +18,7 @@ pub struct HostSessionHandle {

 impl HostSessionHandle {
     pub fn new(session_id: SessionId, payload: HostHandlePayload) -> Self {
-        Self {
-            session_id,
-            payload,
-        }
+        Self { session_id, payload }
     }

     pub fn payload(&self) -> &HostHandlePayload {

@@ -34,16 +31,27 @@ impl HostSessionHandle {
 #[serde(tag = "platform", rename_all = "snake_case")]
 pub enum HostHandlePayload {
     /// Linux: process group ID
-    Linux { pid: u32, pgid: u32 },
+    Linux {
+        pid: u32,
+        pgid: u32,
+    },

     /// Windows: job object handle (serialized as name/id)
-    Windows { job_name: String, process_id: u32 },
+    Windows {
+        job_name: String,
+        process_id: u32,
+    },

     /// macOS: bundle or process identifier
-    MacOs { pid: u32, bundle_id: Option<String> },
+    MacOs {
+        pid: u32,
+        bundle_id: Option<String>,
+    },

     /// Mock for testing
-    Mock { id: u64 },
+    Mock {
+        id: u64,
+    },
 }

 impl HostHandlePayload {

@@ -109,10 +117,7 @@ mod tests {
     fn handle_serialization() {
         let handle = HostSessionHandle::new(
             SessionId::new(),
-            HostHandlePayload::Linux {
-                pid: 1234,
-                pgid: 1234,
-            },
+            HostHandlePayload::Linux { pid: 1234, pgid: 1234 },
         );

         let json = serde_json::to_string(&handle).unwrap();
@@ -10,8 +10,8 @@ use std::time::Duration;
 use tokio::sync::mpsc;

 use crate::{
-    ExitStatus, HostAdapter, HostCapabilities, HostError, HostEvent, HostHandlePayload, HostResult,
-    HostSessionHandle, SpawnOptions, StopMode,
+    ExitStatus, HostAdapter, HostCapabilities, HostError, HostEvent, HostHandlePayload,
+    HostResult, HostSessionHandle, SpawnOptions, StopMode,
 };

 /// Mock session state for testing

@@ -79,9 +79,7 @@ impl MockHost {
         if let Some(session) = sessions.values().find(|s| &s.session_id == session_id) {
             let handle = HostSessionHandle::new(
                 session.session_id.clone(),
-                HostHandlePayload::Mock {
-                    id: session.mock_id,
-                },
+                HostHandlePayload::Mock { id: session.mock_id },
             );
             let _ = self.event_tx.send(HostEvent::Exited { handle, status });
         }

@@ -124,13 +122,12 @@ impl HostAdapter for MockHost {
             exit_delay: *self.auto_exit_delay.lock().unwrap(),
         };

-        self.sessions
-            .lock()
-            .unwrap()
-            .insert(mock_id, session.clone());
+        self.sessions.lock().unwrap().insert(mock_id, session.clone());

-        let handle =
-            HostSessionHandle::new(session_id.clone(), HostHandlePayload::Mock { id: mock_id });
+        let handle = HostSessionHandle::new(
+            session_id.clone(),
+            HostHandlePayload::Mock { id: mock_id },
+        );

         // If auto-exit is configured, spawn a task to send exit event
         if let Some(delay) = session.exit_delay {
@@ -82,7 +82,9 @@ pub enum HostEvent {
     },

     /// Window is ready (for UI notification)
-    WindowReady { handle: HostSessionHandle },
+    WindowReady {
+        handle: HostSessionHandle,
+    },

     /// Spawn failed after handle was created
     SpawnFailed {

@@ -139,8 +141,6 @@ mod tests {
     #[test]
     fn stop_mode_default() {
         let mode = StopMode::default();
-        assert!(
-            matches!(mode, StopMode::Graceful { timeout } if timeout == Duration::from_secs(5))
-        );
+        assert!(matches!(mode, StopMode::Graceful { timeout } if timeout == Duration::from_secs(5)));
     }
 }
@@ -17,6 +17,11 @@ nix = { workspace = true }
 async-trait = "0.1"
 dirs = "5.0"
 shell-escape = "0.1"
+chrono = { workspace = true }
+reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
+netlink-sys = "0.8"
+netlink-packet-core = "0.7"
+netlink-packet-route = "0.21"

 [dev-dependencies]
 tempfile = { workspace = true }
|
|
@@ -90,21 +90,6 @@ let entry_kind = EntryKind::Snap {
 let handle = host.spawn(session_id, &entry_kind, options).await?;
 ```

-### Spawning Steam Games
-
-Steam games are launched via the Steam snap:
-
-```rust
-let entry_kind = EntryKind::Steam {
-    app_id: 504230,
-    args: vec![],
-    env: Default::default(),
-};
-
-// Spawns via: snap run steam steam://rungameid/504230
-let handle = host.spawn(session_id, &entry_kind, options).await?;
-```
-
 ### Stopping Sessions

 ```rust
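The documentation section removed above described how a Steam entry turned into a `snap run steam steam://rungameid/<app_id>` command line. The string-building it documented can be sketched in isolation (`steam_argv` is a hypothetical helper for illustration, not part of the crate):

```rust
// Build the argv the removed Steam path used: launch through the Steam snap
// with a steam:// run URL, then append any extra arguments.
fn steam_argv(app_id: u32, extra_args: &[String]) -> Vec<String> {
    let mut argv = vec![
        "snap".to_string(),
        "run".to_string(),
        "steam".to_string(),
        format!("steam://rungameid/{}", app_id),
    ];
    argv.extend(extra_args.iter().cloned());
    argv
}

fn main() {
    // 504230 is the app ID used in the removed docs example.
    let argv = steam_argv(504230, &[]);
    assert_eq!(argv.join(" "), "snap run steam steam://rungameid/504230");
    println!("{}", argv.join(" "));
}
```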
@@ -3,20 +3,17 @@
 use async_trait::async_trait;
 use shepherd_api::EntryKind;
 use shepherd_host_api::{
-    ExitStatus, HostAdapter, HostCapabilities, HostError, HostEvent, HostHandlePayload, HostResult,
-    HostSessionHandle, SpawnOptions, StopMode,
+    HostAdapter, HostCapabilities, HostError, HostEvent, HostHandlePayload,
+    HostResult, HostSessionHandle, SpawnOptions, StopMode,
 };
 use shepherd_util::SessionId;
-use std::collections::{HashMap, HashSet};
+use std::collections::HashMap;
 use std::sync::{Arc, Mutex};
 use std::time::Duration;
 use tokio::sync::mpsc;
 use tracing::{info, warn};

-use crate::process::{
-    ManagedProcess, find_steam_game_pids, init, kill_by_command, kill_flatpak_cgroup,
-    kill_snap_cgroup, kill_steam_game_processes,
-};
+use crate::process::{init, kill_by_command, kill_snap_cgroup, ManagedProcess};

 /// Expand `~` at the beginning of a path to the user's home directory
 fn expand_tilde(path: &str) -> String {
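The hunk ends at the `expand_tilde` signature; its body falls outside this view. A plausible minimal sketch (an assumption: the crate's `dirs = "5.0"` dependency suggests the real version resolves the home directory via `dirs::home_dir()`; this sketch reads `$HOME` directly to stay dependency-free):

```rust
// Expand a leading "~/" to the user's home directory; other paths pass through.
// Hypothetical reconstruction of the adapter's expand_tilde helper.
fn expand_tilde(path: &str) -> String {
    if let Some(rest) = path.strip_prefix("~/") {
        if let Ok(home) = std::env::var("HOME") {
            return format!("{}/{}", home, rest);
        }
    }
    path.to_string()
}

fn main() {
    // Absolute paths are returned unchanged.
    assert_eq!(expand_tilde("/usr/bin/env"), "/usr/bin/env");
    // Tilde paths are rooted at $HOME when it is set.
    if let Ok(home) = std::env::var("HOME") {
        assert_eq!(expand_tilde("~/games"), format!("{}/games", home));
    }
}
```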
@@ -42,16 +39,6 @@ fn expand_args(args: &[String]) -> Vec<String> {
 struct SessionInfo {
     command_name: String,
     snap_name: Option<String>,
-    flatpak_app_id: Option<String>,
-    steam_app_id: Option<u32>,
-}
-
-#[derive(Clone, Debug)]
-struct SteamSession {
-    pid: u32,
-    pgid: u32,
-    app_id: u32,
-    seen_game: bool,
 }

 /// Linux host adapter
@@ -60,7 +47,6 @@ pub struct LinuxHost {
     processes: Arc<Mutex<HashMap<u32, ManagedProcess>>>,
     /// Track session info for killing
     session_info: Arc<Mutex<HashMap<SessionId, SessionInfo>>>,
-    steam_sessions: Arc<Mutex<HashMap<u32, SteamSession>>>,
     event_tx: mpsc::UnboundedSender<HostEvent>,
     event_rx: Arc<Mutex<Option<mpsc::UnboundedReceiver<HostEvent>>>>,
 }
@@ -68,7 +54,7 @@ pub struct LinuxHost {
 impl LinuxHost {
     pub fn new() -> Self {
         let (tx, rx) = mpsc::unbounded_channel();

         // Initialize process management
         init();

@@ -76,7 +62,6 @@ impl LinuxHost {
             capabilities: HostCapabilities::linux_full(),
             processes: Arc::new(Mutex::new(HashMap::new())),
             session_info: Arc::new(Mutex::new(HashMap::new())),
-            steam_sessions: Arc::new(Mutex::new(HashMap::new())),
             event_tx: tx,
             event_rx: Arc::new(Mutex::new(Some(rx))),
         }
@@ -85,7 +70,6 @@ impl LinuxHost {
     /// Start the background process monitor
     pub fn start_monitor(&self) -> tokio::task::JoinHandle<()> {
         let processes = self.processes.clone();
-        let steam_sessions = self.steam_sessions.clone();
         let event_tx = self.event_tx.clone();

         tokio::spawn(async move {
@@ -93,16 +77,13 @@ impl LinuxHost {
                 tokio::time::sleep(Duration::from_millis(100)).await;

                 let mut exited = Vec::new();
-                let steam_pids: HashSet<u32> =
-                    { steam_sessions.lock().unwrap().keys().cloned().collect() };
-
                 {
                     let mut procs = processes.lock().unwrap();
                     for (pid, proc) in procs.iter_mut() {
                         match proc.try_wait() {
                             Ok(Some(status)) => {
-                                let is_steam = steam_pids.contains(pid);
-                                exited.push((*pid, proc.pgid, status, is_steam));
+                                exited.push((*pid, proc.pgid, status));
                             }
                             Ok(None) => {}
                             Err(e) => {
@@ -111,16 +92,12 @@ impl LinuxHost {
                         }
                     }

-                    for (pid, _, _, _) in &exited {
+                    for (pid, _, _) in &exited {
                         procs.remove(pid);
                     }
                 }

-                for (pid, pgid, status, is_steam) in exited {
-                    if is_steam {
-                        info!(pid = pid, pgid = pgid, status = ?status, "Steam launch process exited");
-                        continue;
-                    }
+                for (pid, pgid, status) in exited {
                     info!(pid = pid, pgid = pgid, status = ?status, "Process exited - sending HostEvent::Exited");

                     // We don't have the session_id here, so we use a placeholder
@@ -132,43 +109,6 @@ impl LinuxHost {

                     let _ = event_tx.send(HostEvent::Exited { handle, status });
                 }
-
-                // Track Steam sessions by Steam App ID instead of process exit
-                let steam_snapshot: Vec<SteamSession> =
-                    { steam_sessions.lock().unwrap().values().cloned().collect() };
-
-                let mut ended = Vec::new();
-
-                for session in &steam_snapshot {
-                    let has_game = !find_steam_game_pids(session.app_id).is_empty();
-                    if has_game {
-                        if let Ok(mut map) = steam_sessions.lock() {
-                            map.entry(session.pid)
-                                .and_modify(|entry| entry.seen_game = true);
-                        }
-                    } else if session.seen_game {
-                        ended.push((session.pid, session.pgid));
-                    }
-                }
-
-                if !ended.is_empty() {
-                    let mut map = steam_sessions.lock().unwrap();
-                    let mut procs = processes.lock().unwrap();
-
-                    for (pid, pgid) in ended {
-                        map.remove(&pid);
-                        procs.remove(&pid);
-
-                        let handle = HostSessionHandle::new(
-                            SessionId::new(),
-                            HostHandlePayload::Linux { pid, pgid },
-                        );
-                        let _ = event_tx.send(HostEvent::Exited {
-                            handle,
-                            status: ExitStatus::success(),
-                        });
-                    }
-                }
             }
         })
     }
@@ -192,56 +132,28 @@ impl HostAdapter for LinuxHost {
         entry_kind: &EntryKind,
         options: SpawnOptions,
     ) -> HostResult<HostSessionHandle> {
-        // Extract argv, env, cwd, snap_name, flatpak_app_id, and steam_app_id based on entry kind
-        let (argv, env, cwd, snap_name, flatpak_app_id, steam_app_id) = match entry_kind {
-            EntryKind::Process {
-                command,
-                args,
-                env,
-                cwd,
-            } => {
+        // Extract argv, env, cwd, and snap_name based on entry kind
+        let (argv, env, cwd, snap_name) = match entry_kind {
+            EntryKind::Process { command, args, env, cwd } => {
                 let mut argv = vec![expand_tilde(command)];
                 argv.extend(expand_args(args));
-                let expanded_cwd = cwd
-                    .as_ref()
-                    .map(|c| std::path::PathBuf::from(expand_tilde(&c.to_string_lossy())));
-                (argv, env.clone(), expanded_cwd, None, None, None)
+                let expanded_cwd = cwd.as_ref().map(|c| {
+                    std::path::PathBuf::from(expand_tilde(&c.to_string_lossy()))
+                });
+                (argv, env.clone(), expanded_cwd, None)
             }
-            EntryKind::Snap {
-                snap_name,
-                command,
-                args,
-                env,
-            } => {
+            EntryKind::Snap { snap_name, command, args, env } => {
                 // For snap apps, we need to use 'snap run <snap_name>' to launch them.
                 // The command (if specified) is passed as an argument after the snap name,
                 // followed by any additional args.
                 let mut argv = vec!["snap".to_string(), "run".to_string(), snap_name.clone()];
                 // If a custom command is specified (different from snap_name), add it
                 if let Some(cmd) = command
-                    && cmd != snap_name
-                {
-                    argv.push(cmd.clone());
-                }
+                    && cmd != snap_name {
+                    argv.push(cmd.clone());
+                }
                 argv.extend(expand_args(args));
-                (argv, env.clone(), None, Some(snap_name.clone()), None, None)
-            }
-            EntryKind::Steam { app_id, args, env } => {
-                // Steam games are launched via the Steam snap: snap run steam steam://rungameid/<app_id>
-                let mut argv = vec![
-                    "snap".to_string(),
-                    "run".to_string(),
-                    "steam".to_string(),
-                    format!("steam://rungameid/{}", app_id),
-                ];
-                argv.extend(expand_args(args));
-                (argv, env.clone(), None, None, None, Some(*app_id))
-            }
-            EntryKind::Flatpak { app_id, args, env } => {
-                // For Flatpak apps, we use 'flatpak run <app_id>' to launch them.
-                let mut argv = vec!["flatpak".to_string(), "run".to_string(), app_id.clone()];
-                argv.extend(expand_args(args));
-                (argv, env.clone(), None, None, Some(app_id.clone()), None)
+                (argv, env.clone(), None, Some(snap_name.clone()))
             }
             EntryKind::Vm { driver, args } => {
                 // Construct command line from VM driver
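The surviving `Snap` arm builds `snap run <snap_name>`, appends an app command only when it differs from the snap name, and then the remaining args. That logic can be sketched in isolation (`snap_argv` is a hypothetical helper for illustration, not part of the crate):

```rust
// Build a `snap run` argv: snap name first, then an optional app command
// (only when it differs from the snap name), then the user-supplied args.
fn snap_argv(snap_name: &str, command: Option<&str>, args: &[String]) -> Vec<String> {
    let mut argv = vec!["snap".to_string(), "run".to_string(), snap_name.to_string()];
    if let Some(cmd) = command {
        if cmd != snap_name {
            argv.push(cmd.to_string());
        }
    }
    argv.extend(args.iter().cloned());
    argv
}

fn main() {
    // Command equal to the snap name is suppressed.
    assert_eq!(snap_argv("firefox", Some("firefox"), &[]).join(" "), "snap run firefox");
    // A distinct command and extra args are appended in order.
    assert_eq!(
        snap_argv("libreoffice", Some("writer"), &["--norestore".to_string()]).join(" "),
        "snap run libreoffice writer --norestore"
    );
}
```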
@@ -254,80 +166,53 @@ impl HostAdapter for LinuxHost {
                         argv.push(value.to_string());
                     }
                 }
-                (argv, HashMap::new(), None, None, None, None)
+                (argv, HashMap::new(), None, None)
             }
-            EntryKind::Media {
-                library_id,
-                args: _,
-            } => {
+            EntryKind::Media { library_id, args: _ } => {
                 // For media, we'd typically launch a media player
                 // This is a placeholder - real implementation would integrate with a player
                 let argv = vec!["xdg-open".to_string(), expand_tilde(library_id)];
-                (argv, HashMap::new(), None, None, None, None)
+                (argv, HashMap::new(), None, None)
             }
-            EntryKind::Custom {
-                type_name: _,
-                payload: _,
-            } => {
+            EntryKind::Custom { type_name: _, payload: _ } => {
                 return Err(HostError::UnsupportedKind);
             }
         };

         // Get the command name for fallback killing
-        // For snap/flatpak apps, use the app name (not "snap"/"flatpak") to avoid killing unrelated processes
+        // For snap apps, use the snap_name (not "snap") to avoid killing unrelated processes
         let command_name = if let Some(ref snap) = snap_name {
             snap.clone()
-        } else if steam_app_id.is_some() {
-            "steam".to_string()
-        } else if let Some(ref app_id) = flatpak_app_id {
-            app_id.clone()
         } else {
             argv.first().cloned().unwrap_or_default()
         };

-        // Determine if this is a sandboxed app (snap or flatpak)
-        let sandboxed_app_name = snap_name.clone().or_else(|| flatpak_app_id.clone());
-
         let proc = ManagedProcess::spawn(
             &argv,
             &env,
             cwd.as_ref(),
             options.log_path.clone(),
-            sandboxed_app_name,
+            snap_name.clone(),
         )?;

         let pid = proc.pid;
         let pgid = proc.pgid;

         // Store the session info so we can use it for killing even after process exits
         let session_info_entry = SessionInfo {
             command_name: command_name.clone(),
             snap_name: snap_name.clone(),
-            flatpak_app_id: flatpak_app_id.clone(),
-            steam_app_id,
         };
-        self.session_info
-            .lock()
-            .unwrap()
-            .insert(session_id.clone(), session_info_entry);
-        info!(session_id = %session_id, command = %command_name, snap = ?snap_name, flatpak = ?flatpak_app_id, "Tracking session info");
+        self.session_info.lock().unwrap().insert(session_id.clone(), session_info_entry);
+        info!(session_id = %session_id, command = %command_name, snap = ?snap_name, "Tracking session info");

-        let handle = HostSessionHandle::new(session_id, HostHandlePayload::Linux { pid, pgid });
+        let handle = HostSessionHandle::new(
+            session_id,
+            HostHandlePayload::Linux { pid, pgid },
+        );

         self.processes.lock().unwrap().insert(pid, proc);

-        if let Some(app_id) = steam_app_id {
-            self.steam_sessions.lock().unwrap().insert(
-                pid,
-                SteamSession {
-                    pid,
-                    pgid,
-                    app_id,
-                    seen_game: false,
-                },
-            );
-        }
-
         info!(pid = pid, pgid = pgid, "Spawned process");

         Ok(handle)
@@ -342,10 +227,10 @@ impl HostAdapter for LinuxHost {

         // Get the session's info for killing
         let session_info = self.session_info.lock().unwrap().get(&session_id).cloned();

         // Check if we have session info OR a tracked process
         let has_process = self.processes.lock().unwrap().contains_key(&pid);

         if session_info.is_none() && !has_process {
             warn!(session_id = %session_id, pid = pid, "No session info or tracked process found");
             return Err(HostError::SessionNotFound);
@@ -353,37 +238,20 @@ impl HostAdapter for LinuxHost {

         match mode {
             StopMode::Graceful { timeout } => {
-                // If this is a snap or flatpak app, use cgroup-based killing (most reliable)
+                // If this is a snap app, use cgroup-based killing (most reliable)
                 if let Some(ref info) = session_info {
                     if let Some(ref snap) = info.snap_name {
                         kill_snap_cgroup(snap, nix::sys::signal::Signal::SIGTERM);
                         info!(snap = %snap, "Sent SIGTERM via snap cgroup");
-                    } else if let Some(app_id) = info.steam_app_id {
-                        let _ =
-                            kill_steam_game_processes(app_id, nix::sys::signal::Signal::SIGTERM);
-                        if let Ok(mut map) = self.steam_sessions.lock() {
-                            map.entry(pid).and_modify(|entry| entry.seen_game = true);
-                        }
-                        info!(
-                            steam_app_id = app_id,
-                            "Sent SIGTERM to Steam game processes"
-                        );
-                    } else if let Some(ref app_id) = info.flatpak_app_id {
-                        kill_flatpak_cgroup(app_id, nix::sys::signal::Signal::SIGTERM);
-                        info!(flatpak = %app_id, "Sent SIGTERM via flatpak cgroup");
                     } else {
-                        // Fall back to command name for non-sandboxed apps
+                        // Fall back to command name for non-snap apps
                         kill_by_command(&info.command_name, nix::sys::signal::Signal::SIGTERM);
                         info!(command = %info.command_name, "Sent SIGTERM via command name");
                     }
                 }

-                // Also send SIGTERM via process handle (skip for Steam sessions)
-                let is_steam = session_info
-                    .as_ref()
-                    .and_then(|info| info.steam_app_id)
-                    .is_some();
-                if !is_steam {
+                // Also send SIGTERM via process handle
+                {
                     let procs = self.processes.lock().unwrap();
                     if let Some(p) = procs.get(&pid) {
                         let _ = p.terminate();
@@ -394,52 +262,28 @@ impl HostAdapter for LinuxHost {
                 let start = std::time::Instant::now();
                 loop {
                     if start.elapsed() >= timeout {
-                        // Force kill after timeout using snap/flatpak cgroup or command name
+                        // Force kill after timeout using snap cgroup or command name
                         if let Some(ref info) = session_info {
                             if let Some(ref snap) = info.snap_name {
                                 kill_snap_cgroup(snap, nix::sys::signal::Signal::SIGKILL);
                                 info!(snap = %snap, "Sent SIGKILL via snap cgroup (timeout)");
-                            } else if let Some(app_id) = info.steam_app_id {
-                                let _ = kill_steam_game_processes(
-                                    app_id,
-                                    nix::sys::signal::Signal::SIGKILL,
-                                );
-                                info!(
-                                    steam_app_id = app_id,
-                                    "Sent SIGKILL to Steam game processes (timeout)"
-                                );
-                            } else if let Some(ref app_id) = info.flatpak_app_id {
-                                kill_flatpak_cgroup(app_id, nix::sys::signal::Signal::SIGKILL);
-                                info!(flatpak = %app_id, "Sent SIGKILL via flatpak cgroup (timeout)");
                             } else {
-                                kill_by_command(
-                                    &info.command_name,
-                                    nix::sys::signal::Signal::SIGKILL,
-                                );
+                                kill_by_command(&info.command_name, nix::sys::signal::Signal::SIGKILL);
                                 info!(command = %info.command_name, "Sent SIGKILL via command name (timeout)");
                             }
                         }

-                        // Also force kill via process handle (skip for Steam sessions)
-                        if !is_steam {
-                            let procs = self.processes.lock().unwrap();
-                            if let Some(p) = procs.get(&pid) {
-                                let _ = p.kill();
-                            }
+                        // Also force kill via process handle
+                        let procs = self.processes.lock().unwrap();
+                        if let Some(p) = procs.get(&pid) {
+                            let _ = p.kill();
                         }
                         break;
                     }

                     // Check if process is still running
-                    let still_running = if is_steam {
-                        let app_id = session_info.as_ref().and_then(|info| info.steam_app_id);
-                        app_id
-                            .map(|id| !find_steam_game_pids(id).is_empty())
-                            .unwrap_or(false)
-                    } else {
-                        self.processes.lock().unwrap().contains_key(&pid)
-                    };
+                    let still_running = self.processes.lock().unwrap().contains_key(&pid);

                     if !still_running {
                         break;
                     }
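The graceful path above follows a common shape: send SIGTERM, poll liveness on an interval, and escalate to SIGKILL once the timeout elapses. A dependency-free sketch of that loop, with the signal senders and the liveness probe passed in as closures (`graceful_stop` is a hypothetical refactor for illustration; the adapter inlines this with `kill_snap_cgroup` / `kill_by_command` and the `ManagedProcess` handle):

```rust
use std::time::{Duration, Instant};

// Ask politely first, poll until the process is gone, force-kill on timeout.
fn graceful_stop(
    timeout: Duration,
    poll: Duration,
    mut still_running: impl FnMut() -> bool,
    mut term: impl FnMut(),
    mut kill: impl FnMut(),
) {
    term(); // SIGTERM: give the process a chance to exit cleanly
    let start = Instant::now();
    loop {
        if start.elapsed() >= timeout {
            kill(); // grace period over: SIGKILL
            break;
        }
        if !still_running() {
            break; // exited on its own
        }
        std::thread::sleep(poll);
    }
}

fn main() {
    // Simulate a process that exits after the second liveness poll.
    let mut polls = 0;
    let mut killed = false;
    graceful_stop(
        Duration::from_secs(5),
        Duration::from_millis(1),
        || { polls += 1; polls < 2 },
        || {},
        || killed = true,
    );
    assert!(!killed); // exited within the grace period, no SIGKILL needed
}
```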
@@ -448,44 +292,25 @@ impl HostAdapter for LinuxHost {
                 }
             }
             StopMode::Force => {
-                // Force kill via snap/flatpak cgroup or command name
+                // Force kill via snap cgroup or command name
                 if let Some(ref info) = session_info {
                     if let Some(ref snap) = info.snap_name {
                         kill_snap_cgroup(snap, nix::sys::signal::Signal::SIGKILL);
                         info!(snap = %snap, "Sent SIGKILL via snap cgroup");
-                    } else if let Some(app_id) = info.steam_app_id {
-                        let _ =
-                            kill_steam_game_processes(app_id, nix::sys::signal::Signal::SIGKILL);
-                        if let Ok(mut map) = self.steam_sessions.lock() {
-                            map.entry(pid).and_modify(|entry| entry.seen_game = true);
-                        }
-                        info!(
-                            steam_app_id = app_id,
-                            "Sent SIGKILL to Steam game processes"
-                        );
-                    } else if let Some(ref app_id) = info.flatpak_app_id {
-                        kill_flatpak_cgroup(app_id, nix::sys::signal::Signal::SIGKILL);
-                        info!(flatpak = %app_id, "Sent SIGKILL via flatpak cgroup");
                     } else {
                         kill_by_command(&info.command_name, nix::sys::signal::Signal::SIGKILL);
                         info!(command = %info.command_name, "Sent SIGKILL via command name");
                     }
                 }

-                // Also force kill via process handle (skip for Steam sessions)
-                let is_steam = session_info
-                    .as_ref()
-                    .and_then(|info| info.steam_app_id)
-                    .is_some();
-                if !is_steam {
-                    let procs = self.processes.lock().unwrap();
-                    if let Some(p) = procs.get(&pid) {
-                        let _ = p.kill();
-                    }
+                // Also force kill via process handle
+                let procs = self.processes.lock().unwrap();
+                if let Some(p) = procs.get(&pid) {
+                    let _ = p.kill();
                 }
             }
         }

         // Clean up the session info tracking
         self.session_info.lock().unwrap().remove(&session_id);
497
crates/shepherd-host-linux/src/connectivity.rs
Normal file
|
|
@ -0,0 +1,497 @@
|
||||||
|
//! Network connectivity monitoring for Linux
|
||||||
|
//!
|
||||||
|
//! This module provides:
|
||||||
|
//! - Periodic connectivity checks to a configurable URL
|
||||||
|
//! - Network interface change detection via netlink
|
||||||
|
//! - Per-entry connectivity status tracking
|
||||||
|
|
||||||
|
#![allow(dead_code)] // Methods on ConnectivityMonitor may be used for future admin commands
|
||||||
|
|
||||||
|
use chrono::{DateTime, Local};
|
||||||
|
use netlink_packet_core::{NetlinkMessage, NetlinkPayload};
|
||||||
|
use netlink_packet_route::RouteNetlinkMessage;
|
||||||
|
use netlink_sys::{protocols::NETLINK_ROUTE, Socket, SocketAddr};
|
||||||
|
use reqwest::Client;
|
||||||
|
use std::collections::HashMap;
|
||||||
|
use std::os::fd::AsRawFd;
|
||||||
|
use std::sync::Arc;
|
||||||
|
use std::time::Duration;
|
||||||
|
use tokio::sync::{mpsc, watch, RwLock};
|
||||||
|
use tracing::{debug, error, info, warn};
|
||||||
|
|
||||||
|
/// Events emitted by the connectivity monitor
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub enum ConnectivityEvent {
|
||||||
|
/// Global connectivity status changed
|
||||||
|
StatusChanged {
|
||||||
|
connected: bool,
|
||||||
|
check_url: String,
|
||||||
|
},
|
||||||
|
/// Network interface changed (may trigger recheck)
|
||||||
|
InterfaceChanged,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Configuration for the connectivity monitor
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct ConnectivityConfig {
|
||||||
|
/// URL to check for global network connectivity
|
||||||
|
pub check_url: String,
|
||||||
|
/// How often to perform periodic connectivity checks
|
||||||
|
pub check_interval: Duration,
|
||||||
|
/// Timeout for connectivity checks
|
||||||
|
pub check_timeout: Duration,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Cached connectivity check result
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
struct CheckResult {
|
||||||
|
connected: bool,
|
||||||
|
checked_at: DateTime<Local>,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Connectivity monitor that tracks network availability
|
||||||
|
pub struct ConnectivityMonitor {
|
||||||
|
/// HTTP client for connectivity checks
|
||||||
|
client: Client,
|
||||||
|
/// Configuration
|
||||||
|
config: ConnectivityConfig,
|
||||||
|
/// Current global connectivity status
|
||||||
|
global_status: Arc<RwLock<Option<CheckResult>>>,
|
||||||
|
/// Cached results for specific URLs (entry-specific checks)
|
||||||
|
url_cache: Arc<RwLock<HashMap<String, CheckResult>>>,
|
||||||
|
/// Channel for sending events
|
||||||
|
event_tx: mpsc::Sender<ConnectivityEvent>,
|
||||||
|
/// Shutdown signal
|
||||||
|
shutdown_rx: watch::Receiver<bool>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl ConnectivityMonitor {
|
||||||
|
/// Create a new connectivity monitor
|
||||||
|
pub fn new(
|
||||||
|
config: ConnectivityConfig,
|
||||||
|
shutdown_rx: watch::Receiver<bool>,
|
||||||
|
) -> (Self, mpsc::Receiver<ConnectivityEvent>) {
|
||||||
|
let (event_tx, event_rx) = mpsc::channel(32);
|
||||||
|
|
||||||
|
let client = Client::builder()
|
||||||
|
.timeout(config.check_timeout)
|
||||||
|
.connect_timeout(config.check_timeout)
|
||||||
|
.build()
|
||||||
|
.expect("Failed to create HTTP client");
|
||||||
|
|
||||||
|
let monitor = Self {
|
||||||
|
client,
|
||||||
|
config,
|
||||||
|
global_status: Arc::new(RwLock::new(None)),
|
||||||
|
url_cache: Arc::new(RwLock::new(HashMap::new())),
|
||||||
|
event_tx,
|
||||||
|
shutdown_rx,
|
||||||
|
};
|
||||||
|
|
||||||
|
(monitor, event_rx)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Start the connectivity monitor (runs until shutdown)
|
||||||
|
pub async fn run(self) {
|
||||||
|
let check_interval = self.config.check_interval;
|
||||||
|
let check_url = self.config.check_url.clone();
|
||||||
|
|
||||||
|
// Spawn periodic check task
|
||||||
|
let periodic_handle = {
|
||||||
|
let client = self.client.clone();
|
||||||
|
let global_status = self.global_status.clone();
|
||||||
|
let event_tx = self.event_tx.clone();
|
||||||
|
let check_url = check_url.clone();
|
||||||
|
let check_timeout = self.config.check_timeout;
|
||||||
|
let mut shutdown = self.shutdown_rx.clone();
|
||||||
|
|
||||||
|
tokio::spawn(async move {
|
||||||
|
let mut interval = tokio::time::interval(check_interval);
|
||||||
|
interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Skip);
|
||||||
|
|
||||||
|
// Do initial check immediately
|
||||||
|
let connected = check_url_reachable(&client, &check_url, check_timeout).await;
|
||||||
|
update_global_status(&global_status, &event_tx, &check_url, connected).await;
|
||||||
                loop {
                    tokio::select! {
                        _ = interval.tick() => {
                            let connected = check_url_reachable(&client, &check_url, check_timeout).await;
                            update_global_status(&global_status, &event_tx, &check_url, connected).await;
                        }
                        _ = shutdown.changed() => {
                            if *shutdown.borrow() {
                                debug!("Periodic check task shutting down");
                                break;
                            }
                        }
                    }
                }
            })
        };

        // Spawn netlink monitor task
        let netlink_handle = {
            let client = self.client.clone();
            let global_status = self.global_status.clone();
            let url_cache = self.url_cache.clone();
            let event_tx = self.event_tx.clone();
            let check_url = check_url.clone();
            let check_timeout = self.config.check_timeout;
            let mut shutdown = self.shutdown_rx.clone();

            tokio::spawn(async move {
                if let Err(e) = run_netlink_monitor(
                    &client,
                    &global_status,
                    &url_cache,
                    &event_tx,
                    &check_url,
                    check_timeout,
                    &mut shutdown,
                )
                .await
                {
                    warn!(error = %e, "Netlink monitor failed, network change detection unavailable");
                }
            })
        };

        // Wait for shutdown
        let mut shutdown = self.shutdown_rx.clone();
        let _ = shutdown.changed().await;

        // Cancel tasks
        periodic_handle.abort();
        netlink_handle.abort();

        info!("Connectivity monitor stopped");
    }

    /// Get the current global connectivity status
    pub async fn is_connected(&self) -> bool {
        self.global_status
            .read()
            .await
            .as_ref()
            .is_some_and(|r| r.connected)
    }

    /// Get the last check time
    pub async fn last_check_time(&self) -> Option<DateTime<Local>> {
        self.global_status.read().await.as_ref().map(|r| r.checked_at)
    }

    /// Check if a specific URL is reachable (with caching)
    /// Used for entry-specific network requirements
    pub async fn check_url(&self, url: &str) -> bool {
        // Check cache first
        {
            let cache = self.url_cache.read().await;
            if let Some(result) = cache.get(url) {
                // Cache valid for half the check interval
                let cache_ttl = self.config.check_interval / 2;
                let age = shepherd_util::now()
                    .signed_duration_since(result.checked_at)
                    .to_std()
                    .unwrap_or(Duration::MAX);
                if age < cache_ttl {
                    return result.connected;
                }
            }
        }

        // Perform check
        let connected = check_url_reachable(&self.client, url, self.config.check_timeout).await;

        // Update cache
        {
            let mut cache = self.url_cache.write().await;
            cache.insert(
                url.to_string(),
                CheckResult {
                    connected,
                    checked_at: shepherd_util::now(),
                },
            );
        }

        connected
    }
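The read-through caching in `check_url` above — serve a cached result while it is younger than half the check interval, otherwise probe again and overwrite the entry — can be sketched in isolation. This is a simplified, synchronous illustration; `TtlCache` is a hypothetical name and it uses `std::time::Instant` instead of the `shepherd_util::now()` timestamps the real code uses:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical sketch of a read-through TTL cache: entries younger than
/// `ttl` are served from the cache, older ones are treated as misses.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (bool, Instant)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Returns the cached value only while it is still fresh.
    fn get(&self, url: &str) -> Option<bool> {
        self.entries
            .get(url)
            .filter(|(_, at)| at.elapsed() < self.ttl)
            .map(|(connected, _)| *connected)
    }

    /// Stores a fresh check result, stamping it with the current time.
    fn insert(&mut self, url: &str, connected: bool) {
        self.entries.insert(url.to_string(), (connected, Instant::now()));
    }
}
```

A stale or missing entry simply reads as `None`, which in the real code triggers an actual network probe followed by `cache.insert(...)`.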
    /// Force an immediate connectivity recheck
    pub async fn trigger_recheck(&self) {
        let connected =
            check_url_reachable(&self.client, &self.config.check_url, self.config.check_timeout)
                .await;
        update_global_status(
            &self.global_status,
            &self.event_tx,
            &self.config.check_url,
            connected,
        )
        .await;

        // Clear URL cache to force rechecks
        self.url_cache.write().await.clear();
    }
}

/// Check if a URL is reachable
async fn check_url_reachable(client: &Client, url: &str, timeout: Duration) -> bool {
    debug!(url = %url, "Checking connectivity");

    match client.get(url).timeout(timeout).send().await {
        Ok(response) => {
            let status = response.status();
            let connected = status.is_success() || status.as_u16() == 204;
            debug!(url = %url, status = %status, connected = connected, "Connectivity check complete");
            connected
        }
        Err(e) => {
            debug!(url = %url, error = %e, "Connectivity check failed");
            false
        }
    }
}

/// Update global status and emit event if changed
async fn update_global_status(
    global_status: &Arc<RwLock<Option<CheckResult>>>,
    event_tx: &mpsc::Sender<ConnectivityEvent>,
    check_url: &str,
    connected: bool,
) {
    let mut status = global_status.write().await;
    let previous = status.as_ref().map(|r| r.connected);

    *status = Some(CheckResult {
        connected,
        checked_at: shepherd_util::now(),
    });

    // Emit event if status changed
    if previous != Some(connected) {
        info!(
            connected = connected,
            url = %check_url,
            "Global connectivity status changed"
        );
        let _ = event_tx
            .send(ConnectivityEvent::StatusChanged {
                connected,
                check_url: check_url.to_string(),
            })
            .await;
    }
}

/// Run the netlink monitor to detect network interface changes
async fn run_netlink_monitor(
    client: &Client,
    global_status: &Arc<RwLock<Option<CheckResult>>>,
    url_cache: &Arc<RwLock<HashMap<String, CheckResult>>>,
    event_tx: &mpsc::Sender<ConnectivityEvent>,
    check_url: &str,
    check_timeout: Duration,
    shutdown: &mut watch::Receiver<bool>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create netlink socket for route notifications
    let mut socket = Socket::new(NETLINK_ROUTE)?;

    // Bind to multicast groups for link and address changes
    // RTMGRP_LINK = 1, RTMGRP_IPV4_IFADDR = 0x10, RTMGRP_IPV6_IFADDR = 0x100
    let groups = 1 | 0x10 | 0x100;
    let addr = SocketAddr::new(0, groups);
    socket.bind(&addr)?;

    // Set non-blocking for async compatibility
    socket.set_non_blocking(true)?;

    info!("Netlink monitor started");

    let fd = socket.as_raw_fd();
    let mut buf = vec![0u8; 4096];

    loop {
        // Use tokio's async fd for the socket
        let async_fd = tokio::io::unix::AsyncFd::new(fd)?;

        tokio::select! {
            result = async_fd.readable() => {
                match result {
                    Ok(mut guard) => {
                        // Try to read from socket
                        match socket.recv(&mut buf, 0) {
                            Ok(len) if len > 0 => {
                                // Parse netlink messages
                                if has_relevant_netlink_event(&buf[..len]) {
                                    debug!("Network interface change detected");
                                    let _ = event_tx.send(ConnectivityEvent::InterfaceChanged).await;

                                    // Clear URL cache
                                    url_cache.write().await.clear();

                                    // Recheck connectivity after a short delay
                                    // (give network time to stabilize)
                                    tokio::time::sleep(Duration::from_millis(500)).await;

                                    let connected = check_url_reachable(client, check_url, check_timeout).await;
                                    update_global_status(global_status, event_tx, check_url, connected).await;
                                }
                                guard.clear_ready();
                            }
                            Ok(_) => {
                                guard.clear_ready();
                            }
                            Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => {
                                guard.clear_ready();
                            }
                            Err(e) => {
                                error!(error = %e, "Netlink recv error");
                                guard.clear_ready();
                            }
                        }
                    }
                    Err(e) => {
                        error!(error = %e, "Async fd error");
                    }
                }
            }
            _ = shutdown.changed() => {
                if *shutdown.borrow() {
                    debug!("Netlink monitor shutting down");
                    break;
                }
            }
        }
    }

    Ok(())
}

/// Check if a netlink message buffer contains relevant network events
fn has_relevant_netlink_event(buf: &[u8]) -> bool {
    let mut offset = 0;

    while offset < buf.len() {
        match NetlinkMessage::<RouteNetlinkMessage>::deserialize(&buf[offset..]) {
            Ok(msg) => {
                if let NetlinkPayload::InnerMessage(route_msg) = &msg.payload
                    && matches!(
                        route_msg,
                        // Link up/down events
                        RouteNetlinkMessage::NewLink(_)
                            | RouteNetlinkMessage::DelLink(_)
                            // Address added/removed
                            | RouteNetlinkMessage::NewAddress(_)
                            | RouteNetlinkMessage::DelAddress(_)
                            // Route changes
                            | RouteNetlinkMessage::NewRoute(_)
                            | RouteNetlinkMessage::DelRoute(_)
                    )
                {
                    return true;
                }

                // Move to next message
                let len = msg.header.length as usize;
                if len == 0 {
                    break;
                }
                offset += len;
            }
            Err(_) => break,
        }
    }

    false
}
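The offset walk in `has_relevant_netlink_event` relies on netlink packing several length-prefixed messages into one `recv` buffer: the parser advances by each message's header length and bails out on a zero length so a malformed packet cannot cause an infinite loop. A stdlib-only sketch of just that framing logic (with a stand-in 4-byte little-endian length field rather than the real `nlmsghdr` layout, and a hypothetical `walk_messages` helper in place of the crate's deserializer):

```rust
/// Walk a buffer of length-prefixed messages and return the byte offset
/// of each message. The "header" here is an illustrative 4-byte LE length
/// that counts the whole message, header included.
fn walk_messages(buf: &[u8]) -> Vec<usize> {
    let mut offsets = Vec::new();
    let mut offset = 0usize;
    // Stop when there is no room left for another header.
    while offset + 4 <= buf.len() {
        let len = u32::from_le_bytes([
            buf[offset],
            buf[offset + 1],
            buf[offset + 2],
            buf[offset + 3],
        ]) as usize;
        if len == 0 {
            break; // malformed header: stop rather than loop forever
        }
        offsets.push(offset);
        offset += len;
    }
    offsets
}
```

The real function does the same advance-by-`msg.header.length` step, but hands each slice to `NetlinkMessage::<RouteNetlinkMessage>::deserialize` and returns early as soon as one message is a link, address, or route change.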
/// Handle for accessing connectivity status from other parts of the service
#[derive(Clone)]
pub struct ConnectivityHandle {
    client: Client,
    global_status: Arc<RwLock<Option<CheckResult>>>,
    url_cache: Arc<RwLock<HashMap<String, CheckResult>>>,
    check_timeout: Duration,
    cache_ttl: Duration,
    global_check_url: String,
}

impl ConnectivityHandle {
    /// Create a handle from the monitor
    pub fn from_monitor(monitor: &ConnectivityMonitor) -> Self {
        Self {
            client: monitor.client.clone(),
            global_status: monitor.global_status.clone(),
            url_cache: monitor.url_cache.clone(),
            check_timeout: monitor.config.check_timeout,
            cache_ttl: monitor.config.check_interval / 2,
            global_check_url: monitor.config.check_url.clone(),
        }
    }

    /// Get the current global connectivity status
    pub async fn is_connected(&self) -> bool {
        self.global_status
            .read()
            .await
            .as_ref()
            .is_some_and(|r| r.connected)
    }

    /// Get the last check time
    pub async fn last_check_time(&self) -> Option<DateTime<Local>> {
        self.global_status.read().await.as_ref().map(|r| r.checked_at)
    }

    /// Get the global check URL
    pub fn global_check_url(&self) -> &str {
        &self.global_check_url
    }

    /// Check if a specific URL is reachable (with caching)
    pub async fn check_url(&self, url: &str) -> bool {
        // If it's the global URL, use global status
        if url == self.global_check_url {
            return self.is_connected().await;
        }

        // Check cache first
        {
            let cache = self.url_cache.read().await;
            if let Some(result) = cache.get(url) {
                let age = shepherd_util::now()
                    .signed_duration_since(result.checked_at)
                    .to_std()
                    .unwrap_or(Duration::MAX);
                if age < self.cache_ttl {
                    return result.connected;
                }
            }
        }

        // Perform check
        let connected = check_url_reachable(&self.client, url, self.check_timeout).await;

        // Update cache
        {
            let mut cache = self.url_cache.write().await;
            cache.insert(
                url.to_string(),
                CheckResult {
                    connected,
                    checked_at: shepherd_util::now(),
                },
            );
        }

        connected
    }
}
@@ -6,11 +6,14 @@
 //! - Exit observation
 //! - stdout/stderr capture
 //! - Volume control with auto-detection of sound systems
+//! - Network connectivity monitoring via netlink
 
 mod adapter;
+mod connectivity;
 mod process;
 mod volume;
 
 pub use adapter::*;
+pub use connectivity::*;
 pub use process::*;
 pub use volume::*;

@@ -38,32 +38,32 @@ pub fn kill_snap_cgroup(snap_name: &str, _signal: Signal) -> bool {
         "/sys/fs/cgroup/user.slice/user-{}.slice/user@{}.service/app.slice",
         uid, uid
     );
 
     // Find all scope directories matching this snap
     let pattern = format!("snap.{}.{}-", snap_name, snap_name);
 
     let base = std::path::Path::new(&base_path);
     if !base.exists() {
         debug!(path = %base_path, "Snap cgroup base path doesn't exist");
         return false;
     }
 
     let mut stopped_any = false;
 
     if let Ok(entries) = std::fs::read_dir(base) {
         for entry in entries.flatten() {
             let name = entry.file_name();
             let name_str = name.to_string_lossy();
 
             if name_str.starts_with(&pattern) && name_str.ends_with(".scope") {
                 let scope_name = name_str.to_string();
 
                 // Always use SIGKILL for snap apps to prevent self-restart behavior
                 // Using systemctl kill --signal=KILL sends SIGKILL to all processes in scope
                 let result = Command::new("systemctl")
                     .args(["--user", "kill", "--signal=KILL", &scope_name])
                     .output();
 
                 match result {
                     Ok(output) => {
                         if output.status.success() {
@@ -81,140 +81,16 @@ pub fn kill_snap_cgroup(snap_name: &str, _signal: Signal) -> bool {
             }
         }
     }
 
     if stopped_any {
-        info!(
-            snap = snap_name,
-            "Killed snap scope(s) via systemctl SIGKILL"
-        );
+        info!(snap = snap_name, "Killed snap scope(s) via systemctl SIGKILL");
     } else {
         debug!(snap = snap_name, "No snap scope found to kill");
     }
 
     stopped_any
 }
-
-/// Kill all processes in a Flatpak app's cgroup using systemd
-/// Flatpak apps create scopes at: app-flatpak-<app_id>-<number>.scope
-/// For example: app-flatpak-org.prismlauncher.PrismLauncher-12345.scope
-/// Similar to snap apps, we use systemctl --user to manage the scopes.
-pub fn kill_flatpak_cgroup(app_id: &str, _signal: Signal) -> bool {
-    let uid = nix::unistd::getuid().as_raw();
-    let base_path = format!(
-        "/sys/fs/cgroup/user.slice/user-{}.slice/user@{}.service/app.slice",
-        uid, uid
-    );
-
-    // Flatpak uses a different naming pattern than snap
-    // The app_id dots are preserved: app-flatpak-org.example.App-<number>.scope
-    let pattern = format!("app-flatpak-{}-", app_id);
-
-    let base = std::path::Path::new(&base_path);
-    if !base.exists() {
-        debug!(path = %base_path, "Flatpak cgroup base path doesn't exist");
-        return false;
-    }
-
-    let mut stopped_any = false;
-
-    if let Ok(entries) = std::fs::read_dir(base) {
-        for entry in entries.flatten() {
-            let name = entry.file_name();
-            let name_str = name.to_string_lossy();
-
-            if name_str.starts_with(&pattern) && name_str.ends_with(".scope") {
-                let scope_name = name_str.to_string();
-
-                // Always use SIGKILL for flatpak apps to prevent self-restart behavior
-                // Using systemctl kill --signal=KILL sends SIGKILL to all processes in scope
-                let result = Command::new("systemctl")
-                    .args(["--user", "kill", "--signal=KILL", &scope_name])
-                    .output();
-
-                match result {
-                    Ok(output) => {
-                        if output.status.success() {
-                            info!(scope = %scope_name, "Killed flatpak scope via systemctl SIGKILL");
-                            stopped_any = true;
-                        } else {
-                            let stderr = String::from_utf8_lossy(&output.stderr);
-                            warn!(scope = %scope_name, stderr = %stderr, "systemctl kill command failed");
-                        }
-                    }
-                    Err(e) => {
-                        warn!(scope = %scope_name, error = %e, "Failed to run systemctl");
-                    }
-                }
-            }
-        }
-    }
-
-    if stopped_any {
-        info!(
-            app_id = app_id,
-            "Killed flatpak scope(s) via systemctl SIGKILL"
-        );
-    } else {
-        debug!(app_id = app_id, "No flatpak scope found to kill");
-    }
-
-    stopped_any
-}
-
-/// Find Steam game process IDs by Steam App ID (from environment variables)
-pub fn find_steam_game_pids(app_id: u32) -> Vec<i32> {
-    let mut pids = Vec::new();
-    let target = app_id.to_string();
-    let keys = ["SteamAppId", "SteamAppID", "STEAM_APP_ID"];
-
-    if let Ok(entries) = std::fs::read_dir("/proc") {
-        for entry in entries.flatten() {
-            let name = entry.file_name();
-            let name_str = name.to_string_lossy();
-            if let Ok(pid) = name_str.parse::<i32>() {
-                let env_path = format!("/proc/{}/environ", pid);
-                let Ok(env_bytes) = std::fs::read(&env_path) else {
-                    continue;
-                };
-
-                for var in env_bytes.split(|b| *b == 0) {
-                    if var.is_empty() {
-                        continue;
-                    }
-                    let Ok(var_str) = std::str::from_utf8(var) else {
-                        continue;
-                    };
-                    for key in &keys {
-                        let prefix = format!("{}=", key);
-                        if var_str
-                            .strip_prefix(&prefix)
-                            .is_some_and(|val| val == target)
-                        {
-                            pids.push(pid);
-                        }
-                    }
-                }
-            }
-        }
-    }
-
-    pids
-}
-
-/// Kill Steam game processes by Steam App ID
-pub fn kill_steam_game_processes(app_id: u32, signal: Signal) -> bool {
-    let pids = find_steam_game_pids(app_id);
-    if pids.is_empty() {
-        return false;
-    }
-
-    for pid in pids {
-        let _ = signal::kill(Pid::from_raw(pid), signal);
-    }
-
-    true
-}
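The removed `find_steam_game_pids` matched processes by scanning `/proc/<pid>/environ`, which is a NUL-separated list of `KEY=VALUE` pairs: split on zero bytes, skip non-UTF-8 entries, and compare the value against the target app ID under each candidate key. The matching step can be sketched on its own; `environ_matches` is a hypothetical helper name, not part of the codebase:

```rust
/// Hypothetical sketch: does a raw /proc/<pid>/environ buffer contain
/// `key=target` for any of the candidate keys? environ entries are
/// NUL-separated KEY=VALUE byte strings.
fn environ_matches(env_bytes: &[u8], keys: &[&str], target: &str) -> bool {
    env_bytes
        .split(|b| *b == 0)
        .filter(|var| !var.is_empty()) // trailing NUL yields an empty entry
        .filter_map(|var| std::str::from_utf8(var).ok()) // skip non-UTF-8 vars
        .any(|var_str| {
            keys.iter().any(|key| {
                var_str
                    .strip_prefix(key)
                    .and_then(|rest| rest.strip_prefix('='))
                    .is_some_and(|val| val == target)
            })
        })
}
```

Requiring an exact `val == target` match (rather than a prefix match) is what keeps app ID `440` from also matching `4400`.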
|
|
||||||
|
|
||||||
/// Kill processes by command name using pkill
|
/// Kill processes by command name using pkill
|
||||||
pub fn kill_by_command(command_name: &str, signal: Signal) -> bool {
|
pub fn kill_by_command(command_name: &str, signal: Signal) -> bool {
|
||||||
let signal_name = match signal {
|
let signal_name = match signal {
|
||||||
|
|
@ -222,28 +98,21 @@ pub fn kill_by_command(command_name: &str, signal: Signal) -> bool {
|
||||||
Signal::SIGKILL => "KILL",
|
Signal::SIGKILL => "KILL",
|
||||||
_ => "TERM",
|
_ => "TERM",
|
||||||
};
|
};
|
||||||
|
|
||||||
// Use pkill to find and kill processes by command name
|
// Use pkill to find and kill processes by command name
|
||||||
let result = Command::new("pkill")
|
let result = Command::new("pkill")
|
||||||
.args([&format!("-{}", signal_name), "-f", command_name])
|
.args([&format!("-{}", signal_name), "-f", command_name])
|
||||||
.output();
|
.output();
|
||||||
|
|
||||||
match result {
|
match result {
|
||||||
Ok(output) => {
|
Ok(output) => {
|
||||||
// pkill returns 0 if processes were found and signaled
|
// pkill returns 0 if processes were found and signaled
|
||||||
if output.status.success() {
|
if output.status.success() {
|
||||||
info!(
|
info!(command = command_name, signal = signal_name, "Killed processes by command name");
|
||||||
command = command_name,
|
|
||||||
signal = signal_name,
|
|
||||||
"Killed processes by command name"
|
|
||||||
);
|
|
||||||
true
|
true
|
||||||
} else {
|
} else {
|
||||||
// No processes found is not an error
|
// No processes found is not an error
|
||||||
debug!(
|
debug!(command = command_name, "No processes found matching command name");
|
||||||
command = command_name,
|
|
||||||
"No processes found matching command name"
|
|
||||||
);
|
|
||||||
false
|
false
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
@ -256,10 +125,10 @@ pub fn kill_by_command(command_name: &str, signal: Signal) -> bool {
|
||||||
|
|
||||||
impl ManagedProcess {
|
impl ManagedProcess {
|
||||||
/// Spawn a new process in its own process group
|
/// Spawn a new process in its own process group
|
||||||
///
|
///
|
||||||
/// If `snap_name` is provided, the process is treated as a snap app and will use
|
/// If `snap_name` is provided, the process is treated as a snap app and will use
|
||||||
/// systemd scope-based killing instead of signal-based killing.
|
/// systemd scope-based killing instead of signal-based killing.
|
||||||
///
|
///
|
||||||
/// If `log_path` is provided, stdout and stderr will be redirected to that file.
|
/// If `log_path` is provided, stdout and stderr will be redirected to that file.
|
||||||
/// For snap apps, we use `script` to capture output from all child processes
|
/// For snap apps, we use `script` to capture output from all child processes
|
||||||
/// via a pseudo-terminal, since snap child processes don't inherit file descriptors.
|
/// via a pseudo-terminal, since snap child processes don't inherit file descriptors.
|
||||||
|
|
@ -285,16 +154,15 @@ impl ManagedProcess {
|
||||||
{
|
{
|
||||||
warn!(path = %parent.display(), error = %e, "Failed to create log directory");
|
warn!(path = %parent.display(), error = %e, "Failed to create log directory");
|
||||||
}
|
}
|
||||||
|
|
||||||
// Build command: script -q -c "original command" logfile
|
// Build command: script -q -c "original command" logfile
|
||||||
// -q: quiet mode (no start/done messages)
|
// -q: quiet mode (no start/done messages)
|
||||||
// -c: command to run
|
// -c: command to run
|
||||||
let original_cmd = argv
|
let original_cmd = argv.iter()
|
||||||
.iter()
|
|
||||||
.map(|arg| shell_escape::escape(std::borrow::Cow::Borrowed(arg)))
|
.map(|arg| shell_escape::escape(std::borrow::Cow::Borrowed(arg)))
|
||||||
.collect::<Vec<_>>()
|
.collect::<Vec<_>>()
|
||||||
.join(" ");
|
.join(" ");
|
||||||
|
|
||||||
let script_argv = vec![
|
let script_argv = vec![
|
||||||
"script".to_string(),
|
"script".to_string(),
|
||||||
"-q".to_string(),
|
"-q".to_string(),
|
||||||
|
|
@ -302,7 +170,7 @@ impl ManagedProcess {
|
||||||
original_cmd,
|
original_cmd,
|
||||||
log_file.to_string_lossy().to_string(),
|
log_file.to_string_lossy().to_string(),
|
||||||
];
|
];
|
||||||
|
|
||||||
info!(log_path = %log_file.display(), "Using script to capture snap output via pty");
|
info!(log_path = %log_file.display(), "Using script to capture snap output via pty");
|
||||||
(script_argv, None) // script handles the log file itself
|
(script_argv, None) // script handles the log file itself
|
||||||
}
|
}
|
||||||
|
|
@ -317,7 +185,7 @@ impl ManagedProcess {
|
||||||
|
|
||||||
// Set environment
|
// Set environment
|
||||||
cmd.env_clear();
|
cmd.env_clear();
|
||||||
|
|
||||||
// Inherit essential environment variables
|
// Inherit essential environment variables
|
||||||
// These are needed for most Linux applications to work correctly
|
// These are needed for most Linux applications to work correctly
|
||||||
let inherit_vars = [
|
let inherit_vars = [
|
||||||
|
|
@ -383,7 +251,7 @@ impl ManagedProcess {
|
||||||
"DESKTOP_SESSION",
|
"DESKTOP_SESSION",
|
||||||
"GNOME_DESKTOP_SESSION_ID",
|
"GNOME_DESKTOP_SESSION_ID",
|
||||||
];
|
];
|
||||||
|
|
||||||
for var in inherit_vars {
|
for var in inherit_vars {
|
||||||
if let Ok(val) = std::env::var(var) {
|
if let Ok(val) = std::env::var(var) {
|
||||||
cmd.env(var, val);
|
cmd.env(var, val);
|
||||||
|
|
@ -429,7 +297,7 @@ impl ManagedProcess {
|
||||||
{
|
{
|
||||||
warn!(path = %parent.display(), error = %e, "Failed to create log directory");
|
warn!(path = %parent.display(), error = %e, "Failed to create log directory");
|
||||||
}
|
}
|
||||||
|
|
||||||
// Open log file for appending (create if doesn't exist)
|
// Open log file for appending (create if doesn't exist)
|
||||||
match File::create(path) {
|
match File::create(path) {
|
||||||
Ok(file) => {
|
Ok(file) => {
|
||||||
|
|
@ -479,41 +347,37 @@ impl ManagedProcess {
|
||||||
// SAFETY: This is safe in the pre-exec context
|
// SAFETY: This is safe in the pre-exec context
|
||||||
unsafe {
|
unsafe {
|
||||||
cmd.pre_exec(|| {
|
cmd.pre_exec(|| {
|
||||||
nix::unistd::setsid().map_err(|e| std::io::Error::other(e.to_string()))?;
|
nix::unistd::setsid().map_err(|e| {
|
||||||
|
std::io::Error::other(e.to_string())
|
||||||
|
})?;
|
||||||
Ok(())
|
Ok(())
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
let child = cmd
|
let child = cmd.spawn().map_err(|e| {
|
||||||
.spawn()
|
HostError::SpawnFailed(format!("Failed to spawn {}: {}", program, e))
|
||||||
.map_err(|e| HostError::SpawnFailed(format!("Failed to spawn {}: {}", program, e)))?;
|
})?;
|
||||||
|
|
||||||
let pid = child.id();
|
let pid = child.id();
|
||||||
let pgid = pid; // After setsid, pid == pgid
|
let pgid = pid; // After setsid, pid == pgid
|
||||||
|
|
||||||
info!(pid = pid, pgid = pgid, program = %program, snap = ?snap_name, "Process spawned");
|
info!(pid = pid, pgid = pgid, program = %program, snap = ?snap_name, "Process spawned");
|
||||||
|
|
||||||
Ok(Self {
|
Ok(Self { child, pid, pgid, command_name, snap_name })
|
||||||
child,
|
|
||||||
pid,
|
|
||||||
pgid,
|
|
||||||
command_name,
|
|
||||||
snap_name,
|
|
||||||
})
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Get all descendant PIDs of this process using /proc
|
/// Get all descendant PIDs of this process using /proc
|
||||||
fn get_descendant_pids(&self) -> Vec<i32> {
|
fn get_descendant_pids(&self) -> Vec<i32> {
|
||||||
let mut descendants = Vec::new();
|
let mut descendants = Vec::new();
|
||||||
let mut to_check = vec![self.pid as i32];
|
let mut to_check = vec![self.pid as i32];
|
||||||
|
|
||||||
while let Some(parent_pid) = to_check.pop() {
|
while let Some(parent_pid) = to_check.pop() {
|
||||||
// Read /proc to find children of this PID
|
// Read /proc to find children of this PID
|
||||||
if let Ok(entries) = std::fs::read_dir("/proc") {
|
if let Ok(entries) = std::fs::read_dir("/proc") {
|
||||||
for entry in entries.flatten() {
|
for entry in entries.flatten() {
|
||||||
let name = entry.file_name();
|
let name = entry.file_name();
|
||||||
let name_str = name.to_string_lossy();
|
let name_str = name.to_string_lossy();
|
||||||
|
|
||||||
// Skip non-numeric entries (not PIDs)
|
// Skip non-numeric entries (not PIDs)
|
||||||
if let Ok(pid) = name_str.parse::<i32>() {
|
if let Ok(pid) = name_str.parse::<i32>() {
|
||||||
// Read the stat file to get parent PID
|
// Read the stat file to get parent PID
|
||||||
|
|
@ -526,18 +390,17 @@ impl ManagedProcess {
|
||||||
let fields: Vec<&str> = after_comm.split_whitespace().collect();
|
let fields: Vec<&str> = after_comm.split_whitespace().collect();
|
||||||
if fields.len() >= 2
|
if fields.len() >= 2
|
||||||
&& let Ok(ppid) = fields[1].parse::<i32>()
|
&& let Ok(ppid) = fields[1].parse::<i32>()
|
||||||
&& ppid == parent_pid
|
&& ppid == parent_pid {
|
||||||
{
|
descendants.push(pid);
|
||||||
descendants.push(pid);
|
to_check.push(pid);
|
||||||
to_check.push(pid);
|
}
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
descendants
|
descendants
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -548,7 +411,7 @@ impl ManagedProcess {
|
||||||
if self.snap_name.is_none() {
|
if self.snap_name.is_none() {
|
||||||
kill_by_command(&self.command_name, Signal::SIGTERM);
|
kill_by_command(&self.command_name, Signal::SIGTERM);
|
||||||
}
|
}
|
||||||
|
|
||||||
// Also try to kill the process group
|
// Also try to kill the process group
|
||||||
let pgid = Pid::from_raw(-(self.pgid as i32)); // Negative for process group
|
let pgid = Pid::from_raw(-(self.pgid as i32)); // Negative for process group
|
||||||
|
|
||||||
|
|
@ -563,7 +426,7 @@ impl ManagedProcess {
|
||||||
debug!(pgid = self.pgid, error = %e, "Failed to send SIGTERM to process group");
|
debug!(pgid = self.pgid, error = %e, "Failed to send SIGTERM to process group");
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Also kill all descendants (they may have escaped the process group)
|
// Also kill all descendants (they may have escaped the process group)
|
||||||
let descendants = self.get_descendant_pids();
|
let descendants = self.get_descendant_pids();
|
||||||
for pid in &descendants {
|
for pid in &descendants {
|
||||||
|
|
@ -572,7 +435,7 @@ impl ManagedProcess {
|
||||||
if !descendants.is_empty() {
|
if !descendants.is_empty() {
|
||||||
debug!(descendants = ?descendants, "Sent SIGTERM to descendant processes");
|
debug!(descendants = ?descendants, "Sent SIGTERM to descendant processes");
|
||||||
}
|
}
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -583,7 +446,7 @@ impl ManagedProcess {
|
||||||
if self.snap_name.is_none() {
|
if self.snap_name.is_none() {
|
||||||
kill_by_command(&self.command_name, Signal::SIGKILL);
|
kill_by_command(&self.command_name, Signal::SIGKILL);
|
||||||
}
|
}
|
||||||
|
|
||||||
// Also try to kill the process group
|
// Also try to kill the process group
|
||||||
let pgid = Pid::from_raw(-(self.pgid as i32));
|
let pgid = Pid::from_raw(-(self.pgid as i32));
|
||||||
|
|
||||||
|
|
@ -598,7 +461,7 @@ impl ManagedProcess {
|
||||||
debug!(pgid = self.pgid, error = %e, "Failed to send SIGKILL to process group");
|
debug!(pgid = self.pgid, error = %e, "Failed to send SIGKILL to process group");
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Also kill all descendants (they may have escaped the process group)
|
// Also kill all descendants (they may have escaped the process group)
|
||||||
let descendants = self.get_descendant_pids();
|
let descendants = self.get_descendant_pids();
|
||||||
for pid in &descendants {
|
for pid in &descendants {
|
||||||
|
|
@ -607,7 +470,7 @@ impl ManagedProcess {
|
||||||
if !descendants.is_empty() {
|
if !descendants.is_empty() {
|
||||||
debug!(descendants = ?descendants, "Sent SIGKILL to descendant processes");
|
debug!(descendants = ?descendants, "Sent SIGKILL to descendant processes");
|
||||||
}
|
}
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -666,7 +529,7 @@ impl ManagedProcess {
|
||||||
Err(e) => Err(HostError::Internal(format!("Wait failed: {}", e))),
|
Err(e) => Err(HostError::Internal(format!("Wait failed: {}", e))),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Clean up resources associated with this process
|
/// Clean up resources associated with this process
|
||||||
pub fn cleanup(&self) {
|
pub fn cleanup(&self) {
|
||||||
// Nothing to clean up for systemd scopes - systemd handles it
|
// Nothing to clean up for systemd scopes - systemd handles it
|
||||||
|
|
|
||||||
|
|
@@ -148,10 +148,9 @@ impl LinuxVolumeController {

         // Output: "Volume: front-left: 65536 / 100% / -0.00 dB, front-right: ..."
         if let Some(percent_str) = stdout.split('/').nth(1)
-            && let Ok(percent) = percent_str.trim().trim_end_matches('%').parse::<u8>()
-        {
+            && let Ok(percent) = percent_str.trim().trim_end_matches('%').parse::<u8>() {
             status.percent = percent;
         }
     }

     // Check mute status
@@ -186,10 +185,9 @@ impl LinuxVolumeController {
             // Extract percentage: [100%]
             if let Some(start) = line.find('[')
                 && let Some(end) = line[start..].find('%')
-                && let Ok(percent) = line[start + 1..start + end].parse::<u8>()
-            {
+                && let Ok(percent) = line[start + 1..start + end].parse::<u8>() {
                 status.percent = percent;
             }
             // Check mute status: [on] or [off]
             status.muted = line.contains("[off]");
             break;
@@ -212,11 +210,7 @@ impl LinuxVolumeController {
     /// Set volume via PulseAudio
     fn set_volume_pulseaudio(percent: u8) -> VolumeResult<()> {
         Command::new("pactl")
-            .args([
-                "set-sink-volume",
-                "@DEFAULT_SINK@",
-                &format!("{}%", percent),
-            ])
+            .args(["set-sink-volume", "@DEFAULT_SINK@", &format!("{}%", percent)])
             .status()
             .map_err(|e| VolumeError::Backend(e.to_string()))?;
         Ok(())
@@ -329,10 +323,7 @@ impl VolumeController for LinuxVolumeController {

     async fn volume_up(&self, step: u8) -> VolumeResult<()> {
         let current = self.get_status().await?;
-        let new_volume = current
-            .percent
-            .saturating_add(step)
-            .min(self.capabilities.max_volume);
+        let new_volume = current.percent.saturating_add(step).min(self.capabilities.max_volume);
         self.set_volume(new_volume).await
     }

@@ -240,17 +240,17 @@ fn build_hud_content(state: SharedState) -> gtk4::Box {
     // Handle slider value changes with debouncing
     // Create a channel for volume requests - the worker will debounce them
     let (volume_tx, volume_rx) = mpsc::channel::<u8>();

     // Spawn a dedicated volume worker thread that debounces requests
     std::thread::spawn(move || {
         const DEBOUNCE_MS: u64 = 50; // Wait 50ms for more changes before sending

         loop {
             // Wait for first volume request
             let Ok(mut latest_percent) = volume_rx.recv() else {
                 break; // Channel closed
             };

             // Drain any pending requests, keeping only the latest value
             // Use a short timeout to debounce rapid changes
             loop {
@@ -267,24 +267,24 @@ fn build_hud_content(state: SharedState) -> gtk4::Box {
                 }
             }
         }

             // Send only the final value
             if let Err(e) = crate::volume::set_volume(latest_percent) {
                 tracing::error!("Failed to set volume: {}", e);
             }
         }
     });

     let slider_changing = std::rc::Rc::new(std::cell::Cell::new(false));
     let slider_changing_clone = slider_changing.clone();

     volume_slider.connect_change_value(move |slider, _, value| {
         slider_changing_clone.set(true);
         let percent = value.clamp(0.0, 100.0) as u8;

         // Send to debounce worker (non-blocking)
         let _ = volume_tx.send(percent);

         // Allow the slider to update immediately in UI
         slider.set_value(value);
         glib::Propagation::Stop

@@ -414,11 +414,11 @@ fn build_hud_content(state: SharedState) -> gtk4::Box {
             let remaining = time_remaining_at_warning.saturating_sub(elapsed);
             time_display_clone.set_remaining(Some(remaining));
             // Use configuration-defined message if present, otherwise show time-based message
-            let warning_text = message
-                .clone()
-                .unwrap_or_else(|| format!("Only {} seconds remaining!", remaining));
+            let warning_text = message.clone().unwrap_or_else(|| {
+                format!("Only {} seconds remaining!", remaining)
+            });
             warning_label_clone.set_text(&warning_text);

             // Apply severity-based CSS classes
             warning_box_clone.remove_css_class("warning-info");
             warning_box_clone.remove_css_class("warning-warn");
@@ -434,7 +434,7 @@ fn build_hud_content(state: SharedState) -> gtk4::Box {
                 warning_box_clone.add_css_class("warning-critical");
             }
         }

         warning_box_clone.set_visible(true);
     }
     SessionState::Ending { reason, .. } => {
@@ -457,19 +457,19 @@ fn build_hud_content(state: SharedState) -> gtk4::Box {
         if let Some(volume) = state.volume_info() {
             volume_button_clone.set_icon_name(volume.icon_name());
             volume_label_clone.set_text(&format!("{}%", volume.percent));

             // Only update slider if user is not actively dragging it
             if !slider_changing_for_update.get() {
                 volume_slider_clone.set_value(volume.percent as f64);
             }
             // Reset the changing flag after a short delay
             slider_changing_for_update.set(false);

             // Disable slider when muted or when restrictions don't allow changes
             let slider_enabled = !volume.muted && volume.restrictions.allow_change;
             volume_slider_clone.set_sensitive(slider_enabled);
             volume_button_clone.set_sensitive(volume.restrictions.allow_mute);

             // Update slider range based on restrictions
             let min = volume.restrictions.min_volume.unwrap_or(0) as f64;
             let max = volume.restrictions.max_volume.unwrap_or(100) as f64;

@@ -35,18 +35,16 @@ impl BatteryStatus {

             // Check for battery
             if name_str.starts_with("BAT")
-                && let Some((percent, charging)) = read_battery_info(&path)
-            {
+                && let Some((percent, charging)) = read_battery_info(&path) {
                 status.percent = Some(percent);
                 status.charging = charging;
             }

             // Check for AC adapter
             if (name_str.starts_with("AC") || name_str.contains("ADP"))
                 && let Some(online) = read_ac_status(&path)
-            {
+                && let Some(online) = read_ac_status(&path) {
                 status.ac_connected = online;
             }
         }
     }

@@ -43,7 +43,8 @@ fn main() -> Result<()> {
     // Initialize logging
     tracing_subscriber::fmt()
         .with_env_filter(
-            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(&args.log_level)),
+            EnvFilter::try_from_default_env()
+                .unwrap_or_else(|_| EnvFilter::new(&args.log_level)),
         )
         .init();

@@ -218,18 +218,17 @@ impl SharedState {
                 entry_name,
                 ..
             } = state
-                && sid == session_id
-            {
+                && sid == session_id {
                 *state = SessionState::Warning {
                     session_id: session_id.clone(),
                     entry_id: entry_id.clone(),
                     entry_name: entry_name.clone(),
                     warning_issued_at: std::time::Instant::now(),
                     time_remaining_at_warning: time_remaining.as_secs(),
                     message: message.clone(),
                     severity: *severity,
                 };
             }
         });
     }

@@ -60,7 +60,9 @@ pub fn toggle_mute() -> anyhow::Result<()> {
             shepherd_api::ResponseResult::Ok(ResponsePayload::VolumeDenied { reason }) => {
                 Err(anyhow::anyhow!("Volume denied: {}", reason))
             }
-            shepherd_api::ResponseResult::Err(e) => Err(anyhow::anyhow!("Error: {}", e.message)),
+            shepherd_api::ResponseResult::Err(e) => {
+                Err(anyhow::anyhow!("Error: {}", e.message))
+            }
             _ => Err(anyhow::anyhow!("Unexpected response")),
         }
     })
@@ -81,7 +83,9 @@ pub fn set_volume(percent: u8) -> anyhow::Result<()> {
             shepherd_api::ResponseResult::Ok(ResponsePayload::VolumeDenied { reason }) => {
                 Err(anyhow::anyhow!("Volume denied: {}", reason))
             }
-            shepherd_api::ResponseResult::Err(e) => Err(anyhow::anyhow!("Error: {}", e.message)),
+            shepherd_api::ResponseResult::Err(e) => {
+                Err(anyhow::anyhow!("Error: {}", e.message))
+            }
             _ => Err(anyhow::anyhow!("Unexpected response")),
         }
     })

@@ -8,7 +8,7 @@ use std::path::{Path, PathBuf};
 use std::sync::Arc;
 use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
 use tokio::net::{UnixListener, UnixStream};
-use tokio::sync::{Mutex, RwLock, broadcast, mpsc};
+use tokio::sync::{broadcast, mpsc, Mutex, RwLock};
 use tracing::{debug, error, info, warn};

 use crate::{IpcError, IpcResult};
@@ -75,18 +75,7 @@ impl IpcServer {
         let listener = UnixListener::bind(&self.socket_path)?;

         // Set socket permissions (readable/writable by owner and group)
-        if let Err(err) =
-            std::fs::set_permissions(&self.socket_path, std::fs::Permissions::from_mode(0o660))
-        {
-            if err.kind() == std::io::ErrorKind::PermissionDenied {
-                warn!(
-                    path = %self.socket_path.display(),
-                    "Permission denied setting socket permissions; continuing with defaults"
-                );
-            } else {
-                return Err(err.into());
-            }
-        }
+        std::fs::set_permissions(&self.socket_path, std::fs::Permissions::from_mode(0o660))?;

         info!(path = %self.socket_path.display(), "IPC server listening");

@@ -189,8 +178,7 @@ impl IpcServer {
         match serde_json::from_str::<Request>(line) {
             Ok(request) => {
                 // Check for subscribe command
-                if matches!(request.command, shepherd_api::Command::SubscribeEvents)
-                {
+                if matches!(request.command, shepherd_api::Command::SubscribeEvents) {
                     let mut clients = clients.write().await;
                     if let Some(handle) = clients.get_mut(&client_id_clone) {
                         handle.subscribed = true;
@@ -340,18 +328,7 @@ mod tests {
         let socket_path = dir.path().join("test.sock");

         let mut server = IpcServer::new(&socket_path);
-        if let Err(err) = server.start().await {
-            if let IpcError::Io(ref io_err) = err
-                && io_err.kind() == std::io::ErrorKind::PermissionDenied
-            {
-                eprintln!(
-                    "Skipping IPC server start test due to permission error: {}",
-                    io_err
-                );
-                return;
-            }
-            panic!("IPC server start failed: {err}");
-        }
+        server.start().await.unwrap();

         assert!(socket_path.exists());
     }

@@ -24,7 +24,6 @@ tracing-subscriber = { workspace = true }
 anyhow = { workspace = true }
 chrono = { workspace = true }
 dirs = "5.0"
-gilrs = "0.11"

 [features]
 default = []

@@ -4,10 +4,9 @@ use gtk4::glib;
 use gtk4::prelude::*;
 use std::path::PathBuf;
 use std::sync::Arc;
-use std::time::Duration;
 use tokio::runtime::Runtime;
 use tokio::sync::mpsc;
-use tracing::{debug, error, info, warn};
+use tracing::{debug, error, info};

 use crate::client::{CommandClient, ServiceClient};
 use crate::grid::LauncherGrid;
@@ -42,13 +41,6 @@ window {
     border-color: #4a90d9;
 }

-.launcher-tile:focus,
-.launcher-tile:focus-visible {
-    background: #1f3460;
-    background-color: #1f3460;
-    border-color: #ffd166;
-}
-
 .launcher-tile:active {
     background: #0f3460;
     background-color: #0f3460;
@@ -176,14 +168,6 @@ impl LauncherApp {

         // Create command client for sending commands
         let command_client = Arc::new(CommandClient::new(&socket_path));
-        Self::setup_keyboard_input(&window, &grid);
-        Self::setup_gamepad_input(
-            &window,
-            &grid,
-            command_client.clone(),
-            runtime.clone(),
-            state.clone(),
-        );

         // Connect grid launch callback
         let cmd_client = command_client.clone();
@@ -312,8 +296,7 @@ impl LauncherApp {
         let state_for_client = state.clone();
         let socket_for_client = socket_path.clone();
         std::thread::spawn(move || {
-            let rt = tokio::runtime::Runtime::new()
-                .expect("Failed to create tokio runtime for event loop");
+            let rt = tokio::runtime::Runtime::new().expect("Failed to create tokio runtime for event loop");
             rt.block_on(async move {
                 let client = ServiceClient::new(socket_for_client, state_for_client, command_rx);
                 client.run().await;
@@ -359,7 +342,6 @@ impl LauncherApp {
         if let Some(grid) = grid {
             grid.set_entries(entries);
             grid.set_tiles_sensitive(true);
-            grid.grab_focus();
         }
         if let Some(ref win) = window {
             win.set_visible(true);
@@ -399,199 +381,6 @@ impl LauncherApp {
         window.present();
     }

-    fn setup_keyboard_input(window: &gtk4::ApplicationWindow, grid: &LauncherGrid) {
-        let key_controller = gtk4::EventControllerKey::new();
-        key_controller.set_propagation_phase(gtk4::PropagationPhase::Capture);
-        let grid_weak = grid.downgrade();
-        key_controller.connect_key_pressed(move |_, key, _, _| {
-            let Some(grid) = grid_weak.upgrade() else {
-                return glib::Propagation::Proceed;
-            };
-
-            let handled = match key {
-                gtk4::gdk::Key::Up | gtk4::gdk::Key::w | gtk4::gdk::Key::W => {
-                    grid.move_selection(0, -1);
-                    true
-                }
-                gtk4::gdk::Key::Down | gtk4::gdk::Key::s | gtk4::gdk::Key::S => {
-                    grid.move_selection(0, 1);
-                    true
-                }
-                gtk4::gdk::Key::Left | gtk4::gdk::Key::a | gtk4::gdk::Key::A => {
-                    grid.move_selection(-1, 0);
-                    true
-                }
-                gtk4::gdk::Key::Right | gtk4::gdk::Key::d | gtk4::gdk::Key::D => {
-                    grid.move_selection(1, 0);
-                    true
-                }
-                gtk4::gdk::Key::Return | gtk4::gdk::Key::KP_Enter | gtk4::gdk::Key::space => {
-                    grid.launch_selected();
-                    true
-                }
-                _ => false,
-            };
-
-            if handled {
-                glib::Propagation::Stop
-            } else {
-                glib::Propagation::Proceed
-            }
-        });
-        window.add_controller(key_controller);
-    }
-
-    fn setup_gamepad_input(
-        _window: &gtk4::ApplicationWindow,
-        grid: &LauncherGrid,
-        command_client: Arc<CommandClient>,
-        runtime: Arc<Runtime>,
-        state: SharedState,
-    ) {
-        let mut gilrs = match gilrs::Gilrs::new() {
-            Ok(gilrs) => gilrs,
-            Err(e) => {
-                warn!(error = %e, "Gamepad input unavailable");
-                return;
-            }
-        };
-
-        let grid_weak = grid.downgrade();
-        let cmd_client = command_client.clone();
-        let rt = runtime.clone();
-        let state_clone = state.clone();
-        let mut axis_state = GamepadAxisState::default();
-
-        glib::timeout_add_local(Duration::from_millis(16), move || {
-            while let Some(event) = gilrs.next_event() {
-                let Some(grid) = grid_weak.upgrade() else {
-                    return glib::ControlFlow::Break;
-                };
-
-                match event.event {
-                    gilrs::EventType::ButtonPressed(button, _) => match button {
-                        gilrs::Button::DPadUp => grid.move_selection(0, -1),
-                        gilrs::Button::DPadDown => grid.move_selection(0, 1),
-                        gilrs::Button::DPadLeft => grid.move_selection(-1, 0),
-                        gilrs::Button::DPadRight => grid.move_selection(1, 0),
-                        gilrs::Button::South | gilrs::Button::East | gilrs::Button::Start => {
-                            grid.launch_selected();
-                        }
-                        gilrs::Button::Mode => {
-                            Self::request_stop_current(
-                                cmd_client.clone(),
-                                rt.clone(),
-                                state_clone.clone(),
-                            );
-                        }
-                        _ => {}
-                    },
-                    gilrs::EventType::AxisChanged(axis, value, _) => {
-                        Self::handle_gamepad_axis(&grid, axis, value, &mut axis_state);
-                    }
-                    _ => {}
-                }
-            }
-
-            glib::ControlFlow::Continue
-        });
-    }
-
-    fn request_stop_current(
-        command_client: Arc<CommandClient>,
-        runtime: Arc<Runtime>,
-        state: SharedState,
-    ) {
-        runtime.spawn(async move {
-            match command_client.stop_current().await {
-                Ok(response) => match response.result {
-                    shepherd_api::ResponseResult::Ok(shepherd_api::ResponsePayload::Stopped) => {
-                        info!("StopCurrent acknowledged");
-                    }
-                    shepherd_api::ResponseResult::Err(err) => {
-                        debug!(error = %err.message, "StopCurrent request denied");
-                    }
-                    _ => {
-                        debug!("Unexpected StopCurrent response payload");
-                    }
-                },
-                Err(e) => {
-                    error!(error = %e, "StopCurrent request failed");
-                    state.set(LauncherState::Error {
-                        message: format!("Failed to stop current activity: {}", e),
-                    });
-                }
-            }
-        });
-    }
-
-    fn handle_gamepad_axis(
-        grid: &LauncherGrid,
-        axis: gilrs::Axis,
-        value: f32,
-        axis_state: &mut GamepadAxisState,
-    ) {
-        const THRESHOLD: f32 = 0.65;
-
-        match axis {
-            gilrs::Axis::LeftStickX | gilrs::Axis::DPadX => {
-                if value <= -THRESHOLD {
-                    if !axis_state.left {
-                        grid.move_selection(-1, 0);
-                    }
-                    axis_state.left = true;
-                    axis_state.right = false;
-                } else if value >= THRESHOLD {
-                    if !axis_state.right {
-                        grid.move_selection(1, 0);
-                    }
-                    axis_state.right = true;
-                    axis_state.left = false;
-                } else {
-                    axis_state.left = false;
-                    axis_state.right = false;
-                }
-            }
-            gilrs::Axis::LeftStickY => {
-                if value <= -THRESHOLD {
-                    if !axis_state.down {
-                        grid.move_selection(0, 1);
-                    }
-                    axis_state.down = true;
-                    axis_state.up = false;
-                } else if value >= THRESHOLD {
-                    if !axis_state.up {
-                        grid.move_selection(0, -1);
-                    }
-                    axis_state.up = true;
-                    axis_state.down = false;
-                } else {
-                    axis_state.up = false;
-                    axis_state.down = false;
-                }
-            }
-            gilrs::Axis::DPadY => {
-                if value <= -THRESHOLD {
-                    if !axis_state.up {
-                        grid.move_selection(0, -1);
-                    }
-                    axis_state.up = true;
-                    axis_state.down = false;
-                } else if value >= THRESHOLD {
-                    if !axis_state.down {
-                        grid.move_selection(0, 1);
-                    }
-                    axis_state.down = true;
-                    axis_state.up = false;
-                } else {
-                    axis_state.up = false;
-                    axis_state.down = false;
-                }
-            }
-            _ => {}
-        }
-    }
-
     fn create_loading_view() -> gtk4::Box {
         let container = gtk4::Box::new(gtk4::Orientation::Vertical, 16);
         container.set_halign(gtk4::Align::Center);
@@ -669,11 +458,3 @@ impl LauncherApp {
         (container, retry_button)
     }
 }
-
-#[derive(Default)]
-struct GamepadAxisState {
-    left: bool,
-    right: bool,
-    up: bool,
-    down: bool,
-}

@@ -57,7 +57,7 @@ impl ServiceClient {
             Err(e) => {
                 error!(error = %e, "Connection error");
                 self.state.set(LauncherState::Disconnected);

                 // Wait before reconnecting
                 sleep(Duration::from_secs(2)).await;
             }
@@ -69,7 +69,7 @@ impl ServiceClient {
         self.state.set(LauncherState::Connecting);

         info!(path = %self.socket_path.display(), "Connecting to shepherdd");

         let mut client = IpcClient::connect(&self.socket_path)
             .await
             .context("Failed to connect to shepherdd")?;
@@ -162,17 +162,11 @@ impl ServiceClient {
             }
             ResponsePayload::Entries(entries) => {
                 // Only update if we're in idle state
-                if matches!(
-                    self.state.get(),
-                    LauncherState::Idle { .. } | LauncherState::Connecting
-                ) {
+                if matches!(self.state.get(), LauncherState::Idle { .. } | LauncherState::Connecting) {
                     self.state.set(LauncherState::Idle { entries });
                 }
             }
-            ResponsePayload::LaunchApproved {
-                session_id,
-                deadline,
-            } => {
+            ResponsePayload::LaunchApproved { session_id, deadline } => {
                 let now = shepherd_util::now();
                 // For unlimited sessions (deadline=None), time_remaining is None
                 let time_remaining = deadline.and_then(|d| {
@@ -201,7 +195,9 @@ impl ServiceClient {
                 Ok(())
             }
             ResponseResult::Err(e) => {
-                self.state.set(LauncherState::Error { message: e.message });
+                self.state.set(LauncherState::Error {
+                    message: e.message,
+                });
                 Ok(())
             }
         }
@@ -222,23 +218,17 @@ impl CommandClient {

     pub async fn launch(&self, entry_id: &EntryId) -> Result<Response> {
         let mut client = IpcClient::connect(&self.socket_path).await?;
-        client
-            .send(Command::Launch {
-                entry_id: entry_id.clone(),
-            })
-            .await
-            .map_err(Into::into)
+        client.send(Command::Launch {
+            entry_id: entry_id.clone(),
+        }).await.map_err(Into::into)
     }

     #[allow(dead_code)]
     pub async fn stop_current(&self) -> Result<Response> {
         let mut client = IpcClient::connect(&self.socket_path).await?;
-        client
-            .send(Command::StopCurrent {
-                mode: shepherd_api::StopMode::Graceful,
-            })
-            .await
-            .map_err(Into::into)
+        client.send(Command::StopCurrent {
+            mode: shepherd_api::StopMode::Graceful,
+        }).await.map_err(Into::into)
     }

     pub async fn get_state(&self) -> Result<Response> {
@@ -249,10 +239,7 @@ impl CommandClient {
     #[allow(dead_code)]
     pub async fn list_entries(&self) -> Result<Response> {
         let mut client = IpcClient::connect(&self.socket_path).await?;
-        client
-            .send(Command::ListEntries { at_time: None })
-            .await
-            .map_err(Into::into)
+        client.send(Command::ListEntries { at_time: None }).await.map_err(Into::into)
     }
 }

@@ -265,6 +252,6 @@ fn reason_to_message(reason: &ReasonCode) -> &'static str {
         ReasonCode::SessionActive { .. } => "Another session is active",
         ReasonCode::UnsupportedKind { .. } => "Entry type not supported",
         ReasonCode::Disabled { .. } => "Entry disabled",
-        ReasonCode::InternetUnavailable { .. } => "Internet connection unavailable",
+        ReasonCode::NetworkUnavailable { .. } => "Network connection required",
     }
 }

@@ -51,8 +51,7 @@ mod imp {

         // Configure flow box
         self.flow_box.set_homogeneous(true);
-        self.flow_box
-            .set_selection_mode(gtk4::SelectionMode::Single);
+        self.flow_box.set_selection_mode(gtk4::SelectionMode::None);
         self.flow_box.set_max_children_per_line(6);
         self.flow_box.set_min_children_per_line(2);
         self.flow_box.set_row_spacing(24);
@@ -61,7 +60,6 @@ mod imp {
         self.flow_box.set_valign(gtk4::Align::Center);
         self.flow_box.set_hexpand(true);
         self.flow_box.set_vexpand(true);
-        self.flow_box.set_focusable(true);
         self.flow_box.add_css_class("launcher-grid");

         // Wrap in a scrolled window
@@ -119,17 +117,14 @@ impl LauncherGrid {
             let on_launch = imp.on_launch.clone();
             tile.connect_clicked(move |tile| {
                 if let Some(entry_id) = tile.entry_id()
-                    && let Some(callback) = on_launch.borrow().as_ref()
-                {
+                    && let Some(callback) = on_launch.borrow().as_ref() {
                     callback(entry_id);
                 }
             });

             imp.flow_box.insert(&tile, -1);
             imp.tiles.borrow_mut().push(tile);
         }

-        self.select_first();
     }

     /// Enable or disable all tiles
@@ -138,114 +133,6 @@ impl LauncherGrid {
             tile.set_sensitive(sensitive);
         }
     }

-    pub fn select_first(&self) {
-        let imp = self.imp();
-        if let Some(child) = imp.flow_box.child_at_index(0) {
-            imp.flow_box.select_child(&child);
-            child.grab_focus();
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn move_selection(&self, dx: i32, dy: i32) {
|
|
||||||
let imp = self.imp();
|
|
||||||
if imp.tiles.borrow().is_empty() {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
let current_child = imp
|
|
||||||
.flow_box
|
|
||||||
.selected_children()
|
|
||||||
.first()
|
|
||||||
.cloned()
|
|
||||||
.or_else(|| imp.flow_box.child_at_index(0));
|
|
||||||
let Some(current_child) = current_child else {
|
|
||||||
return;
|
|
||||||
};
|
|
||||||
|
|
||||||
let current_alloc = current_child.allocation();
|
|
||||||
let current_x = current_alloc.x();
|
|
||||||
let current_y = current_alloc.y();
|
|
||||||
let mut best: Option<(gtk4::FlowBoxChild, i32, i32)> = None;
|
|
||||||
|
|
||||||
let tile_count = imp.tiles.borrow().len() as i32;
|
|
||||||
for idx in 0..tile_count {
|
|
||||||
let Some(candidate) = imp.flow_box.child_at_index(idx) else {
|
|
||||||
continue;
|
|
||||||
};
|
|
||||||
if candidate == current_child {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
let alloc = candidate.allocation();
|
|
||||||
let x = alloc.x();
|
|
||||||
let y = alloc.y();
|
|
||||||
|
|
||||||
let is_direction_match = match (dx, dy) {
|
|
||||||
(-1, 0) => y == current_y && x < current_x,
|
|
||||||
(1, 0) => y == current_y && x > current_x,
|
|
||||||
(0, -1) => y < current_y,
|
|
||||||
(0, 1) => y > current_y,
|
|
||||||
_ => false,
|
|
||||||
};
|
|
||||||
if !is_direction_match {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
let primary_dist = match (dx, dy) {
|
|
||||||
(-1, 0) | (1, 0) => (x - current_x).abs(),
|
|
||||||
(0, -1) | (0, 1) => (y - current_y).abs(),
|
|
||||||
_ => i32::MAX,
|
|
||||||
};
|
|
||||||
let secondary_dist = match (dx, dy) {
|
|
||||||
(-1, 0) | (1, 0) => (y - current_y).abs(),
|
|
||||||
(0, -1) | (0, 1) => (x - current_x).abs(),
|
|
||||||
_ => i32::MAX,
|
|
||||||
};
|
|
||||||
|
|
||||||
let replace = match best {
|
|
||||||
None => true,
|
|
||||||
Some((_, best_primary, best_secondary)) => {
|
|
||||||
primary_dist < best_primary
|
|
||||||
|| (primary_dist == best_primary && secondary_dist < best_secondary)
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
if replace {
|
|
||||||
best = Some((candidate, primary_dist, secondary_dist));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if let Some((child, _, _)) = best {
|
|
||||||
imp.flow_box.select_child(&child);
|
|
||||||
child.grab_focus();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn launch_selected(&self) {
|
|
||||||
let imp = self.imp();
|
|
||||||
let maybe_child = imp.flow_box.selected_children().first().cloned();
|
|
||||||
let Some(child) = maybe_child else {
|
|
||||||
return;
|
|
||||||
};
|
|
||||||
|
|
||||||
let index = child.index();
|
|
||||||
if index < 0 {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
let tile = imp.tiles.borrow().get(index as usize).cloned();
|
|
||||||
if let Some(tile) = tile {
|
|
||||||
if !tile.is_sensitive() {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
if let Some(entry_id) = tile.entry_id()
|
|
||||||
&& let Some(callback) = imp.on_launch.borrow().as_ref()
|
|
||||||
{
|
|
||||||
callback(entry_id);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
impl Default for LauncherGrid {
|
impl Default for LauncherGrid {
|
||||||
|
|
|
||||||
|
|
@ -9,10 +9,8 @@ mod grid;
|
||||||
mod state;
|
mod state;
|
||||||
mod tile;
|
mod tile;
|
||||||
|
|
||||||
use crate::client::CommandClient;
|
|
||||||
use anyhow::Result;
|
use anyhow::Result;
|
||||||
use clap::Parser;
|
use clap::Parser;
|
||||||
use shepherd_api::{ErrorCode, ResponsePayload, ResponseResult};
|
|
||||||
use shepherd_util::default_socket_path;
|
use shepherd_util::default_socket_path;
|
||||||
use std::path::PathBuf;
|
use std::path::PathBuf;
|
||||||
use tracing_subscriber::EnvFilter;
|
use tracing_subscriber::EnvFilter;
|
||||||
|
|
@ -29,10 +27,6 @@ struct Args {
|
||||||
/// Log level
|
/// Log level
|
||||||
#[arg(short, long, default_value = "info")]
|
#[arg(short, long, default_value = "info")]
|
||||||
log_level: String,
|
log_level: String,
|
||||||
|
|
||||||
/// Send StopCurrent to shepherdd and exit (for compositor keybindings)
|
|
||||||
#[arg(long)]
|
|
||||||
stop_current: bool,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
fn main() -> Result<()> {
|
fn main() -> Result<()> {
|
||||||
|
|
@ -41,7 +35,8 @@ fn main() -> Result<()> {
|
||||||
// Initialize logging
|
// Initialize logging
|
||||||
tracing_subscriber::fmt()
|
tracing_subscriber::fmt()
|
||||||
.with_env_filter(
|
.with_env_filter(
|
||||||
EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(&args.log_level)),
|
EnvFilter::try_from_default_env()
|
||||||
|
.unwrap_or_else(|_| EnvFilter::new(&args.log_level)),
|
||||||
)
|
)
|
||||||
.init();
|
.init();
|
||||||
|
|
||||||
|
|
@ -50,33 +45,6 @@ fn main() -> Result<()> {
|
||||||
// Determine socket path with fallback to default
|
// Determine socket path with fallback to default
|
||||||
let socket_path = args.socket.unwrap_or_else(default_socket_path);
|
let socket_path = args.socket.unwrap_or_else(default_socket_path);
|
||||||
|
|
||||||
if args.stop_current {
|
|
||||||
let runtime = tokio::runtime::Runtime::new()?;
|
|
||||||
runtime.block_on(async move {
|
|
||||||
let client = CommandClient::new(&socket_path);
|
|
||||||
match client.stop_current().await {
|
|
||||||
Ok(response) => match response.result {
|
|
||||||
ResponseResult::Ok(ResponsePayload::Stopped) => {
|
|
||||||
tracing::info!("StopCurrent succeeded");
|
|
||||||
Ok(())
|
|
||||||
}
|
|
||||||
ResponseResult::Err(err) if err.code == ErrorCode::NoActiveSession => {
|
|
||||||
tracing::debug!("No active session to stop");
|
|
||||||
Ok(())
|
|
||||||
}
|
|
||||||
ResponseResult::Err(err) => {
|
|
||||||
anyhow::bail!("StopCurrent failed: {}", err.message)
|
|
||||||
}
|
|
||||||
ResponseResult::Ok(payload) => {
|
|
||||||
anyhow::bail!("Unexpected StopCurrent response: {:?}", payload)
|
|
||||||
}
|
|
||||||
},
|
|
||||||
Err(e) => anyhow::bail!("Failed to send StopCurrent: {}", e),
|
|
||||||
}
|
|
||||||
})?;
|
|
||||||
return Ok(());
|
|
||||||
}
|
|
||||||
|
|
||||||
// Run GTK application
|
// Run GTK application
|
||||||
let application = app::LauncherApp::new(socket_path);
|
let application = app::LauncherApp::new(socket_path);
|
||||||
let exit_code = application.run();
|
let exit_code = application.run();
|
||||||
|
|
|
||||||
|
|
@ -1,6 +1,6 @@
|
||||||
//! Launcher application state management
|
//! Launcher application state management
|
||||||
|
|
||||||
use shepherd_api::{EntryView, Event, EventPayload, ServiceStateSnapshot};
|
use shepherd_api::{ServiceStateSnapshot, EntryView, Event, EventPayload};
|
||||||
use shepherd_util::SessionId;
|
use shepherd_util::SessionId;
|
||||||
use std::time::Duration;
|
use std::time::Duration;
|
||||||
use tokio::sync::watch;
|
use tokio::sync::watch;
|
||||||
|
|
@ -18,7 +18,7 @@ pub enum LauncherState {
|
||||||
/// Launch requested, waiting for response
|
/// Launch requested, waiting for response
|
||||||
Launching {
|
Launching {
|
||||||
#[allow(dead_code)]
|
#[allow(dead_code)]
|
||||||
entry_id: String,
|
entry_id: String
|
||||||
},
|
},
|
||||||
/// Session is running
|
/// Session is running
|
||||||
SessionActive {
|
SessionActive {
|
||||||
|
|
@ -62,10 +62,7 @@ impl SharedState {
|
||||||
tracing::info!(event = ?event.payload, "Received event from shepherdd");
|
tracing::info!(event = ?event.payload, "Received event from shepherdd");
|
||||||
match event.payload {
|
match event.payload {
|
||||||
EventPayload::StateChanged(snapshot) => {
|
EventPayload::StateChanged(snapshot) => {
|
||||||
tracing::info!(
|
tracing::info!(has_session = snapshot.current_session.is_some(), "Applying state snapshot");
|
||||||
has_session = snapshot.current_session.is_some(),
|
|
||||||
"Applying state snapshot"
|
|
||||||
);
|
|
||||||
self.apply_snapshot(snapshot);
|
self.apply_snapshot(snapshot);
|
||||||
}
|
}
|
||||||
EventPayload::SessionStarted {
|
EventPayload::SessionStarted {
|
||||||
|
|
@ -90,12 +87,7 @@ impl SharedState {
|
||||||
time_remaining,
|
time_remaining,
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
EventPayload::SessionEnded {
|
EventPayload::SessionEnded { session_id, entry_id, reason, .. } => {
|
||||||
session_id,
|
|
||||||
entry_id,
|
|
||||||
reason,
|
|
||||||
..
|
|
||||||
} => {
|
|
||||||
tracing::info!(session_id = %session_id, entry_id = %entry_id, reason = ?reason, "Session ended event - setting Connecting");
|
tracing::info!(session_id = %session_id, entry_id = %entry_id, reason = ?reason, "Session ended event - setting Connecting");
|
||||||
// Will be followed by StateChanged, but set to connecting
|
// Will be followed by StateChanged, but set to connecting
|
||||||
// to ensure grid reloads
|
// to ensure grid reloads
|
||||||
|
|
@ -125,6 +117,10 @@ impl SharedState {
|
||||||
EventPayload::VolumeChanged { .. } => {
|
EventPayload::VolumeChanged { .. } => {
|
||||||
// Volume events are handled by HUD
|
// Volume events are handled by HUD
|
||||||
}
|
}
|
||||||
|
EventPayload::ConnectivityChanged { .. } => {
|
||||||
|
// Connectivity changes may affect entry availability - request fresh state
|
||||||
|
self.set(LauncherState::Connecting);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -79,8 +79,6 @@ impl LauncherTile {
|
||||||
let fallback_icon = match entry.kind_tag {
|
let fallback_icon = match entry.kind_tag {
|
||||||
shepherd_api::EntryKindTag::Process => "application-x-executable",
|
shepherd_api::EntryKindTag::Process => "application-x-executable",
|
||||||
shepherd_api::EntryKindTag::Snap => "application-x-executable",
|
shepherd_api::EntryKindTag::Snap => "application-x-executable",
|
||||||
shepherd_api::EntryKindTag::Steam => "application-x-executable",
|
|
||||||
shepherd_api::EntryKindTag::Flatpak => "application-x-executable",
|
|
||||||
shepherd_api::EntryKindTag::Vm => "computer",
|
shepherd_api::EntryKindTag::Vm => "computer",
|
||||||
shepherd_api::EntryKindTag::Media => "video-x-generic",
|
shepherd_api::EntryKindTag::Media => "video-x-generic",
|
||||||
shepherd_api::EntryKindTag::Custom => "applications-other",
|
shepherd_api::EntryKindTag::Custom => "applications-other",
|
||||||
|
|
@ -89,7 +87,7 @@ impl LauncherTile {
|
||||||
// Set icon, first trying to load as an image file, then as an icon name
|
// Set icon, first trying to load as an image file, then as an icon name
|
||||||
if let Some(ref icon_ref) = entry.icon_ref {
|
if let Some(ref icon_ref) = entry.icon_ref {
|
||||||
let mut loaded = false;
|
let mut loaded = false;
|
||||||
|
|
||||||
// First, try to load as an image file (JPG, PNG, etc.)
|
// First, try to load as an image file (JPG, PNG, etc.)
|
||||||
// Expand ~ to home directory if present
|
// Expand ~ to home directory if present
|
||||||
let expanded_path = if icon_ref.starts_with("~/") {
|
let expanded_path = if icon_ref.starts_with("~/") {
|
||||||
|
|
@ -101,14 +99,14 @@ impl LauncherTile {
|
||||||
} else {
|
} else {
|
||||||
icon_ref.clone()
|
icon_ref.clone()
|
||||||
};
|
};
|
||||||
|
|
||||||
let path = std::path::Path::new(&expanded_path);
|
let path = std::path::Path::new(&expanded_path);
|
||||||
if path.exists() && path.is_file() {
|
if path.exists() && path.is_file() {
|
||||||
// Try to load as an image file
|
// Try to load as an image file
|
||||||
imp.icon.set_from_file(Some(path));
|
imp.icon.set_from_file(Some(path));
|
||||||
loaded = true;
|
loaded = true;
|
||||||
}
|
}
|
||||||
|
|
||||||
// If not loaded as a file, try as an icon name from the theme
|
// If not loaded as a file, try as an icon name from the theme
|
||||||
if !loaded {
|
if !loaded {
|
||||||
let icon_theme = gtk4::IconTheme::for_display(&self.display());
|
let icon_theme = gtk4::IconTheme::for_display(&self.display());
|
||||||
|
|
@ -143,11 +141,7 @@ impl LauncherTile {
|
||||||
}
|
}
|
||||||
|
|
||||||
pub fn entry_id(&self) -> Option<shepherd_util::EntryId> {
|
pub fn entry_id(&self) -> Option<shepherd_util::EntryId> {
|
||||||
self.imp()
|
self.imp().entry.borrow().as_ref().map(|e| e.entry_id.clone())
|
||||||
.entry
|
|
||||||
.borrow()
|
|
||||||
.as_ref()
|
|
||||||
.map(|e| e.entry_id.clone())
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -1,7 +1,7 @@
|
||||||
//! SQLite-based store implementation
|
//! SQLite-based store implementation
|
||||||
|
|
||||||
use chrono::{DateTime, Local, NaiveDate};
|
use chrono::{DateTime, Local, NaiveDate};
|
||||||
use rusqlite::{Connection, OptionalExtension, params};
|
use rusqlite::{params, Connection, OptionalExtension};
|
||||||
use shepherd_util::EntryId;
|
use shepherd_util::EntryId;
|
||||||
use std::path::Path;
|
use std::path::Path;
|
||||||
use std::sync::Mutex;
|
use std::sync::Mutex;
|
||||||
|
|
@ -98,8 +98,9 @@ impl Store for SqliteStore {
|
||||||
fn get_recent_audits(&self, limit: usize) -> StoreResult<Vec<AuditEvent>> {
|
fn get_recent_audits(&self, limit: usize) -> StoreResult<Vec<AuditEvent>> {
|
||||||
let conn = self.conn.lock().unwrap();
|
let conn = self.conn.lock().unwrap();
|
||||||
|
|
||||||
let mut stmt = conn
|
let mut stmt = conn.prepare(
|
||||||
.prepare("SELECT id, timestamp, event_json FROM audit_log ORDER BY id DESC LIMIT ?")?;
|
"SELECT id, timestamp, event_json FROM audit_log ORDER BY id DESC LIMIT ?",
|
||||||
|
)?;
|
||||||
|
|
||||||
let rows = stmt.query_map([limit], |row| {
|
let rows = stmt.query_map([limit], |row| {
|
||||||
let id: i64 = row.get(0)?;
|
let id: i64 = row.get(0)?;
|
||||||
|
|
@ -180,7 +181,11 @@ impl Store for SqliteStore {
|
||||||
Ok(result)
|
Ok(result)
|
||||||
}
|
}
|
||||||
|
|
||||||
fn set_cooldown_until(&self, entry_id: &EntryId, until: DateTime<Local>) -> StoreResult<()> {
|
fn set_cooldown_until(
|
||||||
|
&self,
|
||||||
|
entry_id: &EntryId,
|
||||||
|
until: DateTime<Local>,
|
||||||
|
) -> StoreResult<()> {
|
||||||
let conn = self.conn.lock().unwrap();
|
let conn = self.conn.lock().unwrap();
|
||||||
|
|
||||||
conn.execute(
|
conn.execute(
|
||||||
|
|
@ -199,10 +204,7 @@ impl Store for SqliteStore {
|
||||||
|
|
||||||
fn clear_cooldown(&self, entry_id: &EntryId) -> StoreResult<()> {
|
fn clear_cooldown(&self, entry_id: &EntryId) -> StoreResult<()> {
|
||||||
let conn = self.conn.lock().unwrap();
|
let conn = self.conn.lock().unwrap();
|
||||||
conn.execute(
|
conn.execute("DELETE FROM cooldowns WHERE entry_id = ?", [entry_id.as_str()])?;
|
||||||
"DELETE FROM cooldowns WHERE entry_id = ?",
|
|
||||||
[entry_id.as_str()],
|
|
||||||
)?;
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -210,11 +212,9 @@ impl Store for SqliteStore {
|
||||||
let conn = self.conn.lock().unwrap();
|
let conn = self.conn.lock().unwrap();
|
||||||
|
|
||||||
let json: Option<String> = conn
|
let json: Option<String> = conn
|
||||||
.query_row(
|
.query_row("SELECT snapshot_json FROM snapshot WHERE id = 1", [], |row| {
|
||||||
"SELECT snapshot_json FROM snapshot WHERE id = 1",
|
row.get(0)
|
||||||
[],
|
})
|
||||||
|row| row.get(0),
|
|
||||||
)
|
|
||||||
.optional()?;
|
.optional()?;
|
||||||
|
|
||||||
match json {
|
match json {
|
||||||
|
|
@ -246,7 +246,9 @@ impl Store for SqliteStore {
|
||||||
|
|
||||||
fn is_healthy(&self) -> bool {
|
fn is_healthy(&self) -> bool {
|
||||||
match self.conn.lock() {
|
match self.conn.lock() {
|
||||||
Ok(conn) => conn.query_row("SELECT 1", [], |_| Ok(())).is_ok(),
|
Ok(conn) => {
|
||||||
|
conn.query_row("SELECT 1", [], |_| Ok(())).is_ok()
|
||||||
|
}
|
||||||
Err(_) => {
|
Err(_) => {
|
||||||
warn!("Store lock poisoned");
|
warn!("Store lock poisoned");
|
||||||
false
|
false
|
||||||
|
|
|
||||||
|
|
@ -30,7 +30,11 @@ pub trait Store: Send + Sync {
|
||||||
fn get_cooldown_until(&self, entry_id: &EntryId) -> StoreResult<Option<DateTime<Local>>>;
|
fn get_cooldown_until(&self, entry_id: &EntryId) -> StoreResult<Option<DateTime<Local>>>;
|
||||||
|
|
||||||
/// Set cooldown expiry time for an entry
|
/// Set cooldown expiry time for an entry
|
||||||
fn set_cooldown_until(&self, entry_id: &EntryId, until: DateTime<Local>) -> StoreResult<()>;
|
fn set_cooldown_until(
|
||||||
|
&self,
|
||||||
|
entry_id: &EntryId,
|
||||||
|
until: DateTime<Local>,
|
||||||
|
) -> StoreResult<()>;
|
||||||
|
|
||||||
/// Clear cooldown for an entry
|
/// Clear cooldown for an entry
|
||||||
fn clear_cooldown(&self, entry_id: &EntryId) -> StoreResult<()>;
|
fn clear_cooldown(&self, entry_id: &EntryId) -> StoreResult<()>;
|
||||||
|
|
|
||||||
|
|
@ -40,9 +40,7 @@ pub fn default_socket_path() -> PathBuf {
|
||||||
pub fn socket_path_without_env() -> PathBuf {
|
pub fn socket_path_without_env() -> PathBuf {
|
||||||
// Try XDG_RUNTIME_DIR first (typically /run/user/<uid>)
|
// Try XDG_RUNTIME_DIR first (typically /run/user/<uid>)
|
||||||
if let Ok(runtime_dir) = std::env::var("XDG_RUNTIME_DIR") {
|
if let Ok(runtime_dir) = std::env::var("XDG_RUNTIME_DIR") {
|
||||||
return PathBuf::from(runtime_dir)
|
return PathBuf::from(runtime_dir).join(APP_DIR).join(SOCKET_FILENAME);
|
||||||
.join(APP_DIR)
|
|
||||||
.join(SOCKET_FILENAME);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Fallback to /tmp with username
|
// Fallback to /tmp with username
|
||||||
|
|
@ -111,13 +109,10 @@ pub fn default_log_dir() -> PathBuf {
|
||||||
/// Get the parent directory of the socket (for creating it)
|
/// Get the parent directory of the socket (for creating it)
|
||||||
pub fn socket_dir() -> PathBuf {
|
pub fn socket_dir() -> PathBuf {
|
||||||
let socket_path = socket_path_without_env();
|
let socket_path = socket_path_without_env();
|
||||||
socket_path
|
socket_path.parent().map(|p| p.to_path_buf()).unwrap_or_else(|| {
|
||||||
.parent()
|
// Should never happen with our paths, but just in case
|
||||||
.map(|p| p.to_path_buf())
|
PathBuf::from("/tmp").join(APP_DIR)
|
||||||
.unwrap_or_else(|| {
|
})
|
||||||
// Should never happen with our paths, but just in case
|
|
||||||
PathBuf::from("/tmp").join(APP_DIR)
|
|
||||||
})
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Configuration subdirectory name (uses "shepherd" not "shepherdd")
|
/// Configuration subdirectory name (uses "shepherd" not "shepherdd")
|
||||||
|
|
|
||||||
|
|
@ -42,13 +42,10 @@ impl RateLimiter {
|
||||||
pub fn check(&mut self, client_id: &ClientId) -> bool {
|
pub fn check(&mut self, client_id: &ClientId) -> bool {
|
||||||
let now = Instant::now();
|
let now = Instant::now();
|
||||||
|
|
||||||
let bucket = self
|
let bucket = self.clients.entry(client_id.clone()).or_insert(ClientBucket {
|
||||||
.clients
|
tokens: self.max_tokens,
|
||||||
.entry(client_id.clone())
|
last_refill: now,
|
||||||
.or_insert(ClientBucket {
|
});
|
||||||
tokens: self.max_tokens,
|
|
||||||
last_refill: now,
|
|
||||||
});
|
|
||||||
|
|
||||||
// Refill tokens if interval has passed
|
// Refill tokens if interval has passed
|
||||||
let elapsed = now.duration_since(bucket.last_refill);
|
let elapsed = now.duration_since(bucket.last_refill);
|
||||||
|
|
@ -75,8 +72,9 @@ impl RateLimiter {
|
||||||
/// Clean up stale client entries
|
/// Clean up stale client entries
|
||||||
pub fn cleanup(&mut self, stale_after: Duration) {
|
pub fn cleanup(&mut self, stale_after: Duration) {
|
||||||
let now = Instant::now();
|
let now = Instant::now();
|
||||||
self.clients
|
self.clients.retain(|_, bucket| {
|
||||||
.retain(|_, bucket| now.duration_since(bucket.last_refill) < stale_after);
|
now.duration_since(bucket.last_refill) < stale_after
|
||||||
|
});
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -37,9 +37,7 @@ fn get_mock_time_offset() -> Option<chrono::Duration> {
|
||||||
{
|
{
|
||||||
if let Ok(mock_time_str) = std::env::var(MOCK_TIME_ENV_VAR) {
|
if let Ok(mock_time_str) = std::env::var(MOCK_TIME_ENV_VAR) {
|
||||||
// Parse the mock time string
|
// Parse the mock time string
|
||||||
if let Ok(naive_dt) =
|
if let Ok(naive_dt) = NaiveDateTime::parse_from_str(&mock_time_str, "%Y-%m-%d %H:%M:%S") {
|
||||||
NaiveDateTime::parse_from_str(&mock_time_str, "%Y-%m-%d %H:%M:%S")
|
|
||||||
{
|
|
||||||
if let Some(mock_dt) = Local.from_local_datetime(&naive_dt).single() {
|
if let Some(mock_dt) = Local.from_local_datetime(&naive_dt).single() {
|
||||||
let real_now = chrono::Local::now();
|
let real_now = chrono::Local::now();
|
||||||
let offset = mock_dt.signed_duration_since(real_now);
|
let offset = mock_dt.signed_duration_since(real_now);
|
||||||
|
|
@ -85,7 +83,7 @@ pub fn is_mock_time_active() -> bool {
|
||||||
#[allow(clippy::disallowed_methods)] // This is the wrapper that provides mock time support
|
#[allow(clippy::disallowed_methods)] // This is the wrapper that provides mock time support
|
||||||
pub fn now() -> DateTime<Local> {
|
pub fn now() -> DateTime<Local> {
|
||||||
let real_now = chrono::Local::now();
|
let real_now = chrono::Local::now();
|
||||||
|
|
||||||
if let Some(offset) = get_mock_time_offset() {
|
if let Some(offset) = get_mock_time_offset() {
|
||||||
real_now + offset
|
real_now + offset
|
||||||
} else {
|
} else {
|
||||||
|
|
@ -203,8 +201,9 @@ impl DaysOfWeek {
|
||||||
pub const SATURDAY: u8 = 1 << 5;
|
pub const SATURDAY: u8 = 1 << 5;
|
||||||
pub const SUNDAY: u8 = 1 << 6;
|
pub const SUNDAY: u8 = 1 << 6;
|
||||||
|
|
||||||
pub const WEEKDAYS: DaysOfWeek =
|
pub const WEEKDAYS: DaysOfWeek = DaysOfWeek(
|
||||||
DaysOfWeek(Self::MONDAY | Self::TUESDAY | Self::WEDNESDAY | Self::THURSDAY | Self::FRIDAY);
|
Self::MONDAY | Self::TUESDAY | Self::WEDNESDAY | Self::THURSDAY | Self::FRIDAY,
|
||||||
|
);
|
||||||
pub const WEEKENDS: DaysOfWeek = DaysOfWeek(Self::SATURDAY | Self::SUNDAY);
|
pub const WEEKENDS: DaysOfWeek = DaysOfWeek(Self::SATURDAY | Self::SUNDAY);
|
||||||
pub const ALL_DAYS: DaysOfWeek = DaysOfWeek(0x7F);
|
pub const ALL_DAYS: DaysOfWeek = DaysOfWeek(0x7F);
|
||||||
pub const NONE: DaysOfWeek = DaysOfWeek(0);
|
pub const NONE: DaysOfWeek = DaysOfWeek(0);
|
||||||
|
|
@ -447,14 +446,14 @@ mod tests {
|
||||||
fn test_parse_mock_time_invalid_formats() {
|
fn test_parse_mock_time_invalid_formats() {
|
||||||
// Test that invalid formats are rejected
|
// Test that invalid formats are rejected
|
||||||
let invalid_formats = [
|
let invalid_formats = [
|
||||||
"2025-12-25", // Missing time
|
"2025-12-25", // Missing time
|
||||||
"14:30:00", // Missing date
|
"14:30:00", // Missing date
|
||||||
"2025/12/25 14:30:00", // Wrong date separator
|
"2025/12/25 14:30:00", // Wrong date separator
|
||||||
"2025-12-25T14:30:00", // ISO format (not supported)
|
"2025-12-25T14:30:00", // ISO format (not supported)
|
||||||
"Dec 25, 2025 14:30", // Wrong format
|
"Dec 25, 2025 14:30", // Wrong format
|
||||||
"25-12-2025 14:30:00", // Wrong date order
|
"25-12-2025 14:30:00", // Wrong date order
|
||||||
"", // Empty string
|
"", // Empty string
|
||||||
"not a date", // Invalid string
|
"not a date", // Invalid string
|
||||||
];
|
];
|
||||||
|
|
||||||
for format_str in &invalid_formats {
|
for format_str in &invalid_formats {
|
||||||
|
|
@ -475,12 +474,12 @@ mod tests {
|
||||||
let naive_dt = NaiveDateTime::parse_from_str(mock_time_str, "%Y-%m-%d %H:%M:%S").unwrap();
|
let naive_dt = NaiveDateTime::parse_from_str(mock_time_str, "%Y-%m-%d %H:%M:%S").unwrap();
|
||||||
let mock_dt = Local.from_local_datetime(&naive_dt).single().unwrap();
|
let mock_dt = Local.from_local_datetime(&naive_dt).single().unwrap();
|
||||||
let real_now = chrono::Local::now();
|
let real_now = chrono::Local::now();
|
||||||
|
|
||||||
let offset = mock_dt.signed_duration_since(real_now);
|
let offset = mock_dt.signed_duration_since(real_now);
|
||||||
|
|
||||||
// The offset should be applied correctly
|
// The offset should be applied correctly
|
||||||
let simulated_now = real_now + offset;
|
let simulated_now = real_now + offset;
|
||||||
|
|
||||||
// The simulated time should be very close to the mock time
|
// The simulated time should be very close to the mock time
|
||||||
// (within a second, accounting for test execution time)
|
// (within a second, accounting for test execution time)
|
||||||
let diff = (simulated_now - mock_dt).num_seconds().abs();
|
let diff = (simulated_now - mock_dt).num_seconds().abs();
|
||||||
|
|
@ -496,25 +495,25 @@ mod tests {
|
||||||
fn test_mock_time_advances_with_real_time() {
|
fn test_mock_time_advances_with_real_time() {
|
||||||
// Test that mock time advances at the same rate as real time
|
// Test that mock time advances at the same rate as real time
|
||||||
// This tests the concept, not the actual implementation (since OnceLock is static)
|
// This tests the concept, not the actual implementation (since OnceLock is static)
|
||||||
|
|
||||||
let mock_time_str = "2025-12-25 14:30:00";
|
let mock_time_str = "2025-12-25 14:30:00";
|
||||||
let naive_dt = NaiveDateTime::parse_from_str(mock_time_str, "%Y-%m-%d %H:%M:%S").unwrap();
|
let naive_dt = NaiveDateTime::parse_from_str(mock_time_str, "%Y-%m-%d %H:%M:%S").unwrap();
|
||||||
let mock_dt = Local.from_local_datetime(&naive_dt).single().unwrap();
|
let mock_dt = Local.from_local_datetime(&naive_dt).single().unwrap();
|
||||||
|
|
||||||
let real_t1 = chrono::Local::now();
|
let real_t1 = chrono::Local::now();
|
||||||
let offset = mock_dt.signed_duration_since(real_t1);
|
let offset = mock_dt.signed_duration_since(real_t1);
|
||||||
|
|
||||||
// Simulate time passing
|
// Simulate time passing
|
||||||
std::thread::sleep(Duration::from_millis(100));
|
std::thread::sleep(Duration::from_millis(100));
|
||||||
|
|
||||||
let real_t2 = chrono::Local::now();
|
let real_t2 = chrono::Local::now();
|
||||||
let simulated_t1 = real_t1 + offset;
|
let simulated_t1 = real_t1 + offset;
|
||||||
let simulated_t2 = real_t2 + offset;
|
let simulated_t2 = real_t2 + offset;
|
||||||
|
|
||||||
// The simulated times should have advanced by the same amount as real times
|
// The simulated times should have advanced by the same amount as real times
|
||||||
let real_elapsed = real_t2.signed_duration_since(real_t1);
|
let real_elapsed = real_t2.signed_duration_since(real_t1);
|
||||||
let simulated_elapsed = simulated_t2.signed_duration_since(simulated_t1);
|
let simulated_elapsed = simulated_t2.signed_duration_since(simulated_t1);
|
||||||
|
|
||||||
assert_eq!(
|
assert_eq!(
|
||||||
real_elapsed.num_milliseconds(),
|
real_elapsed.num_milliseconds(),
|
||||||
simulated_elapsed.num_milliseconds(),
|
simulated_elapsed.num_milliseconds(),
|
||||||
|
|
@ -526,33 +525,24 @@ mod tests {
|
||||||
fn test_availability_with_specific_time() {
|
fn test_availability_with_specific_time() {
|
||||||
// Test that availability windows work correctly with a specific time
|
// Test that availability windows work correctly with a specific time
|
||||||
// This validates that the mock time would affect availability checks
|
// This validates that the mock time would affect availability checks
|
||||||
|
|
||||||
let window = TimeWindow::new(
|
let window = TimeWindow::new(
|
||||||
DaysOfWeek::ALL_DAYS,
|
DaysOfWeek::ALL_DAYS,
|
||||||
WallClock::new(14, 0).unwrap(), // 2 PM
|
WallClock::new(14, 0).unwrap(), // 2 PM
|
||||||
WallClock::new(18, 0).unwrap(), // 6 PM
|
WallClock::new(18, 0).unwrap(), // 6 PM
|
||||||
);
|
);
|
||||||
|
|
||||||
// Time within window
|
// Time within window
|
||||||
let in_window = Local.with_ymd_and_hms(2025, 12, 25, 15, 0, 0).unwrap();
|
let in_window = Local.with_ymd_and_hms(2025, 12, 25, 15, 0, 0).unwrap();
|
||||||
assert!(
|
assert!(window.contains(&in_window), "15:00 should be within 14:00-18:00 window");
|
||||||
window.contains(&in_window),
|
|
||||||
"15:00 should be within 14:00-18:00 window"
|
|
||||||
);
|
|
||||||
|
|
||||||
// Time before window
|
// Time before window
|
||||||
let before_window = Local.with_ymd_and_hms(2025, 12, 25, 10, 0, 0).unwrap();
|
let before_window = Local.with_ymd_and_hms(2025, 12, 25, 10, 0, 0).unwrap();
|
||||||
assert!(
|
assert!(!window.contains(&before_window), "10:00 should be before 14:00-18:00 window");
|
||||||
!window.contains(&before_window),
|
|
||||||
"10:00 should be before 14:00-18:00 window"
|
|
||||||
);
|
|
||||||
|
|
||||||
// Time after window
|
// Time after window
|
||||||
let after_window = Local.with_ymd_and_hms(2025, 12, 25, 20, 0, 0).unwrap();
|
let after_window = Local.with_ymd_and_hms(2025, 12, 25, 20, 0, 0).unwrap();
|
||||||
assert!(
|
assert!(!window.contains(&after_window), "20:00 should be after 14:00-18:00 window");
|
||||||
!window.contains(&after_window),
|
|
||||||
"20:00 should be after 14:00-18:00 window"
|
|
||||||
);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
|
|
@@ -563,27 +553,18 @@ mod tests {
             WallClock::new(14, 0).unwrap(),
             WallClock::new(18, 0).unwrap(),
         );

         // Thursday at 3 PM - should be available (weekday, in time window)
         let thursday = Local.with_ymd_and_hms(2025, 12, 25, 15, 0, 0).unwrap(); // Christmas 2025 is Thursday
-        assert!(
-            window.contains(&thursday),
-            "Thursday 15:00 should be in weekday afternoon window"
-        );
+        assert!(window.contains(&thursday), "Thursday 15:00 should be in weekday afternoon window");

         // Saturday at 3 PM - should NOT be available (weekend)
         let saturday = Local.with_ymd_and_hms(2025, 12, 27, 15, 0, 0).unwrap();
-        assert!(
-            !window.contains(&saturday),
-            "Saturday should not be in weekday window"
-        );
+        assert!(!window.contains(&saturday), "Saturday should not be in weekday window");

         // Sunday at 3 PM - should NOT be available (weekend)
         let sunday = Local.with_ymd_and_hms(2025, 12, 28, 15, 0, 0).unwrap();
-        assert!(
-            !window.contains(&sunday),
-            "Sunday should not be in weekday window"
-        );
+        assert!(!window.contains(&sunday), "Sunday should not be in weekday window");
     }
 }
@@ -596,7 +577,7 @@ mod mock_time_integration_tests {
     /// This test documents the expected behavior of the mock time feature.
     /// Due to the static OnceLock, actual integration testing requires
     /// running with the environment variable set externally.
     ///
     /// To manually test:
     /// ```bash
     /// SHEPHERD_MOCK_TIME="2025-12-25 14:30:00" cargo test
@@ -605,7 +586,7 @@ mod mock_time_integration_tests {
     fn test_mock_time_documentation() {
         // This test verifies the mock time constants and expected behavior
         assert_eq!(MOCK_TIME_ENV_VAR, "SHEPHERD_MOCK_TIME");

         // The expected format is documented
         let expected_format = "%Y-%m-%d %H:%M:%S";
         let example = "2025-12-25 14:30:00";
@@ -627,10 +608,10 @@ mod mock_time_integration_tests {
         let t1 = now();
         std::thread::sleep(Duration::from_millis(50));
         let t2 = now();

         // t2 should be after t1
         assert!(t2 > t1, "Time should advance forward");

         // The difference should be approximately 50ms (with some tolerance)
         let diff = t2.signed_duration_since(t1);
         assert!(
@@ -27,6 +27,7 @@ tracing-subscriber = { workspace = true }
 tokio = { workspace = true }
 anyhow = { workspace = true }
 clap = { version = "4.5", features = ["derive", "env"] }
+nix = { workspace = true }

 [dev-dependencies]
 tempfile = { workspace = true }
@@ -1,100 +0,0 @@
-//! Internet connectivity monitoring for shepherdd.
-
-use shepherd_config::{InternetCheckScheme, InternetCheckTarget, Policy};
-use shepherd_core::CoreEngine;
-use std::sync::Arc;
-use std::time::Duration;
-use tokio::net::TcpStream;
-use tokio::sync::Mutex;
-use tokio::time;
-use tracing::{debug, warn};
-
-pub struct InternetMonitor {
-    targets: Vec<InternetCheckTarget>,
-    interval: Duration,
-    timeout: Duration,
-}
-
-impl InternetMonitor {
-    pub fn from_policy(policy: &Policy) -> Option<Self> {
-        let mut targets = Vec::new();
-
-        if let Some(check) = policy.service.internet.check.clone() {
-            targets.push(check);
-        }
-
-        for entry in &policy.entries {
-            if entry.internet.required
-                && let Some(check) = entry.internet.check.clone()
-                && !targets.contains(&check)
-            {
-                targets.push(check);
-            }
-        }
-
-        if targets.is_empty() {
-            return None;
-        }
-
-        Some(Self {
-            targets,
-            interval: policy.service.internet.interval,
-            timeout: policy.service.internet.timeout,
-        })
-    }
-
-    pub async fn run(self, engine: Arc<Mutex<CoreEngine>>) {
-        // Initial check
-        self.check_all(&engine).await;
-
-        let mut interval = time::interval(self.interval);
-        loop {
-            interval.tick().await;
-            self.check_all(&engine).await;
-        }
-    }
-
-    async fn check_all(&self, engine: &Arc<Mutex<CoreEngine>>) {
-        for target in &self.targets {
-            let available = check_target(target, self.timeout).await;
-            let changed = {
-                let mut eng = engine.lock().await;
-                eng.set_internet_status(target.clone(), available)
-            };
-
-            if changed {
-                debug!(
-                    check = %target.original,
-                    available,
-                    "Internet connectivity status changed"
-                );
-            }
-        }
-    }
-}
-
-async fn check_target(target: &InternetCheckTarget, timeout: Duration) -> bool {
-    match target.scheme {
-        InternetCheckScheme::Tcp | InternetCheckScheme::Http | InternetCheckScheme::Https => {
-            let connect = TcpStream::connect((target.host.as_str(), target.port));
-            match time::timeout(timeout, connect).await {
-                Ok(Ok(stream)) => {
-                    drop(stream);
-                    true
-                }
-                Ok(Err(err)) => {
-                    debug!(
-                        check = %target.original,
-                        error = %err,
-                        "Internet check failed"
-                    );
-                    false
-                }
-                Err(_) => {
-                    warn!(check = %target.original, "Internet check timed out");
-                    false
-                }
-            }
-        }
-    }
-}
@@ -8,30 +8,33 @@
 //! - Host adapter (Linux)
 //! - IPC server
 //! - Volume control
+//! - Network connectivity monitoring

 use anyhow::{Context, Result};
 use clap::Parser;
 use shepherd_api::{
-    Command, ErrorCode, ErrorInfo, Event, EventPayload, HealthStatus, Response, ResponsePayload,
-    SessionEndReason, StopMode, VolumeInfo, VolumeRestrictions,
+    Command, ConnectivityStatus, ErrorCode, ErrorInfo, Event, EventPayload, HealthStatus,
+    ReasonCode, Response, ResponsePayload, SessionEndReason, StopMode, VolumeInfo,
+    VolumeRestrictions,
 };
-use shepherd_config::{VolumePolicy, load_config};
+use shepherd_config::{load_config, VolumePolicy};
 use shepherd_core::{CoreEngine, CoreEvent, LaunchDecision, StopDecision};
 use shepherd_host_api::{HostAdapter, HostEvent, StopMode as HostStopMode, VolumeController};
-use shepherd_host_linux::{LinuxHost, LinuxVolumeController};
+use shepherd_host_linux::{
+    ConnectivityConfig, ConnectivityEvent, ConnectivityHandle, ConnectivityMonitor, LinuxHost,
+    LinuxVolumeController,
+};
 use shepherd_ipc::{IpcServer, ServerMessage};
 use shepherd_store::{AuditEvent, AuditEventType, SqliteStore, Store};
-use shepherd_util::{ClientId, MonotonicInstant, RateLimiter, default_config_path};
+use shepherd_util::{default_config_path, ClientId, MonotonicInstant, RateLimiter};
 use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
-use tokio::signal::unix::{SignalKind, signal};
+use tokio::signal::unix::{signal, SignalKind};
-use tokio::sync::Mutex;
+use tokio::sync::{mpsc, watch, Mutex};
 use tracing::{debug, error, info, warn};
 use tracing_subscriber::EnvFilter;

-mod internet;
-
 /// shepherdd - Policy enforcement service for child-focused computing
 #[derive(Parser, Debug)]
 #[command(name = "shepherdd")]
@@ -62,11 +65,12 @@ struct Service {
     ipc: Arc<IpcServer>,
     store: Arc<dyn Store>,
     rate_limiter: RateLimiter,
-    internet_monitor: Option<internet::InternetMonitor>,
+    connectivity: ConnectivityHandle,
+    shutdown_tx: watch::Sender<bool>,
 }

 impl Service {
-    async fn new(args: &Args) -> Result<Self> {
+    async fn new(args: &Args) -> Result<(Self, mpsc::Receiver<ConnectivityEvent>)> {
         // Load configuration
         let policy = load_config(&args.config)
             .with_context(|| format!("Failed to load config from {:?}", args.config))?;
@@ -119,11 +123,9 @@ impl Service {
         }

         // Initialize core engine
+        let network_policy = policy.network.clone();
         let engine = CoreEngine::new(policy, store.clone(), host.capabilities().clone());

-        // Initialize internet connectivity monitor (if configured)
-        let internet_monitor = internet::InternetMonitor::from_policy(engine.policy());
-
         // Initialize IPC server
         let mut ipc = IpcServer::new(&socket_path);
         ipc.start().await?;
@@ -133,18 +135,43 @@ impl Service {
         // Rate limiter: 30 requests per second per client
         let rate_limiter = RateLimiter::new(30, Duration::from_secs(1));

-        Ok(Self {
-            engine,
-            host,
-            volume,
-            ipc: Arc::new(ipc),
-            store,
-            rate_limiter,
-            internet_monitor,
-        })
+        // Initialize connectivity monitor
+        let (shutdown_tx, shutdown_rx) = watch::channel(false);
+        let connectivity_config = ConnectivityConfig {
+            check_url: network_policy.check_url,
+            check_interval: network_policy.check_interval,
+            check_timeout: network_policy.check_timeout,
+        };
+        let (connectivity_monitor, connectivity_events) =
+            ConnectivityMonitor::new(connectivity_config, shutdown_rx);
+        let connectivity = ConnectivityHandle::from_monitor(&connectivity_monitor);
+
+        // Spawn connectivity monitor task
+        tokio::spawn(async move {
+            connectivity_monitor.run().await;
+        });
+
+        info!(
+            check_url = %connectivity.global_check_url(),
+            "Connectivity monitor started"
+        );
+
+        Ok((
+            Self {
+                engine,
+                host,
+                volume,
+                ipc: Arc::new(ipc),
+                store,
+                rate_limiter,
+                connectivity,
+                shutdown_tx,
+            },
+            connectivity_events,
+        ))
     }

-    async fn run(self) -> Result<()> {
+    async fn run(self, mut connectivity_events: mpsc::Receiver<ConnectivityEvent>) -> Result<()> {
         // Start host process monitor
         let _monitor_handle = self.host.start_monitor();
@@ -162,14 +189,8 @@ impl Service {
         let host = self.host.clone();
         let volume = self.volume.clone();
         let store = self.store.clone();
-
-        // Start internet connectivity monitoring (if configured)
-        if let Some(monitor) = self.internet_monitor {
-            let engine_ref = engine.clone();
-            tokio::spawn(async move {
-                monitor.run(engine_ref).await;
-            });
-        }
+        let connectivity = self.connectivity.clone();
+        let shutdown_tx = self.shutdown_tx.clone();

         // Spawn IPC accept task
         let ipc_accept = ipc_ref.clone();
@@ -180,11 +201,12 @@ impl Service {
         });

         // Set up signal handlers
-        let mut sigterm =
-            signal(SignalKind::terminate()).context("Failed to create SIGTERM handler")?;
-        let mut sigint =
-            signal(SignalKind::interrupt()).context("Failed to create SIGINT handler")?;
-        let mut sighup = signal(SignalKind::hangup()).context("Failed to create SIGHUP handler")?;
+        let mut sigterm = signal(SignalKind::terminate())
+            .context("Failed to create SIGTERM handler")?;
+        let mut sigint = signal(SignalKind::interrupt())
+            .context("Failed to create SIGINT handler")?;
+        let mut sighup = signal(SignalKind::hangup())
+            .context("Failed to create SIGHUP handler")?;

         // Main event loop
         let tick_interval = Duration::from_millis(100);
@@ -232,7 +254,12 @@ impl Service {

                 // IPC messages
                 Some(msg) = ipc_messages.recv() => {
-                    Self::handle_ipc_message(&engine, &host, &volume, &ipc_ref, &store, &rate_limiter, msg).await;
+                    Self::handle_ipc_message(&engine, &host, &volume, &ipc_ref, &store, &rate_limiter, &connectivity, msg).await;
+                }
+
+                // Connectivity events
+                Some(conn_event) = connectivity_events.recv() => {
+                    Self::handle_connectivity_event(&engine, &ipc_ref, &connectivity, conn_event).await;
                 }
             }
         }
@@ -240,21 +267,17 @@ impl Service {
         // Graceful shutdown
         info!("Shutting down shepherdd");

+        // Signal connectivity monitor to stop
+        let _ = shutdown_tx.send(true);
+
         // Stop all running sessions
         {
             let engine = engine.lock().await;
             if let Some(session) = engine.current_session() {
                 info!(session_id = %session.plan.session_id, "Stopping active session");
-                if let Some(handle) = &session.host_handle
-                    && let Err(e) = host
-                        .stop(
-                            handle,
-                            HostStopMode::Graceful {
-                                timeout: Duration::from_secs(5),
-                            },
-                        )
-                        .await
-                {
+                if let Some(handle) = &session.host_handle && let Err(e) = host.stop(handle, HostStopMode::Graceful {
+                    timeout: Duration::from_secs(5),
+                }).await {
                     warn!(error = %e, "Failed to stop session gracefully");
                 }
             }
@@ -307,7 +330,9 @@ impl Service {
                 // Get the host handle and stop it
                 let handle = {
                     let engine = engine.lock().await;
-                    engine.current_session().and_then(|s| s.host_handle.clone())
+                    engine
+                        .current_session()
+                        .and_then(|s| s.host_handle.clone())
                 };

                 if let Some(handle) = handle
@@ -319,10 +344,10 @@ impl Service {
                     },
                 )
                 .await
                 {
                     warn!(error = %e, "Failed to stop session gracefully, forcing");
                     let _ = host.stop(&handle, HostStopMode::Force).await;
                 }

                 ipc.broadcast_event(Event::new(EventPayload::SessionExpiring {
                     session_id: session_id.clone(),
@@ -409,10 +434,7 @@ impl Service {
                     engine.notify_session_exited(status.code, now_mono, now)
                 };

-                info!(
-                    has_event = core_event.is_some(),
-                    "notify_session_exited result"
-                );
+                info!(has_event = core_event.is_some(), "notify_session_exited result");

                 if let Some(CoreEvent::SessionEnded {
                     session_id,
@@ -420,29 +442,29 @@ impl Service {
                     reason,
                     duration,
                 }) = core_event
                 {
                     info!(
                         session_id = %session_id,
                         entry_id = %entry_id,
                         reason = ?reason,
                         duration_secs = duration.as_secs(),
                         "Broadcasting SessionEnded"
                     );
                     ipc.broadcast_event(Event::new(EventPayload::SessionEnded {
                         session_id,
                         entry_id,
                         reason,
                         duration,
                     }));

                     // Broadcast state change
                     let state = {
                         let engine = engine.lock().await;
                         engine.get_state()
                     };
                     info!("Broadcasting StateChanged");
                     ipc.broadcast_event(Event::new(EventPayload::StateChanged(state)));
                 }
             }

             HostEvent::WindowReady { handle } => {
@@ -455,6 +477,45 @@ impl Service {
             }
         }
     }

+    async fn handle_connectivity_event(
+        engine: &Arc<Mutex<CoreEngine>>,
+        ipc: &Arc<IpcServer>,
+        connectivity: &ConnectivityHandle,
+        event: ConnectivityEvent,
+    ) {
+        match event {
+            ConnectivityEvent::StatusChanged {
+                connected,
+                check_url,
+            } => {
+                info!(connected = connected, url = %check_url, "Connectivity status changed");
+
+                // Broadcast connectivity change event
+                ipc.broadcast_event(Event::new(EventPayload::ConnectivityChanged {
+                    connected,
+                    check_url,
+                }));
+
+                // Also broadcast state change so clients can update entry availability
+                let state = {
+                    let eng = engine.lock().await;
+                    let mut state = eng.get_state();
+                    state.connectivity = ConnectivityStatus {
+                        connected: connectivity.is_connected().await,
+                        check_url: Some(connectivity.global_check_url().to_string()),
+                        last_check: connectivity.last_check_time().await,
+                    };
+                    state
+                };
+                ipc.broadcast_event(Event::new(EventPayload::StateChanged(state)));
+            }
+
+            ConnectivityEvent::InterfaceChanged => {
+                debug!("Network interface changed, connectivity recheck in progress");
+            }
+        }
+    }

     async fn handle_ipc_message(
         engine: &Arc<Mutex<CoreEngine>>,
         host: &Arc<LinuxHost>,
@@ -462,6 +523,7 @@ impl Service {
         ipc: &Arc<IpcServer>,
         store: &Arc<dyn Store>,
         rate_limiter: &Arc<Mutex<RateLimiter>>,
+        connectivity: &ConnectivityHandle,
         msg: ServerMessage,
     ) {
         match msg {
@@ -479,17 +541,9 @@ impl Service {
                     }
                 }

-                let response = Self::handle_command(
-                    engine,
-                    host,
-                    volume,
-                    ipc,
-                    store,
-                    &client_id,
-                    request.request_id,
-                    request.command,
-                )
-                .await;
+                let response =
+                    Self::handle_command(engine, host, volume, ipc, store, connectivity, &client_id, request.request_id, request.command)
+                        .await;

                 let _ = ipc.send_response(&client_id, response).await;
             }
@@ -502,19 +556,23 @@ impl Service {
                     "Client connected"
                 );

-                let _ = store.append_audit(AuditEvent::new(AuditEventType::ClientConnected {
-                    client_id: client_id.to_string(),
-                    role: format!("{:?}", info.role),
-                    uid: info.uid,
-                }));
+                let _ = store.append_audit(AuditEvent::new(
+                    AuditEventType::ClientConnected {
+                        client_id: client_id.to_string(),
+                        role: format!("{:?}", info.role),
+                        uid: info.uid,
+                    },
+                ));
             }

             ServerMessage::ClientDisconnected { client_id } => {
                 debug!(client_id = %client_id, "Client disconnected");

-                let _ = store.append_audit(AuditEvent::new(AuditEventType::ClientDisconnected {
-                    client_id: client_id.to_string(),
-                }));
+                let _ = store.append_audit(AuditEvent::new(
+                    AuditEventType::ClientDisconnected {
+                        client_id: client_id.to_string(),
+                    },
+                ));

                 // Clean up rate limiter
                 let mut limiter = rate_limiter.lock().await;
@@ -530,6 +588,7 @@ impl Service {
         volume: &Arc<LinuxVolumeController>,
         ipc: &Arc<IpcServer>,
         store: &Arc<dyn Store>,
+        connectivity: &ConnectivityHandle,
         client_id: &ClientId,
         request_id: u64,
         command: Command,
@@ -539,7 +598,13 @@ impl Service {

         match command {
             Command::GetState => {
-                let state = engine.lock().await.get_state();
+                let mut state = engine.lock().await.get_state();
+                // Add connectivity status
+                state.connectivity = ConnectivityStatus {
+                    connected: connectivity.is_connected().await,
+                    check_url: Some(connectivity.global_check_url().to_string()),
+                    last_check: connectivity.last_check_time().await,
+                };
                 Response::success(request_id, ResponsePayload::State(state))
             }
@@ -552,13 +617,40 @@ impl Service {
             Command::Launch { entry_id } => {
                 let mut eng = engine.lock().await;

+                // First check if the entry requires network and if it's available
+                if let Some(entry) = eng.policy().get_entry(&entry_id)
+                    && entry.network.required
+                {
+                    let check_url = entry.network.effective_check_url(&eng.policy().network);
+                    let network_ok = connectivity.check_url(check_url).await;
+
+                    if !network_ok {
+                        info!(
+                            entry_id = %entry_id,
+                            check_url = %check_url,
+                            "Launch denied: network connectivity check failed"
+                        );
+                        return Response::success(
+                            request_id,
+                            ResponsePayload::LaunchDenied {
+                                reasons: vec![ReasonCode::NetworkUnavailable {
+                                    check_url: check_url.to_string(),
+                                }],
+                            },
+                        );
+                    }
+                }
+
                 match eng.request_launch(&entry_id, now) {
                     LaunchDecision::Approved(plan) => {
                         // Start the session in the engine
                         let event = eng.start_session(plan.clone(), now, now_mono);

                         // Get the entry kind for spawning
-                        let entry_kind = eng.policy().get_entry(&entry_id).map(|e| e.kind.clone());
+                        let entry_kind = eng
+                            .policy()
+                            .get_entry(&entry_id)
+                            .map(|e| e.kind.clone());

                         // Build spawn options with log path if capture_child_output is enabled
                         let spawn_options = if eng.policy().service.capture_child_output {
@@ -585,7 +677,11 @@ impl Service {

                         if let Some(kind) = entry_kind {
                             match host
-                                .spawn(plan.session_id.clone(), &kind, spawn_options)
+                                .spawn(
+                                    plan.session_id.clone(),
+                                    &kind,
+                                    spawn_options,
+                                )
                                 .await
                             {
                                 Ok(handle) => {
@@ -601,14 +697,12 @@ impl Service {
                                         deadline,
                                     } = event
                                     {
-                                        ipc.broadcast_event(Event::new(
-                                            EventPayload::SessionStarted {
-                                                session_id: session_id.clone(),
-                                                entry_id,
-                                                label,
-                                                deadline,
-                                            },
-                                        ));
+                                        ipc.broadcast_event(Event::new(EventPayload::SessionStarted {
+                                            session_id: session_id.clone(),
+                                            entry_id,
+                                            label,
+                                            deadline,
+                                        }));

                                         Response::success(
                                             request_id,
@@ -620,10 +714,7 @@ impl Service {
                                     } else {
                                         Response::error(
                                             request_id,
-                                            ErrorInfo::new(
-                                                ErrorCode::InternalError,
-                                                "Unexpected event",
-                                            ),
+                                            ErrorInfo::new(ErrorCode::InternalError, "Unexpected event"),
                                         )
                                     }
                                 }
@@ -636,22 +727,18 @@ impl Service {
                             reason,
                             duration,
                         }) = eng.notify_session_exited(Some(-1), now_mono, now)
                         {
-                            ipc.broadcast_event(Event::new(
-                                EventPayload::SessionEnded {
-                                    session_id,
-                                    entry_id,
-                                    reason,
-                                    duration,
-                                },
-                            ));
+                            ipc.broadcast_event(Event::new(EventPayload::SessionEnded {
+                                session_id,
+                                entry_id,
+                                reason,
+                                duration,
+                            }));

                             // Broadcast state change so clients return to idle
                             let state = eng.get_state();
-                            ipc.broadcast_event(Event::new(
-                                EventPayload::StateChanged(state),
-                            ));
+                            ipc.broadcast_event(Event::new(EventPayload::StateChanged(state)));
                         }

                         Response::error(
                             request_id,
@@ -679,7 +766,9 @@ impl Service {
                 let mut eng = engine.lock().await;

                 // Get handle before stopping in engine
-                let handle = eng.current_session().and_then(|s| s.host_handle.clone());
+                let handle = eng
+                    .current_session()
+                    .and_then(|s| s.host_handle.clone());

                 let reason = match mode {
                     StopMode::Graceful => SessionEndReason::UserStop,
@@ -730,13 +819,12 @@ impl Service {
             Command::ReloadConfig => {
                 // Check permission
                 if let Some(info) = ipc.get_client_info(client_id).await
-                    && !info.role.can_reload_config()
-                {
+                    && !info.role.can_reload_config() {
                     return Response::error(
                         request_id,
                         ErrorInfo::new(ErrorCode::PermissionDenied, "Admin role required"),
                     );
                 }

                 // TODO: Reload from original config path
                 Response::error(
@@ -745,12 +833,14 @@ impl Service {
                 )
             }

-            Command::SubscribeEvents => Response::success(
-                request_id,
-                ResponsePayload::Subscribed {
-                    client_id: client_id.clone(),
-                },
-            ),
+            Command::SubscribeEvents => {
+                Response::success(
+                    request_id,
+                    ResponsePayload::Subscribed {
+                        client_id: client_id.clone(),
+                    },
+                )
+            }

             Command::UnsubscribeEvents => {
                 Response::success(request_id, ResponsePayload::Unsubscribed)
@@ -771,28 +861,21 @@ impl Service {
             Command::ExtendCurrent { by } => {
                 // Check permission
                 if let Some(info) = ipc.get_client_info(client_id).await
-                    && !info.role.can_extend()
-                {
+                    && !info.role.can_extend() {
                     return Response::error(
                         request_id,
                         ErrorInfo::new(ErrorCode::PermissionDenied, "Admin role required"),
                     );
                 }

                 let mut eng = engine.lock().await;
                 match eng.extend_current(by, now_mono, now) {
-                    Some(new_deadline) => Response::success(
-                        request_id,
-                        ResponsePayload::Extended {
-                            new_deadline: Some(new_deadline),
-                        },
-                    ),
+                    Some(new_deadline) => {
+                        Response::success(request_id, ResponsePayload::Extended { new_deadline: Some(new_deadline) })
+                    }
                     None => Response::error(
                         request_id,
-                        ErrorInfo::new(
-                            ErrorCode::NoActiveSession,
-                            "No active session or session is unlimited",
-                        ),
+                        ErrorInfo::new(ErrorCode::NoActiveSession, "No active session or session is unlimited"),
                     ),
                 }
             }
@ -930,15 +1013,14 @@ impl Service {
|
||||||
engine: &Arc<Mutex<CoreEngine>>,
|
engine: &Arc<Mutex<CoreEngine>>,
|
||||||
) -> VolumeRestrictions {
|
) -> VolumeRestrictions {
|
||||||
let eng = engine.lock().await;
|
let eng = engine.lock().await;
|
||||||
|
|
||||||
// Check if there's an active session with volume restrictions
|
// Check if there's an active session with volume restrictions
|
||||||
if let Some(session) = eng.current_session()
|
if let Some(session) = eng.current_session()
|
||||||
&& let Some(entry) = eng.policy().get_entry(&session.plan.entry_id)
|
&& let Some(entry) = eng.policy().get_entry(&session.plan.entry_id)
|
||||||
&& let Some(ref vol_policy) = entry.volume
|
&& let Some(ref vol_policy) = entry.volume {
|
||||||
{
|
return Self::convert_volume_policy(vol_policy);
|
||||||
return Self::convert_volume_policy(vol_policy);
|
}
|
||||||
}
|
|
||||||
|
|
||||||
// Fall back to global policy
|
// Fall back to global policy
|
||||||
Self::convert_volume_policy(&eng.policy().volume)
|
Self::convert_volume_policy(&eng.policy().volume)
|
||||||
}
|
}
|
||||||
|
|
@ -958,17 +1040,20 @@ async fn main() -> Result<()> {
|
||||||
let args = Args::parse();
|
let args = Args::parse();
|
||||||
|
|
||||||
// Initialize logging
|
// Initialize logging
|
||||||
let filter =
|
let filter = EnvFilter::try_from_default_env()
|
||||||
EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(&args.log_level));
|
.unwrap_or_else(|_| EnvFilter::new(&args.log_level));
|
||||||
|
|
||||||
tracing_subscriber::fmt()
|
tracing_subscriber::fmt()
|
||||||
.with_env_filter(filter)
|
.with_env_filter(filter)
|
||||||
.with_target(true)
|
.with_target(true)
|
||||||
.init();
|
.init();
|
||||||
|
|
||||||
info!(version = env!("CARGO_PKG_VERSION"), "shepherdd starting");
|
info!(
|
||||||
|
version = env!("CARGO_PKG_VERSION"),
|
||||||
|
"shepherdd starting"
|
||||||
|
);
|
||||||
|
|
||||||
// Create and run the service
|
// Create and run the service
|
||||||
let service = Service::new(&args).await?;
|
let (service, connectivity_events) = Service::new(&args).await?;
|
||||||
service.run().await
|
service.run(connectivity_events).await
|
||||||
}
|
}
|
||||||
|
|
|
||||||
|
|
@@ -3,7 +3,7 @@
 //! These tests verify the end-to-end behavior of shepherdd.

 use shepherd_api::{EntryKind, WarningSeverity, WarningThreshold};
-use shepherd_config::{AvailabilityPolicy, Entry, LimitsPolicy, Policy};
+use shepherd_config::{AvailabilityPolicy, Entry, LimitsPolicy, NetworkRequirement, Policy};
 use shepherd_core::{CoreEngine, CoreEvent, LaunchDecision};
 use shepherd_host_api::{HostCapabilities, MockHost};
 use shepherd_store::{SqliteStore, Store};

@@ -15,45 +15,48 @@ use std::time::Duration;
 fn make_test_policy() -> Policy {
     Policy {
         service: Default::default(),
-        entries: vec![Entry {
-            id: EntryId::new("test-game"),
-            label: "Test Game".into(),
-            icon_ref: None,
-            kind: EntryKind::Process {
-                command: "sleep".into(),
-                args: vec!["999".into()],
-                env: HashMap::new(),
-                cwd: None,
-            },
-            availability: AvailabilityPolicy {
-                windows: vec![],
-                always: true,
-            },
-            limits: LimitsPolicy {
-                max_run: Some(Duration::from_secs(10)), // Short for testing
-                daily_quota: None,
-                cooldown: None,
-            },
-            warnings: vec![
-                WarningThreshold {
-                    seconds_before: 5,
-                    severity: WarningSeverity::Warn,
-                    message_template: Some("5 seconds left".into()),
-                },
-                WarningThreshold {
-                    seconds_before: 2,
-                    severity: WarningSeverity::Critical,
-                    message_template: Some("2 seconds left!".into()),
-                },
-            ],
-            volume: None,
-            disabled: false,
-            disabled_reason: None,
-            internet: Default::default(),
-        }],
+        entries: vec![
+            Entry {
+                id: EntryId::new("test-game"),
+                label: "Test Game".into(),
+                icon_ref: None,
+                kind: EntryKind::Process {
+                    command: "sleep".into(),
+                    args: vec!["999".into()],
+                    env: HashMap::new(),
+                    cwd: None,
+                },
+                availability: AvailabilityPolicy {
+                    windows: vec![],
+                    always: true,
+                },
+                limits: LimitsPolicy {
+                    max_run: Some(Duration::from_secs(10)), // Short for testing
+                    daily_quota: None,
+                    cooldown: None,
+                },
+                warnings: vec![
+                    WarningThreshold {
+                        seconds_before: 5,
+                        severity: WarningSeverity::Warn,
+                        message_template: Some("5 seconds left".into()),
+                    },
+                    WarningThreshold {
+                        seconds_before: 2,
+                        severity: WarningSeverity::Critical,
+                        message_template: Some("2 seconds left!".into()),
+                    },
+                ],
+                volume: None,
+                network: NetworkRequirement::default(),
+                disabled: false,
+                disabled_reason: None,
+            },
+        ],
         default_warnings: vec![],
         default_max_run: Some(Duration::from_secs(3600)),
         volume: Default::default(),
+        network: Default::default(),
     }
 }

@@ -89,9 +92,7 @@ fn test_launch_approval() {
         let entry_id = EntryId::new("test-game");
         let decision = engine.request_launch(&entry_id, shepherd_util::now());

-        assert!(
-            matches!(decision, LaunchDecision::Approved(plan) if plan.max_duration == Some(Duration::from_secs(10)))
-        );
+        assert!(matches!(decision, LaunchDecision::Approved(plan) if plan.max_duration == Some(Duration::from_secs(10))));
     }

     #[test]

@@ -150,26 +151,14 @@ fn test_warning_emission() {
         let at_6s = now + chrono::Duration::seconds(6);
         let events = engine.tick(at_6s_mono, at_6s);
         assert_eq!(events.len(), 1);
-        assert!(matches!(
-            &events[0],
-            CoreEvent::Warning {
-                threshold_seconds: 5,
-                ..
-            }
-        ));
+        assert!(matches!(&events[0], CoreEvent::Warning { threshold_seconds: 5, .. }));

         // At 9 seconds (1 second remaining), 2-second warning should fire
         let at_9s_mono = now_mono + Duration::from_secs(9);
         let at_9s = now + chrono::Duration::seconds(9);
         let events = engine.tick(at_9s_mono, at_9s);
         assert_eq!(events.len(), 1);
-        assert!(matches!(
-            &events[0],
-            CoreEvent::Warning {
-                threshold_seconds: 2,
-                ..
-            }
-        ));
+        assert!(matches!(&events[0], CoreEvent::Warning { threshold_seconds: 2, .. }));

         // Warnings shouldn't repeat
         let events = engine.tick(at_9s_mono, at_9s);

@@ -200,9 +189,7 @@ fn test_session_expiry() {
         let events = engine.tick(at_11s_mono, at_11s);

         // Should have both remaining warnings + expiry
-        let has_expiry = events
-            .iter()
-            .any(|e| matches!(e, CoreEvent::ExpireDue { .. }));
+        let has_expiry = events.iter().any(|e| matches!(e, CoreEvent::ExpireDue { .. }));
         assert!(has_expiry, "Expected ExpireDue event");
     }

@@ -305,18 +292,9 @@ fn test_config_parsing() {
         let policy = parse_config(config).unwrap();
         assert_eq!(policy.entries.len(), 1);
         assert_eq!(policy.entries[0].id.as_str(), "scummvm");
-        assert_eq!(
-            policy.entries[0].limits.max_run,
-            Some(Duration::from_secs(3600))
-        );
-        assert_eq!(
-            policy.entries[0].limits.daily_quota,
-            Some(Duration::from_secs(7200))
-        );
-        assert_eq!(
-            policy.entries[0].limits.cooldown,
-            Some(Duration::from_secs(300))
-        );
+        assert_eq!(policy.entries[0].limits.max_run, Some(Duration::from_secs(3600)));
+        assert_eq!(policy.entries[0].limits.daily_quota, Some(Duration::from_secs(7200)));
+        assert_eq!(policy.entries[0].limits.cooldown, Some(Duration::from_secs(300)));
         assert_eq!(policy.entries[0].warnings.len(), 1);
     }

@@ -339,11 +317,7 @@ fn test_session_extension() {
         engine.start_session(plan, now, now_mono);

         // Get original deadline (should be Some for this test)
-        let original_deadline = engine
-            .current_session()
-            .unwrap()
-            .deadline
-            .expect("Expected deadline");
+        let original_deadline = engine.current_session().unwrap().deadline.expect("Expected deadline");

         // Extend by 5 minutes
         let new_deadline = engine.extend_current(Duration::from_secs(300), now_mono, now);
@@ -1,21 +0,0 @@
-# Internet Connection Gating
-
-Issue: <https://github.com/aarmea/shepherd-launcher/issues/9>
-
-Summary:
-- Added internet connectivity configuration in the shepherdd config schema and policy, with global checks and per-entry requirements.
-- Implemented a connectivity monitor in shepherdd and enforced internet-required gating in the core engine.
-- Added a new ReasonCode for internet-unavailable, updated launcher UI message mapping, and refreshed docs/examples.
-
-Key files:
-- crates/shepherd-config/src/schema.rs
-- crates/shepherd-config/src/policy.rs
-- crates/shepherd-config/src/internet.rs
-- crates/shepherd-config/src/validation.rs
-- crates/shepherd-core/src/engine.rs
-- crates/shepherdd/src/internet.rs
-- crates/shepherdd/src/main.rs
-- crates/shepherd-api/src/types.rs
-- crates/shepherd-launcher-ui/src/client.rs
-- config.example.toml
-- crates/shepherd-config/README.md
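The connectivity monitor itself is not visible in this diff. As a rough sketch of the gating idea only (the function name, probe address, and timeout below are hypothetical, not the actual API in crates/shepherdd/src/internet.rs):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

/// Hypothetical sketch: probe an endpoint to decide whether
/// internet-required entries may launch. The real monitor may
/// use a different probe strategy and report events instead.
fn internet_available(probe: &str, timeout: Duration) -> bool {
    probe
        .to_socket_addrs()
        .ok()
        .and_then(|mut addrs| addrs.next())
        .map(|addr| TcpStream::connect_timeout(&addr, timeout).is_ok())
        .unwrap_or(false)
}

fn main() {
    // A launch request for an internet-required entry would be denied
    // with a dedicated reason code when the probe fails.
    let ok = internet_available("127.0.0.1:1", Duration::from_millis(200));
    println!("internet_available: {}", ok);
}
```

A real daemon would run this periodically on a background task and cache the result, rather than probing on every launch request.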
@@ -1,22 +0,0 @@
-# Controller And Keyboard Launching
-
-Issue: <https://github.com/aarmea/shepherd-launcher/issues/20>
-
-Prompt summary:
-- Launching activities required pointer input.
-- Requested non-pointer controls:
-  - Selection via arrow keys, WASD, D-pad, or analog stick
-  - Launch via Enter, Space, controller A/B/Start
-  - Exit via Alt+F4, Ctrl+W, controller home
-- Goal was better accessibility and support for pointer-less handheld systems.
-
-Implemented summary:
-- Added keyboard navigation and activation support in launcher UI grid.
-- Added explicit keyboard exit shortcuts at the window level.
-- Added gamepad input handling via `gilrs` for D-pad, analog stick, A/B/Start launch, and home exit.
-- Added focused tile styling so non-pointer selection is visible.
-
-Key files:
-- crates/shepherd-launcher-ui/src/app.rs
-- crates/shepherd-launcher-ui/src/grid.rs
-- crates/shepherd-launcher-ui/Cargo.toml
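The non-pointer controls listed above reduce to a small input-to-action mapping. A standalone sketch (the enum, function, and key names are illustrative only; the real launcher consumes toolkit key events and `gilrs` gamepad events rather than strings):

```rust
/// Hypothetical sketch of the launcher's non-pointer input mapping.
#[derive(Debug, PartialEq)]
enum NavAction {
    Move(i32, i32), // (dx, dy) in grid cells
    Launch,
    Exit,
}

fn map_key(key: &str) -> Option<NavAction> {
    match key {
        // Arrow keys or WASD move the focused tile
        "ArrowUp" | "w" => Some(NavAction::Move(0, -1)),
        "ArrowDown" | "s" => Some(NavAction::Move(0, 1)),
        "ArrowLeft" | "a" => Some(NavAction::Move(-1, 0)),
        "ArrowRight" | "d" => Some(NavAction::Move(1, 0)),
        // Enter/Space launch the focused entry
        "Enter" | "Space" => Some(NavAction::Launch),
        // Controller home exits the session
        "GamepadHome" => Some(NavAction::Exit),
        _ => None,
    }
}

fn main() {
    assert_eq!(map_key("w"), Some(NavAction::Move(0, -1)));
    assert_eq!(map_key("Enter"), Some(NavAction::Launch));
    assert_eq!(map_key("x"), None);
    println!("mapping ok");
}
```

Mapping raw events to one `NavAction` enum keeps keyboard, D-pad, and analog-stick handling in a single code path, which is what makes the focused-tile styling consistent across input methods.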
@@ -1,15 +0,0 @@
-# Sway StopCurrent Keybinds
-
-Prompt summary:
-- Move keyboard exit handling out of launcher UI code and into `sway.conf`.
-- Keep controller home behavior as-is.
-- Ensure "exit" uses the API (`StopCurrent`) rather than closing windows directly.
-
-Implemented summary:
-- Added a `--stop-current` mode to `shepherd-launcher` that sends `StopCurrent` to shepherdd over IPC and exits.
-- Added Sway keybindings for `Alt+F4`, `Ctrl+W`, and `Home` that execute `shepherd-launcher --stop-current`.
-- Kept controller home behavior in launcher UI unchanged.
-
-Key files:
-- `sway.conf`
-- `crates/shepherd-launcher-ui/src/main.rs`
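Per the summary above, `--stop-current` sends a `StopCurrent` command to shepherdd over IPC. A self-contained sketch of that request/acknowledge shape over a Unix socket (the socket path and wire format here are hypothetical; the daemon's actual protocol may differ):

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

/// Hypothetical sketch of the --stop-current flow: connect to the
/// daemon's control socket, send a StopCurrent command, read one reply.
fn send_stop_current(socket_path: &str) -> std::io::Result<String> {
    let mut stream = UnixStream::connect(socket_path)?;
    stream.write_all(b"{\"command\":\"StopCurrent\"}\n")?;
    let mut reply = String::new();
    stream.read_to_string(&mut reply)?;
    Ok(reply)
}

fn main() -> std::io::Result<()> {
    // Stand-in daemon: accept one connection and acknowledge.
    let path = std::env::temp_dir().join("shepherd-sketch.sock");
    let _ = std::fs::remove_file(&path);
    let listener = UnixListener::bind(&path)?;
    let server = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut buf = [0u8; 128];
        let n = conn.read(&mut buf).unwrap();
        assert!(String::from_utf8_lossy(&buf[..n]).contains("StopCurrent"));
        conn.write_all(b"{\"ok\":true}").unwrap();
    });
    let reply = send_stop_current(path.to_str().unwrap())?;
    server.join().unwrap();
    println!("reply: {}", reply);
    Ok(())
}
```

Routing exit through the API this way lets shepherdd record the session end and enforce cooldowns, instead of the compositor killing windows behind the daemon's back.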
Before: 109 KiB → After: 131 B
Before: 131 KiB → After: 131 B
Before: 98 KiB → After: 131 B
Before: 256 KiB → After: 131 B
Before: 26 KiB → After: 130 B
Before: 105 KiB → After: 131 B
@@ -17,7 +17,6 @@ libgdk-pixbuf-xlib-2.0-dev
 # Wayland development libraries
 libwayland-dev
 libxkbcommon-dev
-libudev-dev

 # X11 (for XWayland support)
 libx11-dev
@@ -13,4 +13,3 @@ xdg-desktop-portal-wlr
 libgtk-4-1
 libadwaita-1-0
 libgtk4-layer-shell0
-libudev1
@@ -113,7 +113,7 @@ install_desktop_entry() {
 [Desktop Entry]
 Name=Shepherd Kiosk
 Comment=Shepherd game launcher kiosk mode
-Exec=sway -c $SWAY_CONFIG_DIR/$SHEPHERD_SWAY_CONFIG --unsupported-gpu
+Exec=sway -c $SWAY_CONFIG_DIR/$SHEPHERD_SWAY_CONFIG
 Type=Application
 DesktopNames=shepherd
 EOF

@@ -127,10 +127,9 @@ EOF
 install_config() {
     local user="${1:-}"
     local source_config="${2:-}"
-    local force="${3:-false}"

     if [[ -z "$user" ]]; then
-        die "Usage: shepherd install config --user USER [--source CONFIG] [--force]"
+        die "Usage: shepherd install config --user USER [--source CONFIG]"
     fi

     validate_user "$user"

@@ -162,15 +161,7 @@ install_config() {
     # Check if config already exists
     if maybe_sudo test -f "$dst_config"; then
-        if [[ "$force" == "true" ]]; then
-            warn "Overwriting existing config at $dst_config"
-            maybe_sudo cp "$source_config" "$dst_config"
-            maybe_sudo chown "$user:$user" "$dst_config"
-            maybe_sudo chmod 0644 "$dst_config"
-            success "Overwrote user configuration for $user"
-        else
-            warn "Config file already exists at $dst_config, skipping (use --force to overwrite)"
-        fi
+        warn "Config file already exists at $dst_config, skipping"
     else
         # Copy config file
         maybe_sudo cp "$source_config" "$dst_config"

@@ -184,10 +175,9 @@ install_config() {
 install_all() {
     local user="${1:-}"
     local prefix="${2:-$DEFAULT_PREFIX}"
-    local force="${3:-false}"

     if [[ -z "$user" ]]; then
-        die "Usage: shepherd install all --user USER [--prefix PREFIX] [--force]"
+        die "Usage: shepherd install all --user USER [--prefix PREFIX]"
     fi

     require_root

@@ -198,7 +188,7 @@ install_all() {
     install_bins "$prefix"
     install_sway_config "$prefix"
     install_desktop_entry "$prefix"
-    install_config "$user" "" "$force"
+    install_config "$user"

     success "Installation complete!"
     info ""

@@ -216,7 +206,6 @@ install_main() {
     local user=""
     local prefix="$DEFAULT_PREFIX"
     local source_config=""
-    local force="false"

     # Parse remaining arguments
     while [[ $# -gt 0 ]]; do

@@ -233,10 +222,6 @@ install_main() {
                 source_config="$2"
                 shift 2
                 ;;
-            --force|-f)
-                force="true"
-                shift
-                ;;
             *)
                 die "Unknown option: $1"
                 ;;

@@ -248,7 +233,7 @@ install_main() {
             install_bins "$prefix"
             ;;
         config)
-            install_config "$user" "$source_config" "$force"
+            install_config "$user" "$source_config"
             ;;
         sway-config)
             install_sway_config "$prefix"

@@ -257,7 +242,7 @@ install_main() {
             install_desktop_entry "$prefix"
             ;;
         all)
-            install_all "$user" "$prefix" "$force"
+            install_all "$user" "$prefix"
             ;;
         ""|help|-h|--help)
             cat <<EOF

@@ -274,7 +259,6 @@ Options:
   --user USER      Target user for config deployment (required for config/all)
   --prefix PREFIX  Installation prefix (default: $DEFAULT_PREFIX)
   --source CONFIG  Source config file (default: config.example.toml)
-  --force, -f      Overwrite existing configuration files

 Environment:
   DESTDIR          Installation root for packaging (default: empty)

@@ -282,7 +266,6 @@ Environment:
 Examples:
   shepherd install bins --prefix /usr/local
   shepherd install config --user kiosk
-  shepherd install config --user kiosk --force
   shepherd install all --user kiosk --prefix /usr
 EOF
     ;;
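The `--force` handling removed in the hunks above is a copy-unless-exists guard. A hypothetical Rust rendering of the same logic (function name, paths, and messages are illustrative only, not part of the shell script):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Sketch of the install_config guard: copy a config file, but skip an
/// existing destination unless `force` is set.
fn install_config(src: &Path, dst: &Path, force: bool) -> io::Result<&'static str> {
    if dst.exists() && !force {
        return Ok("exists, skipping (use --force to overwrite)");
    }
    fs::copy(src, dst)?;
    Ok("installed")
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("shepherd-install-sketch");
    fs::create_dir_all(&dir)?;
    let src = dir.join("src.toml");
    let dst = dir.join("dst.toml");
    fs::write(&src, "a = 1\n")?;
    let _ = fs::remove_file(&dst);

    println!("{}", install_config(&src, &dst, false)?); // first install
    println!("{}", install_config(&src, &dst, false)?); // destination now exists
    println!("{}", install_config(&src, &dst, true)?);  // forced overwrite
    Ok(())
}
```

The guard keeps a repeat `install all` from silently clobbering a user's edited policy file, which is why the default is skip rather than overwrite.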
@@ -100,7 +100,7 @@ sway_start_nested() {
     trap sway_cleanup EXIT

     # Start sway with wayland backend (nested in current session)
-    WLR_BACKENDS=wayland WLR_LIBINPUT_NO_DEVICES=1 sway -c "$sway_config" --unsupported-gpu &
+    WLR_BACKENDS=wayland WLR_LIBINPUT_NO_DEVICES=1 sway -c "$sway_config" &
     SWAY_PID=$!

     info "Sway started with PID $SWAY_PID"
@@ -11,7 +11,6 @@ exec_always dbus-update-activation-environment --systemd \
 ### Variables
 set $launcher ./target/debug/shepherd-launcher
 set $hud ./target/debug/shepherd-hud
-set $stop_current $launcher --stop-current

 ### Output configuration
 # Set up displays (adjust as needed for your hardware)

@@ -130,11 +129,8 @@ focus_follows_mouse no
 # Hide any title/tab text by using minimal font size
 font pango:monospace 1

-# Session stop keybindings via shepherdd API (does not close windows directly)
-# Handled in sway so they work regardless of which client currently has focus.
-bindsym --locked Alt+F4 exec $stop_current
-bindsym --locked Ctrl+w exec $stop_current
-bindsym --locked Home exec $stop_current
+# Prevent window closing via keybindings (no Alt+F4)
+# Windows can only be closed by the application itself

 # Hide mouse cursor after inactivity
 seat * hide_cursor 5000