22 changes: 22 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions crates/sshx-server/Cargo.toml
@@ -24,6 +24,8 @@ deadpool-redis = "0.18.0"
futures-util = { version = "0.3.28", features = ["sink"] }
hmac = "0.12.1"
http = "1.2.0"
include_dir = "0.7.4"
mime_guess = "2.0.4"
parking_lot = "0.12.1"
prost.workspace = true
rand.workspace = true
125 changes: 112 additions & 13 deletions crates/sshx-server/src/web.rs
@@ -2,33 +2,132 @@

use std::sync::Arc;

use axum::routing::{any, get_service};
use axum::body::Body;
use axum::extract::Request;
use axum::http::{header, StatusCode};
use axum::response::{IntoResponse, Response};
use axum::routing::any;
use axum::Router;
use tower_http::services::{ServeDir, ServeFile};
use include_dir::{include_dir, Dir};

use crate::ServerState;

pub mod protocol;
mod socket;

/// The SvelteKit static build, embedded at compile time.
/// Ensure `npm run build` has been run from the workspace root before compiling.
static BUILD_DIR: Dir<'static> = include_dir!("$CARGO_MANIFEST_DIR/../../build");

P1: Avoid compile-time dependency on an untracked build dir

Embedding ../../build with include_dir! makes Rust compilation depend on frontend artifacts that are not checked into git (git ls-tree for this commit has no build/ entries), so a clean checkout now fails unless npm run build is run first. This regresses existing Rust-only flows (e.g., .github/workflows/ci.yaml rust jobs run cargo test/cargo clippy without a web build step), and it also breaks source installs in environments that only expect Cargo to be required.



Comment on lines +18 to +21

Copilot AI Mar 4, 2026


include_dir!("$CARGO_MANIFEST_DIR/../../build") requires the build/ directory to exist at compile time, but the repo .gitignore excludes /build. On a clean clone, cargo build / cargo test for sshx-server will fail unless a manual npm run build was run first. Consider adding a build.rs to generate (or at least validate and emit a clear error about) the frontend build, or gating the embedded-asset path behind a feature and keeping the previous filesystem ServeDir fallback for dev builds.

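One way to surface the missing-artifact failure early, as the comment suggests, is a small `build.rs` check. A hypothetical sketch (the `spa.html` filename comes from `web.rs` in this diff; the helper and its error text are illustrative, not the project's actual API):

```rust
use std::path::Path;

// Hypothetical helper for a crates/sshx-server/build.rs: verify that the
// SvelteKit output exists before include_dir! tries to embed it, and
// return a clear, actionable error message when it does not.
fn check_frontend_build(dir: &Path) -> Result<(), String> {
    // spa.html is the SPA fallback page that web.rs expects to embed.
    if dir.join("spa.html").exists() {
        Ok(())
    } else {
        Err(format!(
            "frontend build not found at {}; run `npm run build` from the workspace root",
            dir.display()
        ))
    }
}

// In build.rs this would be called with
// Path::new(env!("CARGO_MANIFEST_DIR")).join("../../build"),
// panicking on Err so `cargo build` fails with the message above
// instead of a cryptic include_dir! error.
```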
/// Returns the web application server, routed with Axum.
pub fn app() -> Router<Arc<ServerState>> {
let root_spa = ServeFile::new("build/spa.html")
.precompressed_gzip()
.precompressed_br();

// Serves static SvelteKit build files.
let static_files = ServeDir::new("build")
.precompressed_gzip()
.precompressed_br()
.fallback(root_spa);

Router::new()
.nest("/api", backend())
.fallback_service(get_service(static_files))
.fallback(serve_static)
}

/// Routes for the backend web API server.
fn backend() -> Router<Arc<ServerState>> {
Router::new().route("/s/{name}", any(socket::get_session_ws))
}

/// Serve an embedded static file with content-negotiation for precompressed variants.
///
/// Resolution order for a request path `P`:
/// 1. `P` with brotli encoding (`P.br`) if client accepts `br`
/// 2. `P` with gzip encoding (`P.gz`) if client accepts `gzip`
/// 3. `P` raw
/// 4. SPA fallback: `spa.html` (same compression priority) for unknown paths
async fn serve_static(req: Request) -> Response {
let path = req.uri().path().trim_start_matches('/');

// Empty path → "index.html", which SvelteKit puts at root.
let path = if path.is_empty() { "index.html" } else { path };

Comment on lines 23 to +46

Copilot AI Mar 4, 2026


Router::fallback(serve_static) will route all HTTP methods (POST/PUT/…) to the SPA/static handler. Previously, the static handler was wrapped in get_service(...), which limited it to GET/HEAD behavior. To avoid surprising behavior and potential security footguns, consider returning 405 Method Not Allowed for non-GET/HEAD requests in serve_static (or route static handling via method-specific routing).

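The suggested method guard can be sketched over the method name so the example is dependency-free (hypothetical helper; in `serve_static` it would check `req.method()` before any file lookup):

```rust
// Hypothetical guard for serve_static: only GET and HEAD should reach the
// static/SPA handler; everything else gets 405 Method Not Allowed.
fn static_method_allowed(method: &str) -> bool {
    matches!(method, "GET" | "HEAD")
}

// In the handler, before looking up any file:
// if !static_method_allowed(req.method().as_str()) {
//     return (StatusCode::METHOD_NOT_ALLOWED, "Method not allowed").into_response();
// }
```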
// Detect which encodings the client accepts.
let accept_enc = req
.headers()
.get(header::ACCEPT_ENCODING)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let accept_br = accept_enc.contains("br");
let accept_gz = accept_enc.contains("gzip");

// Try to find and serve the file (with optional precompressed variant).
if let Some(resp) = try_serve(path, accept_br, accept_gz) {
return resp;
}

// SPA fallback: unknown paths are handled by the SvelteKit router client-side.
if let Some(resp) = try_serve("spa.html", accept_br, accept_gz) {
return resp;
}

(StatusCode::NOT_FOUND, "Not found").into_response()
}
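The resolution order documented on `serve_static` can be expressed as a small pure function (a hypothetical refactor, not code from this PR) listing candidate embedded paths in preference order:

```rust
// Candidate lookup paths for a request, in the order try_serve checks them:
// brotli variant first, then gzip, then the raw file.
fn candidate_paths(path: &str, accept_br: bool, accept_gz: bool) -> Vec<String> {
    let mut candidates = Vec::new();
    if accept_br {
        candidates.push(format!("{path}.br"));
    }
    if accept_gz {
        candidates.push(format!("{path}.gz"));
    }
    candidates.push(path.to_string());
    candidates
}
```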

/// Try to serve `path` from the embedded build dir, preferring compressed variants.
fn try_serve(path: &str, accept_br: bool, accept_gz: bool) -> Option<Response> {
let content_type = mime_guess::from_path(path)
.first()
.map(|m| m.to_string())
.unwrap_or_else(|| "application/octet-stream".to_string());

// Cache-control: immutable for content-addressed _app/ assets, no-store for SPA HTML.
let cache_control = if path.starts_with("_app/immutable/") {
"public, max-age=31536000, immutable"
} else if path.ends_with(".html") {
"no-cache, no-store"
} else {
"public, max-age=3600"
};

// Brotli preferred.
if accept_br {
let br_path = format!("{path}.br");
if let Some(file) = BUILD_DIR.get_file(&br_path) {
return Some(build_response(
file.contents(),
&content_type,
Some("br"),
cache_control,
));
}
}

// Gzip fallback.
if accept_gz {
let gz_path = format!("{path}.gz");
if let Some(file) = BUILD_DIR.get_file(&gz_path) {
return Some(build_response(
file.contents(),
&content_type,
Some("gzip"),
cache_control,
));
}
}

// Plain file.
BUILD_DIR.get_file(path).map(|file| {
build_response(file.contents(), &content_type, None, cache_control)
})
}

fn build_response(
body: &'static [u8],
content_type: &str,
encoding: Option<&str>,
cache_control: &str,
) -> Response {
let mut builder = Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CACHE_CONTROL, cache_control);

if let Some(enc) = encoding {
builder = builder.header(header::CONTENT_ENCODING, enc);
}
Comment on lines +117 to +130

Copilot AI Mar 4, 2026


Static responses are content-negotiated on Accept-Encoding, but the response does not set Vary: Accept-Encoding. Without Vary, intermediaries (CDNs/proxies/browsers) can incorrectly cache and serve a brotli/gzip-encoded response to clients that don't support it. Add a Vary: Accept-Encoding header (at least when a compressed variant is served, and ideally always for these static routes).

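One way to address this, sketched with plain `(name, value)` tuples so it stands alone (a hypothetical stand-in for the axum response builder, not the PR's code): have `build_response` emit `Vary: Accept-Encoding` unconditionally alongside the other headers.

```rust
// Header set that build_response would emit once Vary is added.
fn static_headers(
    content_type: &str,
    encoding: Option<&str>,
    cache_control: &str,
) -> Vec<(String, String)> {
    let mut headers = vec![
        ("content-type".to_string(), content_type.to_string()),
        ("cache-control".to_string(), cache_control.to_string()),
        // Emitted unconditionally: even an identity response varies on
        // Accept-Encoding, since other clients may get a compressed body.
        ("vary".to_string(), "accept-encoding".to_string()),
    ];
    if let Some(enc) = encoding {
        headers.push(("content-encoding".to_string(), enc.to_string()));
    }
    headers
}
```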

builder.body(Body::from(body)).unwrap()
}
1 change: 1 addition & 0 deletions crates/sshx/Cargo.toml
@@ -26,6 +26,7 @@ tonic.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
whoami = { version = "1.5.1", default-features = false }
sshx-server = { path = "../sshx-server" }

[target.'cfg(unix)'.dependencies]
close_fds = "0.3.2"
2 changes: 2 additions & 0 deletions crates/sshx/src/lib.rs
@@ -10,3 +10,5 @@ pub mod controller;
pub mod encrypt;
pub mod runner;
pub mod terminal;
/// Cloudflare tunnel provider for self-hosted mode.
pub mod tunnel;
26 changes: 24 additions & 2 deletions crates/sshx/src/main.rs
@@ -3,7 +3,7 @@ use std::process::ExitCode;
use ansi_term::Color::{Cyan, Fixed, Green};
use anyhow::Result;
use clap::Parser;
use sshx::{controller::Controller, runner::Runner, terminal::get_default_shell};
use sshx::{controller::Controller, runner::Runner, terminal::get_default_shell, tunnel};
use tokio::signal;
use tracing::error;

@@ -31,6 +31,15 @@ struct Args {
/// editors.
#[clap(long)]
enable_readers: bool,

/// Tunnel provider to expose the local server publicly (bypasses --server).
#[clap(long, value_enum)]
tunnel: Option<TunnelProvider>,
}

#[derive(clap::ValueEnum, Clone, Debug)]
enum TunnelProvider {
Cloudflare,
}

fn print_greeting(shell: &str, controller: &Controller) {
@@ -89,8 +98,21 @@ async fn start(args: Args) -> Result<()> {
name
});

let _tunnel_guard = if let Some(TunnelProvider::Cloudflare) = args.tunnel {
let guard = tunnel::start_cloudflare_tunnel().await?;
Some(guard)
} else {
None
};

let server_addr = if let Some(guard) = &_tunnel_guard {
guard.local_endpoint.clone()
} else {
args.server.clone()
};

let runner = Runner::Shell(shell.clone());
let mut controller = Controller::new(&args.server, &name, runner, args.enable_readers).await?;
let mut controller = Controller::new(&server_addr, &name, runner, args.enable_readers).await?;
if args.quiet {
if let Some(write_url) = controller.write_url() {
println!("{}", write_url);
113 changes: 113 additions & 0 deletions crates/sshx/src/tunnel.rs
@@ -0,0 +1,113 @@
use std::process::Stdio;
use std::time::Duration;

use anyhow::{anyhow, Context, Result};
use sshx_server::{Server, ServerOptions};
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::net::TcpListener;
use tokio::process::{Child, Command};
use tokio::sync::{mpsc, oneshot};
use tokio::task::JoinHandle;
use tokio::time::timeout;
use tracing::{info, warn};

/// A guard that manages the lifetime of the local server and cloudflared tunnel.
pub struct TunnelGuard {
/// The unencrypted local HTTP endpoint the server is bound to.
pub local_endpoint: String,
/// The public HTTPS URL from Cloudflare.
pub public_url: String,
server_task: JoinHandle<()>,
_child: Child,
}

impl Drop for TunnelGuard {
fn drop(&mut self) {
self.server_task.abort();
}
}
Comment on lines +24 to +28

Copilot AI Mar 4, 2026


TunnelGuard::drop aborts the server_task, but sshx_server::Server::listen spawns additional background tasks (e.g. listen_for_transfers / close_old_sessions) that are not tied to the join handle and will keep running unless Server::shutdown() is called. This can leak tasks after the guard is dropped. Consider keeping a shutdown handle (e.g., store an Arc<Server> inside TunnelGuard) and calling server.shutdown() in Drop, letting the server task exit cleanly (optionally with a timeout before aborting).

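The `Drop` pattern the comment describes can be sketched with a stand-in for `sshx_server::Server` (hypothetical: it assumes `Server` exposes a `shutdown()` method, as the comment states; the stand-in struct exists only to make the example self-contained):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Stand-in for sshx_server::Server: just enough state to show the pattern.
struct Server {
    closed: AtomicBool,
}

impl Server {
    fn shutdown(&self) {
        self.closed.store(true, Ordering::SeqCst);
    }
    fn is_shutdown(&self) -> bool {
        self.closed.load(Ordering::SeqCst)
    }
}

// Reworked guard: hold an Arc<Server> so Drop can request a clean shutdown
// (letting background tasks like close_old_sessions exit) before the
// spawned server task is aborted.
struct TunnelGuard {
    server: Arc<Server>,
}

impl Drop for TunnelGuard {
    fn drop(&mut self) {
        self.server.shutdown();
        // ...then abort/join the server task, optionally with a timeout.
    }
}
```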

/// Spawns a local sshx server and exposes it via a Cloudflare quick tunnel.
pub async fn start_cloudflare_tunnel() -> Result<TunnelGuard> {
let listener = TcpListener::bind("127.0.0.1:0")
.await
.context("failed to bind ephemeral port for local server")?;
let local_addr = listener.local_addr()?;
let local_endpoint = format!("http://127.0.0.1:{}", local_addr.port());

info!("Spawning cloudflared tunnel...");
let mut child = Command::new("cloudflared")
.arg("tunnel")
.arg("--url")
.arg(&local_endpoint)
.stderr(Stdio::piped())
.kill_on_drop(true)
.spawn()
.context("failed to execute `cloudflared`; make sure it is installed and in your PATH")?;

let stderr = child.stderr.take().unwrap();
let mut reader = BufReader::new(stderr).lines();

let (url_tx, mut url_rx) = mpsc::channel(1);

// Drain cloudflared stderr in background so it never gets a broken pipe.
// We send the URL once we find it but keep reading to keep the pipe open.
tokio::spawn(async move {
let mut found = false;
while let Ok(Some(line)) = reader.next_line().await {
tracing::debug!("[cloudflared] {}", line);
if !found {
if let Some(idx) = line.find("https://") {
let sub = &line[idx..];
let end_idx = sub
.find(|c: char| c.is_whitespace() || c == '|' || c == ']')
.unwrap_or(sub.len());
let url = &sub[..end_idx];
if url.ends_with(".trycloudflare.com") || url.ends_with(".cloudflare.com") {
let _ = url_tx.send(url.to_string()).await;
found = true;
// keep reading — don't break, so cloudflared's pipe stays open
}
}
}
}
});

let public_url = match timeout(Duration::from_secs(15), url_rx.recv()).await {
Ok(Some(url)) => url,
Ok(None) => return Err(anyhow!("cloudflared closed stderr without printing a tunnel URL")),
Err(_) => return Err(anyhow!("timeout waiting for cloudflared public URL")),
};

info!("Tunnel public URL: {}", public_url);

let mut options = ServerOptions::default();
options.override_origin = Some(public_url.clone());

let (tx, rx) = oneshot::channel();
let local_endpoint_clone = local_endpoint.clone();
let server_task = tokio::spawn(async move {
let server = match Server::new(options) {
Ok(s) => s,
Err(e) => {
let _ = tx.send(Err(e));
return;
}
};
let _ = tx.send(Ok(()));

info!("Local sshx server listening on {}", local_endpoint_clone);
if let Err(err) = server.listen(listener).await {
warn!("Local server exited with error: {:?}", err);
}
});

rx.await.context("local server failed to start")??;

Ok(TunnelGuard {
local_endpoint,
public_url,
server_task,
_child: child,
})
}
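The URL-scanning loop in the stderr reader above could be factored into a pure function (a hypothetical refactor mirroring that logic) so it is unit-testable against sample cloudflared log lines:

```rust
// Extract the public tunnel URL from one line of cloudflared stderr output,
// using the same delimiters and domain check as the reader task above.
fn extract_tunnel_url(line: &str) -> Option<String> {
    let idx = line.find("https://")?;
    let sub = &line[idx..];
    let end = sub
        .find(|c: char| c.is_whitespace() || c == '|' || c == ']')
        .unwrap_or(sub.len());
    let url = &sub[..end];
    if url.ends_with(".trycloudflare.com") || url.ends_with(".cloudflare.com") {
        Some(url.to_string())
    } else {
        None
    }
}
```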
8 changes: 0 additions & 8 deletions package-lock.json

Some generated files are not rendered by default.