Sigil is a purpose-built, AI-native Linux OS for professional software engineers. A unified shell where every developer tool lives in one frame, governed by a self-tuning daemon that learns how you work — and removes friction from everything around the code.
Terminal. Editor. Browser. Slack. Four windows, four mental models, zero shared context. Every switch costs focus. Your OS sees processes — it doesn't see you shipping.
Copilot, Cursor, Claude Code — powerful, isolated. They see the file you have open. They don't see your build failing for the third time, the container you spun up an hour ago, or the pattern that connects both.
The information needed to actually understand a developer's workflow exists — in /proc, in inotify, in git history, in build logs. No OS has ever done anything useful with all of it together. Until now.
Developers don't need AI that replaces their work —
they need a single cohesive environment that learns how they work
and removes friction from everything around the code.
The Sigil Shell is a single full-screen application that is your entire desktop. Keyboard-first. Dark. Monospace. But with the spatial affordances of a real GUI — because you shouldn't have to choose between power and polish.
56px strip. Six embedded tools: Terminal, Editor, Browser, Git, Containers, and the Daemon Insights view. ⌘1–6 to switch. Status indicators show inference mode and daemon health.
Full PTY terminal. Neovim via PTY (the real thing, not a subset). Minimal WebView browser. Git log and diffs. Container status. Daemon analytics. Split-pane via ⌘\.
The daemon's passive voice. A rotating feed of contextual insights — not generic tips, but observations about your current session. Tab accepts. Esc dismisses. Never interrupts.
Two modes, one keystroke apart. $ Shell mode — a real terminal prompt. ✦ AI mode — natural language to the entire system, with full daemon context. ⌥Tab to toggle. You never context-switch between doing and asking.
sigild is a Go binary running as a systemd service. It watches everything your OS sees — file events, process activity, git commits, build results, shell commands — and builds a local model of how you work. All data stays on your machine. The intelligence lives at the OS level, where it belongs.
Observation layer
fsnotify — file events. /proc — process activity.
Intelligence layer — two tiers
Tier 1: statistical heuristics. Pattern detection. Frequency tables. Temporal analysis. Runs on a local LLM (llama.cpp + LFM2-24B-A2B) — no network, sub-120ms.
Tier 2: complex reasoning. Weekly workflow summaries. Stuck-detection. Frontier models via cloud APIs (Anthropic, OpenAI) — only when needed, previewed before sending.
Action layer
Suggestion bar — contextual insights surfaced as you work. Five notification levels from silent to autonomous, configured per user.
Auto-split pane on build. Pre-warm containers. Reversible actions with undo window. Progressive AI disclosure as your usage patterns mature.
Sigil owns its inference layer. A managed llama-server process (from llama.cpp) runs quantized LLMs directly on your hardware. The daemon routes between local and cloud models automatically — four routing modes, one API, zero third-party runtime dependencies.
Telemetry-averse developers are the hardest audience to earn trust from. That's intentional. The privacy architecture isn't a checkbox — it's the product. Here's exactly what happens to your data.
Engineering leadership has one question: are our engineers actually using the AI tools we're paying for? Sigil answers it — without surveilling anyone.
Query volumes, acceptance rates, adoption tier distribution, query categories. Watch the org move from Observer to Integrator over time.
Correlate AI adoption tiers with commit cadence, build success rates, and PR cycle time. Prove the tool works — anonymously, at the team level.
Exact cloud API spend. Local-vs-cloud routing ratios. Cost per accepted suggestion. Turn "we route intelligently" into a procurement-ready number.
Which models are in use. What percentage hits approved endpoints. Data residency confirmation. One page your security team and auditors will love.
Sigil is being built by a single staff-level engineer with 15 years in FinTech and 6 years of Go. The core product is open-source because the target audience reads source code before they trust anything. That's not a constraint — that's the go-to-market strategy.
| Layer | Technology | Why |
|---|---|---|
| Base OS | NixOS | Declarative, reproducible, rollback-safe |
| Compositor | Hyprland (Wayland) | GPU substrate, IPC, multi-monitor, pop-out windows |
| Shell | Tauri · Rust + Preact | ~5–10MB binary, native backend, TypeScript frontend |
| Terminal | xterm.js + PTY | Mature, full PTY — real Neovim, not a subset |
| Inference | llama.cpp + LFM2-24B-A2B | Managed llama-server, MoE ~2B active params, Q4_K_M quantization |
| Daemon | Go — sigild | Low memory, fast startup, <50MB target |
| Local store | SQLite (WAL mode) | Zero-config, concurrent reads, pure Go driver |
| IPC | Unix socket + JSON | Fast, standard, no dependencies |
| Config | Nix flakes | Single declarative spec for the entire stack |
| Fleet | Go + PostgreSQL + Helm | Deployable on org infra, no Sigil-hosted cloud |
NixOS + Hyprland running on a 2017 MacBook Pro. Custom ISO with Broadcom Wi-Fi support. Local inference verified on that hardware.
sigild as a systemd service. Collector, Analyzer, Notifier. Hybrid inference engine. SQLite store. sigilctl CLI with 10+ commands. VS Code extension. <50MB RSS verified.
Tauri app full-screen on Hyprland. All six tool views. Daemon socket connection. Live suggestion bar. AI mode input. 20+ socket API methods for shell integration.
Enhanced heuristic model. Pop-out windows. Command palette. Split-pane orchestration. Theme customization. VS Code extension polish.
Fleet Reporter subsystem. Anonymous aggregate metrics. Adoption tier classification. Leadership dashboard views. Opt-in/opt-out controls with data preview.
Open source release. Downloadable ISO. Enterprise pilot. Documentation and community.
Sigil is in active development. Join the waitlist to get early access when individual installs open up, or to talk enterprise when the fleet layer ships.