The Service Watchdog Reigns Supreme
Should I ship multiple daemons as independent services? Or should I deploy a single core daemon with multiple threaded services and an internal watchdog to manage lifecycle and uptime?
That question came up constantly while building Vaulthalla.
Not once. Every time something broke.
And depending on what broke, the “right” answer changed with it.
So this isn’t a universal rule. Some systems should be multi-daemon. Some absolutely shouldn’t.
Vaulthalla desperately wanted to be single-daemon.
TL;DR
Vaulthalla runs as:
- one daemon
- many threaded services
- one watchdog enforcing lifecycle
Not because it’s cleaner — because the runtime demanded:
- shared hot caches
- shared dependencies
- deterministic startup
- low-latency coordination
At some point, process boundaries stopped helping and started taxing the hot path.
Where multi-daemon wins
Hard isolation
Separate processes give you real containment:
- crashes don’t cascade
- memory corruption is isolated
- security boundaries are enforceable
Independent scaling
If services are truly independent:
- scale components separately
- distribute across machines
- isolate workload characteristics
Runtime flexibility
Different tools for different jobs:
- C++ core
- Go workers
- Python jobs
- Node frontends
All valid.
Where it breaks down
The model falls apart when your services aren’t actually independent.
Vaulthalla is tightly coupled around:
- filesystem state
- permissions
- caching
- real-time session context
Splitting that across daemons didn’t simplify things — it introduced coordination overhead everywhere.
Why single-daemon won
Shared memory beats serialization
No IPC. No encoding/decoding. No translation layers.
State stays hot and directly accessible.
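To make the contrast concrete, here is a minimal sketch. The struct and function names are illustrative, not Vaulthalla's actual API: in-process, a service reads the live object directly; across a process boundary, the same read pays an encode/decode round trip before it even touches a socket.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative shared state; not Vaulthalla's real types.
struct SessionState {
    std::string user;
    int openHandles;
};

// Single-daemon path: the caller reads the live object directly.
int readHandlesDirect(const SessionState& s) {
    return s.openHandles;
}

// Multi-daemon path: the same value must be serialized, shipped over
// IPC, and parsed on the other side. Sketched here as a plain string
// round trip; a real system adds a socket hop and a schema on top.
std::string encode(const SessionState& s) {
    std::ostringstream out;
    out << s.user << '\n' << s.openHandles;
    return out.str();
}

int readHandlesViaIpc(const std::string& wire) {
    std::istringstream in(wire);
    SessionState s;
    std::getline(in, s.user);
    in >> s.openHandles;
    return s.openHandles;
}
```

Both paths return the same answer. Only one of them does it for free.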
Shared caches stay real
Multiple services need the same data:
- FUSE
- CLI
- WebSocket lifecycle
- HTTP previews
- sync workers
Multi-daemon → duplicated caches → sync → IPC → latency → drift
Single-daemon → one cache → done
Startup becomes deterministic
No more:
- repeated initialization
- duplicate DB pools
- “is service X ready yet?”
Just:
initialize once → wire → run
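In code, "initialize once → wire → run" can be as small as an ordered list of phases. The phase names and `bootSequence` helper here are hypothetical, but the property is the point: one process means ordering is a plain loop, with no cross-daemon readiness probing.

```cpp
#include <functional>
#include <string>
#include <vector>

// Startup as explicit, ordered phases. Deterministic by construction:
// the same order every boot, because a loop cannot race itself.
struct Phase {
    std::string name;
    std::function<void()> run;
};

std::vector<std::string> bootSequence(const std::vector<Phase>& phases) {
    std::vector<std::string> log;
    for (const auto& p : phases) {
        p.run();                 // each phase completes before the next starts
        log.push_back(p.name);
    }
    return log;
}
```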
The watchdog is the turning point
This is where single-daemon stops being “simple” and becomes powerful.
You keep service boundaries — without process boundaries.
Each service still has:
- its own lifecycle
- failure detection
- restart capability
But it’s all coordinated internally.
A simplified watchdog loop:
```cpp
while (watchdogRunning) {
    std::vector<std::string> downServices;

    {
        std::scoped_lock lock(mutex_);
        for (const auto& [name, service] : services_)
            if (service && !service->isRunning())
                downServices.push_back(name);
    }

    for (const auto& name : downServices)
        restartService(name);

    std::this_thread::sleep_for(kWatchdogInterval);
}
```
That unlock-before-restart pattern matters.
Hold the lock while restarting and you risk deadlocking your own recovery path. That’s not theory — that’s the kind of coordination bug that shows up once the runtime itself becomes a system.
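A restart path that respects the pattern might look like the sketch below. The `Service` interface and `Watchdog` shape are hypothetical stand-ins, not Vaulthalla's real classes; what matters is that the mutex guards only the lookup, never the restart itself.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Minimal stand-ins; a real service interface is richer.
struct Service {
    bool running = true;
    int restarts = 0;
    void stop()  { running = false; }
    void start() { running = true; ++restarts; }
};

class Watchdog {
public:
    void add(const std::string& name, std::shared_ptr<Service> s) {
        std::scoped_lock lock(mutex_);
        services_[name] = std::move(s);
    }

    // mutex_ protects the service map only. If stop()/start() ever
    // needed mutex_ themselves (say, to re-register the service),
    // holding it across the restart would deadlock the one thread
    // responsible for recovery.
    void restartService(const std::string& name) {
        std::shared_ptr<Service> service;
        {
            std::scoped_lock lock(mutex_);   // short critical section
            auto it = services_.find(name);
            if (it == services_.end()) return;
            service = it->second;            // shared_ptr keeps it alive
        }
        service->stop();                      // outside the lock
        service->start();                     // may take seconds; that's fine
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::shared_ptr<Service>> services_;
};
```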
The real shift: control plane complexity
At some point, the complexity moves.
Not into business logic — into the seams:
- startup ordering
- shutdown ordering
- health monitoring
- restart semantics
- dependency wiring
Even a small runtime manager becomes a control plane.
Example — ordered startup with dependency handoff:
```cpp
for (const auto& entry : entries) {
    tryStart(entry);
    if (entry.name == "FUSE" && fuseService)
        Deps::get().setFuseSession(fuseService->session());
}
```
That’s not boilerplate.
That’s enforcing runtime truth.
Centralized dependencies (done intentionally)
Vaulthalla uses a single runtime dependency registry:
```cpp
void Deps::init() {
    auto& deps = get();
    if (deps.initialized()) return;

    deps.storageManager = std::make_shared<storage::Manager>();
    deps.apiKeyManager = std::make_shared<vault::APIKeyManager>();
    deps.authManager = std::make_shared<auth::Manager>();
    deps.sessionManager = std::make_shared<auth::session::Manager>();
    deps.secretsManager = std::make_shared<crypto::secrets::Manager>();
    deps.fsCache = std::make_shared<fs::cache::Registry>();
    deps.shellUsageManager = std::make_shared<protocols::shell::UsageManager>();
    deps.httpCacheStats = std::make_shared<stats::model::CacheStats>();
}
```
Normally, global state is a red flag.
Here, it defines the boundary:
- one daemon
- one runtime
- one authoritative dependency graph
Anything else would be artificial separation.
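A minimal, self-contained version of that registry pattern, with a hypothetical `CacheRegistry` standing in for the real managers:

```cpp
#include <memory>

// Illustrative stand-in for one of the managers.
struct CacheRegistry {
    int hits = 0;
};

// A single-process dependency registry: one instance, initialized
// once, handed to every service by reference. This is the pattern,
// not Vaulthalla's actual Deps class.
class Deps {
public:
    static Deps& get() {
        static Deps instance;       // one authoritative graph per process
        return instance;
    }

    void init() {
        if (initialized_) return;   // idempotent: safe to call twice
        fsCache = std::make_shared<CacheRegistry>();
        initialized_ = true;
    }

    bool initialized() const { return initialized_; }

    std::shared_ptr<CacheRegistry> fsCache;

private:
    Deps() = default;
    bool initialized_ = false;
};
```

Every service calls `Deps::get().fsCache` and sees the same object; the early return makes repeated initialization impossible rather than merely discouraged.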
What multi-daemon cost Vaulthalla
The pattern was consistent:
- duplicate caches
- sync them
- introduce IPC
- add latency
- increase cache pressure
- inherit drift and memory overhead
UNIX sockets didn’t fix it. They just moved complexity into the hot path.
Final form
Vaulthalla became:
- one daemon
- many threaded services
- a watchdog enforcing lifecycle
- a shared dependency graph
- hot caches that never leave memory
Not because it’s “correct.”
Because it stopped fighting the system.
If your services are constantly:
- talking
- sharing state
- paying coordination costs
they may not be separate services.
You may just be paying extra to pretend they are.
Knowledge is power. In Vaulthalla, it is sacred.
