Real-Time IoT Digital Twins: VirtuNode's Redis Architecture

How I built a multi-worker FastAPI backend that streams live sensor logs to WebSocket clients — and the async event-loop deadlock that only appeared under concurrent load.

The Problem Worth Solving

IoT development is painful. You write firmware, flash it to hardware, run a test, and wait to see if the sensor logic behaves as expected. If something goes wrong, you're debugging hardware and software simultaneously — a slow, expensive feedback loop. Physical devices are fragile, hard to parallelize, and unavailable until the hardware ships.

VirtuNode's premise is simple: replace the physical device with a software twin. Upload your sensor logic, run it inside an isolated Docker container, and watch the output stream live to your browser. No hardware required. No reflashing. Instant feedback.

The engineering challenge is making that "watch the output stream live" part feel real. Sub-second latency, reliable delivery across multiple concurrent users, and zero state leakage between isolated sandboxes. That's what this post is about.

The Architecture

The core pipeline has three stages: execution (the user's sensor logic runs inside an isolated Docker container), transport (each log line is published to a per-session Redis channel), and delivery (a WebSocket pushes the line to the browser).

Why Redis pub/sub? The backend runs with 4 Uvicorn workers for throughput. Any worker can spawn a container; any worker can serve the WebSocket for that session. Redis pub/sub provides the shared message bus that decouples these two — the publisher and subscriber don't need to be the same process.
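A minimal sketch of that decoupling, assuming one Redis channel per session (the `channel_for` helper and the channel-naming scheme are illustrative, not VirtuNode's exact code; `r` is a `redis.asyncio.Redis` client):

```python
def channel_for(session_id: str) -> str:
    # Assumed naming scheme: one Redis pub/sub channel per sandbox session.
    return f"session:{session_id}:logs"

async def publish_logs(r, session_id, lines):
    # Runs in whichever worker spawned the container.
    for line in lines:
        await r.publish(channel_for(session_id), line)

async def forward_to_websocket(r, session_id, send_text):
    # Runs in whichever worker holds the WebSocket for this session --
    # possibly a different process from the publisher.
    pubsub = r.pubsub()
    await pubsub.subscribe(channel_for(session_id))
    async for message in pubsub.listen():
        if message["type"] == "message":
            await send_text(message["data"].decode())
```

Because the only shared state is the Redis channel, it doesn't matter which of the 4 Uvicorn workers runs each half.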

The Deadlock I Didn't See Coming

The first version worked perfectly in single-worker tests. The container spawned, logs streamed, the WebSocket client received updates. Then I switched to 4 workers and things got strange.

Under concurrent load, some sessions would silently hang. The container was running — I could see it in docker ps. The WebSocket connection stayed open. But no logs arrived. The session appeared frozen.

The root cause was async event-loop contention. The original implementation used asyncio.create_subprocess_exec to run the container, then awaited stdout in the same coroutine as the Redis publisher. When multiple concurrent sessions competed for the event loop, the stdout reader and the Redis publisher would deadlock waiting for each other — each blocking on I/O that the other needed the event loop to process.

The fix was to move the blocking container interaction into a thread pool executor via loop.run_in_executor(). The executor runs the container stdout reader in a separate thread, completely off the async event loop. The results are passed back via a queue that the async Redis publisher consumes. This decouples the I/O paths and eliminates the contention.
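The pattern looks roughly like this, assuming a plain `subprocess.Popen` for the container process (function names are illustrative; the demo command stands in for `docker run`):

```python
import asyncio
import subprocess
import sys

def _read_stdout_blocking(proc, loop, queue):
    # Runs in a worker thread: the blocking readline never touches the event loop.
    for raw in proc.stdout:
        loop.call_soon_threadsafe(queue.put_nowait, raw.decode().rstrip())
    loop.call_soon_threadsafe(queue.put_nowait, None)  # sentinel: stream closed

async def stream_container_logs(cmd):
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    # Hand the blocking reader to the default thread pool, off the event loop.
    reader = loop.run_in_executor(None, _read_stdout_blocking, proc, loop, queue)
    lines = []
    while (line := await queue.get()) is not None:
        lines.append(line)  # in VirtuNode, this is where redis.publish() would run
    await reader
    proc.wait()
    return lines

if __name__ == "__main__":
    print(asyncio.run(stream_container_logs(
        [sys.executable, "-c", "print('temp=21.5'); print('temp=21.6')"])))
```

The key detail is `loop.call_soon_threadsafe`: the reader thread never touches the asyncio queue directly, so the only event-loop work left is consuming already-buffered lines.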

Key insight: In a multi-worker async system, any blocking I/O operation — even one that appears fast in isolation — can become a deadlock source under load if it competes with other async operations for event loop time. Run blocking I/O in executors, not in coroutines.

The WebSocket Manager

The WebSocket manager maintains a registry of active connections keyed by session ID. When a client connects, it's registered with its session's channel. When a message arrives on that channel via Redis pub/sub, the manager routes it to the correct WebSocket connection.

The tricky part is handling disconnections gracefully. If a client disconnects while a container is still running, the session needs to be cleaned up — the container killed, the Redis subscription closed, and the session state marked complete. I implemented a container_ready synchronization flag that coordinates the lifecycle between the executor (which manages the container) and the WebSocket manager (which manages the client connection).
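Stripped to its essentials, the registry might look like this (a sketch, not VirtuNode's exact class; `FakeWebSocket` stands in for `fastapi.WebSocket` so it runs without a server):

```python
import asyncio

class ConnectionManager:
    """Registry of active WebSocket connections, keyed by session ID."""

    def __init__(self):
        self.active = {}  # session_id -> connection exposing an async send_text()

    def register(self, session_id, ws):
        self.active[session_id] = ws

    def unregister(self, session_id):
        # Idempotent: safe to call from both the executor and the disconnect handler.
        self.active.pop(session_id, None)

    async def route(self, session_id, message) -> bool:
        """Deliver a Redis pub/sub message; False means the client is gone."""
        ws = self.active.get(session_id)
        if ws is None:
            return False  # caller should kill the container and close the subscription
        await ws.send_text(message)
        return True

class FakeWebSocket:
    # Stand-in for a real WebSocket connection, for demonstration only.
    def __init__(self):
        self.sent = []

    async def send_text(self, message):
        self.sent.append(message)

if __name__ == "__main__":
    mgr = ConnectionManager()
    ws = FakeWebSocket()
    mgr.register("sess-1", ws)
    asyncio.run(mgr.route("sess-1", "temp=21.5"))
    print(ws.sent)
```

The `False` return from `route` is the hook for the cleanup path: it tells the caller the client disconnected mid-stream, which is exactly when the container kill and subscription teardown need to fire.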

Live Digital Twin Sensor State

Beyond raw log streaming, VirtuNode maintains a live digital twin — a structured representation of the sensor's current state that updates in real-time as the container runs. Each log line is parsed for structured sensor readings (temperature, humidity, pressure, custom fields) and stored in Redis with a TTL. The WebSocket delivers both raw logs and structured state updates simultaneously, so the frontend can render both a console output and a live dashboard visualization.
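A sketch of that parsing step, assuming a `key=value` log format (the regex, the `twin:{session_id}` key name, and the 300-second TTL are assumptions for illustration; `r` is a `redis.asyncio.Redis` client):

```python
import json
import re

READING = re.compile(r"(?P<key>\w+)=(?P<value>-?\d+(?:\.\d+)?)")

def parse_readings(line: str) -> dict:
    """Pull numeric key=value pairs (e.g. 'temp=21.5 humidity=40') from a raw log line."""
    return {m["key"]: float(m["value"]) for m in READING.finditer(line)}

async def update_twin_state(r, session_id: str, line: str, ttl: int = 300):
    # Key name and TTL are assumptions; the TTL means abandoned twins
    # expire on their own instead of accumulating in Redis.
    readings = parse_readings(line)
    if readings:
        await r.setex(f"twin:{session_id}", ttl, json.dumps(readings))
    return readings
```

The TTL doubles as garbage collection: if a session dies without clean teardown, its twin state evaporates rather than leaking.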

Sandbox Isolation

Each container runs with a deliberately locked-down configuration.
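As a rough sketch, a sandbox for untrusted sensor code typically looks something like the following — every flag and the image name here are assumptions for illustration, not VirtuNode's exact settings:

```shell
# No network, hard CPU/memory/process caps, immutable root filesystem,
# and all Linux capabilities dropped (flags are illustrative assumptions).
docker run --rm \
  --network none \
  --memory 128m \
  --cpus 0.5 \
  --pids-limit 64 \
  --read-only \
  --cap-drop ALL \
  virtunode-runner:latest
```

The combination matters more than any single flag: no network stops exfiltration, the resource caps stop one twin from starving its neighbors, and the read-only filesystem plus dropped capabilities limit what a container escape could even do.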

What's Next

The backend is complete. The frontend (React + Vite + Tailwind + Framer Motion + Zustand) is in active development. The next phase adds a code editor (Monaco), a sensor state visualization dashboard, and scenario configuration — letting you define which sensors a twin exposes and what ranges they should simulate.

Longer term, I want to add a scenario library — pre-built IoT environments (temperature logger, motion detector, air quality monitor) that developers can fork and customize rather than starting from scratch.

Takeaway: Real-time streaming systems are deceptively simple until you add concurrency. The architecture that works for one user rarely survives ten concurrent users without explicit thought about shared state, I/O decoupling, and lifecycle management. Redis pub/sub isn't magic — it's just a clean way to make your message bus visible and testable.