Yashaswi Mishra
January 2026
Tech Stack
Backend
Rust
Completion Status
Project completion: 85%
This project is still under active development.
Cineyma is a lightweight actor model framework for Rust, inspired by Erlang/OTP, Akka, and actix. It provides fault-tolerant, distributed concurrency with minimal overhead.
Design Philosophy
Cineyma prioritizes:
- Explicit supervision over silent recovery
- Typed messaging over dynamic routing
- Sequential state ownership over shared concurrency
- Minimal magic, maximal control
If you want HTTP-first or macro-heavy ergonomics, use actix. If you want OTP-style fault tolerance in Rust, use Cineyma.
Features
Core Actor System
- Async/await native - Built on Tokio for high-performance async I/O
- Typed messages - Compile-time message safety with zero runtime overhead (see the sketch after this list)
- Bounded mailboxes - Default capacity of 256 messages prevents OOM from slow consumers
- Sequential processing - Messages processed one-at-a-time, eliminating data races
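To make the typed contract concrete, here is a second message for the `Greeter` actor from the Quick Start below; only `Count` and its handler are new, everything else reuses the traits shown there:

```rust
use cineyma::{Handler, Message, Context};

// A message whose typed result is a u64 rather than a String.
struct Count;

impl Message for Count {
    type Result = u64;
}

// An actor can handle many message types; each handler's return type
// is checked against Message::Result at compile time.
impl Handler<Count> for Greeter {
    fn handle(&mut self, _msg: Count, _ctx: &mut Context<Self>) -> u64 {
        42
    }
}
```

With this in place, `addr.send(Count).await` yields a `u64` while `addr.send(Greet(...)).await` yields a `String`; mixing the two up is a compile error, not a runtime surprise.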
Supervision & Fault Tolerance
- Supervisor hierarchies - Parent actors monitor and restart children
- Restart strategies - Restart, Stop, Escalate (OTP-style)
- Panic boundaries - Panics are caught at actor boundaries and never crash the runtime (see the sketch after this list)
- Isolated failures - One actor's failure doesn't affect siblings
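The panic boundary works the same way Tokio's own task boundary does; the sketch below is plain Tokio rather than Cineyma internals, but it shows the principle: a panic inside the task becomes a catchable `JoinError` instead of taking down the runtime, which is exactly the point where a supervisor can apply its restart strategy.

```rust
use tokio::task;

#[tokio::main]
async fn main() {
    // The panic stays inside the spawned task; the runtime survives
    // and the parent observes the failure as a JoinError.
    let handle = task::spawn(async {
        panic!("simulated actor failure");
    });

    match handle.await {
        Ok(()) => println!("task finished normally"),
        Err(e) if e.is_panic() => println!("caught panic; a supervisor could restart here"),
        Err(_) => println!("task was cancelled"),
    }
}
```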
Advanced Features
- Timers - `run_later` and `run_interval` scheduling (see the sketch after this list)
- Streams - Process external data streams within actors
- Registry - Name-based actor lookup with automatic cleanup
- Async handlers - Non-blocking I/O in message handlers
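`run_later` and `run_interval` come straight from the feature list, but their exact signatures aren't shown anywhere in this document, so treat the closure shapes below as assumptions (modeled on actix's context timers):

```rust
use std::time::Duration;
use cineyma::{Actor, Context};

struct Ticker {
    ticks: u64,
}

impl Actor for Ticker {
    fn started(&mut self, ctx: &mut Context<Self>) {
        // One-shot: fire once, five seconds from now.
        ctx.run_later(Duration::from_secs(5), |actor, _ctx| {
            println!("ticks after five seconds: {}", actor.ticks);
        });

        // Repeating: fire every second until the actor stops.
        ctx.run_interval(Duration::from_secs(1), |actor, _ctx| {
            actor.ticks += 1;
        });
    }
}
```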
Remote & Clustering
- Remote actors - TCP transport with Protocol Buffers serialization
- Cluster support - Gossip protocol for membership and failure detection
- Distributed registry - Look up actors across cluster nodes
- Message routing - Route messages to remote actors transparently
Architecture
Actor Lifecycle
- Spawn - Actor created with initial state and mailbox
- Run - Actor processes messages sequentially from mailbox
- Supervise - Parent monitors child for failures
- Restart/Stop - On failure, supervisor applies restart strategy
- Cleanup - Actor releases resources on termination (a lifecycle-hooks sketch follows)
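In code, the Spawn and Cleanup phases surface as lifecycle hooks. `started` appears in the supervision example further down; `stopped` is an assumed counterpart (actix defines both), so read this as a sketch of the hook shape rather than the confirmed API:

```rust
use cineyma::{Actor, Context};

struct ConnectionActor;

impl Actor for ConnectionActor {
    // Runs once after Spawn, before the first message is processed.
    fn started(&mut self, _ctx: &mut Context<Self>) {
        println!("acquiring connection");
    }

    // Assumed hook: runs during Cleanup, once the actor terminates.
    fn stopped(&mut self, _ctx: &mut Context<Self>) {
        println!("releasing connection");
    }
}
```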
Message Flow
Client → Address → Mailbox (bounded) → Actor (sequential) → Response
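Stripped of the framework, this pipeline is a bounded `mpsc` channel (the mailbox) feeding one sequential loop (the actor), with a `oneshot` channel carrying each response back. The sketch below is plain Tokio rather than Cineyma's actual internals, but it shows where backpressure comes from: `send().await` waits whenever all mailbox slots are occupied.

```rust
use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    // Mailbox: bounded to 256 messages, matching Cineyma's default.
    let (addr, mut mailbox) = mpsc::channel::<(String, oneshot::Sender<String>)>(256);

    // Actor: exclusively owns its state, processes one message at a time.
    tokio::spawn(async move {
        let mut greeted: u64 = 0;
        while let Some((name, reply)) = mailbox.recv().await {
            greeted += 1;
            let _ = reply.send(format!("Hello, {name}! (#{greeted})"));
        }
    });

    // Client: request-response through the address.
    let (tx, rx) = oneshot::channel();
    addr.send(("World".into(), tx)).await.unwrap(); // waits if the mailbox is full
    println!("{}", rx.await.unwrap());
}
```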
Supervision Tree
```
Root Supervisor
├── Worker Actor 1
├── Worker Actor 2
└── Mid-level Supervisor
    ├── Worker Actor 3
    └── Worker Actor 4
```
Quick Start
```rust
use cineyma::{Actor, Handler, Message, ActorSystem, Context};

// Define a message
struct Greet(String);

impl Message for Greet {
    type Result = String;
}

// Define an actor
struct Greeter;

impl Actor for Greeter {}

impl Handler<Greet> for Greeter {
    fn handle(&mut self, msg: Greet, _ctx: &mut Context<Self>) -> String {
        format!("Hello, {}!", msg.0)
    }
}

#[tokio::main]
async fn main() {
    let system = ActorSystem::new();
    let addr = system.spawn(Greeter);

    // Fire and forget (async, applies backpressure if mailbox full)
    addr.do_send(Greet("World".into())).await.unwrap();

    // Request-response
    let response = addr.send(Greet("Cineyma".into())).await.unwrap();
    println!("{}", response); // "Hello, Cineyma!"
}
```
Supervision Example
```rust
use cineyma::{Actor, Context, Supervisor, RestartStrategy};

// Worker that might fail
struct Worker;

impl Actor for Worker {
    fn started(&mut self, _ctx: &mut Context<Self>) {
        // Simulate work that might panic
    }
}

// Supervisor with restart strategy
struct WorkerSupervisor;

impl Actor for WorkerSupervisor {}

impl Supervisor<Worker> for WorkerSupervisor {
    fn restart_strategy(&self) -> RestartStrategy {
        RestartStrategy::Restart // Auto-restart on panic
    }
}
```
Remote Actors
```rust
use cineyma::{ActorSystem, RemoteConfig};

// Node 1: Start server
let system = ActorSystem::with_remote(
    RemoteConfig::new("127.0.0.1:8080")
);
let greeter = system.spawn(Greeter);
system.register("greeter", greeter);

// Node 2: Connect and send message
let system = ActorSystem::with_remote(
    RemoteConfig::new("127.0.0.1:8081")
        .connect_to("127.0.0.1:8080")
);
let remote_greeter = system.lookup("greeter").await.unwrap();
let response = remote_greeter.send(Greet("Remote".into())).await;
```
Clustering
```rust
use cineyma::{ActorSystem, ClusterConfig};

// Start cluster node
let system = ActorSystem::with_cluster(
    ClusterConfig::new("127.0.0.1:7000")
        .seed_nodes(vec!["127.0.0.1:7001", "127.0.0.1:7002"])
);

// Gossip protocol handles:
// - Membership tracking
// - Failure detection
// - Distributed actor registry
```
Performance
Benchmarks (M1 Max, 10 cores):
- Local messaging: ~1.5M messages/sec
- Remote messaging: ~350K messages/sec (over loopback)
- Supervision overhead: <5% compared to unsupervised actors
Memory:
- Base actor: ~200 bytes
- Mailbox (256 capacity): ~2KB per actor
Comparison
| Feature | Cineyma | Actix | Tokio Tasks |
|---|---|---|---|
| Typed messages | ✓ | ✗ | ✗ |
| Supervision | ✓ | ✓ | ✗ |
| Remote actors | ✓ | ✗ | ✗ |
| Clustering | ✓ | ✗ | ✗ |
| HTTP-first | ✗ | ✓ | ✗ |
| Macro-heavy | ✗ | ✓ | ✗ |
Technical Highlights
Key design decisions:
- Bounded mailboxes - Prevents OOM from slow consumers, applies backpressure
- Panic boundaries - Actors catch panics, supervisor decides restart strategy
- Typed handlers - Compile-time safety, zero-cost abstractions
- Sequential execution - Eliminates lock contention, simplifies state management
- Protocol Buffers - Efficient serialization for remote messaging
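The document doesn't name the protobuf crate, so the sketch below assumes a `prost`-style derive (the most common choice in Rust) just to show what a wire-serializable message could look like:

```rust
// Field tags pin the wire format, so two nodes can evolve their
// structs independently as long as the tags stay stable.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct GreetWire {
    #[prost(string, tag = "1")]
    pub name: String,
}

fn roundtrip() -> Result<GreetWire, prost::DecodeError> {
    use prost::Message; // brings the encode/decode methods into scope

    let msg = GreetWire { name: "Remote".into() };
    let bytes = msg.encode_to_vec(); // compact binary for the TCP transport
    GreetWire::decode(bytes.as_slice())
}
```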
Testing
- Unit tests for actor lifecycle and message handling
- Integration tests for supervision and clustering
- Benchmark suite for performance validation
- Fault injection for testing restart strategies
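As a concrete example of the first bullet, here is a minimal test reusing only the `Greeter` actor and the API from the Quick Start; the test name and module layout are illustrative:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn greeter_replies_to_message() {
        let system = ActorSystem::new();
        let addr = system.spawn(Greeter);

        // Request-response path: the typed result comes back as a String.
        let reply = addr.send(Greet("test".into())).await.unwrap();
        assert_eq!(reply, "Hello, test!");
    }
}
```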