We’re thrilled to share that our portfolio company, Helm.ai, has officially launched Helm.ai Driver — a transformative real-time path prediction system built on a vision-only transformer neural network architecture.
Key Highlights:
- Designed for L2 to L4 highway and urban autonomous driving
- Utilizes camera-based perception only — no lidar, HD maps, or additional sensors required
- Trained end-to-end using Deep Teaching™, Helm.ai’s proprietary learning method
- Demonstrates human-like driving behaviors in intersections, turns, obstacle avoidance, passing, and cut-ins — not manually programmed, but learned from real-world data
- Validated through closed-loop simulations with CARLA and GenSim-2, Helm.ai’s generative AI for realistic camera rendering
- Fully compatible with Helm.ai’s production-grade surround-view perception stack
According to Helm.ai CEO and founder Vladislav Voroninski:
“By training on real-world data, we developed an advanced path prediction system which mimics the sophisticated behaviors of human drivers, learning end-to-end without any explicitly defined rules.”
This milestone highlights Helm.ai’s commitment to an AI-first approach to autonomous systems — scalable, adaptable, and safe.
🔗 Learn more about Helm.ai Driver and view the simulation demo: https://lnkd.in/gmPuCV5M
