Congratulations to our portfolio company, Helm.ai, on launching VidGen-1: AI video generation for autonomous driving
REDWOOD CITY, CA, June 20 – Helm.ai, a leading provider of advanced AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation, today announced the launch of VidGen-1, a generative AI model that produces highly realistic video sequences of driving scenes for autonomous driving development and validation. This innovative AI technology follows Helm.ai’s announcement of GenSim-1 for AI-generated labeled images and is significant for both prediction tasks and generative simulation.
VidGen-1 can generate videos of driving scenes across different geographies, camera types, and vehicle perspectives. The model not only produces highly realistic appearances and temporally consistent object motion but also learns and reproduces human-like driving behaviors, generating motions of the ego-vehicle and surrounding agents that comply with traffic rules. It simulates realistic video footage of scenarios across multiple cities internationally, encompassing urban and suburban environments, a variety of vehicles, pedestrians, bicyclists, intersections, turns, weather conditions (e.g., rain, fog), illumination effects (e.g., glare, night driving), and even accurate reflections on wet road surfaces, reflective building walls, and the hood of the ego-vehicle.
“We’ve made a technical breakthrough in generative AI for video to develop VidGen-1, setting a new bar in the autonomous driving domain. Combining our Deep Teaching technology, which we’ve been developing for years, with additional in-house innovation on generative DNN architectures results in a highly effective and scalable method for producing realistic AI-generated videos. Our technology is general and can be applied equally effectively to autonomous driving, robotics, and any other domain of video generation without change,” said Helm.ai’s CEO and Co-Founder, Vladislav Voroninski.
VidGen-1 offers automakers significant scalability advantages over traditional non-AI simulation by enabling rapid asset generation and imbuing simulated agents with sophisticated real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also effectively closes the “sim-to-real” gap, providing a highly realistic and efficient solution that greatly widens the applicability of simulation-based training and validation.
Video data is the most information-rich sensory modality in autonomous driving and comes from the most cost-effective sensor, the camera. However, the high dimensionality of video makes AI video generation a challenging task: achieving high image quality while accurately modeling the dynamics of a moving scene, and hence overall video realism, is a well-known difficulty in video generation applications.
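To make the dimensionality point concrete, here is a back-of-the-envelope comparison between predicting one word and predicting one video frame. The figures (1080p RGB frames, a ~50,000-word vocabulary) are illustrative assumptions chosen for this sketch, not Helm.ai specifications:

```python
# Back-of-the-envelope comparison of the output dimensionality of
# next-word vs. next-frame prediction. The vocabulary size and frame
# resolution below are illustrative assumptions, not Helm.ai figures.
vocab_size = 50_000                # choices for one predicted word
frame_values = 1920 * 1080 * 3     # RGB values in a single 1080p frame

print(f"one word:  ~{vocab_size:,} discrete choices")
print(f"one frame: {frame_values:,} pixel values predicted jointly")
print(f"ratio:     ~{frame_values // vocab_size:,}x more outputs per step")
```

Under these assumptions a single frame carries roughly 6.2 million values, over a hundred times the size of a one-word output space, and a model must predict all of them coherently at every step.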
“Predicting the next frame in a video is similar to predicting the next word in a sentence but much more high dimensional,” added Voroninski. “Generating realistic video sequences of a driving scene represents the most advanced form of prediction for autonomous driving, as it entails accurately modeling the appearance of the real world and includes both intent prediction and path planning as implicit sub-tasks at the highest level of the stack. This capability is crucial for autonomous driving because, fundamentally, driving is about predicting what will happen next.”
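The analogy Voroninski draws maps directly onto how autoregressive generation works. The following minimal sketch rolls out frames one at a time, appending each prediction to the conditioning context, exactly as a language model appends each predicted token. The tiny ConvNet, context length, and toy resolution are placeholders for illustration; Helm.ai has not published VidGen-1’s architecture, and this is not it:

```python
# Minimal sketch of autoregressive next-frame prediction, analogous to
# next-token prediction in language models. The network, context length,
# and resolution are illustrative assumptions, not Helm.ai's design.
import torch
import torch.nn as nn

CONTEXT = 4      # number of past frames conditioned on (assumed)
H, W = 64, 64    # toy resolution; real driving video is far larger

class NextFramePredictor(nn.Module):
    """Maps a stack of past RGB frames to a predicted next RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * CONTEXT, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, frames):  # frames: (B, CONTEXT, 3, H, W)
        b = frames.shape[0]
        x = frames.reshape(b, 3 * CONTEXT, H, W)  # stack frames on channels
        return self.net(x)                        # (B, 3, H, W)

# Autoregressive rollout: each predicted frame joins the context window,
# just as each predicted token extends a language model's prompt.
model = NextFramePredictor()
context = torch.rand(1, CONTEXT, 3, H, W)  # seed clip (random for the demo)
video = []
with torch.no_grad():
    for _ in range(8):  # generate 8 new frames
        nxt = model(context)                     # (1, 3, H, W)
        video.append(nxt)
        context = torch.cat([context[:, 1:], nxt.unsqueeze(1)], dim=1)

print(len(video), video[0].shape)  # 8 frames, each (1, 3, 64, 64)
```

The rollout loop is where intent prediction and path planning become implicit sub-tasks: to keep generated frames consistent, the model must internalize where every agent in the scene is headed next.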
About Helm.ai
Helm.ai is developing the next generation of AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation. Founded in 2016 and headquartered in Redwood City, CA, the company has re-envisioned the approach to AI software development, aiming to make truly scalable autonomous driving a reality. For more information on Helm.ai, including its products, SDK, and open career opportunities, visit https://www.helm.ai/ or find Helm.ai on LinkedIn.