Eureka: Gen AI’s role in streamlining robot training with reward models

Generative models have found applications in code generation and training robots

In recent months, the field of robotics has experienced remarkable progress, largely thanks to the rapid evolution of generative artificial intelligence (AI). Leading tech companies and research institutions are tapping into generative AI models to address some of the major challenges in robotics, challenges that have historically confined robots primarily to heavy industry and research labs.

Here are a few examples of how generative AI is advancing robotics research:

Bridging the Gap Between Simulation and Reality

Training robotic machine learning models in real-world scenarios has been a complex endeavour. It’s slow, as it unfolds in real time, and it’s expensive due to the physical limitations of deploying multiple robots. Safety concerns and limited access to diverse training environments add further obstacles.

To overcome these challenges, researchers have turned to simulated environments for robotic model training. This approach offers scalability and cost savings compared to real-world training. However, creating detailed simulated environments can be expensive. Moreover, these simulations often lack the nuanced details found in the real world, creating what’s known as the “sim-to-real gap.” This gap leads to a drop in performance when models trained in simulations are deployed in the real world, as they struggle to handle the intricacies of their surroundings.

Recently, generative models have become essential tools for bridging the sim-to-real gap and making simulated environments more realistic and detailed. For instance, neural radiance fields (NeRF) models can transform 2D scenes into 3D objects, simplifying the creation of realistic training environments. Companies like Nvidia use these models to generate lifelike 3D environments from camera-recorded videos, which are then used to train self-driving vehicle models.
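
To make the idea concrete, here is a deliberately simplified sketch of the NeRF recipe: a small neural network maps a 3D point and a viewing direction to a colour and a density, and a renderer integrates those outputs along camera rays to form pixels. The PyTorch module below is illustrative only; real NeRF systems add positional encodings and hierarchical sampling.

```python
# Simplified NeRF-style network (illustrative sketch, not a production model).
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps (x, y, z, theta, phi) to a colour and a volume density."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs (r, g, b, sigma)
        )

    def forward(self, x):
        out = self.mlp(x)
        rgb = torch.sigmoid(out[..., :3])   # colour constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # density must be non-negative
        return rgb, sigma

# Volume rendering integrates (rgb, sigma) samples along each camera ray to
# produce a pixel; training minimises the error against real photographs.
samples = torch.rand(1024, 5)  # points sampled along rays
rgb, sigma = TinyNeRF()(samples)
```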

SyncDreamer, a model developed by researchers from several universities, generates multiple consistent views of an object from a single 2D image. These views can be fed to other generative models to create 3D assets for simulated environments. DeepMind’s UniSim model utilises LLMs and diffusion models to generate photorealistic video sequences, which are used to build fine-grained simulations for training robotic models.

Enhancing Human-Robot Interaction

Improving human-robot interaction is a significant challenge in robotics research. This involves enhancing robots’ ability to understand and respond to human commands effectively. Multi-modal generative models are playing a crucial role in addressing this issue. These models integrate natural language with other data types, such as images and videos, to enable more effective communication with robots.

One notable example is Google’s embodied language model, PaLM-E. This model combines language models and vision transformers, which are jointly trained to understand the relationships between images and text. This knowledge is then applied to analyse visual scenes and convert natural language instructions into robot actions. Models like PaLM-E have significantly enhanced robots’ ability to carry out complex commands.
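
The mechanics can be sketched briefly. A common design for such multi-modal models, used here as an illustrative assumption rather than Google’s published architecture, is to project vision-transformer features into the language model’s embedding space, so that image patches flow through the transformer like extra words:

```python
# Illustrative adapter between a vision encoder and an LLM (an assumption
# about the general pattern, not PaLM-E's actual implementation).
import torch
import torch.nn as nn

class VisionLanguageAdapter(nn.Module):
    def __init__(self, vision_dim: int = 768, llm_dim: int = 1024):
        super().__init__()
        # Learned projection from ViT feature space to LLM token space
        self.project = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_feats, text_embeds):
        # image_feats: (batch, patches, vision_dim) from a vision transformer
        # text_embeds: (batch, tokens, llm_dim) from the LLM embedding layer
        image_tokens = self.project(image_feats)
        # Prepend image tokens so the LLM attends over scene and instruction
        return torch.cat([image_tokens, text_embeds], dim=1)

fused = VisionLanguageAdapter()(torch.rand(1, 196, 768), torch.rand(1, 12, 1024))
print(fused.shape)  # torch.Size([1, 208, 1024]) -> decoded into robot actions
```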

Taking this concept further, Google introduced RT-2, a vision-language-action model that, after extensive training on web data, can execute natural language instructions, even for tasks it hasn’t explicitly learned.

Bridging the Gap Between Robots and Datasets

The realm of robotics research is filled with various models and datasets collected from real-world robots. However, these datasets are often scattered, collected from different robots in different formats for diverse tasks.

Some research groups have shifted their focus to consolidating the knowledge contained within these datasets in order to create more adaptable models. One notable project is RT-X, a collaborative effort involving DeepMind and 33 other research institutions. The project’s ambitious aim is to create a general-purpose AI system capable of working with different physical robots and performing a wide range of tasks.

Inspired by the success of large language models, which can perform tasks they weren’t explicitly trained for, the researchers brought together datasets from 22 robot embodiments and 20 institutions worldwide, encompassing 500 skills and 150,000 tasks. They then trained a series of models on this unified dataset. Impressively, the resulting models demonstrated the ability to generalise to various embodiments and tasks, even those they hadn’t been explicitly trained for.
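
The data-engineering side of this effort can be sketched in a few lines: logs from each robot embodiment have to be converted into one shared schema before joint training. The field names below are illustrative assumptions, not the actual RT-X data format:

```python
# Hypothetical unified schema for heterogeneous robot episodes
# (field names are illustrative, not the real RT-X format).
from dataclasses import dataclass

@dataclass
class UnifiedStep:
    image: bytes          # camera observation, e.g. an encoded JPEG
    instruction: str      # natural-language description of the task
    action: list[float]   # normalised end-effector action vector

def from_arm_log(record: dict) -> UnifiedStep:
    """Convert one (hypothetical) robot-arm log entry into the shared schema."""
    return UnifiedStep(
        image=record["rgb"],
        instruction=record["task_text"],
        action=[record["dx"], record["dy"], record["dz"], record["grip"]],
    )

# One converter per embodiment; the training code then sees only
# UnifiedStep, no matter which robot produced the data.
step = from_arm_log({"rgb": b"...", "task_text": "pick up the cup",
                     "dx": 0.1, "dy": 0.0, "dz": -0.2, "grip": 1.0})
```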

Improving Reward Models

Generative models have found valuable applications not only in writing software but also in generating the code used to train robots. Nvidia’s latest model, Eureka, utilises generative AI to design reward functions, a notoriously difficult component of the reinforcement learning systems used in robot training.

Eureka employs GPT-4 to generate code for reward functions, eliminating the need for task-specific prompts or predefined reward templates. It uses simulation environments and GPUs to efficiently assess the quality of large batches of reward candidates, streamlining the training process. Eureka also employs GPT-4 to analyse and enhance the generated code, and it can incorporate human feedback to align the reward function more closely with the developer’s goals.
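
Putting that description together, the outer loop can be sketched roughly as follows. The two helpers are hypothetical stand-ins, not Nvidia’s actual API: one represents a GPT-4 call, the other represents training and scoring a policy in a GPU-accelerated simulator such as Isaac Gym.

```python
# Minimal sketch of an Eureka-style search loop, based on the description
# above. llm_generate and run_simulation are hypothetical stand-ins.
import random

def llm_generate(prompt: str) -> str:
    """Stand-in for a GPT-4 call that returns reward-function source code."""
    return f"def reward(state, action): ...  # generated for: {prompt[:40]}"

def run_simulation(reward_code: str) -> float:
    """Stand-in for training a policy against this reward and scoring it."""
    return random.random()

def eureka_style_search(task: str, rounds: int = 5, batch: int = 16) -> str:
    best_code, best_score = "", float("-inf")
    feedback = ""  # textual reflection on the previous round's results
    for _ in range(rounds):
        # 1. Sample a batch of candidate reward functions from the LLM.
        prompt = f"Write a reward function for: {task}\n{feedback}"
        candidates = [llm_generate(prompt) for _ in range(batch)]
        # 2. Evaluate every candidate in simulation (in parallel on GPUs).
        scores = [run_simulation(code) for code in candidates]
        # 3. Keep the best candidate and feed a reflection back to the LLM.
        i = max(range(batch), key=scores.__getitem__)
        if scores[i] > best_score:
            best_code, best_score = candidates[i], scores[i]
        feedback = f"Best score so far: {best_score:.2f}\n{best_code}"
    return best_code

print(eureka_style_search("make a robot hand spin a pen"))
```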

Generative AI, which originally focused on simple tasks like generating images or text, is now taking on increasingly complex challenges. As generative AI becomes more integrated into robotics, we can anticipate rapid innovations that bring robots closer to everyday deployment alongside humans.

Shalini is an Executive Editor with Apeejay Newsroom. With a PG Diploma in Business Management and Industrial Administration and an MA in Mass Communication, she is a former Associate Editor with News9live. She has worked on varied topics, from news-based pieces to feature articles.
