Google DeepMind at ICLR 2024

https://deepmind.google/discover/blog/google-deepmind-at-iclr-2024/

Research

Published
3 May 2024

Developing next-gen AI agents, exploring new modalities, and pioneering foundational learning

Next week, AI researchers from around the globe will converge at the 12th International Conference on Learning Representations (ICLR), set to take place May 7-11 in Vienna, Austria.

Raia Hadsell, Vice President of Research at Google DeepMind, will deliver a keynote reflecting on the last 20 years in the field, highlighting how lessons learned are shaping the future of AI for the benefit of humanity.

We’ll also offer live demonstrations showcasing how we bring our foundational research into reality, from the development of Robotics Transformers to the creation of toolkits and open-source models like Gemma.

Teams from across Google DeepMind will present more than 70 papers this year. Some research highlights:

Problem-solving agents and human-inspired approaches

Large language models (LLMs) are already revolutionizing advanced AI tools, yet their full potential remains untapped. For instance, LLM-based AI agents capable of taking effective actions could transform digital assistants into more helpful and intuitive AI tools.

AI assistants that follow natural language instructions to carry out web-based tasks on people’s behalf would be a huge timesaver. In an oral presentation we introduce WebAgent, an LLM-driven agent that learns from self-experience to navigate and manage complex tasks on real-world websites.
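As a rough illustration of what such an agent does, the sketch below runs an observe-plan-act loop over a live page; the `llm` and `browser` interfaces and the prompt format are hypothetical simplifications, not WebAgent's actual architecture (which learns from its own experience):

```python
# Hypothetical observe-plan-act loop for an LLM-driven web agent.
def run_web_agent(llm, browser, instruction: str, max_steps: int = 20) -> bool:
    """Follow a natural-language instruction by acting step by step on a website."""
    for _ in range(max_steps):
        # Observe: snapshot the current page, truncated to fit the context window.
        page = browser.get_html()[:8000]
        # Plan: ask the model for the next concrete browser action.
        action = llm.complete(
            f"Instruction: {instruction}\n"
            f"Current page HTML:\n{page}\n"
            "Respond with one action: CLICK(selector), TYPE(selector, text), or DONE."
        )
        if action.startswith("DONE"):
            return True  # the model judges the task complete
        browser.execute(action)  # Act: apply the chosen action to the real site
    return False  # step budget exhausted
```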

To further enhance the general usefulness of LLMs, we focused on boosting their problem-solving skills. We demonstrate how we achieved this by equipping an LLM-based system with a traditionally human approach: producing and using “tools”. Separately, we present a training technique that ensures language models produce socially acceptable outputs more consistently. Our approach uses a sandbox rehearsal space that represents the values of society.
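To make the tool-use pattern concrete, here is a minimal, hypothetical sketch in which a model either routes a question to a named tool or answers directly; the `llm` client, the JSON protocol, and the toy calculator are illustrative assumptions, not the system described in the paper:

```python
# Illustrative sketch of LLM tool use; names and protocol are hypothetical.
import json

TOOLS = {
    # Toy arithmetic tool; never eval untrusted input in a real system.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def answer_with_tools(llm, question: str) -> str:
    """Let the model call a named tool, then compose an answer from its output."""
    reply = llm.complete(
        f"Question: {question}\n"
        'Reply with JSON: {"tool": <tool name or null>, "input": <string>}.'
    )
    call = json.loads(reply)
    if call["tool"] in TOOLS:
        result = TOOLS[call["tool"]](call["input"])
        # Feed the tool's result back so the model can ground its final answer.
        return llm.complete(f"Question: {question}\nTool result: {result}\nAnswer:")
    return call["input"]  # the model chose to answer without a tool
```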

Pushing boundaries in vision and coding

Until recently, large AI models mostly focused on text and images, laying the groundwork for large-scale pattern recognition and data interpretation. Now, the field is progressing beyond these static realms to embrace the dynamics of real-world visual environments. As computing advances across the board, it is increasingly important that the code underpinning it is generated and optimized as efficiently as possible.

When you watch a video on a flat screen, you intuitively grasp the three-dimensional nature of the scene. Machines, however, struggle to emulate this ability without explicit supervision. We showcase our Dynamic Scene Transformer (DyST) model, which leverages real-world single-camera videos to extract 3D representations of objects in the scene and their movements. What’s more, DyST also enables the generation of novel versions of the same video, with user control over camera angles and content.

Emulating human cognitive strategies also makes for better AI code generators. When programmers write complex code, they typically “decompose” the task into simpler subtasks. With ExeDec, we introduce a novel code-generating approach that harnesses decomposition to elevate AI systems’ programming and generalization performance.
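A heavily simplified sketch of the decompose-then-synthesize pattern is below; note that ExeDec itself predicts execution subgoals rather than free-form natural-language plans, so treat this only as an illustration of the general idea:

```python
# Simplified decomposition-driven code generation (illustrative, not ExeDec itself).
def synthesize(llm, task: str) -> str:
    """Decompose a task into subtasks, then generate code for each in turn."""
    subtasks = llm.complete(
        f"Decompose this programming task into numbered subtasks:\n{task}"
    ).splitlines()
    program = ""
    for subtask in subtasks:
        # Condition each step on the code written so far, composing a full program.
        program += llm.complete(
            f"Program so far:\n{program}\nWrite code for the next subtask: {subtask}"
        ) + "\n"
    return program
```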

In a parallel spotlight paper, we explore the novel use of machine learning not only to generate code, but also to optimize it, introducing a dataset for the robust benchmarking of code performance. Code optimization is challenging, requiring complex reasoning, and our dataset enables the exploration of a range of ML techniques. We demonstrate that the resulting learning strategies outperform human-crafted code optimizations.

Advancing foundational learning

Our research teams are tackling the big questions of AI - from exploring the essence of machine cognition to understanding how advanced AI models generalize - while also working to overcome key theoretical challenges.

For both humans and machines, causal reasoning and the ability to predict events are closely related concepts. In a spotlight presentation, we explore how reinforcement learning is affected by prediction-based training objectives, and draw parallels to changes in brain activity also linked to prediction.
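As one generic example of what a prediction-based training objective can look like (our own simplification, not the paper's formulation), an agent can optimize an auxiliary loss for predicting its next latent state alongside the usual policy-gradient loss:

```python
# Sketch: policy gradient plus an auxiliary next-state prediction loss (PyTorch).
import torch
import torch.nn as nn

class PredictiveAgent(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)
        # Auxiliary head: predict the next latent state from state and action.
        self.predictor = nn.Linear(hidden + n_actions, hidden)

    def loss(self, obs, next_obs, actions, advantages, beta: float = 0.1):
        z = self.encoder(obs)
        logp = torch.log_softmax(self.policy(z), dim=-1)
        pg = -(advantages * logp.gather(1, actions[:, None]).squeeze(1)).mean()
        a = nn.functional.one_hot(actions, self.policy.out_features).float()
        target = self.encoder(next_obs).detach()  # stop gradient through the target
        pred = self.predictor(torch.cat([z, a], dim=-1))
        # The prediction term shapes the learned representation alongside the policy.
        return pg + beta * ((pred - target) ** 2).mean()
```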

When AI agents generalize well to new scenarios, is it because they, like humans, have learned an underlying causal model of their world? This is a critical question in advanced AI. In an oral presentation, we reveal that such models have indeed learned an approximate causal model of the processes that generated their training data, and discuss the deep implications.

Another critical question in AI is trust, which in part depends on how accurately models can estimate the uncertainty of their outputs - a crucial factor for reliable decision-making. We've made significant advances in uncertainty estimation within Bayesian deep learning, employing a simple and essentially cost-free method.
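For context, one familiar low-cost route to uncertainty estimates in deep learning is Monte Carlo dropout, sketched below; we include it purely as a standard baseline for the genre, not as the method introduced in the paper:

```python
# Monte Carlo dropout: a standard, cheap uncertainty baseline (PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.1), nn.Linear(64, 1))

def predict_with_uncertainty(x: torch.Tensor, n_samples: int = 50):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Spread across stochastic forward passes approximates predictive uncertainty.
    return preds.mean(dim=0), preds.std(dim=0)

mean, std = predict_with_uncertainty(torch.randn(5, 10))
```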

Finally, we explore game theory’s Nash equilibrium (NE) - a state in which no player benefits from changing their strategy if others maintain theirs. Beyond simple two-player games, even approximating a Nash equilibrium is computationally intractable, but in an oral presentation, we reveal new state-of-the-art approaches for negotiating deals, in settings ranging from poker to auctions.
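That definition is easy to verify numerically in a toy game. In matching pennies, for instance, both players mixing uniformly is a Nash equilibrium, as this small NumPy check confirms:

```python
# Verify the Nash equilibrium definition in matching pennies (zero-sum, 2x2).
import numpy as np

A = np.array([[1, -1], [-1, 1]])          # row player's payoffs; column's are -A
row_mix = col_mix = np.array([0.5, 0.5])  # candidate equilibrium: uniform mixing

row_value = row_mix @ A @ col_mix  # row's payoff at the candidate profile
row_best = (A @ col_mix).max()     # row's best unilateral deviation
col_best = (row_mix @ (-A)).max()  # column's best unilateral deviation

# Neither player can improve on their current payoff by deviating.
assert np.isclose(row_best, row_value) and np.isclose(col_best, -row_value)
print("Uniform mixing is a Nash equilibrium: no profitable unilateral deviation.")
```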

Bringing together the AI community

We’re delighted to sponsor ICLR and support initiatives including Queer in AI and Women In Machine Learning. Such partnerships not only bolster research collaborations but also foster a vibrant, diverse community in AI and machine learning.

If you’re at ICLR, be sure to visit our booth and our Google Research colleagues next door. Discover our pioneering research, meet our teams hosting workshops, and engage with our experts presenting throughout the conference. We look forward to connecting with you!
