Detecting signs of disease before bones start to break
Helping uncover how protein mutations cause diseases and disorders
Creating a tool to study extinct species from 50,000 years ago
In our latest paper, we introduce Sparrow – a dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers. Our agent is designed to talk with a user, answer questions, and search the internet using Google when it’s helpful to look up evidence to inform its responses.
Predictions that pave the way to new treatments
Our Operating Principles have come to define both our commitment to prioritising widespread benefit and the areas of research and applications we refuse to pursue. These principles have been at the heart of our decision-making since DeepMind was founded, and they continue to be refined as the AI landscape changes and grows. They are designed for our role as a research-driven science company and are consistent with Google’s AI principles.
Colin, CBO at DeepMind, discusses collaborations with Alphabet and how we integrate ethics, accountability, and safety into everything we do.
Our new paper, In conversation with AI: aligning language models with human values, explores a different approach, asking what successful communication between humans and an artificial conversational agent might look like and what values should guide conversation in these contexts.
Using human and animal motions to teach robots to dribble a ball, and simulated humanoid characters to carry boxes and play football
We came across Zindi, a dedicated partner with complementary goals: the largest community of African data scientists, which hosts competitions focused on solving Africa’s most pressing problems. The Diversity, Equity, and Inclusion (DE&I) group within our Science team worked with Zindi to identify a scientific challenge that could help advance conservation efforts and grow involvement in AI. Inspired by Zindi’s bounding box turtle challenge, we landed on a project with the potential for real impact: turtle facial recognition.
We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of their designers. Causal influence diagrams (CIDs) are a way of modelling decision-making situations that allows us to reason about agent incentives. By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. But how do we know when a CID is an accurate model of a training setup?
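To make the idea concrete, here is a minimal sketch of how a CID for a simple question-answering setup might be encoded as a directed graph. This is our own illustration, not code from the paper: the node names (UserIntent, Response, Reward) and the networkx encoding are assumptions made for the example.

# A minimal, hypothetical sketch of a causal influence diagram (CID).
# A CID is a directed acyclic graph with chance, decision, and utility
# nodes; edges into a decision node are information links, and the
# remaining edges are causal links.
import networkx as nx

cid = nx.DiGraph()
cid.add_node("UserIntent", kind="chance")    # what the user actually wants
cid.add_node("Response", kind="decision")    # the agent's choice of reply
cid.add_node("Reward", kind="utility")       # the signal the agent is trained on

cid.add_edge("UserIntent", "Response")  # information link: the agent observes the query
cid.add_edge("UserIntent", "Reward")    # causal link: intent affects the reward
cid.add_edge("Response", "Reward")      # causal link: the reply affects the reward

# As a crude stand-in for incentive analysis, check which nodes the
# decision can influence via a directed path to the utility node.
for node in cid.nodes:
    if node != "Response" and nx.has_path(cid, "Response", node):
        print(f"Decision can influence: {node}")
# prints: Decision can influence: Reward

The full framework uses graphical criteria richer than plain reachability to identify incentives, but the encoding above captures the basic structure a CID records: who observes what, who decides, and what the decision can affect.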
Accelerating the search for life-saving leishmaniasis treatments
Looking into a protein’s past to unlock the mysteries of life itself
New insights into immunity to help protect the world’s flora
Big data that leads to discoveries that benefit everyone