Language modelling at scale: Gopher, ethical considerations, and retrieval — Google DeepMind

December 8, 2021 · Responsibility & Safety
Jack Rae, Geoffrey Irving, Laura Weidinger

Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It's why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.

As part of a broader portfolio of AI research, we believe the development and study of more powerful language models - systems that predict and generate text - have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice, and follow instructions via natural language.

Developing beneficial language models requires research into their potential impacts, including the risks they pose. This includes collaboration between experts from varied backgrounds to thoughtfully anticipate and address the challenges that training algorithms on existing datasets can create.

Today we are releasing three papers on language models that reflect this interdisciplinary approach. They include a detailed study of a 280 billion parameter transformer language model called Gopher, a study of ethical and social risks associated with large language models, and a paper investigating a new architecture with better training efficiency.

Gopher - A 280 billion parameter language model

In the quest to explore language models and develop new ones, we trained a series of transformer language models of different sizes, ranging from 44 million parameters to 280 billion parameters (the largest model we named Gopher). Our research investigated the strengths and weaknesses of those different-sized models, ...
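For a sense of where parameter counts like 44 million or 280 billion come from: in a dense transformer, most parameters sit in the per-layer attention and feed-forward weight matrices, so the total is roughly 12 · n_layers · d_model², a standard back-of-the-envelope approximation that ignores embedding tables and biases. A minimal sketch, using illustrative layer and width values that are assumptions for this example rather than configurations reported in the paper:

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a dense transformer.

    Per layer: ~4*d^2 for attention (Q, K, V, and output projections)
    plus ~8*d^2 for a feed-forward block with hidden size 4*d.
    Embedding tables and biases are ignored.
    """
    return 12 * n_layers * d_model ** 2


# Illustrative configurations (assumed for this sketch, not taken
# from the Gopher paper):
for name, layers, width in [
    ("small", 8, 512),      # lands in the tens of millions
    ("large", 80, 16384),   # lands in the hundreds of billions
]:
    print(f"{name}: ~{approx_transformer_params(layers, width):,} parameters")
```

The approximation deliberately drops lower-order terms (embeddings, layer norms, biases), which is why reported totals such as 280 billion differ somewhat from what the formula yields for any given configuration.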
