Google has been developing advanced generative AI models to power new conversational search features and other applications. But what is the actual name of the AI system behind Google’s new generative capabilities? Let’s explore the key language models and technologies that comprise Google’s ambitious push into generative AI.
LaMDA – Language Model for Dialogue Applications
One of the most prominent generative AI systems created by Google is called LaMDA, which stands for Language Model for Dialogue Applications. LaMDA was unveiled by Google in 2021.
It is a neural network conversational agent designed to hold natural conversations on a wide range of topics through text. LaMDA was fine-tuned on dialogue data so that its responses reflect how people actually converse.
Also check this article: Google’s Generative AI Search Engine: Full Guide
Some key capabilities of LaMDA:
- Maintain coherent, interesting multi-turn conversations
- Discuss complex topics requiring world knowledge
- Generate diverse, high-quality responses
- Convey a unique personality customized for each conversation
LaMDA represented a major advance in Google’s conversational AI efforts thanks to its neural network architecture and sheer scale, with over 137 billion parameters.
Google has utilized LaMDA in experimental conversational systems like the AI Test Kitchen. It also provides the backbone for the new generative AI search features in Google that aim to offer more helpful, conversational responses to user queries.
PaLM – Pathways Language Model
In April 2022, Google unveiled its Pathways Language Model (PaLM), considered one of the largest and most advanced language models ever created.
PaLM has an astonishing 540 billion parameters, roughly three times as many as GPT-3's 175 billion. It achieved state-of-the-art results across a broad range of natural language processing tasks.
Some key capabilities and attributes of PaLM:
- Processes input contexts of up to 2,048 tokens
- Trained on 780 billion tokens drawn from webpages, books, Wikipedia, conversations, news articles, and source code
- Trained with Google's Pathways system, which efficiently orchestrates training across thousands of TPU chips
- State-of-the-art few-shot results on hundreds of language understanding and generation benchmarks
- Can perform mathematical reasoning, answer science exam questions, and complete coding challenges
PaLM represents a massive leap forward in generative language modeling. Google is leveraging PaLM to enhance search, conversational systems, and other applications with more robust natural language capabilities.
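Capabilities like the mathematical reasoning above are usually elicited through few-shot prompting: the model is shown a handful of worked examples in its context window and asked to continue the pattern. A minimal sketch of assembling such a prompt (the exemplars here are invented for illustration, not taken from Google's benchmarks):

```python
# Few-shot prompting: worked question/answer exemplars are concatenated
# ahead of the new question, and the model completes the final "A:".
# The exemplars below are invented for illustration.
EXEMPLARS = [
    ("Q: A shop stocks 3 boxes of 12 apples. How many apples in total?",
     "A: 3 * 12 = 36. The answer is 36."),
    ("Q: Tom reads 20 pages a day. How many pages does he read in 5 days?",
     "A: 20 * 5 = 100. The answer is 100."),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt: exemplars first, then the new question."""
    blocks = [f"{q}\n{a}" for q, a in EXEMPLARS]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

print(build_prompt("A train travels 60 km per hour for 3 hours. How far does it go?"))
```

Because the exemplar answers spell out their arithmetic before stating the result, this style of prompt also nudges the model to show its reasoning step by step.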
Also check this article: How to Access Google’s New Generative AI Search Feature
GLaM – Generalist Language Model

In December 2021, Google described a research model called GLaM, short for Generalist Language Model.

GLaM has 1.2 trillion parameters in total, more than twice as many as PaLM. It is a mixture-of-experts model: many smaller expert networks sit in parallel, and only a few are activated for any given input.
Key features of GLaM include:
- Mixture-of-experts architecture with 64 expert networks per sparse layer
- 1.2 trillion parameters in total, of which only about 97 billion are activated per token
- Trained on a 1.6 trillion token mixture of webpages, books, and conversational data
- Different experts end up specializing in different kinds of language and knowledge
- Outperformed GPT-3 on a broad range of zero-shot and one-shot language tasks
- Uses far less energy to train and less compute per prediction than comparable dense models
Google describes GLaM as a milestone toward AI systems that are both highly capable and efficient. The sparse architecture partitions knowledge across experts, so quality can keep scaling without every parameter being used on every input.

GLaM is a research model rather than a launched product, but it demonstrates Google's continued momentum in generative AI. Techniques like sparse expert routing are likely to inform future versions of products like Google Search and Google Assistant.
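The parallel expert routing described above can be sketched as a toy layer: a learned gate scores each expert network and sends the input to the top-scoring one, so only a fraction of the total parameters do work per input. This is a minimal illustration with invented dimensions, not Google's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse mixture-of-experts layer. Each "expert" is a tiny two-layer
# feed-forward network; a learned gate picks one expert per input token.
# Dimensions are invented for illustration.
d_model, d_hidden, n_experts = 8, 16, 4

gate_w = rng.normal(size=(d_model, n_experts))   # gating weights
experts = [
    (rng.normal(size=(d_model, d_hidden)),       # W1 of expert i
     rng.normal(size=(d_hidden, d_model)))       # W2 of expert i
    for _ in range(n_experts)
]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    """Route one token vector to its top-1 expert and return the output."""
    gate_probs = softmax(token @ gate_w)         # one probability per expert
    top = int(np.argmax(gate_probs))             # sparse routing: pick one
    w1, w2 = experts[top]
    hidden = np.maximum(token @ w1, 0.0)         # ReLU feed-forward
    return gate_probs[top] * (hidden @ w2), top  # weight output by gate prob

token = rng.normal(size=d_model)
out, chosen = moe_forward(token)
print(f"token routed to expert {chosen}, output shape {out.shape}")
```

In real sparse models the gate is trained jointly with the experts so load spreads across them, and production systems often route each token to the top two experts rather than just one.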
Also check this article: What Languages is Google’s AI Search Available in?
Supporting Technologies
In addition to flagship language models like LaMDA, PaLM and Parti, Google has developed other key technologies to enable generative AI applications:
- RankBrain – AI system for understanding search queries and matching them to web pages. Helps power Google Search’s language understanding capabilities.
- Knowledge Graph – Database of real-world entities and relationships that aids language models with factual knowledge. Critical for high-quality informational responses.
- Meena – Open-domain chatbot trained end-to-end on public conversation data, focused on sensible, specific and engaging chit-chat.
- T5 – Text-to-Text Transfer Transformer, a framework that casts language tasks such as translation, summarization and question answering as generating output text from input text.
These and other tools supplement the core language models to enhance the quality and capabilities of Google’s overall generative AI stack.
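T5's text-to-text framing is worth making concrete: every task is named by a plain-text prefix on the input, and the answer always comes back as generated text. The sketch below shows only the input formatting (the two prefixes match those used in the T5 paper); actual inference would require a trained checkpoint, for example via a library such as Hugging Face's transformers:

```python
# T5 casts every NLP task as text-to-text: a task-naming prefix is
# prepended to the input, and the answer is generated as plain text.
# These two prefixes match the ones used in the T5 paper.
TASK_PREFIXES = {
    "translate_en_de": "translate English to German: ",
    "summarize": "summarize: ",
}

def t5_input(task: str, text: str) -> str:
    """Format an input string the way T5 expects for the given task."""
    return TASK_PREFIXES[task] + text

print(t5_input("summarize", "Google unveiled PaLM in April 2022 with 540 billion parameters."))
print(t5_input("translate_en_de", "The house is wonderful."))
```

The formatting is the core idea: because every task reduces to mapping one string to another, a single model and a single training objective can cover all of them.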
Recent Progress
Google has been continuously iterating on and scaling up its generative models over the past several years:
- 2017 – Smart Reply in Gmail leverages early seq2seq models to suggest response text
- 2018 – Duplex system for automated phone booking conversations, built on recurrent neural networks
- 2020 – Meena chatbot scored 79% on Google's interactive Sensibleness and Specificity Average (SSA) metric
- 2021 – LaMDA unveiled with 137 billion parameters and impressive interactive ability
- 2021 – GLaM described: a sparse mixture-of-experts research model with 1.2 trillion total parameters
- 2022 – PaLM launched with 540 billion parameters and state-of-the-art NLP performance
- 2022 – New conversational AI search features roll out in Google Search, powered by LaMDA
Google has established itself as one of the research leaders in generative AI, as evidenced by models like LaMDA, PaLM and GLaM. The company continues pushing boundaries in its bid to power more helpful, conversational AI applications.
Also check this article: Is Google’s New AI Search Available Internationally?
Conclusion
Google has established itself as one of the leading innovators in generative artificial intelligence, as demonstrated by models like LaMDA, PaLM, and the sparse mixture-of-experts GLaM. These large neural network architectures trained on massive datasets have achieved state-of-the-art conversational ability and natural language understanding.
Central to Google’s progress are technologies like RankBrain, Knowledge Graph, Meena and T5 which provide critical supporting capabilities. The company has rapidly iterated and expanded its generative models since early tools like Smart Reply.
With each new system disclosed, such as the 1.2 trillion parameter GLaM, Google aims to push the boundaries of model size and sophistication. The overarching goal is to integrate these generative AI systems into products like Search, Assistant and Gmail to transform how people find information and interact with technology.
While challenges remain around limitations of current systems, Google has established a commanding lead in generative AI research. With models like LaMDA already powering new conversational search features, users are starting to experience applications of Google’s expanding generative capabilities firsthand. The future points to a new era of much more helpful, human-like interaction with artificial intelligence across Google’s products and services.
Also check this article: What Is Amazon Bedrock & How Does It Work?
FAQ About Google’s Generative AI
Here are some common questions about the key language models and technologies behind Google’s ambitious generative AI research efforts:
What is LaMDA?
LaMDA stands for Language Model for Dialogue Applications. It is a 137 billion parameter conversational AI agent created by Google focused on natural dialog interactions.
What does PaLM stand for?
PaLM is short for Pathways Language Model. It is Google’s 540 billion parameter language model that achieves state-of-the-art results on NLP benchmarks.
What are some unique capabilities of GLaM?
GLaM uses a mixture-of-experts architecture in which many expert networks sit in parallel and only a few are activated per input. This partitions knowledge across specialists while keeping inference efficient.
How many parameters does GLaM have?
Google states GLaM contains a total of 1.2 trillion parameters, more than twice as many as PaLM, though only about 97 billion are active for any given token.
What is RankBrain and how does Google use it?
RankBrain is an AI system to parse search queries and match them to web pages. It helps power Google Search’s language understanding capabilities.
Why is the Knowledge Graph important for generative AI?
The Knowledge Graph provides factual information on real-world entities that language models leverage to improve accuracy and quality of responses.
When did Google first launch Smart Reply in Gmail?
Smart Reply, which uses generative AI to suggest responses, first launched in Gmail in 2017 based on early seq2seq models.
What was Duplex and how did it utilize AI?
Duplex was a 2018 Google system that conducted automated phone booking conversations using recurrent neural networks trained with TensorFlow Extended.
How good was Google’s Meena chatbot at conversing?
In Google's 2020 evaluations, Meena scored 79% on the interactive Sensibleness and Specificity Average (SSA) metric, which measures how sensible and specific a chatbot's responses are; for comparison, human conversation scored 86% on the same metric.