Retrieval Augmented Generation (RAG) sounds like a dream come true for anyone working with AI language models. The idea is simple: enhance models like ChatGPT with external data so they can provide answers based on information beyond their original training. Need your AI to answer questions about your company's internal documents or recent events not covered in its training data? RAG seems like the perfect solution.
But when we roll up our sleeves and implement RAG in the real world, things get messy. Let's dive into why RAG isn't always the magic fix we hope for and explore the hurdles that can trip us up along the way.
The Allure of RAG
At its heart, RAG is about bridging gaps in an AI's knowledge:
- Compute Embeddings: Break down your documents into chunks and convert them into embeddings—numerical representations that capture the essence of the text.
- Store and Retrieve: Keep these embeddings in a database. When a question comes in, find the chunks whose embeddings are most similar to the question.
- Augment the AI: Feed these relevant chunks to the AI alongside the question, giving it the context it needs to generate an informed answer.
In theory, this means your AI can tap into any knowledge source you provide, even if that information isn't part of its original training.
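The three steps above can be sketched in a few lines. This is a toy illustration, not a production pipeline: the `embed` function is a bag-of-words stand-in for a real embedding model (which you'd call via a library or API), and the "database" is just a list.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a sparse
    # bag-of-words vector keyed by lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Compute embeddings for document chunks.
chunks = [
    "Chocolate cookies are made from the finest imported cocoa.",
    "Oatmeal cookies ship every Friday.",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve the chunk most similar to the question.
question = "What are chocolate cookies made from?"
q_vec = embed(question)
best_chunk, _ = max(store, key=lambda pair: cosine(q_vec, pair[1]))

# 3. Augment the prompt with the retrieved context.
prompt = f"Context: {best_chunk}\n\nQuestion: {question}"
print(best_chunk)
```

Swap in a real embedding model and a vector database and the shape of the code stays the same.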
The Reality Check
Despite its promise, implementing RAG isn't all smooth sailing. Here are some of the bumps you might hit on the road.
1. The Ever-Changing Embeddings
Embeddings are the foundation of RAG—they're how we represent text in a way that the AI can understand and compare. But here's the catch: embedding models keep evolving. Newer models offer better performance, but they produce embeddings that aren't compatible with the old ones. So you're faced with a dilemma:
- Recompute All Embeddings: Every time a new model comes out, you could reprocess your entire document library to generate new embeddings. But if you're dealing with millions or billions of chunks, that's a hefty computational bill.
- Stick with the Old Model: You might decide to keep using the old embeddings to save on costs. But over time, you miss out on improvements and possibly pay more for less efficient models.
- Mix and Match: Use new embeddings for new documents and keep the old ones for existing data. But now your database is fragmented, and searching across different embedding spaces gets complicated.
There's no perfect solution. Some platforms, like SemDB.ai, try to ease the pain by allowing multiple embeddings in the same database, but the underlying challenge remains.
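One practical mitigation is to tag every stored vector with the model that produced it, so vectors from incompatible models are never compared directly. Here's a minimal sketch of that idea; the model names are illustrative, not real products, and a real store would partition an actual vector index rather than a Python list.

```python
from collections import defaultdict

class VersionedStore:
    """Keeps vectors from different embedding models in
    separate partitions, since their spaces aren't comparable."""

    def __init__(self):
        self._by_model = defaultdict(list)  # model name -> [(chunk, vector)]

    def add(self, model, chunk, vector):
        self._by_model[model].append((chunk, vector))

    def search(self, model, query_vector, similarity):
        # Only search the partition built with the same model.
        candidates = self._by_model[model]
        if not candidates:
            return None
        return max(candidates, key=lambda p: similarity(query_vector, p[1]))[0]

store = VersionedStore()
# The same chunk, embedded by two incompatible (hypothetical) models:
store.add("embed-v1", "Chocolate cookies sell for $4 a dozen.", [1.0, 0.0])
store.add("embed-v2", "Chocolate cookies sell for $4 a dozen.", [0.0, 1.0, 0.5])

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(store.search("embed-v2", [0.0, 1.0, 0.0], dot))
```

The fragmentation problem from the "mix and match" option doesn't disappear—you still have to decide which partition(s) to query—but at least you never compare apples to oranges.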
2. The Pronoun Problem
Language is messy. People use pronouns, references, and context that computers struggle with. Let's look at an example:
Original Text: "Chocolate cookies are made from the finest imported cocoa. They sell for $4 a dozen."
When we break this text into chunks for embeddings, we might get:
Chunk 1: "Chocolate cookies are made from the finest imported cocoa."
Chunk 2: "They sell for $4 a dozen."
Now, if someone asks, "How much do chocolate cookies cost?", the system searches for embeddings similar to the question. But Chunk 2 doesn't mention "chocolate cookies" explicitly—it uses "they." The AI might miss this chunk because the embedding doesn't match well with the question.
Solving It
One way to tackle this is by cleaning up the text before creating embeddings:
Chunk 1: "Chocolate cookies are made from the finest imported cocoa."
Chunk 2: "Chocolate cookies sell for $4 a dozen."
By replacing pronouns with the nouns they refer to, we make each chunk self-contained and easier for the AI to match with questions.
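As a sketch of this preprocessing step, here's a deliberately naive version that replaces a sentence-initial "They" or "It" with a known subject. A real pipeline would use a proper coreference resolver from an NLP library (or an LLM pass) rather than this heuristic, which only handles the simplest case.

```python
import re

def resolve_pronouns(sentences, subject):
    # Naive heuristic: swap a leading "They" or "It" for the
    # subject carried over from earlier text. Real coreference
    # resolution is much harder than this.
    return [re.sub(r"^(They|It)\b", subject, s) for s in sentences]

chunks = resolve_pronouns(
    ["Chocolate cookies are made from the finest imported cocoa.",
     "They sell for $4 a dozen."],
    subject="Chocolate cookies",
)
print(chunks[1])
```

After this pass, each chunk stands on its own, and a question about "chocolate cookies" can match the price chunk directly.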
3. Navigating Domain-Specific Knowledge
Things get trickier with specialized or branded products. Imagine you have a product description like this:
"Introducing Darlings—the ultimate cookie experience that brings together the timeless flavors of vanilla and chocolate in perfect harmony... And at just $5 per dozen, indulgence has never been so affordable."
Extracting key facts:
Darlings are cookies.
Darlings combine vanilla and chocolate.
Darlings cost $5 per dozen.
Now, if someone asks, "How much are the chocolate and vanilla cookies?", they might not mention "Darlings" by name. The embeddings might prioritize more general chunks about chocolate or vanilla cookies, missing the specific info about Darlings.
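One mitigation is to index not just the marketing copy but also self-contained fact sentences that spell out the product's category and flavors, so a query that never mentions the brand can still match. In this sketch the fact extraction is hand-written; in practice that step would be done by an LLM or an information-extraction pass.

```python
# Hypothetical structured facts extracted from the product description.
facts = {
    "name": "Darlings",
    "category": "cookies",
    "flavors": ["vanilla", "chocolate"],
    "price": "$5 per dozen",
}

# Generate extra chunks that restate the facts in plain, generic
# terms a brand-unaware query can match against.
extra_chunks = [
    f"{facts['name']} are {' and '.join(facts['flavors'])} {facts['category']}.",
    f"The {' and '.join(facts['flavors'])} {facts['category']} "
    f"called {facts['name']} cost {facts['price']}.",
]
for chunk in extra_chunks:
    print(chunk)
```

Now "How much are the chocolate and vanilla cookies?" has a chunk it can actually land on.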
4. The Limits of Knowledge Graphs
To overcome these issues, some suggest using Knowledge Graphs alongside RAG. Knowledge Graphs store information as simple relationships:
(Darlings, are, cookies)
(Darlings, cost, $5)
(Darlings, contain, chocolate and vanilla)
In theory, this structure makes it easy to retrieve specific facts. But reality isn't so tidy.
The Complexity of Real-World Information
Not all knowledge fits neatly into simple relationships. Consider:
"Bob painted the room red on Tuesday because he was feeling inspired."
Trying to capture all the nuances of this sentence in a simple graph gets complicated quickly. You need more than just triplets—you need context, causation, and temporal information.
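To see how fast the triplet format runs out of room, compare what a single (subject, verb, object) triple keeps from that sentence with what an event record would need. The field names below are one possible design, not a standard schema:

```python
from dataclasses import dataclass
from typing import Optional

# A triple (Bob, painted, room) drops most of what the sentence
# says. Even a minimal event record needs several extra slots:
@dataclass
class Event:
    subject: str
    action: str
    obj: str
    attribute: Optional[str] = None   # "red"
    time: Optional[str] = None        # "Tuesday"
    cause: Optional[str] = None       # "he was feeling inspired"

e = Event("Bob", "painted", "the room",
          attribute="red", time="Tuesday",
          cause="he was feeling inspired")
```

And this still flattens the causal link into a string rather than modeling it—each nuance you want to query over demands more structure.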
Conflicting Information
Knowledge Graphs also struggle with contradictions or exceptions. For example:
(Richard Nixon, is a, Quaker)
(Quakers, are, pacifists)
(Richard Nixon, escalated, the Vietnam War)
Does the graph conclude that Nixon is a pacifist? Real-world logic isn't always straightforward, and AI can stumble over these nuances.
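A toy inference routine makes the failure concrete: chaining "is a" with a class-level "are" statement derives the pacifist conclusion with no mechanism for exceptions. This is a deliberately naive sketch (note the crude plural matching), but real reasoners face the same defeasibility problem.

```python
triples = {
    ("Richard Nixon", "is a", "Quaker"),
    ("Quakers", "are", "pacifists"),
    ("Richard Nixon", "escalated", "the Vietnam War"),
}

def naive_infer(entity, facts):
    # Chain (entity, "is a", X) with (Xs, "are", Y) to derive
    # (entity, "is a", Y). No notion of exceptions or conflict.
    derived = set()
    for s, p, o in facts:
        if s == entity and p == "is a":
            for s2, p2, o2 in facts:
                if s2 == o + "s" and p2 == "are":  # crude plural match
                    derived.add((entity, "is a", o2.rstrip("s")))
    return derived

print(naive_infer("Richard Nixon", triples))
```

The third triple, which should complicate the conclusion, simply never participates in the inference.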
5. The Human vs. Machine Conundrum
Humans are flexible thinkers. We handle ambiguity, context, and exceptions with ease. Computers, on the other hand, need clear, structured data. When we try to force the richness of human language and knowledge into rigid formats, we lose something important.
The Database Dilemma
All these challenges highlight a broader issue: how we store and retrieve data for AI systems. Balancing the need for detailed, accurate information with the limitations of current technology isn't easy. Embedding databases can become unwieldy as they grow. Knowledge Graphs can help organize information but may oversimplify complex concepts. We're still searching for the best way to bridge the gap between human language and machine understanding.
So, What Now?
RAG isn't a lost cause—it just isn't a one-size-fits-all solution. To make it work better, we might need to:
- Develop Smarter Preprocessing: Clean and prepare text in ways that make it easier for AI to understand, like resolving pronouns and simplifying sentences.
- Embrace Hybrid Approaches: Combine embeddings with other methods, like traditional search algorithms or domain-specific rules, to improve accuracy.
- Accept Imperfection: Recognize that AI has limitations and set realistic expectations about what it can and can't do.
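The hybrid idea can be sketched with a few lines: blend a semantic score with an exact-keyword score, so that brand names like "Darlings" aren't lost even when the embedding match is weak. As before, `embed` is a toy bag-of-words stand-in for a real model, and the blend weight is an arbitrary starting point to tune on real queries.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, chunk):
    # Exact-term overlap catches rare tokens like product names
    # that a purely semantic match might underweight.
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def hybrid_search(query, chunks, alpha=0.5):
    # alpha blends semantic and keyword evidence.
    q_vec = embed(query)
    scored = [(alpha * cosine(q_vec, embed(c)) +
               (1 - alpha) * keyword_score(query, c), c)
              for c in chunks]
    return max(scored)[1]

chunks = ["Darlings cost $5 per dozen.",
          "Chocolate cookies are made from imported cocoa."]
print(hybrid_search("how much are Darlings", chunks))
```

Production systems typically do the same blending with BM25 on one side and a learned embedding on the other, often with a reranker on top.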
Final Thoughts
Retrieval Augmented Generation holds a lot of promise, but it's not a magic wand. By understanding its limitations and working to address them, we can build better AI systems that come closer to meeting our needs. It's an ongoing journey, and with each challenge, we learn more about how to bridge the gap between human knowledge and artificial intelligence.