Evaluating RAG pipelines with the RAG triad

Retrieval-Augmented Generation (RAG) has emerged as a dominant framework for feeding Large Language Models (LLMs) context beyond the scope of their training data, enabling them to respond with more grounded answers and fewer hallucinations based on that context. However, designing an effective RAG pipeline can be challenging. You need to answer questions such as: How should you parse and chunk text documents for vector embedding? What chunk size and overlap size should you use? Read More →
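To make the triad concrete, here's a minimal sketch of measuring the three RAG triad metrics with DeepEval on a single test case; the test case values below are placeholders, and the default judge model applies unless you pass one.

```python
from deepeval.metrics import (
    AnswerRelevancyMetric,
    ContextualRelevancyMetric,
    FaithfulnessMetric,
)
from deepeval.test_case import LLMTestCase

# Placeholder values: in practice these come from your RAG pipeline.
test_case = LLMTestCase(
    input="placeholder user question",
    actual_output="placeholder answer generated by the pipeline",
    retrieval_context=["placeholder chunk retrieved from the vector store"],
)

# The RAG triad: answer relevancy, faithfulness, and contextual relevancy.
for metric in [AnswerRelevancyMetric(), FaithfulnessMetric(), ContextualRelevancyMetric()]:
    metric.measure(test_case)
    print(type(metric).__name__, metric.score, metric.reason)
```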

DeepEval adds native support for Gemini as an LLM Judge

In my previous post on DeepEval and Vertex AI, I introduced DeepEval, an open-source evaluation framework for LLMs. I also demonstrated how to use Gemini (on Vertex AI) as an LLM Judge in DeepEval, replacing the default OpenAI judge to evaluate outputs from other LLMs. At that time, the Gemini integration with DeepEval wasn’t ideal and I had to implement my own integration. Thanks to the excellent work by Roy Arsan in PR #1493, DeepEval now includes native Gemini integration. Read More →
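Here's a rough sketch of what using the native integration could look like; note that the GeminiModel class name and its constructor arguments below are my assumptions, so check DeepEval's current docs before relying on them.

```python
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.models import GeminiModel  # assumed import path for the native integration

# Assumed constructor arguments; on Vertex AI you typically need a project and location.
judge = GeminiModel(
    model_name="gemini-2.0-flash",
    project="your-gcp-project",
    location="us-central1",
)

# Use Gemini as the LLM judge instead of the default OpenAI model.
metric = AnswerRelevancyMetric(model=judge)
```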

Much simplified function calling in Gemini 2.X models

Last year, in my Deep dive into function calling in Gemini post, I talked about how to do function calling in Gemini. More specifically, I showed how to call two functions (location_to_lat_long and lat_long_to_weather) to get the weather information for a location from Gemini. It wasn't difficult, but it involved a lot of steps for two simple function calls. I'm pleased to see that the latest Gemini 2.X models and the unified Google Gen AI SDK (which I covered in my Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK post) have made function calling much simpler. Read More →
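As a rough sketch of how much simpler it gets with the Google Gen AI SDK: you can pass plain Python functions as tools and let the SDK drive the function-calling loop. The two functions below are simplified stand-ins for the ones in the original post.

```python
from google import genai
from google.genai import types

def location_to_lat_long(location: str) -> dict:
    """Return the latitude and longitude for a location (stubbed for this sketch)."""
    return {"latitude": 51.5, "longitude": -0.1}

def lat_long_to_weather(latitude: float, longitude: float) -> dict:
    """Return the weather for the given coordinates (stubbed for this sketch)."""
    return {"forecast": "sunny", "temperature_c": 21}

client = genai.Client()  # or genai.Client(vertexai=True, project=..., location=...)

# With Python functions passed as tools, the SDK handles the function-calling loop automatically.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather like in London?",
    config=types.GenerateContentConfig(tools=[location_to_lat_long, lat_long_to_weather]),
)
print(response.text)
```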

RAG with a PDF using LlamaIndex and SimpleVectorStore on Vertex AI

Previously, I showed how to do RAG with a PDF using LangChain and Annoy Vector Store and RAG with a PDF using LangChain and Firestore Vector Store. Both used a PDF as the RAG backend and LangChain as the LLM framework to orchestrate RAG ingestion and retrieval. LlamaIndex is another popular LLM framework. I wondered how to set up the same PDF-based RAG pipeline with LlamaIndex and Vertex AI, but I couldn't find a good sample. Read More →
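For a rough idea of what such a pipeline could look like, here's a minimal sketch with LlamaIndex; the Vertex AI integration packages and their constructor arguments below are assumptions on my part, so verify them against the post and the current LlamaIndex docs.

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.vertex import VertexTextEmbedding  # assumed integration package
from llama_index.llms.vertex import Vertex                     # assumed integration package

# Assumed model names and arguments; both classes may also need project/credentials settings.
Settings.llm = Vertex(model="gemini-2.0-flash")
Settings.embed_model = VertexTextEmbedding(model_name="text-embedding-004")

# Ingestion: parse the PDF, chunk it, and embed it into the default in-memory SimpleVectorStore.
documents = SimpleDirectoryReader(input_files=["user-manual.pdf"]).load_data()  # placeholder PDF
index = VectorStoreIndex.from_documents(documents)

# Retrieval + generation.
query_engine = index.as_query_engine()
print(query_engine.query("placeholder question about the PDF"))
```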

Ensuring AI Code Quality with SonarQube + Gemini Code Assist

In my previous Code Quality in the Age of AI-Assisted Development blog post, I talked about how generative AI is changing the way we code and its potential impact on code quality. I recommended using static code analysis tools to monitor AI-generated code, ensuring its security and quality. In this blog post, I will explore one such static code analysis tool, SonarQube, and see how it improves the quality of AI-generated code. Read More →

Code Quality in the Age of AI-Assisted Development

As developers transition from manual coding to AI-assisted coding, an increasing share of code is now being generated by AI. This shift has significantly boosted productivity and efficiency, but it raises an important question: how does AI-assisted development impact code quality? How can we ensure that AI-generated code maintains high quality, adheres to good style, and follows best practices? This question has been on my mind recently, and it is the topic of this blog post. Read More →

Improve the RAG pipeline with RAG triad metrics

In my previous RAG Evaluation - A Step-by-Step Guide with DeepEval post, I showed how to evaluate a RAG pipeline with the RAG triad metrics using DeepEval and Vertex AI. As a recap, these were the results: the answer relevancy and faithfulness metrics had perfect 1.0 scores, whereas contextual relevancy was low at 0.29 because we retrieved a lot of irrelevant context: The score is 0.29 because while the context mentions relevant information such as "The Cymbal Starlight 2024 has a cargo capacity of 13. Read More →

RAG Evaluation - A Step-by-Step Guide with DeepEval

In my previous Evaluating RAG pipelines post, I introduced two approaches to evaluating RAG pipelines. In this post, I will show you how to implement these two approaches in detail. The implementation will naturally depend on the framework you use. In my case, I'll be using DeepEval, an open-source evaluation framework. Approach 1: Evaluating the retriever and generator separately. As a recap, in this approach, you evaluate the retriever and the generator of the RAG pipeline separately, each with its own metrics. Read More →
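As a quick sketch of Approach 1 with DeepEval: the contextual metrics target the retriever, while answer relevancy and faithfulness target the generator. The test case values below are placeholders.

```python
from deepeval import evaluate
from deepeval.metrics import (
    AnswerRelevancyMetric,
    ContextualPrecisionMetric,
    ContextualRecallMetric,
    ContextualRelevancyMetric,
    FaithfulnessMetric,
)
from deepeval.test_case import LLMTestCase

# Placeholder values: expected_output (a ground-truth answer) is needed by precision/recall.
test_case = LLMTestCase(
    input="placeholder user question",
    actual_output="placeholder answer from the RAG pipeline",
    expected_output="placeholder ground-truth answer",
    retrieval_context=["placeholder retrieved chunk"],
)

# Retriever metrics: did we retrieve the right context?
retriever_metrics = [ContextualPrecisionMetric(), ContextualRecallMetric(), ContextualRelevancyMetric()]
# Generator metrics: is the answer relevant and grounded in that context?
generator_metrics = [AnswerRelevancyMetric(), FaithfulnessMetric()]

evaluate(test_cases=[test_case], metrics=retriever_metrics + generator_metrics)
```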

Evaluating RAG pipelines

Retrieval-Augmented Generation (RAG) emerged as a dominant framework to feed LLMs context beyond the scope of their training data and enable them to respond with more grounded answers and fewer hallucinations based on that context. However, designing an effective RAG pipeline can be challenging. You need to answer certain questions such as: How should you parse and chunk text documents for embedding? What chunk size and overlap size should you use? Read More →
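Chunk size and overlap are usually set at splitting time; as a minimal example, here's a sketch with LangChain's RecursiveCharacterTextSplitter, where the 1000/200 values are just illustrative starting points.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

document_text = "placeholder: the parsed text of your document"

# chunk_size and chunk_overlap are exactly the knobs in question; the values are illustrative.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(document_text)
print(len(chunks))
```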

Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK

If you've been working with Gemini, you've likely encountered the two separate client libraries for Gemini: one for Google AI and another for Vertex AI in Google Cloud. Even though the two libraries are quite similar, there are slight differences that make them non-interchangeable. I usually started my experiments in Google AI, and when it was time to switch to Vertex AI on Google Cloud, I couldn't simply copy and paste my code. Read More →
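The new SDK addresses this with a single client class that can target either backend; here's a minimal sketch, where the project and location values are placeholders.

```python
from google import genai

# Same client class for both backends; only the constructor arguments change.
google_ai_client = genai.Client(api_key="YOUR_GEMINI_API_KEY")  # Gemini API (Google AI)
vertex_ai_client = genai.Client(
    vertexai=True, project="your-gcp-project", location="us-central1"  # Vertex AI
)

# The rest of the code is identical regardless of which client you created.
response = vertex_ai_client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello!",
)
print(response.text)
```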