Gen AI Evaluation Service - Multimodal Metrics

This is the sixth and final post in my Vertex AI Gen AI Evaluation Service blog post series. In the previous posts, we covered computation-based, model-based, tool-use, and agent metrics. These metrics measure different aspects of an LLM response in different ways, but they all have one thing in common: they apply to text-based outputs. Today's LLMs also produce multimodal outputs (images, videos). How do you evaluate multimodal outputs? That’s the topic of this blog post. Read More →

Gen AI Evaluation Service - Agent Metrics

In my previous Gen AI Evaluation Service - Tool-Use Metrics post, we talked about LLMs calling external tools and how you can use tool-use metrics to evaluate how good those tool calls are. In today’s fifth post of my Vertex AI Gen AI Evaluation Service blog post series, we will talk about a related topic: agents and agent metrics. What are agents? There are many definitions, but an agent is essentially a piece of software that acts autonomously to achieve specific goals. Read More →

Gen AI Evaluation Service - Tool-Use Metrics

I’m continuing my Vertex AI Gen AI Evaluation Service blog post series. In today’s fourth post of the series, I will talk about tool-use metrics. What is tool use? Tool use, also known as function calling, provides the LLM with definitions of external tools (for example, a get_current_weather function). When processing a prompt, the model determines if a tool is needed and, if so, outputs structured data specifying the tool to call and its parameters (for example, get_current_weather(location='London')). Read More →
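To make the idea concrete, here is a minimal, purely illustrative sketch in plain Python (the get_current_weather function comes from the excerpt above; the dictionary shapes are hypothetical and not the exact wire format of any particular SDK) of the tool definition a model receives and the structured call it emits, plus the kind of comparison tool-use metrics automate:

```python
# Hypothetical tool definition handed to the LLM (illustrative shape only).
weather_tool = {
    "name": "get_current_weather",
    "description": "Returns the current weather for a given location.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

# Structured tool call the model might emit for "What's the weather in London?".
predicted_call = {"name": "get_current_weather", "arguments": {"location": "London"}}

# Tool-use metrics essentially compare a predicted call against a reference call:
reference_call = {"name": "get_current_weather", "arguments": {"location": "London"}}
name_match = predicted_call["name"] == reference_call["name"]
args_match = predicted_call["arguments"] == reference_call["arguments"]
print(name_match, args_match)  # True True
```

Metrics in this category automate checks like these (is the call well-formed, does the tool name match, do the parameter keys and values match) across an evaluation dataset.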

Gen AI Evaluation Service - Model-Based Metrics

In the Gen AI Evaluation Service - An Overview post, I introduced Vertex AI’s Gen AI evaluation service and talked about the various classes of metrics it supports. In the Gen AI Evaluation Service - Computation-Based Metrics post, we delved into computation-based metrics, what they provide, and discussed their limitations. In today’s third post of the series, we’ll dive into model-based metrics. The idea of model-based metrics is to use a judge model to evaluate the output of a candidate model. Read More →
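As a rough sketch of what a model-based evaluation looks like in code, the snippet below uses the Vertex AI SDK's evaluation module with one of its example pointwise metric prompt templates; treat the project/location values as placeholders and verify the class and metric names (EvalTask, MetricPromptTemplateExamples.Pointwise.FLUENCY) against the current SDK documentation:

```python
import pandas as pd
import vertexai
from vertexai.evaluation import EvalTask, MetricPromptTemplateExamples

# Placeholder project and location.
vertexai.init(project="my-project", location="us-central1")

# A tiny evaluation dataset: prompts and the candidate model's responses to judge.
eval_dataset = pd.DataFrame({
    "prompt": ["Explain what an LLM is in one sentence."],
    "response": ["An LLM is a neural network trained on large amounts of text to predict the next token."],
})

# A judge model scores each response using the FLUENCY metric prompt template.
eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[MetricPromptTemplateExamples.Pointwise.FLUENCY],
)
result = eval_task.evaluate()
print(result.summary_metrics)
```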

Gen AI Evaluation Service - Computation-Based Metrics

In my Gen AI Evaluation Service - An Overview post, I introduced Vertex AI’s Gen AI evaluation service and talked about the various classes of metrics it supports. In today’s post, I want to dive into computation-based metrics, what they provide, and discuss their limitations. Computation-based metrics are calculated using a mathematical formula. They’re deterministic: the same input produces the same score, unlike model-based metrics, where you might get slightly different scores for the same input. Read More →
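To illustrate the determinism point with a toy example (plain Python, not the evaluation service itself), an exact-match metric is just a formula over the response and a reference, so the same inputs always yield the same score:

```python
def exact_match(response: str, reference: str) -> float:
    """Returns 1.0 if the normalized response equals the reference, else 0.0."""
    return 1.0 if response.strip().lower() == reference.strip().lower() else 0.0

# Deterministic: calling this twice with the same inputs always gives the same score.
print(exact_match("Paris", "paris"))          # 1.0
print(exact_match("Paris, France", "paris"))  # 0.0
```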

Gen AI Evaluation Service - An Overview

Generating content with Large Language Models (LLMs) is easy. Determining whether the generated content is good is hard. That’s why evaluating LLM outputs with metrics is crucial. Previously, I talked about DeepEval and Promptfoo as some of the tools you can use for LLM evaluation. I also talked about RAG triad metrics for evaluating Retrieval-Augmented Generation (RAG) pipelines. In the next few posts, I want to talk about a Google Cloud specific evaluation service: the Gen AI evaluation service in Vertex AI. Read More →

Evaluating RAG pipelines with the RAG triad

Retrieval-Augmented Generation (RAG) has emerged as a dominant framework for feeding Large Language Models (LLMs) context beyond their training data, enabling them to respond with more grounded answers and fewer hallucinations. However, designing an effective RAG pipeline can be challenging. You need to answer questions such as: How should you parse and chunk text documents for vector embedding? What chunk size and overlap size should you use? Read More ↗︎

DeepEval adds native support for Gemini as an LLM Judge

In my previous post on DeepEval and Vertex AI, I introduced DeepEval, an open-source evaluation framework for LLMs. I also demonstrated how to use Gemini (on Vertex AI) as an LLM Judge in DeepEval, replacing the default OpenAI judge to evaluate outputs from other LLMs. At that time, the Gemini integration with DeepEval wasn’t ideal, and I had to implement my own. Thanks to the excellent work by Roy Arsan in PR #1493, DeepEval now includes native Gemini integration. Read More →

Much simplified function calling in Gemini 2.X models

Last year, in my Deep dive into function calling in Gemini post, I talked about how to do function calling in Gemini. More specifically, I showed how to call two functions (location_to_lat_long and lat_long_to_weather) to get the weather information for a location from Gemini. It wasn’t difficult, but it involved a lot of steps for two simple function calls. I’m pleased to see that the latest Gemini 2.X models and the unified Google Gen AI SDK (which I talked about in my Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK post) make function calling much simpler. Read More →
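As a quick sketch of how short this can get with the Google Gen AI SDK's automatic function calling (a minimal example assuming the google-genai package is installed and credentials are configured; the project, location, model name, and weather function are placeholders, not the code from the post):

```python
from google import genai
from google.genai import types

def get_current_weather(location: str) -> str:
    """Returns the current weather for a location (stubbed for this example)."""
    return f"It is sunny in {location}."

# Placeholder project and location.
client = genai.Client(vertexai=True, project="my-project", location="us-central1")

# The SDK can call the Python function automatically and feed the result back to the model.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather like in London?",
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)
print(response.text)
```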

RAG with a PDF using LlamaIndex and SimpleVectorStore on Vertex AI

Previously, I showed how to do RAG with a PDF using LangChain and Annoy Vector Store and RAG with a PDF using LangChain and Firestore Vector Store. Both used a PDF as the RAG data source and LangChain as the LLM framework to orchestrate RAG ingestion and retrieval. LlamaIndex is another popular LLM framework. I wondered how to set up the same PDF-based RAG pipeline with LlamaIndex and Vertex AI, but I didn’t find a good sample. Read More →
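For reference, here is a minimal sketch of what such a pipeline might look like, assuming the llama-index-llms-vertex and llama-index-embeddings-vertex integration packages; the class names, model names, project, and file path are assumptions to check against the post and the current LlamaIndex docs, not a definitive implementation:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.vertex import VertexTextEmbedding  # assumed integration package
from llama_index.llms.vertex import Vertex                     # assumed integration package

# Use Gemini on Vertex AI as the LLM and a Vertex AI embedding model (placeholder names).
Settings.llm = Vertex(model="gemini-2.0-flash", project="my-project")
Settings.embed_model = VertexTextEmbedding(model_name="text-embedding-005", project="my-project")

# Load the PDF and build an index; by default this uses the in-memory SimpleVectorStore.
documents = SimpleDirectoryReader(input_files=["my_document.pdf"]).load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the RAG pipeline.
print(index.as_query_engine().query("What is this document about?"))
```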