DeepEval and Vertex AI

Introduction When you’re working with Large Language Models (LLMs), it’s crucial to have an evaluation framework in place. Only by constantly evaluating and testing your LLM outputs can you tell whether the changes you’re making to prompts, or the outputs you’re getting back from the LLM, are actually improvements. In this blog post, we’ll look into one of those evaluation frameworks: DeepEval, an open-source evaluation framework for LLMs. It allows you to “unit test” LLM outputs in a way similar to Pytest. Read More →

Deep dive into function calling in Gemini

Introduction In this blog post, we’ll take a deep dive into function calling in Gemini. More specifically, you’ll see how to handle multiple and parallel function call requests from the generate_content and chat interfaces, and take a look at the new auto function calling feature through a sample weather application. What is function calling? Function calling is useful for augmenting LLMs with more up-to-date data via external API calls. You can define custom functions and provide them to an LLM. Read More →

Control LLM costs with context caching

Introduction Some large language models (LLMs), such as Gemini 1.5 Flash or Gemini 1.5 Pro, have a very large context window. This is very useful if you want to analyze a big chunk of data, such as a whole book or a long video. On the other hand, it can get quite expensive if you keep sending the same large data in your prompts. Context caching can help. Context caching is useful for reducing costs when a substantial context is referenced repeatedly by shorter requests. Read More →

Control LLM output with response type and schema

Introduction Large language models (LLMs) are great at generating content, but the output format you get back can sometimes be hit or miss. For example, you might ask for JSON output in a certain format and get back free-form text, JSON wrapped in a markdown string, or proper JSON with some required fields missing. If your application requires a strict format, this can be a real problem. Read More →

RAG API powered by LlamaIndex on Vertex AI

Introduction Recently, I talked about why grounding LLMs is important and how to ground LLMs with public data using Google Search (Vertex AI’s Grounding with Google Search: how to use it and why) and with private data using Vertex AI Search (Grounding LLMs with your own data using Vertex AI Search). In today’s post, I want to talk about another more flexible and customizable way of grounding your LLMs with private data: the RAG API powered by LlamaIndex on Vertex AI. Read More →

Grounding LLMs with your own data using Vertex AI Search

Introduction In my previous post, Vertex AI’s Grounding with Google Search: how to use it and why, I explained why you need grounding with large language models (LLMs) and how Vertex AI’s grounding with Google Search can help ground LLMs with public, up-to-date data. That’s great, but sometimes you need to ground LLMs with your own private data. How can you do that? There are many ways, but Vertex AI Search is the easiest, and that’s what I want to talk about today with a simple use case. Read More →

Give your LLM a quick lie detector test

Introduction It’s no secret that LLMs sometimes lie and they do so in a very confident kind of way. This might be OK for some applications but it can be a real problem if your application requires high levels of accuracy. I remember when the first LLMs emerged back in early 2023. I tried some of the early models and it felt like they were hallucinating half of the time. More recently, it started feeling like LLMs are getting better at giving more factual answers. Read More →

Vertex AI's Grounding with Google Search - how to use it and why

Introduction Once in a while, you come across a feature that is so easy to use and so useful that you don’t know how you lived without it before. For me, Vertex AI’s Grounding with Google Search is one of those features. In this blog post, I explain why you need grounding with large language models (LLMs) and how Vertex AI’s Grounding with Google Search can help with minimal effort on your part. Read More ↗︎

A tour of Gemini 1.5 Pro samples

Introduction Back in February, Google announced Gemini 1.5 Pro with its impressive 1 million token context window. A larger context size means that Gemini 1.5 Pro can process vast amounts of information in one go: 1 hour of video, 11 hours of audio, 30,000 lines of code, or over 700,000 words. The good news is that there’s good language support. In this blog post, I will point out some samples utilizing Gemini 1.5 Pro. Read More →

Making API calls exactly once when using Workflows

One challenge with any distributed system, including Workflows, is ensuring that requests sent from one service to another are processed exactly once when needed; for example, when placing a customer order in a shipping queue, withdrawing funds from a bank account, or processing a payment. In this blog post, we’ll provide an example of a website invoking Workflows, and Workflows in turn invoking a Cloud Function. We’ll show how to make sure that both the Workflows logic and the Cloud Function logic run only once. Read More ↗︎
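
The core idea can be sketched without any cloud services: exactly-once processing typically reduces to at-least-once delivery plus an idempotent handler. The caller sends a unique event ID, and the handler records processed IDs so retries become no-ops. The function names here are illustrative, and in a real deployment the ID set would live in a durable store such as Firestore rather than in memory.

```python
# Minimal sketch of an idempotent handler: retries with the same
# event ID are detected and ignored, so the side effect happens once.
processed_ids: set[str] = set()
payments_made: list[tuple[str, float]] = []

def process_payment(event_id: str, amount: float) -> str:
    if event_id in processed_ids:
        return "duplicate-ignored"   # safe to retry
    processed_ids.add(event_id)
    payments_made.append((event_id, amount))  # the real side effect
    return "processed"

# A retry with the same event ID charges the customer only once:
print(process_payment("order-123", 25.0))  # processed
print(process_payment("order-123", 25.0))  # duplicate-ignored
```

The blog post applies this pattern at both layers: the website passes an ID to Workflows, and Workflows passes it on to the Cloud Function, so a retry anywhere in the chain stays safe.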