Codelab - Gemini for Developers

Gemini for Developers

The Gemini ecosystem has evolved into a comprehensive suite of models, tools, and APIs. Whether you are “vibe-coding” a web app or deploying enterprise-grade agents, navigating the options can be overwhelming.

I am happy to announce a new Gemini for Developers codelab. This codelab is designed to teach you everything you need to know about the Gemini ecosystem, from the different model flavors to tools powered by Gemini to integration using the Google Gen AI SDK and the new Interactions API.

Read More →

Gemini Interactions API - One interface for models and agents

Interactions API overview

GenAI is rapidly moving from simple “prompt-and-response” patterns to complex, agentic workflows. To support this shift, Google recently introduced the Interactions API, a new unified foundation designed specifically for building with both models and agents.

In this post, I’ll introduce the core concepts of the Interactions API and walk through some of the samples available in my genai-samples repository.

What is the Interactions API?

Traditionally, developers had to use the Gemini API to talk to the models and another framework like Agent Development Kit (ADK) to create and manage agents. Currently in beta, the Interactions API simplifies this by providing a single interface for:
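The unification can be illustrated with a plain-Python sketch. To be clear, the names below are illustrative stand-ins, not the actual Interactions API surface; the point is the single entry point that works whether the target is a model or an agent:

```python
# Illustrative sketch only: these names are NOT the real Interactions API.
# The idea it shows: one entry point for both model calls and agent runs,
# instead of one SDK for models and a separate framework for agents.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Interaction:
    """A single request/response exchange with a model or an agent."""
    target: str   # e.g. "models/gemini-2.5-flash" or "agents/trip-planner"
    input: str
    output: str = ""


def call_model(prompt: str) -> str:
    # Stand-in for a direct Gemini model call.
    return f"model-response({prompt})"


def run_agent(prompt: str) -> str:
    # Stand-in for a multi-step agent run (planning, tools, etc.).
    return f"agent-response({prompt})"


def create_interaction(target: str, input_text: str) -> Interaction:
    """One interface, regardless of whether the target is a model or an agent."""
    handler: Callable[[str], str] = (
        run_agent if target.startswith("agents/") else call_model
    )
    return Interaction(target=target, input=input_text, output=handler(input_text))


model_turn = create_interaction("models/gemini-2.5-flash", "Hello")
agent_turn = create_interaction("agents/trip-planner", "Plan a weekend in Rome")
print(model_turn.output)  # model-response(Hello)
print(agent_turn.output)  # agent-response(Plan a weekend in Rome)
```

The real API (still in beta) naturally handles far more — state, streaming, tool use — but the dispatch-behind-one-interface shape is the core simplification.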

Read More →

RAG just got much easier with File Search Tool in Gemini API

File Search Tool

The Gemini team at Google recently announced the File Search Tool, a fully managed RAG system built directly into the Gemini API as a simple, integrated, and scalable way to ground Gemini. I gave it a try and was impressed by how easy it is to ground Gemini with your own data.

In this blog post, I’ll introduce the File Search Tool and show you a concrete example.
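To see what "fully managed RAG" saves you from, here is a hand-rolled toy version of the pipeline the File Search Tool runs for you (naive keyword overlap stands in for the tool's managed semantic search; none of this is the File Search API itself):

```python
# Toy version of the RAG pipeline that the File Search Tool manages for you:
# index documents, retrieve relevant chunks, and ground the prompt with them.
# Keyword overlap stands in for real embedding-based semantic search.

documents = {
    "pto.md": "Employees accrue 1.5 days of paid time off per month.",
    "wfh.md": "Remote work is allowed up to three days per week.",
}


def retrieve(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def grounded_prompt(query: str, docs: dict[str, str]) -> str:
    """Build a prompt that grounds the model in the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


print(grounded_prompt("How many days of paid time off do employees get?", documents))
```

With the File Search Tool, all of the above (plus chunking, embeddings, and storage) collapses into uploading your files and attaching the tool to a Gemini request.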

Read More →

Quick Guide to ADK Callbacks

I’ve been exploring the Agent Development Kit (ADK) and its powerful callbacks feature. In this blog post, I want to outline what callbacks are and provide a sample agent with all the callbacks implemented, for quick reference and testing.

At its core, an agent framework like ADK gives you a sequence of steps:

receive input → invoke model → invoke tools → return output

In real-world systems, we often need to hook into these steps for logging, guarding, caching, altering prompts or results, or dynamically changing behavior based on session state. That’s exactly where callbacks come in. Think of callbacks as “checkpoints” in the agent’s lifecycle. The ADK framework automatically calls your functions at these key stages, giving you a chance to intervene.
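The checkpoint pattern can be sketched in plain Python (this is a simplified model of the mechanics, not the ADK API itself): a before-callback can short-circuit a step by returning a value, and an after-callback can inspect or rewrite the step's result.

```python
# Simplified model of the callback "checkpoints" pattern (not the ADK API):
# a before-callback returning a value skips the step; an after-callback
# post-processes the step's result.

from typing import Optional

BLOCKED = "I can't help with that."


def before_model(prompt: str) -> Optional[str]:
    """Guardrail checkpoint: block the model call for disallowed input."""
    if "password" in prompt.lower():
        return BLOCKED  # returning a value short-circuits the model call
    return None  # None means: proceed with the normal model call


def after_model(response: str) -> str:
    """Post-processing checkpoint: e.g. redact or annotate the response."""
    return response.strip() + " [checked]"


def invoke_model(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for the real LLM call


def run_step(prompt: str) -> str:
    early = before_model(prompt)
    if early is not None:
        return early
    return after_model(invoke_model(prompt))


print(run_step("hello"))                  # echo: hello [checked]
print(run_step("what is the password?"))  # I can't help with that.
```

ADK applies this same before/after shape around the agent, the model call, and each tool call, which is what makes the callbacks useful for logging, guarding, and caching.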

Read More →

Introducing Google Gen AI .NET SDK

Introducing Google Gen AI .NET SDK

Last year, we announced the Google Gen AI SDK as the new unified library for Gemini on Google AI (via the Gemini Developer API) and Vertex AI (via the Vertex AI API). At the time, it was only a Python SDK. Since then, the team has been busy adding support for Go, Node.js, and Java, but my favorite language, C#, was missing until now.

Read More ↗︎

Search Flights with Gemini Computer Use model

Earlier this month, the Gemini 2.5 Computer Use model was announced. This model is specialized in interacting with graphical user interfaces (GUIs). It’s useful in scenarios where no structured API exists for the model to interact with (via function calling). Instead, you can use the Computer Use model to interact with user interfaces directly, such as filling in and submitting forms.

It’s important to note that the model does not interact with the UI directly. As input, the model receives the user request, a screenshot of the environment, and a history of recent actions. As output, it generates a function call representing a UI action such as clicking or typing (see the full list of supported UI actions). It’s the client-side code’s responsibility to execute the received action and the process continues in a loop:
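The loop described above can be sketched in plain Python. Both the "model" and the "environment" here are stubs (the real model consumes actual screenshots and emits a richer action schema); the point is the division of labor: the model only proposes actions, the client executes them.

```python
# Simulated Computer Use loop: the model *proposes* a UI action based on the
# request, a screenshot, and recent actions; the client-side code executes it
# and loops with a fresh screenshot. Model and environment are stubs here.

def fake_model(request: str, screenshot: str, history: list[str]) -> dict:
    """Stand-in for the Computer Use model: returns the next UI action."""
    if "form" in screenshot and "type_text" not in history:
        return {"action": "type_text", "field": "destination", "text": "Rome"}
    if "type_text" in history and "click" not in history:
        return {"action": "click", "target": "search_button"}
    return {"action": "done"}


def execute(action: dict, screenshot: str) -> str:
    """Client-side responsibility: actually perform the action in the UI."""
    if action["action"] == "type_text":
        return screenshot + f" [{action['field']}={action['text']}]"
    if action["action"] == "click":
        return "results_page"
    return screenshot


request = "Search flights to Rome"
screenshot = "flight_search form"
history: list[str] = []

while True:  # the agent loop: model proposes, client executes
    action = fake_model(request, screenshot, history)
    if action["action"] == "done":
        break
    screenshot = execute(action, screenshot)
    history.append(action["action"])

print(screenshot)  # results_page
```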

Read More →

Secure your LLM apps with Google Cloud Model Armor

Model Armor

It’s crucial to secure inputs and outputs to and from your Large Language Model (LLM). Failure to do so can result in prompt injections, jailbreaking, sensitive information exposure, and more (as detailed in OWASP Top 10 for Large Language Model Applications).

I previously talked about LLM Guard and Vertex AI and showed how to use LLM Guard to secure LLMs. Google Cloud has its own service to secure LLMs: Model Armor. In this post, we’ll explore Model Armor and see how it can help to safeguard your LLM applications.
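Conceptually, Model Armor screens traffic in both directions: prompts on the way in, responses on the way out. Here is a hand-rolled sketch of that pattern (this is not the Model Armor client API; the regex checks are crude stand-ins for its managed detection):

```python
# Conceptual sketch of the screen-both-directions pattern that Model Armor
# provides as a managed service. These regexes are crude stand-ins; the real
# service uses far more robust injection and sensitive-data detection.

import re

INJECTION_PATTERNS = [r"ignore (all )?previous instructions"]
SSN_PATTERN = r"\b\d{3}-\d{2}-\d{4}\b"  # US Social Security number shape


def screen_prompt(prompt: str) -> str:
    """Input screen: reject likely prompt-injection attempts."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("prompt blocked: possible injection")
    return prompt


def screen_response(response: str) -> str:
    """Output screen: redact sensitive data before it reaches the user."""
    return re.sub(SSN_PATTERN, "[REDACTED]", response)


def call_llm(prompt: str) -> str:
    return "Your SSN is 123-45-6789."  # stand-in for a real model call


def guarded_call(prompt: str) -> str:
    return screen_response(call_llm(screen_prompt(prompt)))


print(guarded_call("What is on file for me?"))  # Your SSN is [REDACTED].
```

The advantage of a managed service is that the screens sit outside your application code and are maintained against evolving attack patterns, instead of living in ad-hoc regexes like these.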

Read More →

Gen AI Evaluation Service - Multimodal Metrics

Multimodal metrics

This is the sixth and final post in my Vertex AI Gen AI Evaluation Service blog post series. In the previous posts, we covered computation-based, model-based, tool-use, and agent metrics. These metrics measure different aspects of an LLM response in different ways, but they all have one thing in common: they evaluate text-based outputs.

Nowadays, LLMs also produce multimodal outputs (images, videos). How do you evaluate multimodal outputs? That’s the topic of this blog post.

Read More →

Gen AI Evaluation Service - Agent Metrics

Agent metrics

In my previous Gen AI Evaluation Service - Tool-Use Metrics post, we talked about LLMs calling external tools and how you can use tool-use metrics to evaluate how good those tool calls are. In today’s fifth post of my Vertex AI Gen AI Evaluation Service blog post series, we will talk about a related topic: agents and agent metrics.

What are agents?

There are many definitions of agents, but an agent is essentially a piece of software that acts autonomously to achieve specific goals. Agents use LLMs to perform tasks, utilize external tools, coordinate with other agents, and ultimately produce a response to the user.
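That definition can be made concrete with a minimal sketch: an LLM-like decision step that either calls an external tool or produces the final answer. The "LLM" here is a rule-based stub; a real agent would call Gemini at that point.

```python
# Minimal illustration of the agent definition above: software that loops an
# LLM-like decision step with external tools until the goal is met.
# llm_decide is a rule-based stub standing in for a real LLM call.

def get_weather(city: str) -> str:
    """External tool: look up the weather (canned data here)."""
    return {"Rome": "sunny, 24C"}.get(city, "unknown")


TOOLS = {"get_weather": get_weather}


def llm_decide(goal: str, observations: list[str]) -> dict:
    """Stand-in for the LLM: pick a tool call or produce the final answer."""
    if not observations:
        return {"tool": "get_weather", "arg": "Rome"}
    return {"answer": f"Weather in Rome: {observations[-1]}"}


def run_agent(goal: str) -> str:
    observations: list[str] = []
    while True:  # act autonomously until the goal is met
        decision = llm_decide(goal, observations)
        if "answer" in decision:
            return decision["answer"]
        observations.append(TOOLS[decision["tool"]](decision["arg"]))


print(run_agent("What's the weather in Rome?"))  # Weather in Rome: sunny, 24C
```

Agent metrics then ask questions about exactly this loop: did the agent pick the right tools, in a sensible order, and reach the goal?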

Read More →