<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Atamel.Dev</title><link>https://atamel.dev/</link><description>Recent content on Atamel.Dev</description><generator>Hugo</generator><language>en</language><managingEditor>atamel@gmail.com (Mete Atamel)</managingEditor><webMaster>atamel@gmail.com (Mete Atamel)</webMaster><lastBuildDate>Fri, 13 Mar 2026 10:24:48 +0000</lastBuildDate><atom:link href="https://atamel.dev/index.xml" rel="self" type="application/rss+xml"/><item><title>Codelab - Gemini for Developers</title><link>https://atamel.dev/posts/2026/02-13_codelab_gemini_for_developers/</link><pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2026/02-13_codelab_gemini_for_developers/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2026/codelab_gemini_for_developers.png" alt="Gemini for Developers" /&gt;
 
 &lt;figcaption&gt;Gemini for Developers&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The Gemini ecosystem has evolved into a comprehensive suite of models, tools, and APIs. Whether you are &amp;ldquo;vibe-coding&amp;rdquo; a
web app or deploying enterprise-grade agents, navigating the options can be overwhelming.&lt;/p&gt;
&lt;p&gt;I am happy to announce a new &lt;a href="https://codelabs.developers.google.com/gemini-for-developers"&gt;Gemini for Developers
codelab&lt;/a&gt;. This codelab is designed to teach you everything
you need to know about the Gemini ecosystem, from the different model flavors, to tools powered by Gemini, to integration
using the Google Gen AI SDK and the new Interactions API.&lt;/p&gt;</description></item><item><title>Gemini Interactions API - One interface for models and agents</title><link>https://atamel.dev/posts/2026/02-03_gemini_interactions_api/</link><pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2026/02-03_gemini_interactions_api/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://raw.githubusercontent.com/meteatamel/genai-samples/main/vertexai/interactions-api/hero_gemini_interactions_api.png" alt="Interactions API overview" /&gt;
 
 &lt;figcaption&gt;Interactions API overview&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;GenAI is rapidly moving from simple &amp;ldquo;prompt-and-response&amp;rdquo; patterns to complex, agentic workflows. To support this shift,
Google recently introduced the &lt;a href="https://ai.google.dev/gemini-api/docs/interactions"&gt;Interactions API&lt;/a&gt;, a new unified
foundation designed specifically for building with both models and agents.&lt;/p&gt;
&lt;p&gt;In this post, I’ll introduce the core concepts of the Interactions API and walk through some of the samples available in
my &lt;a href="https://github.com/meteatamel/genai-samples/tree/main/vertexai/interactions-api"&gt;genai-samples repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-is-the-interactions-api"&gt;What is the Interactions API?&lt;/h2&gt;
&lt;p&gt;Traditionally, developers had to use the Gemini API to talk to the models and another framework like Agent Development
Kit (ADK) to create and manage agents. Currently in beta, the Interactions API simplifies this by providing a single interface
for:&lt;/p&gt;</description></item><item><title>Introducing Google Cloud VertexAI Extensions for .NET</title><link>https://atamel.dev/posts/2026/01-30_introducing_vertexai_extensions_dotnet/</link><pubDate>Fri, 30 Jan 2026 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2026/01-30_introducing_vertexai_extensions_dotnet/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2026/introducing_vertexai_extensions_msft.png" alt="Hero image" /&gt;
 
 &lt;figcaption&gt;Hero image&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In October 2024, Microsoft
&lt;a href="https://devblogs.microsoft.com/dotnet/introducing-microsoft-extensions-ai-preview/"&gt;announced&lt;/a&gt; the
&lt;a href="https://www.nuget.org/packages/Microsoft.Extensions.AI.Abstractions/"&gt;Microsoft.Extensions.AI.Abstractions&lt;/a&gt; and
&lt;a href="https://www.nuget.org/packages/Microsoft.Extensions.AI"&gt;Microsoft.Extensions.AI&lt;/a&gt; libraries for .NET. These libraries
provide the .NET ecosystem with essential abstractions for integrating AI services into .NET applications from various
providers such as OpenAI, Azure, and Google.&lt;/p&gt;
&lt;p&gt;Today, we’re happy to announce the
&lt;a href="https://www.nuget.org/packages/Google.Cloud.VertexAI.Extensions"&gt;Google.Cloud.VertexAI.Extensions&lt;/a&gt; library. This is the
Vertex AI implementation of &lt;strong&gt;Microsoft.Extensions.AI&lt;/strong&gt;. It enables .NET developers to integrate Google Gemini models on Vertex
AI via the &lt;strong&gt;Microsoft.Extensions.AI&lt;/strong&gt; abstractions.&lt;/p&gt;</description></item><item><title>Parallel agents in Antigravity</title><link>https://atamel.dev/posts/2026/01-19_parallel_agents_antigravity/</link><pubDate>Mon, 19 Jan 2026 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2026/01-19_parallel_agents_antigravity/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2026/parallel_agents_antigravity.jpg" alt="Hero image" /&gt;
 
 &lt;figcaption&gt;Hero image&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://antigravity.google/"&gt;Google Antigravity&lt;/a&gt; transforms your regular IDE into an agentic development platform. In
my previous blog posts, I showed some of the features that set Antigravity apart from a regular IDE. In today’s blog
post, I’ll talk about what makes Antigravity truly unique: &lt;strong&gt;its ability to spin up and manage multiple agents&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="antigravity-modes"&gt;Antigravity modes&lt;/h2&gt;
&lt;p&gt;Antigravity has two modes: &lt;strong&gt;Editor&lt;/strong&gt; and &lt;strong&gt;Agent Manager&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor&lt;/strong&gt; is your familiar IDE with an agent on the side to help you with tasks. You can read more about it in my
previous blog post &lt;a href="https://atamel.dev/posts/2025/12-01_antigravity_editor_tips/"&gt;Google Antigravity Editor - Tips &amp;amp;
Tricks&lt;/a&gt; and learn how to provide feedback to the agent in
&lt;a href="https://atamel.dev/posts/2025/12-10_antigravity_provide_feedback/"&gt;Provide Feedback to Google Antigravity&lt;/a&gt; and how to
customize it in &lt;a href="https://atamel.dev/posts/2025/11-25_customize_antigravity_rules_workflows/"&gt;Customize Google Antigravity with rules and
workflows&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Provide Feedback to Google Antigravity</title><link>https://atamel.dev/posts/2025/12-10_antigravity_provide_feedback/</link><pubDate>Wed, 10 Dec 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/12-10_antigravity_provide_feedback/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/antigravity_provide_feedback_outline.png" alt="Google Antigravity Provide Feedback" /&gt;
 
 &lt;figcaption&gt;Google Antigravity Provide Feedback&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;At the heart of &lt;a href="https://antigravity.google/"&gt;Google Antigravity&lt;/a&gt; is its ability to effortlessly gather your feedback
at every stage of the experience. In this blog post, I will show you all the different ways you can provide feedback
to Antigravity.&lt;/p&gt;
&lt;p&gt;As the agent works on a task, it creates different artifacts along the way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An implementation plan and a task list (before coding)&lt;/li&gt;
&lt;li&gt;Code diffs (as it generates code)&lt;/li&gt;
&lt;li&gt;A walkthrough to verify the results (after coding)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These artifacts are a way for Antigravity to communicate its plans and progress. More importantly, they&amp;rsquo;re also a way
for you to provide feedback to the agent in Google Docs-style comments. This is very useful for effectively steering the
agent in the direction you want.&lt;/p&gt;</description></item><item><title>Google Antigravity Editor - Tips &amp; Tricks</title><link>https://atamel.dev/posts/2025/12-01_antigravity_editor_tips/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/12-01_antigravity_editor_tips/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/antigravity_tips_and_tricks.png" alt="Google Antigravity Tips and Tricks" /&gt;
 
 &lt;figcaption&gt;Google Antigravity Tips and Tricks&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://antigravity.google/"&gt;Google Antigravity&lt;/a&gt; is an agentic development platform where you have your familiar code
editor along with a powerful agent on the side. In today&amp;rsquo;s post, I want to show you some tips and tricks for the code
editor.&lt;/p&gt;
&lt;h2 id="setup-and-extensions"&gt;Setup and Extensions&lt;/h2&gt;
&lt;p&gt;In a typical setup, you&amp;rsquo;d have the editor, the terminal, and the agent visible:&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/antigravity_ide.png" alt="Antigravity IDE" /&gt;
 
 &lt;figcaption&gt;Antigravity IDE&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;</description></item><item><title>Customize Google Antigravity with rules and workflows</title><link>https://atamel.dev/posts/2025/11-25_customize_antigravity_rules_workflows/</link><pubDate>Tue, 25 Nov 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/11-25_customize_antigravity_rules_workflows/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://antigravity.google/assets/image/blog/introducing-antigravity-1.jpg" alt="Google Antigravity" /&gt;
 
 &lt;figcaption&gt;Google Antigravity&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://antigravity.google/"&gt;Google Antigravity&lt;/a&gt; was
&lt;a href="https://antigravity.google/blog/introducing-google-antigravity"&gt;announced&lt;/a&gt; last week as the next generation agentic
IDE. I&amp;rsquo;m very impressed with it so far. It already helped me upgrade my blog to the latest Hugo (something I&amp;rsquo;d been
putting off for a long time). It even recognized that some of the shortcodes (e.g. Twitter) from the old version changed
in the new version and automatically updated my blog posts with the new version of the shortcodes. Nice!&lt;/p&gt;</description></item><item><title>RAG just got much easier with File Search Tool in Gemini API</title><link>https://atamel.dev/posts/2025/11-14_easy_rag_file_search_tool_gemini/</link><pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/11-14_easy_rag_file_search_tool_gemini/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://storage.googleapis.com/gweb-uniblog-publish-prod/images/FileSearch-Keyword_RD2-V01.width-1200.format-webp.webp" alt="File Search Tool" /&gt;
 
 &lt;figcaption&gt;File Search Tool&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The Gemini team at Google recently
&lt;a href="https://blog.google/technology/developers/file-search-gemini-api/?utm_campaign=CDR_0xe875a906_awareness&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;announced&lt;/a&gt;
the &lt;a href="http://ai.google.dev/gemini-api/docs/file-search"&gt;File Search Tool&lt;/a&gt;, a fully managed RAG system built directly into
the Gemini API as a simple, integrated, and scalable way to ground Gemini. I gave it a try and I’m impressed by how easy
it is to ground Gemini with your own data.&lt;/p&gt;
&lt;p&gt;In this blog post, I’ll introduce the File Search Tool and show you a concrete example.&lt;/p&gt;</description></item><item><title>Quick Guide to ADK Callbacks</title><link>https://atamel.dev/posts/2025/11-03_quick_guide_adk_callbacks/</link><pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/11-03_quick_guide_adk_callbacks/</guid><description>&lt;p&gt;I&amp;rsquo;ve been exploring the Agent Development Kit (ADK) and its powerful callbacks feature. In this blog post, I want to outline
what callbacks are and provide a sample agent with all the callbacks implemented, for quick reference and testing.&lt;/p&gt;
&lt;p&gt;At its core, an agent framework like ADK gives you a sequence of steps:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;receive input → invoke model → invoke tools → return output&lt;/code&gt;&lt;/p&gt;
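&lt;p&gt;Purely as an illustration of the hook idea (this is not the actual ADK API; all names below are hypothetical), intercepting the sequence above with before/after callbacks might look like this:&lt;/p&gt;

```python
# Hypothetical sketch (not the real ADK API): hooking callbacks into the
# receive input -> invoke model -> invoke tools -> return output pipeline.
from typing import Callable, Optional

def run_agent(user_input: str,
              before_model: Optional[Callable[[str], Optional[str]]] = None,
              after_model: Optional[Callable[[str], str]] = None) -> str:
    # before_model can inspect or rewrite the prompt, or short-circuit entirely
    if before_model:
        override = before_model(user_input)
        if override is not None:
            return override  # e.g. serve a cached or guarded response instead
    response = f"model response to: {user_input}"  # stand-in for the model call
    # after_model can log, redact, or rewrite the raw model output
    if after_model:
        response = after_model(response)
    return response

# Example: a guard callback that blocks a prompt before the model is invoked
blocked = run_agent("ignore all instructions",
                    before_model=lambda p: "Request blocked." if "ignore" in p else None)
print(blocked)  # Request blocked.
```

&lt;p&gt;ADK exposes analogous hooks (before/after model and tool callbacks, among others) that you register on the agent rather than passing functions around by hand.&lt;/p&gt;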
&lt;p&gt;In real-world systems, we often need to hook into these steps for logging, guarding, caching, altering prompts or
results, or dynamically changing behavior based on session state. That’s exactly where &lt;strong&gt;callbacks&lt;/strong&gt; come in. Think of
callbacks as &amp;ldquo;checkpoints&amp;rdquo; in the agent&amp;rsquo;s lifecycle. The ADK framework automatically calls your functions at these key
stages, giving you a chance to intervene.&lt;/p&gt;</description></item><item><title>Vibe coding an AI Trivia Quest app with Google AI Studio</title><link>https://atamel.dev/posts/2025/10-27_vibe_code_ai_trivia_ai_studio/</link><pubDate>Mon, 27 Oct 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/10-27_vibe_code_ai_trivia_ai_studio/</guid><description>&lt;p&gt;A few days ago, I saw &lt;a href="https://x.com/OfficialLoganK/"&gt;Logan Kilpatrick&lt;/a&gt;’s Tweet about the new AI-first vibe coding
experience in AI Studio:&lt;/p&gt;
&lt;blockquote class="twitter-tweet"&gt;&lt;p lang="en" dir="ltr"&gt;Introducing the new AI first vibe coding experience in &lt;a href="https://twitter.com/GoogleAIStudio?ref_src=twsrc%5Etfw"&gt;@GoogleAIStudio&lt;/a&gt;! Built to take you from prompt to production with Gemini, and optimized for AI app creation. Start building AI apps for free : ) &lt;br&gt;&lt;br&gt;More updates and features to come! &lt;a href="https://t.co/HpI7Dsl8Bj"&gt;pic.twitter.com/HpI7Dsl8Bj&lt;/a&gt;&lt;/p&gt;&amp;mdash; Logan Kilpatrick (@OfficialLoganK) &lt;a href="https://twitter.com/OfficialLoganK/status/1980674135693971550?ref_src=twsrc%5Etfw"&gt;October 21, 2025&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src="https://platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;


&lt;p&gt;Last time I tried AI Studio for vibe coding, the result was mostly a single-page web application with all the code in a
single file (which wasn’t ideal).&lt;/p&gt;
 &lt;img src="https://storage.googleapis.com/gweb-cloudblog-publish/images/image.max-2500x2500.jpg" alt="Introducing Google Gen AI .NET SDK" /&gt;
 
 &lt;figcaption&gt;Introducing Google Gen AI .NET SDK&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Last year, we
&lt;a href="https://medium.com/google-cloud/gemini-on-vertex-ai-and-google-ai-now-unified-with-the-new-google-gen-ai-sdk-094a7ebca8e6"&gt;announced&lt;/a&gt;
the &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview"&gt;Google Gen AI SDK&lt;/a&gt; as the new unified library
for Gemini on Google AI (via the &lt;a href="https://ai.google.dev/gemini-api/docs"&gt;Gemini Developer API&lt;/a&gt;) and Vertex AI (via the
&lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview"&gt;Vertex AI API&lt;/a&gt;). At the time, it was only a
&lt;strong&gt;Python&lt;/strong&gt; SDK. Since then, the team has been busy adding support for &lt;strong&gt;Go&lt;/strong&gt;, &lt;strong&gt;Node.js&lt;/strong&gt;, and &lt;strong&gt;Java&lt;/strong&gt;, but my favorite
language, &lt;strong&gt;C#&lt;/strong&gt;, was missing until now.&lt;/p&gt;</description></item><item><title>Search Flights with Gemini Computer Use model</title><link>https://atamel.dev/posts/2025/10-20_gemini_computer_use_flights/</link><pubDate>Mon, 20 Oct 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/10-20_gemini_computer_use_flights/</guid><description>&lt;p&gt;Earlier this month, the Gemini 2.5 Computer Use model was
&lt;a href="https://blog.google/technology/google-deepmind/gemini-computer-use-model/"&gt;announced&lt;/a&gt;. This model is specialized in
interacting with graphical user interfaces (UIs). This is useful in scenarios where a structured API does not exist for
the model to interact with (via function calling). Instead, you can use the Computer Use model to interact directly with
user interfaces, performing actions such as filling in and submitting forms.&lt;/p&gt;
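&lt;p&gt;In practice, your client code drives a screenshot-in, action-out loop around the model. Here is a minimal, hedged sketch with stand-in functions; none of these names are the real Gemini API:&lt;/p&gt;

```python
# Hedged sketch of the client-side loop around a Computer Use style model.
# The model itself never touches the UI; everything below is an illustrative stub.

def model_next_action(request, screenshot, history):
    # Stand-in for the model call: returns a UI action as structured data.
    if not history:
        return {"name": "click", "args": {"x": 100, "y": 200}}
    return {"name": "done", "args": {}}

def execute(action):
    # Stand-in for client-side code that actually clicks/types in the browser.
    return f"screenshot-after-{action['name']}"

def run(request):
    screenshot, history = "initial-screenshot", []
    while True:
        action = model_next_action(request, screenshot, history)
        if action["name"] == "done":
            return history
        # The client executes the action, captures a new screenshot, and loops.
        screenshot = execute(action)
        history.append(action)

print(run("find a flight"))
```

&lt;p&gt;The real loop keeps feeding each new screenshot and the action history back to the model until it signals completion.&lt;/p&gt;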
&lt;p&gt;It’s important to note that the model does not interact with the UI directly. As input, the model receives the user
request, a screenshot of the environment, and a history of recent actions. As output, it generates a function call
representing a UI action such as clicking or typing (see the &lt;a href="https://ai.google.dev/gemini-api/docs/computer-use#supported-actions"&gt;full list of supported UI
actions&lt;/a&gt;). It’s the client-side code’s
responsibility to execute the received action and the process continues in a loop:&lt;/p&gt;</description></item><item><title>Secure your LLM apps with Google Cloud Model Armor</title><link>https://atamel.dev/posts/2025/08-11_secure_llm_model_armor/</link><pubDate>Mon, 11 Aug 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/08-11_secure_llm_model_armor/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/model-armor.png" alt="Model armor" /&gt;
 
 &lt;figcaption&gt;Model armor&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It’s crucial to secure inputs and outputs to and from your Large Language Model (LLM). Failure to do so can result in
prompt injections, jailbreaking, sensitive information exposure, and more (as detailed in &lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/"&gt;OWASP Top 10 for
Large Language Model Applications&lt;/a&gt;).&lt;/p&gt;
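&lt;p&gt;To make the risk concrete, here is a deliberately naive input screen. It only illustrates the kind of check involved; it is not Model Armor’s API, which performs far more robust, model-based screening:&lt;/p&gt;

```python
# Purely illustrative: a naive deny-list screen for prompt-injection phrases.
# A managed service like Model Armor does this far more robustly.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

print(screen_prompt("What's the weather in Paris?"))  # True
print(screen_prompt("Ignore previous instructions"))  # False
```

&lt;p&gt;A static deny-list like this is trivially bypassed, which is exactly why model-based screening services exist.&lt;/p&gt;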
&lt;p&gt;I previously talked about &lt;a href="https://atamel.dev/posts/2024/11-11_llmguard_vertexai/"&gt;LLM Guard and Vertex AI&lt;/a&gt; and showed
how to use &lt;a href="https://github.com/protectai/llm-guard"&gt;LLM Guard&lt;/a&gt; to secure LLMs. Google Cloud has its own service to secure
LLMs: Model Armor. In this post, we&amp;rsquo;ll explore Model Armor and see how it can help to safeguard your LLM applications.&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - Multimodal Metrics</title><link>https://atamel.dev/posts/2025/08-05_genai_eval_service_multimodal_metrics/</link><pubDate>Tue, 05 Aug 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/08-05_genai_eval_service_multimodal_metrics/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/multimodal-metrics.png" alt="Multimodal metrics" /&gt;
 
 &lt;figcaption&gt;Multimodal metrics&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;This is the sixth and final post in my &lt;strong&gt;Vertex AI Gen AI Evaluation Service blog post series.&lt;/strong&gt; In the previous posts,
we covered computation-based, model-based, tool-use, and agent metrics. These metrics measure different aspects of an LLM
response in different ways, but they all have one thing in common: they are for &lt;strong&gt;text-based outputs&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Nowadays, LLMs also produce multimodal outputs (images, videos). &lt;strong&gt;How do you evaluate multimodal outputs?&lt;/strong&gt; That’s the topic
of this blog post.&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - Agent Metrics</title><link>https://atamel.dev/posts/2025/08-01_genai_eval_service_agent_metrics/</link><pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/08-01_genai_eval_service_agent_metrics/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/agent-metrics.png" alt="Agent metrics" /&gt;
 
 &lt;figcaption&gt;Agent metrics&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2025/07-28_genai_eval_service_tool_metrics/"&gt;Gen AI Evaluation Service - Tool-Use Metrics
post&lt;/a&gt;, we talked about LLMs calling external tools
and how you can use tool-use metrics to evaluate how good those tool calls are. In today’s fifth post of my &lt;strong&gt;Vertex AI
Gen AI Evaluation Service blog post series&lt;/strong&gt;, we will talk about a related topic: agents and agent metrics.&lt;/p&gt;
&lt;h2 id="what-are-agents"&gt;What are agents?&lt;/h2&gt;
&lt;p&gt;There are many definitions of agents, but an agent is essentially a piece of software that acts autonomously to achieve
specific goals. Agents use LLMs to perform tasks, utilize external tools, coordinate with other agents, and ultimately
produce a response to the user.&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - Tool-Use Metrics</title><link>https://atamel.dev/posts/2025/07-28_genai_eval_service_tool_metrics/</link><pubDate>Mon, 28 Jul 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/07-28_genai_eval_service_tool_metrics/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/tool-use-metrics.png" alt="Tool-use metrics" /&gt;
 
 &lt;figcaption&gt;Tool-use metrics&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I’m continuing my Vertex AI Gen AI Evaluation Service blog post series. In today’s fourth post of the series, I will
talk about tool-use metrics.&lt;/p&gt;
&lt;h2 id="what-is-tool-use"&gt;What is tool use?&lt;/h2&gt;
&lt;p&gt;Tool use, also known as function calling, provides the LLM with definitions of external tools (for example, a
&lt;code&gt;get_current_weather&lt;/code&gt; function). When processing a prompt, the model determines if a tool is needed and, if so, outputs
structured data specifying the tool to call and its parameters (for example, &lt;code&gt;get_current_weather(location='London')&lt;/code&gt;).&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - Model-Based Metrics</title><link>https://atamel.dev/posts/2025/07-08_genai_eval_service_model_metrics/</link><pubDate>Tue, 08 Jul 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/07-08_genai_eval_service_model_metrics/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_s3RjOGV.max-1700x1700.png" alt="Model-based metrics" /&gt;
 
 &lt;figcaption&gt;Model-based metrics&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://atamel.dev/posts/2025/06-30_genai_eval_service_overview/"&gt;Gen AI Evaluation Service - An Overview&lt;/a&gt;
post, I introduced Vertex AI’s &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview?utm_campaign=CDR_0xe875a906_default&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;Gen AI evaluation
service&lt;/a&gt;
and talked about the various classes of metrics it supports. In the &lt;a href="https://atamel.dev/posts/2025/07-02_genai_eval_service_comp_metrics/"&gt;Gen AI Evaluation Service - Computation-Based
Metrics&lt;/a&gt; post, we delved into &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#computation-based-metrics?utm_campaign=CDR_0xe875a906_default&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;computation-based
metrics&lt;/a&gt;,
what they provide, and discussed their limitations. In today’s third post of the series, we’ll dive into &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#model-based-metrics?utm_campaign=CDR_0xe875a906_default&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;model-based
metrics&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The idea of &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#model-based-metrics?utm_campaign=CDR_0xe875a906_default&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;model-based
metrics&lt;/a&gt;
is to use a judge model to evaluate the output of a candidate model. Using an LLM as a judge allows richer and more
flexible evaluations than computation-based and statistical metrics can provide.&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - Computation-Based Metrics</title><link>https://atamel.dev/posts/2025/07-02_genai_eval_service_comp_metrics/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/07-02_genai_eval_service_comp_metrics/</guid><description>&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2025/06-30_genai_eval_service_overview/"&gt;Gen AI Evaluation Service - An Overview&lt;/a&gt; post,
I introduced Vertex AI’s &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview"&gt;Gen AI evaluation
service&lt;/a&gt; and talked about the various
classes of metrics it supports. In today’s post, I want to dive into &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#computation-based-metrics"&gt;computation-based
metrics&lt;/a&gt;, what
they provide, and discuss their limitations.&lt;/p&gt;
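&lt;p&gt;To make this concrete before diving in, here is a hedged sketch of one such metric: a token-overlap F1 score computed with plain arithmetic. It is illustrative only, not the service’s implementation:&lt;/p&gt;

```python
# Illustrative computation-based metric: token-level F1 between a model
# response and a reference, computed from a fixed formula.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = prediction.lower().split(), reference.lower().split()
    cp, cr = Counter(pred), Counter(ref)
    # Count tokens shared between prediction and reference (with multiplicity).
    overlap = sum(min(cp[t], cr[t]) for t in cp if t in cr)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the cat sat", "the cat sat on the mat"), 3))  # 0.667
```

&lt;p&gt;Because the score comes from a fixed formula, re-running it on the same strings always yields the same number, which is the defining property of this class of metrics.&lt;/p&gt;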
&lt;p&gt;&lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#computation-based-metrics"&gt;Computation-based
metrics&lt;/a&gt; are
metrics that can be calculated using a mathematical formula. They’re deterministic – the same input produces the same
score, unlike model-based metrics where you might get slightly different scores for the same input.&lt;/p&gt;</description></item><item><title>Gen AI Evaluation Service - An Overview</title><link>https://atamel.dev/posts/2025/06-30_genai_eval_service_overview/</link><pubDate>Mon, 30 Jun 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/06-30_genai_eval_service_overview/</guid><description>&lt;p&gt;Generating content with Large Language Models (LLMs) is easy. Determining whether the generated content is good is hard.
That’s why evaluating LLM outputs with metrics is crucial. Previously, I talked about
&lt;a href="https://atamel.dev/posts/2024/08-12_deepeval_vertexai/"&gt;DeepEval&lt;/a&gt; and
&lt;a href="https://atamel.dev/posts/2024/11-04_promptfoo_vertexai/"&gt;Promptfoo&lt;/a&gt; as some of the tools you can use for LLM
evaluation. I also talked about &lt;a href="https://javapro.io/2025/05/14/evaluating-rag-pipelines-with-the-rag-triad/"&gt;RAG triad&lt;/a&gt;
metrics specifically for Retrieval Augmented Generation (RAG) evaluation for LLMs.&lt;/p&gt;
&lt;p&gt;In the next few posts, I want to talk about a Google Cloud-specific evaluation service: the &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview?utm_campaign=CDR_0xe875a906_default&amp;amp;utm_medium=external&amp;amp;utm_source=blog"&gt;Gen AI evaluation
service&lt;/a&gt;
in Vertex AI. It lets you evaluate any generative model or application against a set of built-in criteria or your own custom
criteria.&lt;/p&gt;</description></item><item><title>Evaluating RAG pipelines with the RAG triad</title><link>https://atamel.dev/posts/2025/05-14_evaluate_rag_with_rag_triad/</link><pubDate>Wed, 14 May 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/05-14_evaluate_rag_with_rag_triad/</guid><description>&lt;p&gt;Retrieval-Augmented Generation (RAG) emerged as a dominant framework for feeding Large Language Models (LLMs) the
context beyond their training data, enabling them to respond with more grounded answers and fewer
hallucinations.&lt;/p&gt;
&lt;p&gt;However, designing an effective RAG pipeline can be challenging. You need to answer questions such as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;How should you parse and chunk text documents for vector embedding? What chunk size and overlap size should you use?&lt;/li&gt;
&lt;li&gt;What vector embedding model should you use?&lt;/li&gt;
&lt;li&gt;What retrieval method should you use to fetch the relevant context? How many documents should you retrieve by default?
Does the retriever actually manage to retrieve the relevant documents?&lt;/li&gt;
&lt;li&gt;Does the generator actually generate content that is in line with the retrieved context? What parameters (model,
prompt template, temperature) work better?&lt;/li&gt;
&lt;/ol&gt;
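&lt;p&gt;To make question 1 concrete, here is a minimal, hypothetical sliding-window chunker showing how chunk size and overlap interact (real pipelines typically split on token or sentence boundaries rather than raw characters):&lt;/p&gt;

```python
# Illustrative character-level chunker: each chunk starts (chunk_size - overlap)
# characters after the previous one, so consecutive chunks share `overlap` chars.

def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5):
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("Retrieval-Augmented Generation feeds context to the model.")
print(len(chunks))  # 4
```

&lt;p&gt;A larger overlap reduces the chance of splitting a relevant passage across chunks, at the cost of more chunks to embed and store, which is exactly the trade-off the question asks you to measure.&lt;/p&gt;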
&lt;p&gt;The only way to objectively answer these questions is to measure how well the RAG pipeline works, but what exactly do
you measure, and how do you measure it? This is the topic I’ll cover here.&lt;/p&gt;</description></item><item><title>DeepEval adds native support for Gemini as an LLM Judge</title><link>https://atamel.dev/posts/2025/04-29_deepeval_native_gemini_judge/</link><pubDate>Tue, 29 Apr 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/04-29_deepeval_native_gemini_judge/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/raw/main/samples/evaluation/deepeval/images/deepeval_gemini.png" alt="DeepEval and Gemini" /&gt;
 
 &lt;figcaption&gt;DeepEval and Gemini&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2024/08-12_deepeval_vertexai/"&gt;previous post on DeepEval and Vertex AI&lt;/a&gt;, I introduced
&lt;a href="https://www.deepeval.com/"&gt;DeepEval&lt;/a&gt;, an open-source evaluation framework for LLMs. I also demonstrated how to use
Gemini (on Vertex AI) as an LLM Judge in DeepEval, replacing the default OpenAI judge to evaluate outputs from other
LLMs. At that time, the Gemini integration with DeepEval wasn’t ideal, and I had to implement my own.&lt;/p&gt;
&lt;p&gt;Thanks to the excellent work by &lt;a href="https://www.linkedin.com/in/arsan/"&gt;Roy Arsan&lt;/a&gt; in &lt;a href="https://github.com/confident-ai/deepeval/pull/1493"&gt;PR
#1493&lt;/a&gt;, DeepEval now includes &lt;strong&gt;native Gemini integration&lt;/strong&gt;. Since
it’s built on the new unified &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview"&gt;Google GenAI SDK&lt;/a&gt;,
DeepEval supports Gemini models running both on Vertex AI and Google AI. Nice!&lt;/p&gt;</description></item><item><title>Much simplified function calling in Gemini 2.X models</title><link>https://atamel.dev/posts/2025/04-08_simplified_function_calling_gemini/</link><pubDate>Tue, 08 Apr 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/04-08_simplified_function_calling_gemini/</guid><description>&lt;p&gt;Last year, in my &lt;a href="https://atamel.dev/posts/2024/08-06_deepdive_function_calling_gemini/"&gt;Deep dive into function calling in Gemini&lt;/a&gt;
post, I talked about how to do function calling in Gemini. More specifically, I showed how to call two functions
(&lt;code&gt;location_to_lat_long&lt;/code&gt; and &lt;code&gt;lat_long_to_weather&lt;/code&gt;) to get the weather information for a location from Gemini.
It wasn&amp;rsquo;t difficult, but it involved a lot of steps for two simple function calls.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m pleased to see that the latest Gemini 2.X models and the unified Google Gen AI SDK (which I covered in my
&lt;a href="https://atamel.dev/posts/2024/12-17_vertexai_googleai_united_with_new_genai_sdk/"&gt;Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK&lt;/a&gt;)
made function calling much simpler.&lt;/p&gt;</description></item><item><title>RAG with a PDF using LlamaIndex and SimpleVectorStore on Vertex AI</title><link>https://atamel.dev/posts/2025/03-24_rag_llamaindex_vertexai/</link><pubDate>Mon, 24 Mar 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/03-24_rag_llamaindex_vertexai/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/llamaindex_vertexai.png" alt="LlamaIndex and Vertex AI" /&gt;
 
 &lt;figcaption&gt;LlamaIndex and Vertex AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Previously, I showed how to do &lt;a href="https://github.com/meteatamel/genai-beyond-basics/tree/main/samples/grounding/rag-pdf-annoy"&gt;RAG with a PDF using LangChain and Annoy Vector
Store&lt;/a&gt; and &lt;a href="https://github.com/meteatamel/genai-beyond-basics/tree/main/samples/grounding/rag-pdf-firestore"&gt;RAG with a PDF
using LangChain and Firestore Vector
Store&lt;/a&gt;. Both used a PDF
as the RAG backend and used LangChain as the LLM framework to orchestrate RAG ingestion and retrieval.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.llamaindex.ai/"&gt;LlamaIndex&lt;/a&gt; is another popular LLM framework. I wondered how to set up the same PDF based
RAG pipeline with LlamaIndex and Vertex AI but I didn’t find a good sample. I put together a sample and in this short
post, I show how to set up the same PDF based RAG pipeline with LlamaIndex.&lt;/p&gt;</description></item><item><title>Ensuring AI Code Quality with SonarQube + Gemini Code Assist</title><link>https://atamel.dev/posts/2025/03-04_ensure_code_quality_sonarqube_gemini/</link><pubDate>Tue, 04 Mar 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/03-04_ensure_code_quality_sonarqube_gemini/</guid><description>&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2025/01-28_code_quality_ai_development/"&gt;Code Quality in the Age of AI-Assisted
Development&lt;/a&gt; blog post, I talked about how generative
AI is changing the way we code and its potential impact on code quality. I recommended using &lt;strong&gt;static code analysis
tools&lt;/strong&gt; to monitor AI-generated code, ensuring its security and quality.&lt;/p&gt;
&lt;p&gt;In this blog post, I will explore one such static code analysis tool,
&lt;a href="https://www.sonarsource.com/products/sonarqube/"&gt;SonarQube&lt;/a&gt;, and see how it improves the quality of AI-generated code.&lt;/p&gt;</description></item><item><title>Code Quality in the Age of AI-Assisted Development</title><link>https://atamel.dev/posts/2025/01-28_code_quality_ai_development/</link><pubDate>Tue, 28 Jan 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/01-28_code_quality_ai_development/</guid><description>&lt;p&gt;As developers transition from manual coding to AI-assisted coding, an increasing share of code is now being generated by
AI. This shift has significantly boosted productivity and efficiency, but it raises an important question: &lt;strong&gt;how does
AI-assisted development impact code quality?&lt;/strong&gt; How can we ensure that AI-generated code maintains high quality, adheres
to good style, and follows best practices? This question has been on my mind recently, and it is the topic of this blog
post.&lt;/p&gt;</description></item><item><title>Improve the RAG pipeline with RAG triad metrics</title><link>https://atamel.dev/posts/2025/01-21_improve-rag-with-rag-triad-metrics/</link><pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/01-21_improve-rag-with-rag-triad-metrics/</guid><description>&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2025/01-14_rag_evaluation_deepeval/"&gt;RAG Evaluation - A Step-by-Step Guide with
DeepEval&lt;/a&gt; post, I showed how to evaluate a RAG pipeline
with the RAG triad metrics using &lt;a href="https://docs.confident-ai.com/"&gt;DeepEval&lt;/a&gt; and Vertex AI. As a recap, these were the
results:&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2025/rag_deepeval_results3.png" alt="RAG triad with DeepEval" /&gt;
 
 &lt;figcaption&gt;RAG triad with DeepEval&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Answer relevancy&lt;/strong&gt; and &lt;strong&gt;faithfulness&lt;/strong&gt; metrics had perfect 1.0 scores whereas &lt;strong&gt;contextual relevancy&lt;/strong&gt; was low at
0.29 because we retrieved a lot of irrelevant context:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-gdscript3" data-lang="gdscript3"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="mf"&gt;0.29&lt;/span&gt; &lt;span class="n"&gt;because&lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="n"&gt;mentions&lt;/span&gt; &lt;span class="n"&gt;relevant&lt;/span&gt; &lt;span class="n"&gt;information&lt;/span&gt; &lt;span class="n"&gt;such&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;The Cymbal Starlight 2024 has a cargo&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;capacity&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="mf"&gt;13.5&lt;/span&gt; &lt;span class="n"&gt;cubic&lt;/span&gt; &lt;span class="n"&gt;feet&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;, much of the retrieved context is irrelevant. For example, several statements discuss&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;towing&lt;/span&gt; &lt;span class="n"&gt;capacity&lt;/span&gt; &lt;span class="n"&gt;like&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;Your Cymbal Starlight 2024 is not equipped to tow a trailer&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;how&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;access&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;load&lt;/span&gt; &lt;span class="n"&gt;cargo&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;like&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;To access the cargo area, open the trunk lid using the trunk release lever located in the driver&amp;#39;s footwell&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;instead&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;focusing&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;requested&lt;/span&gt; &lt;span class="n"&gt;cargo&lt;/span&gt; &lt;span class="n"&gt;capacity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Can we improve this? Let&amp;rsquo;s take a look.&lt;/p&gt;</description></item><item><title>RAG Evaluation - A Step-by-Step Guide with DeepEval</title><link>https://atamel.dev/posts/2025/01-14_rag_evaluation_deepeval/</link><pubDate>Tue, 14 Jan 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/01-14_rag_evaluation_deepeval/</guid><description>&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2025/01-09_evaluating_rag_pipelines/"&gt;Evaluating RAG pipelines&lt;/a&gt; post, I introduced two approaches to evaluating RAG pipelines. In this post, I will show you how to implement these two approaches in detail. The implementation will naturally depend on the framework you use. In my case, I’ll be using &lt;a href="https://docs.confident-ai.com/"&gt;DeepEval&lt;/a&gt;, an open-source evaluation framework.&lt;/p&gt;
&lt;h2 id="approach-1-evaluating-retrieval-and-generator-separately"&gt;Approach 1: Evaluating Retrieval and Generator separately&lt;/h2&gt;
&lt;p&gt;As a recap, in this approach, you evaluate the retriever and the generator of the RAG pipeline separately, each with its own metrics. This approach allows you to pinpoint issues at the retriever and the generator level:&lt;/p&gt;</description></item><item><title>Evaluating RAG pipelines</title><link>https://atamel.dev/posts/2025/01-09_evaluating_rag_pipelines/</link><pubDate>Thu, 09 Jan 2025 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2025/01-09_evaluating_rag_pipelines/</guid><description>&lt;p&gt;Retrieval-Augmented Generation (RAG) emerged as a dominant framework to feed LLMs context beyond the scope of their
training data and enable them to respond with more grounded answers and fewer hallucinations based on that context.&lt;/p&gt;
&lt;p&gt;However, designing an effective RAG pipeline can be challenging. You need to answer certain questions such as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;How should you parse and chunk text documents for embedding? What chunk and overlap sizes should you use?&lt;/li&gt;
&lt;li&gt;What embedding model is best for your use case?&lt;/li&gt;
&lt;li&gt;What retrieval method works most effectively? How many documents should you retrieve by default? Does the retriever
actually manage to retrieve the relevant documents?&lt;/li&gt;
&lt;li&gt;Does the generator actually generate content in line with the relevant context? What parameters (e.g. model, prompt
template, temperature) work better?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The only way to objectively answer these questions is to measure how well the RAG pipeline works, but what exactly do you
measure? This is the topic of this blog post.&lt;/p&gt;</description></item><item><title>Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK</title><link>https://atamel.dev/posts/2024/12-17_vertexai_googleai_united_with_new_genai_sdk/</link><pubDate>Tue, 17 Dec 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/12-17_vertexai_googleai_united_with_new_genai_sdk/</guid><description>&lt;p&gt;If you&amp;rsquo;ve been working with Gemini, you&amp;rsquo;ve likely encountered the two separate client libraries for Gemini:
one for Google AI and one for Vertex AI in Google Cloud. Even though the two libraries are quite similar, there are
slight differences that make them non-interchangeable.&lt;/p&gt;
&lt;p&gt;I usually start my experiments in Google AI, and when it was time to switch to Vertex AI on Google Cloud, I couldn&amp;rsquo;t
simply copy and paste my code. I had to update my Google AI code to use the Vertex AI library instead. It wasn&amp;rsquo;t difficult
but it was quite annoying.&lt;/p&gt;</description></item><item><title>Control LLM output with LangChain's structured and Pydantic output parsers</title><link>https://atamel.dev/posts/2024/12-09_control_llm_output_langchain_structured_pydantic/</link><pubDate>Mon, 09 Dec 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/12-09_control_llm_output_langchain_structured_pydantic/</guid><description>&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2024/07-15_control_llm_output/"&gt;Control LLM output with response type and schema&lt;/a&gt;
post, I talked about how you can define a JSON response schema and Vertex AI makes sure the output of the
Large Language Model (LLM) conforms to that schema.&lt;/p&gt;
&lt;p&gt;In this post, I show how you can implement a similar response schema using LangChain&amp;rsquo;s structured output parser
with any model. You can further get the output parsed and populated into Python classes automatically with the
Pydantic output parser. This helps you to really narrow down and structure LLM outputs.&lt;/p&gt;</description></item><item><title>Tracing with Langtrace and Gemini</title><link>https://atamel.dev/posts/2024/11-27_tracing_langtrace_gemini/</link><pubDate>Wed, 27 Nov 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/11-27_tracing_langtrace_gemini/</guid><description>&lt;p&gt;Large Language Models (LLMs) feel like a totally new technology with totally new problems. It&amp;rsquo;s true to some extent but
at the same time, they also come with the same old problems we had to tackle in traditional software.&lt;/p&gt;
&lt;p&gt;For example, how do you figure out which LLM calls are taking too long or have failed? At the bare minimum, you need logging
but ideally, you use a full observability platform like &lt;a href="https://opentelemetry.io/"&gt;OpenTelemetry&lt;/a&gt; with logging,
tracing, metrics and more. You need the good old software engineering practices, such as observability, applied to new
technologies like LLMs.&lt;/p&gt;</description></item><item><title>Batch prediction in Gemini</title><link>https://atamel.dev/posts/2024/11-18_batch_prediction_gemini/</link><pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/11-18_batch_prediction_gemini/</guid><description>&lt;p&gt;LLMs are great at generating content on demand, but if left unchecked, you can be left with a large bill at the end of
the day. In my &lt;a href="https://atamel.dev/posts/2024/07-19_control_llm_costs_context_caching/"&gt;Control LLM costs with context
caching&lt;/a&gt; post, I talked about how to limit costs
by using context caching. Batch generation is another technique you can use to save time and money.&lt;/p&gt;
&lt;h2 id="whats-batch-generation"&gt;What&amp;rsquo;s batch generation?&lt;/h2&gt;
&lt;p&gt;Batch generation in Gemini allows you to send multiple generative AI requests in batches rather than one by one and get
responses asynchronously either in a Cloud Storage bucket or a BigQuery table. This not only simplifies processing of
large datasets, but it also saves time and money, as batch requests are processed in parallel and discounted 50% from
standard requests.&lt;/p&gt;</description></item><item><title>LLM Guard and Vertex AI</title><link>https://atamel.dev/posts/2024/11-11_llmguard_vertexai/</link><pubDate>Mon, 11 Nov 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/11-11_llmguard_vertexai/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/blob/main/samples/guardrails/llmguard/images/llm_guard.png?raw=true" alt="LLM Guard and Vertex AI" /&gt;
 
 &lt;figcaption&gt;LLM Guard and Vertex AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been focusing on evaluation frameworks lately because I believe the hardest problem when using LLMs is to
make sure they behave properly. Are you getting the right outputs, grounded in your data? Are outputs free of
harmful content and PII? When you make a change to your RAG pipeline or to your prompts, are outputs getting better or worse?
How do you know? You don&amp;rsquo;t know unless you measure. What do you measure, and how? These are the sorts of questions you need
to answer and that&amp;rsquo;s when evaluation frameworks come into the picture.&lt;/p&gt;</description></item><item><title>Promptfoo and Vertex AI</title><link>https://atamel.dev/posts/2024/11-04_promptfoo_vertexai/</link><pubDate>Mon, 04 Nov 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/11-04_promptfoo_vertexai/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/blob/main/samples/evaluation/promptfoo/images/promptfoo_vertexai.png?raw=true" alt="Promptfoo and Vertex AI" /&gt;
 
 &lt;figcaption&gt;Promptfoo and Vertex AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2024/08-12_deepeval_vertexai/"&gt;DeepEval and Vertex AI&lt;/a&gt; blog post, I talked about how crucial it is to have an evaluation framework
in place when working with Large Language Models (LLMs) and introduced &lt;a href="https://docs.confident-ai.com/"&gt;DeepEval&lt;/a&gt; as
one such evaluation framework.&lt;/p&gt;
&lt;p&gt;Recently, I came across another LLM evaluation and security framework called &lt;a href="https://www.promptfoo.dev/"&gt;Promptfoo&lt;/a&gt;.
In this post, I will introduce &lt;a href="https://www.promptfoo.dev/"&gt;Promptfoo&lt;/a&gt;, show what it provides for evaluations and
security testing, and how it can be used with Vertex AI.&lt;/p&gt;</description></item><item><title>Firestore for Image Embeddings</title><link>https://atamel.dev/posts/2024/10-29_firestore_for_image_embeddings/</link><pubDate>Tue, 29 Oct 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/10-29_firestore_for_image_embeddings/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/firestore_langchain.png" alt="Firestore and LangChain" /&gt;
 
 &lt;figcaption&gt;Firestore and LangChain&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my previous &lt;a href="https://medium.com/firebase-developers/firestore-for-text-embedding-and-similarity-search-d74acbc8d6f5"&gt;Firestore for Text Embedding and Similarity Search&lt;/a&gt; post, I talked about how Firestore and LangChain can help you to store &lt;strong&gt;text embeddings&lt;/strong&gt; and do similarity searches against them. With &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-multimodal-embeddings"&gt;multimodal&lt;/a&gt; embedding models, you can generate embeddings not only for text but for images and video as well. In this post, I will show you how to store &lt;strong&gt;image embeddings&lt;/strong&gt; in Firestore and later use them for similarity search.&lt;/p&gt;</description></item><item><title>Firestore for Text Embedding and Similarity Search</title><link>https://atamel.dev/posts/2024/10-09_firestore_text_embedding_search/</link><pubDate>Wed, 09 Oct 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/10-09_firestore_text_embedding_search/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/firestore_langchain.png" alt="Firestore and LangChain" /&gt;
 
 &lt;figcaption&gt;Firestore and LangChain&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In my previous &lt;a href="https://medium.com/firebase-developers/persisting-llm-chat-history-to-firestore-4e3716dd67fe"&gt;Persisting LLM chat history to
Firestore&lt;/a&gt;
post, I showed how to persist chat messages in Firestore for more meaningful and
context-aware conversations. Another common requirement in LLM applications is
to ground responses in data for more relevant answers. For that, you need
&lt;strong&gt;embeddings&lt;/strong&gt;. In this post, I want to talk specifically about &lt;strong&gt;text
embeddings&lt;/strong&gt; and how Firestore and LangChain can help you to store text
embeddings and do similarity searches against them.&lt;/p&gt;</description></item><item><title>Persisting LLM chat history to Firestore</title><link>https://atamel.dev/posts/2024/10-01_persist_llm_chat_history_firestore/</link><pubDate>Tue, 01 Oct 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/10-01_persist_llm_chat_history_firestore/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/firestore_langchain.png" alt="Firestore and LangChain" /&gt;
 
 &lt;figcaption&gt;Firestore and LangChain&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://firebase.google.com/docs/firestore"&gt;Firestore&lt;/a&gt; has long been my go-to
NoSQL backend for my serverless apps. Recently, it’s becoming my go-to backend
for my LLM-powered apps too. In this series of posts, I want to show you how
Firestore can help with your LLM apps.&lt;/p&gt;
&lt;p&gt;In the first post of the series, I want to talk about LLM-powered chat
applications. I know, not all LLM apps have to be chat-based, but a lot of
them are because LLMs are simply very good at chat-based communication.&lt;/p&gt;</description></item><item><title>Semantic Kernel and Gemini</title><link>https://atamel.dev/posts/2024/08-19_semantic_kernel_gemini/</link><pubDate>Mon, 19 Aug 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/08-19_semantic_kernel_gemini/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/blob/main/samples/frameworks/semantic-kernel/chat/images/semantic_kernel_gemini.png?raw=true" alt="Semantic Kernel and VertexAI" /&gt;
 
 &lt;figcaption&gt;Semantic Kernel and VertexAI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When you&amp;rsquo;re building a Large Language Model (LLM) application, you typically
start with the SDK of the LLM you&amp;rsquo;re trying to talk to. However, at some point,
it might make sense to start using a higher level framework. This is especially
true if you rely on multiple LLMs from different vendors. Instead of learning
and using SDKs from multiple vendors, you can learn a higher level framework and
use that to orchestrate your calls to multiple LLMs. These frameworks also
have useful abstractions beyond simple LLM calls that accelerate LLM application
development.&lt;/p&gt;</description></item><item><title>DeepEval and Vertex AI</title><link>https://atamel.dev/posts/2024/08-12_deepeval_vertexai/</link><pubDate>Mon, 12 Aug 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/08-12_deepeval_vertexai/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/raw/main/samples/evaluation/deepeval/images/deepeval_vertexai.png" alt="DeepEval and VertexAI" /&gt;
 
 &lt;figcaption&gt;DeepEval and VertexAI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When you&amp;rsquo;re working with Large Language Models (LLMs), it&amp;rsquo;s crucial to have an
evaluation framework in place. Only by constantly evaluating and testing your
LLM outputs can you tell whether the changes you&amp;rsquo;re making to prompts or the output
you&amp;rsquo;re getting back from the LLM are actually good.&lt;/p&gt;
&lt;p&gt;In this blog post, we&amp;rsquo;ll look into one of those evaluation frameworks called
&lt;a href="https://docs.confident-ai.com/"&gt;DeepEval&lt;/a&gt;, an open-source evaluation
framework for LLMs. It allows you to &amp;ldquo;unit test&amp;rdquo; LLM outputs in a similar way to
Pytest. We&amp;rsquo;ll also see how DeepEval can be configured to work with Vertex AI.&lt;/p&gt;</description></item><item><title>Deep dive into function calling in Gemini</title><link>https://atamel.dev/posts/2024/08-06_deepdive_function_calling_gemini/</link><pubDate>Tue, 06 Aug 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/08-06_deepdive_function_calling_gemini/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post, we&amp;rsquo;ll deep dive into function calling in Gemini. More
specifically, you&amp;rsquo;ll see how to handle &lt;strong&gt;multiple&lt;/strong&gt; and &lt;strong&gt;parallel&lt;/strong&gt; function
call requests from &lt;code&gt;generate_content&lt;/code&gt; and &lt;code&gt;chat&lt;/code&gt; interfaces and take a look at
the new &lt;strong&gt;auto function calling&lt;/strong&gt; feature through a sample weather application.&lt;/p&gt;
&lt;h2 id="what-is-function-calling"&gt;What is function calling?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling"&gt;Function
Calling&lt;/a&gt;
is useful for augmenting LLMs with more up-to-date data via external API calls.&lt;/p&gt;
&lt;p&gt;You can define custom functions and provide these to an LLM. While processing a
prompt, the LLM can choose to delegate tasks to the functions that you identify.
The model does not call the functions directly but rather makes function call
requests with parameters to your application. In turn, your application code
responds to function call requests by calling external APIs and providing the
responses back to the model, allowing the LLM to complete its response to the prompt.&lt;/p&gt;</description></item><item><title>Control LLM costs with context caching</title><link>https://atamel.dev/posts/2024/07-19_control_llm_costs_context_caching/</link><pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/07-19_control_llm_costs_context_caching/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://github.com/meteatamel/genai-beyond-basics/raw/main/samples/context-caching/images/context-caching.png" alt="Context caching" /&gt;
 
 &lt;figcaption&gt;Context caching&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Some large language models (LLMs), such as Gemini 1.5 Flash or Gemini 1.5 Pro, have a very large context
window. This is very useful if you want to analyze a big chunk of data, such as
a whole book or a long video. On the other hand, it can get quite expensive if
you keep sending the same large data in your prompts. Context caching can help.&lt;/p&gt;</description></item><item><title>Control LLM output with response type and schema</title><link>https://atamel.dev/posts/2024/07-15_control_llm_output/</link><pubDate>Mon, 15 Jul 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/07-15_control_llm_output/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Large language models (LLMs) are great at generating content but the output
format you get back can be a hit or miss sometimes.&lt;/p&gt;
&lt;p&gt;For example, you ask for JSON output in a certain format, and you might get
free-form text, JSON wrapped in a markdown string, or proper JSON with
some required fields missing. If your application requires a strict format, this
can be a real problem.&lt;/p&gt;</description></item><item><title>RAG API powered by LlamaIndex on Vertex AI</title><link>https://atamel.dev/posts/2024/07-08_ragapi_llamaindex_vertexai/</link><pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/07-08_ragapi_llamaindex_vertexai/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/llamaindex_vertexai.png" alt="LlamaIndex and Vertex AI" /&gt;
 
 &lt;figcaption&gt;LlamaIndex and Vertex AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Recently, I talked about why grounding LLMs is important and how to ground LLMs
with public data using Google Search (&lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/using-vertex-ai-grounding-with-google-search"&gt;Vertex AI&amp;rsquo;s Grounding with Google Search:
how to use it and
why&lt;/a&gt;)
and with private data using Vertex AI Search (&lt;a href="https://atamel.dev/posts/2024/07-01_grounding_with_own_data_vertexai_search/"&gt;Grounding LLMs with your own data using Vertex AI Search&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;In today’s post, I want to talk about another more flexible and customizable way
of grounding your LLMs with private data: &lt;strong&gt;the RAG API powered by LlamaIndex on
Vertex AI&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Grounding LLMs with your own data using Vertex AI Search</title><link>https://atamel.dev/posts/2024/07-01_grounding_with_own_data_vertexai_search/</link><pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/07-01_grounding_with_own_data_vertexai_search/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my previous &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/using-vertex-ai-grounding-with-google-search"&gt;Vertex AI&amp;rsquo;s Grounding with Google Search: how to use it and why&lt;/a&gt; post, I explained why you need grounding with large language models (LLMs) and how Vertex AI’s grounding with Google Search can help to ground LLMs with public up-to-date data.&lt;/p&gt;
&lt;p&gt;That’s great but you sometimes need to ground LLMs with your own private data. How can you do that? There are many ways but Vertex AI Search is the easiest way and that’s what I want to talk about today with a simple use case.&lt;/p&gt;</description></item><item><title>Give your LLM a quick lie detector test</title><link>https://atamel.dev/posts/2024/06-06_llm_lie_detector_test/</link><pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/06-06_llm_lie_detector_test/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/lie_detector_llm.png" alt="Lie Detector LLM" /&gt;
 
 &lt;figcaption&gt;Lie Detector LLM&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;It’s no secret that LLMs sometimes lie and they do so in a very confident kind of way. This might be OK for some applications but it can be a real problem if your application requires high levels of accuracy.&lt;/p&gt;
&lt;p&gt;I remember when the first LLMs emerged back in early 2023. I tried some of the early models and it felt like they were hallucinating half of the time. More recently, it started feeling like LLMs are getting better at giving more factual answers. But it’s just a feeling and you can’t base application decisions (or any decision?) on feelings, can you?&lt;/p&gt;</description></item><item><title>The Consistency vs. Novelty Dilemma</title><link>https://atamel.dev/posts/2024/06-02_consistency_vs_novelty/</link><pubDate>Sun, 02 Jun 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/06-02_consistency_vs_novelty/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/consistency_vs_novelty.png" alt="Consistency vs. Novelty" /&gt;
 
 &lt;figcaption&gt;Consistency vs. Novelty&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;It’s been a while since I wrote a non-work related topic. Last time, I wrote
about the unique kindness I experienced in Japan (see &lt;a href="https://atamel.dev/posts/2024/02-02_butterfly_effect_of_kindness"&gt;The Butterfly effect of
kindness&lt;/a&gt;). This time, I
want to write about a dilemma that I’ve been thinking about for a while.&lt;/p&gt;
&lt;p&gt;When I reflect on my life so far, whenever I had some progress (learning a new
skill, making new lasting connections, changing to a new job, losing weight), it
was always due to &lt;strong&gt;consistency&lt;/strong&gt; in my life. I was not traveling, I was not
thinking about where to go, what to do, where to eat, how to get from point A to
point B. I was in my familiar environment with a consistent (and maybe boring)
routine where the basics of my life were in place. As a result, I had time, got
bored, and started exploring. This consistency-fueled boredom allowed me to
explore an aspect of life that I wasn’t happy about, and I put the time and
energy into improving it.&lt;/p&gt;</description></item><item><title>Vertex AI's Grounding with Google Search - how to use it and why</title><link>https://atamel.dev/posts/2024/05-29_using-vertex-ai-grounding-with-google-search/</link><pubDate>Wed, 29 May 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/05-29_using-vertex-ai-grounding-with-google-search/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Once in a while, you come across a feature that is so easy to use and so useful
that you don’t know how you lived without it before. For me, Vertex AI&amp;rsquo;s &lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview#ground-public"&gt;Grounding
with Google
Search&lt;/a&gt;
is one of those features.&lt;/p&gt;
&lt;p&gt;In &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/using-vertex-ai-grounding-with-google-search?e=48754805"&gt;this blog
post&lt;/a&gt;,
I explain why you need grounding with large language models (LLMs) and how
Vertex AI’s Grounding with Google Search can help with minimal effort on your
part.&lt;/p&gt;</description></item><item><title>AsyncAPI gets a new version 3.0 and new operations</title><link>https://atamel.dev/posts/2024/05-13_asyncapi_30_send_receive/</link><pubDate>Mon, 13 May 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/05-13_asyncapi_30_send_receive/</guid><description>&lt;p&gt;Almost one year ago, I talked about &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt; &lt;code&gt;2.6&lt;/code&gt; and
how confusing its &lt;code&gt;publish&lt;/code&gt; and &lt;code&gt;subscribe&lt;/code&gt; operations can be in my &lt;a href="https://atamel.dev/posts/2023/05-18_asyncapi_publishsubscribe_refactor"&gt;Understanding
AsyncAPI&amp;rsquo;s publish &amp;amp; subscribe semantics with an
example&lt;/a&gt; post.&lt;/p&gt;
&lt;p&gt;Since then, a new &lt;code&gt;3.0&lt;/code&gt; version of AsyncAPI has been released with breaking changes
and totally new &lt;code&gt;send&lt;/code&gt; and &lt;code&gt;receive&lt;/code&gt; operations.&lt;/p&gt;
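&lt;p&gt;As a minimal sketch of the new shape (the channel and message names here are made up, not from the original post), AsyncAPI &lt;code&gt;3.0&lt;/code&gt; moves operations to the top level, where each one declares an &lt;code&gt;action&lt;/code&gt; of &lt;code&gt;send&lt;/code&gt; or &lt;code&gt;receive&lt;/code&gt; and references a channel:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;asyncapi: 3.0.0
info:
  title: Orders app
  version: 1.0.0
channels:
  orders:
    address: orders
    messages:
      orderCreated:
        payload:
          type: object
operations:
  sendOrderCreated:
    action: send
    channel:
      $ref: '#/channels/orders'
    messages:
      - $ref: '#/channels/orders/messages/orderCreated'
&lt;/code&gt;&lt;/pre&gt;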
&lt;p&gt;In this blog post, I want to revisit the example from last year and show how to
rewrite it for AsyncAPI &lt;code&gt;3.0&lt;/code&gt; with the new &lt;code&gt;send&lt;/code&gt; and &lt;code&gt;receive&lt;/code&gt; operations.&lt;/p&gt;</description></item><item><title>A tour of Gemini 1.5 Pro samples</title><link>https://atamel.dev/posts/2024/05-07_gemini_15_pro_samples/</link><pubDate>Tue, 07 May 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/05-07_gemini_15_pro_samples/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Back in February, Google
&lt;a href="https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024"&gt;announced&lt;/a&gt;
Gemini 1.5 Pro with its impressive 1 million token context window.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/gemini15.gif" alt="Gemini 1.5 Pro" title="Gemini 1.5 Pro" /&gt;
 
 &lt;figcaption&gt;Gemini 1.5 Pro&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Larger context size means that Gemini 1.5 Pro can process vast amounts of
information in one go — 1 hour of video, 11 hours of audio, 30,000 lines of code
or over 700,000 words. The good news is that there&amp;rsquo;s also good programming language support.&lt;/p&gt;
&lt;p&gt;In this blog post, I will point out some samples utilizing Gemini 1.5 Pro in
Google Cloud&amp;rsquo;s Vertex AI in different use cases and languages (Python, Node.js,
Java, C#, Go).&lt;/p&gt;</description></item><item><title>Making API calls exactly once when using Workflows</title><link>https://atamel.dev/posts/2024/05-03_using-single-execution-calls-with-workflows/</link><pubDate>Fri, 03 May 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/05-03_using-single-execution-calls-with-workflows/</guid><description>&lt;p&gt;One challenge with any distributed system, including Workflows, is ensuring that requests sent from one service to another are processed exactly once, when needed; for example, when placing a customer order in a shipping queue, withdrawing funds from a bank account, or processing a payment.&lt;/p&gt;
&lt;p&gt;In this blog post, we’ll provide an example of a website invoking Workflows, and Workflows in turn invoking a Cloud Function. We’ll show how to make sure both the Workflows and the Cloud Function logic run only once. We’ll also talk about how to invoke Workflows exactly once when using HTTP callbacks, Pub/Sub messages, or Cloud Tasks.&lt;/p&gt;</description></item><item><title>C# and Vertex AI Gemini streaming API bug and workaround</title><link>https://atamel.dev/posts/2024/05-01_csharp_vertex_gemini_streaming_bug/</link><pubDate>Wed, 01 May 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/05-01_csharp_vertex_gemini_streaming_bug/</guid><description>&lt;p&gt;A user recently
&lt;a href="https://github.com/GoogleCloudPlatform/dotnet-docs-samples/issues/2609"&gt;reported&lt;/a&gt;
an intermittent error with C# and the Gemini 1.5 model on Vertex AI&amp;rsquo;s streaming API.
In this blog post, I want to outline what the error is, what causes it, and how
to avoid it with the hopes of saving some frustration for someone out there.&lt;/p&gt;
&lt;h2 id="error"&gt;Error&lt;/h2&gt;
&lt;p&gt;The user reported using the &lt;code&gt;Google.Cloud.AIPlatform.V1&lt;/code&gt; library with version
&lt;code&gt;2.27.0&lt;/code&gt; to use Gemini &lt;code&gt;1.5&lt;/code&gt; via Vertex AI&amp;rsquo;s streaming API and running into an
intermittent &lt;code&gt;System.IO.IOException&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>A Tour of Gemini Code Assist - Slides and Demos</title><link>https://atamel.dev/posts/2024/04_24_tour_of_gemini_code_assist/</link><pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/04_24_tour_of_gemini_code_assist/</guid><description>&lt;p&gt;This week, I&amp;rsquo;m speaking at 3 meetups on &lt;a href="https://cloud.google.com/products/gemini/code-assist"&gt;Gemini Code
Assist&lt;/a&gt;. My talk has a
little introduction to GenAI and Gemini, followed by a series of hands-on demos
that showcase different features of Gemini Code Assist.&lt;/p&gt;
&lt;p&gt;In the demos, I set up Gemini Code Assist in the &lt;a href="https://cloud.google.com/code"&gt;Cloud
Code&lt;/a&gt; IDE plugin in Visual Studio Code. Then, I
show how to design and create an application, explain, run, generate, test,
transform code, and finish with understanding logs with the help of Gemini.&lt;/p&gt;</description></item><item><title>Vertex AI Gemini generateContent (non-streaming) API</title><link>https://atamel.dev/posts/2024/02-26_vertexai_gemini_generate_content_api/</link><pubDate>Mon, 26 Feb 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/02-26_vertexai_gemini_generate_content_api/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my recent blog posts, I&amp;rsquo;ve been exploring Vertex AI&amp;rsquo;s Gemini REST API and mainly talked
about the
&lt;a href="https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.publishers.models/streamGenerateContent"&gt;&lt;code&gt;streamGenerateContent&lt;/code&gt;&lt;/a&gt;
method which is a streaming API.&lt;/p&gt;
&lt;p&gt;Recently, a new method appeared in Vertex AI docs:
&lt;a href="https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.publishers.models/generateContent"&gt;&lt;code&gt;generateContent&lt;/code&gt;&lt;/a&gt;
which is the &lt;strong&gt;non-streaming&lt;/strong&gt; (unary) version of the API.&lt;/p&gt;
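&lt;p&gt;As a rough sketch (the project ID, region, and model name below are placeholders), the non-streaming call is a single POST that returns the whole response at once instead of a stream of chunks:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-1.0-pro:generateContent

{
  "contents": [
    {
      "role": "user",
      "parts": [{ "text": "Why is the sky blue?" }]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;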
&lt;p&gt;In this short blog post, I take a closer look at the new non-streaming
&lt;code&gt;generateContent&lt;/code&gt; API and explain why it makes sense to use it as a simpler API when
the latency is not super critical.&lt;/p&gt;</description></item><item><title>Orchestrate Vertex AI’s PaLM and Gemini APIs with Workflows</title><link>https://atamel.dev/posts/2024/02-22_vertex-ai-palm-and-gemini-apis-using-workflows/</link><pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/02-22_vertex-ai-palm-and-gemini-apis-using-workflows/</guid><description>&lt;p&gt;Everyone is excited about generative AI (gen AI) nowadays and rightfully so. You might be generating text with PaLM 2 or Gemini Pro, generating images with Imagen 2, translating code from one language to another with Codey, or describing images and videos with Gemini Pro Vision.&lt;/p&gt;
&lt;p&gt;No matter how you’re using gen AI, at the end of the day, you’re calling an endpoint either with an SDK or a library or via a REST API. Workflows, my go-to service to orchestrate and automate other services, is more relevant than ever when it comes to gen AI.&lt;/p&gt;</description></item><item><title>Using Vertex AI Gemini from GAPIC libraries (C#)</title><link>https://atamel.dev/posts/2024/02-14_vertexai_gemini_gapic_libraries/</link><pubDate>Wed, 14 Feb 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/02-14_vertexai_gemini_gapic_libraries/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/gemini.png" alt="Gemini" title="Gemini" /&gt;
 
 &lt;figcaption&gt;Gemini&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2024/02-05_vertexai_gemini_restapi_csharp_rust/"&gt;Using Vertex AI Gemini REST API&lt;/a&gt;
post, I showed how to use the Gemini REST API from languages without SDK support
yet such as C# and Rust.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s actually another way to use Gemini from languages without SDK support:
&lt;strong&gt;GAPIC libraries&lt;/strong&gt;. In this post, I show you how to use Vertex AI Gemini from
GAPIC libraries, using C# as an example.&lt;/p&gt;
&lt;h2 id="what-is-gapic"&gt;What is GAPIC?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;At this point, you might be wondering: What&amp;rsquo;s GAPIC?&lt;/strong&gt; GAPIC stands for Generated API
Client. In Google Cloud, all services have auto-generated libraries from
Google&amp;rsquo;s service proto files. Since these libraries are auto-generated, they&amp;rsquo;re
not the easiest or most intuitive way of calling a service. Because of that,
some services also have hand-written SDKs/libraries on top of GAPIC libraries.&lt;/p&gt;</description></item><item><title>Using Vertex AI Gemini REST API (C# and Rust)</title><link>https://atamel.dev/posts/2024/02-05_vertexai_gemini_restapi_csharp_rust/</link><pubDate>Mon, 05 Feb 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/02-05_vertexai_gemini_restapi_csharp_rust/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Back in December, Google announced
&lt;a href="https://blog.google/technology/ai/google-gemini-ai/"&gt;Gemini&lt;/a&gt;, its most capable
and general model so far available from &lt;a href="https://makersuite.google.com/"&gt;Google AI
Studio&lt;/a&gt; and &lt;a href="https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-multimodal"&gt;Google Cloud Vertex
AI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/gemini.png" alt="Gemini" title="Gemini" /&gt;
 
 &lt;figcaption&gt;Gemini&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-multimodal"&gt;Try the Vertex AI Gemini
API&lt;/a&gt;
documentation page shows instructions on how to use the Gemini API from
&lt;strong&gt;Python&lt;/strong&gt;, &lt;strong&gt;Node.js&lt;/strong&gt;, &lt;strong&gt;Java&lt;/strong&gt;, and &lt;strong&gt;Go&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/try_vertexai_page.png" alt="alt_text" title="Try the Vertex AI Gemini API" /&gt;
 
 &lt;figcaption&gt;alt_text&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;That’s great, but what about other languages?&lt;/p&gt;
&lt;p&gt;Even though there are no official SDKs/libraries for other languages yet, you
can use the &lt;a href="https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini"&gt;Gemini REST
API&lt;/a&gt;
to access the same functionality with a little bit more work on your part.&lt;/p&gt;</description></item><item><title>The butterfly effect of kindness</title><link>https://atamel.dev/posts/2024/02-02_butterfly_effect_of_kindness/</link><pubDate>Fri, 02 Feb 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/02-02_butterfly_effect_of_kindness/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/butterfly_effect_kindness.png" alt="Butteryfly effect of kindess" /&gt;
 
 &lt;figcaption&gt;Butteryfly effect of kindess&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I’ve been using my blog for work-related topics for as long as I can remember. I don’t
know why but recently, I had an itch to write about non-work-related topics for
no particular reason. I’m not sure if anyone would care or find my non-tech
writing useful but I’ll definitely find it useful for me as a mark of my
consciousness at a given time, so here we go.&lt;/p&gt;</description></item><item><title>Test and change an existing web app with Duet AI</title><link>https://atamel.dev/posts/2024/01-29_duetai_test_change_existing_webapp/</link><pubDate>Mon, 29 Jan 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/01-29_duetai_test_change_existing_webapp/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/duetai_logo.png" alt="Duet AI" /&gt;
 
 &lt;figcaption&gt;Duet AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://atamel.dev/posts/2024/01-23_duetai_create_deploy_webapp_clourun/"&gt;Create and deploy a new web app to Cloud Run with Duet AI&lt;/a&gt; post,
I created a simple web application and deployed it to Cloud Run with &lt;a href="https://cloud.google.com/duet-ai"&gt;Duet
AI&lt;/a&gt;&amp;rsquo;s help. Duet AI was great for getting a new
and simple app up and running. But does it help with existing apps? Let&amp;rsquo;s find
out.&lt;/p&gt;
&lt;p&gt;In this blog post, I take an existing web app, explore it,
test it, add a unit test, add new functionality, and add more unit tests all
with the help of Duet AI. Again, I captured some lessons learned along the way to
get the most out of Duet AI.&lt;/p&gt;</description></item><item><title>Create and deploy a new web app to Cloud Run with Duet AI</title><link>https://atamel.dev/posts/2024/01-23_duetai_create_deploy_webapp_clourun/</link><pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/01-23_duetai_create_deploy_webapp_clourun/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2024/duetai_logo.png" alt="Duet AI" /&gt;
 
 &lt;figcaption&gt;Duet AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I’ve been playing with &lt;a href="https://cloud.google.com/duet-ai"&gt;Duet AI&lt;/a&gt;, Google’s
AI-powered collaborator, recently to see how useful it can be for my development
workflow. I&amp;rsquo;m pleasantly surprised how helpful Duet AI can be when provided with
specific questions with the right context.&lt;/p&gt;
&lt;p&gt;In this blog post, I document my journey of creating and deploying a new web
application to Cloud Run with Duet AI&amp;rsquo;s help. I also capture some lessons learned
along the way to get the most out of Duet AI.&lt;/p&gt;</description></item><item><title>Announcing Workflows execution steps history</title><link>https://atamel.dev/posts/2024/01-20_announcing-workflows-execution-steps-history/</link><pubDate>Sat, 20 Jan 2024 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2024/01-20_announcing-workflows-execution-steps-history/</guid><description>&lt;p&gt;As you orchestrate more services with Workflows, the workflow gets more complicated with more steps, jumps, iterations, parallel branches. When the workflow execution inevitably fails at some point, you need to debug and figure out which step failed and why. So far, you only had an execution summary with inputs/outputs and logs to rely on in your execution debugging. While this was good enough for basic workflows, it didn&amp;rsquo;t provide step level debugging information.&lt;/p&gt;</description></item><item><title>C# library and samples for GenAI in Vertex AI</title><link>https://atamel.dev/posts/2023/12-11_csharp_library_and_samples_genai_vertexai/</link><pubDate>Mon, 11 Dec 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/12-11_csharp_library_and_samples_genai_vertexai/</guid><description>&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2023/11-28_multilanguage_libs_samples_genai_vertexai/"&gt;previous
post&lt;/a&gt;,
I talked about multi-language libraries and samples for GenAI. In this post, I
want to zoom into some C# specific information for GenAI in Vertex AI.&lt;/p&gt;
&lt;h2 id="c-genai-samples-for-vertex-ai"&gt;C# GenAI samples for Vertex AI&lt;/h2&gt;
&lt;p&gt;If you want to skip this blog post and just jump into code, there’s a collection
of &lt;a href="https://cloud.google.com/vertex-ai/docs/samples?language=csharp&amp;amp;text=generative"&gt;C# GenAI samples for Vertex
AI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/genai_csharp_samples_vertexai.png" alt="C# GenAI samples for Vertex AI" /&gt;
 
 &lt;figcaption&gt;C# GenAI samples for Vertex AI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;</description></item><item><title>Deploy and manage Kubernetes applications with Workflows</title><link>https://atamel.dev/posts/2023/12-01_use-workflows-to-deploy-and-manage-kubernetes/</link><pubDate>Fri, 01 Dec 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/12-01_use-workflows-to-deploy-and-manage-kubernetes/</guid><description>&lt;p&gt;Workflows is a versatile service in orchestrating and automating a wide range of use cases: microservices, business processes, Data and ML pipelines, IT operations, and more. It can also be used to automate deployment of containerized applications on Kubernetes Engine (GKE) and this got even easier with the newly released (in preview) Kubernetes API Connector.&lt;/p&gt;
&lt;p&gt;The new Kubernetes API connector enables access to GKE services from Workflows, which in turn enables Kubernetes-based resource management and orchestration, scheduled Kubernetes jobs, and more.&lt;/p&gt;</description></item><item><title>Multi-language libraries and samples for GenAI in Vertex AI</title><link>https://atamel.dev/posts/2023/11-28_multilanguage_libs_samples_genai_vertexai/</link><pubDate>Tue, 28 Nov 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/11-28_multilanguage_libs_samples_genai_vertexai/</guid><description>&lt;p&gt;You might think that you need to know Python to be able to use GenAI with
Vertex AI. While Python is the dominant language in GenAI (and Vertex AI is no
exception in that regard), you can actually use GenAI in Vertex AI from other
languages such as Java, C#, Node.js, Go, and more.&lt;/p&gt;
&lt;p&gt;Let’s take a look at the details.&lt;/p&gt;
&lt;h2 id="vertex-ai-sdk-for-python"&gt;Vertex AI SDK for Python&lt;/h2&gt;
&lt;p&gt;The official SDK for Vertex AI is &lt;a href="https://cloud.google.com/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk"&gt;Vertex AI SDK for
Python&lt;/a&gt;
and as expected, it’s in Python. You can initialize Vertex AI SDK with some
parameters and utilize GenAI models with a few lines of code:&lt;/p&gt;</description></item><item><title>Introducing a new Eventarc destination - internal HTTP endpoint in a VPC network</title><link>https://atamel.dev/posts/2023/11-13_introduce_eventarc_internal_http_endpoint/</link><pubDate>Mon, 13 Nov 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/11-13_introduce_eventarc_internal_http_endpoint/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/eventarc/docs"&gt;Eventarc&lt;/a&gt; helps users build
event-driven architectures without having to implement, customize, or maintain
the underlying infrastructure.&lt;/p&gt;
&lt;p&gt;Eventarc has added support (in public preview) for delivering events to internal
HTTP endpoints in a Virtual Private Cloud (VPC) network. Customers, especially
large enterprises, often run compute (typically GKE or GCE) on VPC-private IPs,
often behind internal load balancers. This launch will enable these services to
consume Eventarc events.&lt;/p&gt;
&lt;p&gt;Internal HTTP endpoints can be an &lt;a href="https://cloud.google.com/vpc/docs/ip-addresses"&gt;internal IP
address&lt;/a&gt; or &lt;a href="https://cloud.google.com/compute/docs/internal-dns#instance-fully-qualified-domain-names"&gt;fully qualified DNS
name
(FQDN)&lt;/a&gt;
for any HTTP endpoint in the VPC network. Examples of destinations that can be
targeted via the internal HTTP endpoints include &lt;a href="https://cloud.google.com/compute/docs/ip-addresses"&gt;Compute Engine
VMs&lt;/a&gt; with internal IPs,
services fronted by an &lt;a href="https://cloud.google.com/load-balancing/docs/dns-names"&gt;L7 Internal Load
Balancer&lt;/a&gt;, &lt;a href="https://cloud.google.com/vpc/docs/configure-private-service-connect-services#list-endpoints"&gt;Private
Service Connect
endpoints&lt;/a&gt;,
&lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#validate"&gt;Google Kubernetes Engine
Ingress&lt;/a&gt;,
&lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service"&gt;Google Kubernetes Engine
Services&lt;/a&gt;,
&lt;a href="https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-internal-serverless"&gt;Cloud Run behind an internal Application Load
Balancer&lt;/a&gt;
and any destinations registered with Cloud DNS via &lt;a href="https://cloud.google.com/dns/docs/records-overview"&gt;DNS
record&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>What languages are supported in WebAssembly outside the browser?</title><link>https://atamel.dev/posts/2023/09-07_language_support_wasm/</link><pubDate>Thu, 07 Sep 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/09-07_language_support_wasm/</guid><description>&lt;p&gt;&lt;strong&gt;What languages are supported in WebAssembly running outside the browser?&lt;/strong&gt;
This is a question I often hear people ask. It has a complicated answer
because:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;WebAssembly outside the browser needs &lt;a href="https://wasi.dev/"&gt;WASI&lt;/a&gt; and not all
languages have WASI support in their toolchain.&lt;/li&gt;
&lt;li&gt;Even if WASI is supported well in a language, WASI has its own limitations
that you need to take into account.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In short, you can’t take any code written in any language and expect to compile
and run it as a Wasm+Wasi module right now. Documentation on what’s supported
is patchy or misleading at times. Unfortunately, you often need to try things
out before knowing what works and what doesn’t.&lt;/p&gt;</description></item><item><title>Adding HTTP around Wasm with Wagi</title><link>https://atamel.dev/posts/2023/07-18_add_http_around_wasm_with_wagi/</link><pubDate>Tue, 18 Jul 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/07-18_add_http_around_wasm_with_wagi/</guid><description>&lt;p&gt;In my previous posts, I talked about how you can &lt;a href="https://atamel.dev/posts/2023/06-20_explore_wasm_outside_browser/"&gt;run WebAssembly (Wasm) outside
the browser with Wasi
&lt;/a&gt;and &lt;a href="https://atamel.dev/posts/2023/06-29_run_wasm_in_docker/"&gt;run it
in a Docker container with
runwasi&lt;/a&gt;. The
&lt;a href="https://wasi.dev/"&gt;Wasi&lt;/a&gt; specification allows Wasm modules access to things
like the filesystem and environment variables (and I showed how in &lt;a href="https://atamel.dev/posts/2023/06-26_compile_rust_go_wasm_wasi/"&gt;this blog
post&lt;/a&gt;) but
networking and threading are not implemented yet. This is severely limiting if
you want to run HTTP-based microservices on Wasm, for example.&lt;/p&gt;</description></item><item><title>Buffer workflow executions with a Cloud Tasks queue</title><link>https://atamel.dev/posts/2023/07-12_buffer-workflow-executions-with-a-cloud-tasks-queue/</link><pubDate>Wed, 12 Jul 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/07-12_buffer-workflow-executions-with-a-cloud-tasks-queue/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my &lt;a href="https://cloud.google.com/blog/products/application-development/setup-parallel-task-execution-with-parent-and-child-workflows"&gt;previous
post&lt;/a&gt;,
I talked about how you can use a parent workflow to execute child workflows in
parallel for faster overall processing time and easier detection of errors.
Another useful pattern is to use a Cloud Tasks queue to create Workflows
executions and that&amp;rsquo;s the topic of this post.&lt;/p&gt;
&lt;p&gt;When your application experiences a sudden surge of traffic, it&amp;rsquo;s natural to
want to handle the increased load by creating a high number of concurrent
workflow executions. However, Google
Cloud&amp;rsquo;s &lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt; enforces &lt;a href="https://cloud.google.com/workflows/quotas"&gt;quotas&lt;/a&gt; to
prevent abuse and ensure fair resource allocation. These quotas limit the
maximum number of concurrent workflow executions per region, per project. For
example, Workflows currently enforces a maximum of 2000 concurrent executions by
default. Once this limit is reached, any new executions beyond the quota will
fail with an HTTP 429 error.&lt;/p&gt;</description></item><item><title>Workflows executing other parallel workflows: A practical guide</title><link>https://atamel.dev/posts/2023/07-08_setup-parallel-task-execution-with-parent-and-child-workflows/</link><pubDate>Sat, 08 Jul 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/07-08_setup-parallel-task-execution-with-parent-and-child-workflows/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;There are numerous scenarios where you might want to execute tasks in parallel.
One common use case involves dividing data into batches, processing each batch
in parallel, and combining the results in the end. This approach not only
enhances the speed of the overall processing but also allows for easier error
detection in smaller tasks.&lt;/p&gt;
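&lt;p&gt;The batch pattern above can be sketched in Workflows YAML (the child workflow name and batch values here are hypothetical, not from the original post): a &lt;code&gt;parallel&lt;/code&gt; loop starts one child workflow execution per batch and collects the results into a shared map:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;main:
  steps:
    - init:
        assign:
          - results: {}
    - run_batches:
        parallel:
          shared: [results]
          for:
            value: batch
            in: ["batch1", "batch2", "batch3"]
            steps:
              - run_child:
                  call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
                  args:
                    workflow_id: child-workflow
                    argument: ${batch}
                  result: child_result
              - save_result:
                  assign:
                    - results[batch]: ${child_result}
    - return_results:
        return: ${results}
&lt;/code&gt;&lt;/pre&gt;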
&lt;p&gt;On the other hand, setting up parallel tasks, monitoring them, handling errors
in each task, and combining the results in the end is not trivial. Thankfully,
Google Cloud&amp;rsquo;s &lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt; can help. In this
post, we will explore how you can use a parent workflow to set up and execute
parallel child workflows.&lt;/p&gt;</description></item><item><title>Generative AI Short Courses by DeepLearning.AI</title><link>https://atamel.dev/posts/2023/07-04_genai_short_courses_by_deeplearning/</link><pubDate>Tue, 04 Jul 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/07-04_genai_short_courses_by_deeplearning/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my previous couple of posts
(&lt;a href="https://medium.com/google-cloud/generative-ai-learning-path-notes-part-1-d36bc565df1f"&gt;post1&lt;/a&gt;,
&lt;a href="https://medium.com/google-cloud/generative-ai-learning-path-notes-part-2-78a1855f6bd0"&gt;post2&lt;/a&gt;),
I shared my detailed notes on &lt;a href="https://www.cloudskillsboost.google/journeys/118"&gt;Generative AI Learning
Path&lt;/a&gt; in Google Cloud’s Skills
Boost. It’s a great collection of courses to get started in GenAI, especially on
the theory underpinning GenAI.&lt;/p&gt;
&lt;p&gt;Since then, I discovered another great resource
to learn more about GenAI: &lt;a href="https://www.deeplearning.ai/short-courses/"&gt;Learn Generative AI Short
Courses&lt;/a&gt; by
&lt;a href="DeepLearning.AI"&gt;DeepLearning.AI&lt;/a&gt; from &lt;a href="https://www.andrewng.org/"&gt;Andrew Ng&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this post, I summarize what each course teaches you to help you decide which
course to take. I highly recommend taking all 4 courses. They’re full of useful
information and short enough that even if you’re not fully interested in the
topic, you can still get a good idea about it in a short amount of time.&lt;/p&gt;</description></item><item><title>Running Wasm in a container</title><link>https://atamel.dev/posts/2023/06-29_run_wasm_in_docker/</link><pubDate>Thu, 29 Jun 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/06-29_run_wasm_in_docker/</guid><description>&lt;p&gt;Docker recently announced experimental support for running Wasm modules (see
&lt;a href="https://www.docker.com/blog/announcing-dockerwasm-technical-preview-2/"&gt;Announcing Docker+Wasm Technical Preview
2&lt;/a&gt;). In
this blog post, I explain what this means and how to run a Wasm module in
Docker.&lt;/p&gt;
&lt;h2 id="why-run-wasm-in-a-container"&gt;Why run Wasm in a container?&lt;/h2&gt;
&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2023/06-20_explore_wasm_outside_browser/"&gt;Exploring WebAssembly outside the
browser&lt;/a&gt;
post, I mentioned how Wasm is faster, smaller, more secure, and more portable
than a container. You might be wondering: &lt;strong&gt;Why take something faster, smaller,
more secure, and more portable and run it in a container?&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Compile Rust &amp; Go to a Wasm+Wasi module and run in a Wasm runtime</title><link>https://atamel.dev/posts/2023/06-26_compile_rust_go_wasm_wasi/</link><pubDate>Mon, 26 Jun 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/06-26_compile_rust_go_wasm_wasi/</guid><description>&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2023/06-20_explore_wasm_outside_browser/"&gt;Exploring WebAssembly outside the
browser&lt;/a&gt;
post, I talked about how WebAssembly System Interface
(&lt;a href="https://wasi.dev/"&gt;WASI&lt;/a&gt;) enables Wasm modules to run outside the browser and
interact with the host in a limited set of use cases that Wasi supports (see
&lt;a href="https://github.com/WebAssembly/WASI/blob/main/Proposals.md"&gt;Wasi proposals&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/runtime_wasi_host.png" alt="WASI" title="WASI" /&gt;
 
 &lt;figcaption&gt;WASI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, let’s look into the details of how to compile code to a Wasm+Wasi
module and then run it in a Wasm runtime. Notice that I deliberately say Wasm+Wasi
module (instead of just Wasm) because some languages have Wasm support and
can run perfectly fine in the browser but have no or limited Wasi support
to run outside the browser.&lt;/p&gt;</description></item><item><title>Exploring WebAssembly outside the browser</title><link>https://atamel.dev/posts/2023/06-20_explore_wasm_outside_browser/</link><pubDate>Tue, 20 Jun 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/06-20_explore_wasm_outside_browser/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://avatars.githubusercontent.com/u/11578470?s=200&amp;amp;v=4" alt="WebAssembly Logo" title="WebAssembly Logo" /&gt;
 
 &lt;figcaption&gt;WebAssembly Logo&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://webassembly.org/"&gt;WebAssembly&lt;/a&gt; (Wasm) was initially designed as a
binary instruction format for executing native code efficiently within web
browsers. The original use cases are focused on augmenting Javascript in the
browser to run native code in a fast, portable, and secure way for games, 3D
graphics, etc.&lt;/p&gt;
&lt;p&gt;However, its potential extends far beyond the browser. In this blog post, we&amp;rsquo;ll
delve into the exciting realm of running Wasm outside the browser, exploring its
advantages, and relevant specifications.&lt;/p&gt;</description></item><item><title>Generative AI Learning Path Notes – Part 2</title><link>https://atamel.dev/posts/2023/06-15_genai_learningpath_notes_part2/</link><pubDate>Thu, 15 Jun 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/06-15_genai_learningpath_notes_part2/</guid><description>&lt;p&gt;If you’re looking to upskill in Generative AI, there’s a &lt;a href="https://www.cloudskillsboost.google/journeys/118"&gt;Generative AI Learning
Path&lt;/a&gt; in Google Cloud Skills
Boost. It currently consists of 10 courses and provides a good foundation on the
theory behind Generative AI.&lt;/p&gt;
&lt;p&gt;As I went through these courses myself, I took notes, as I learn best when I
write things down. In &lt;a href="https://atamel.dev/posts/2023/06-06_genai_learningpath_notes_part1/"&gt;part 1 of the blog
series&lt;/a&gt;, I
shared my notes for courses 1 to 6. In this part 2 of the blog series, I
continue sharing my notes for courses 7 to 10.&lt;/p&gt;</description></item><item><title>Generative AI Learning Path Notes – Part 1</title><link>https://atamel.dev/posts/2023/06-06_genai_learningpath_notes_part1/</link><pubDate>Tue, 06 Jun 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/06-06_genai_learningpath_notes_part1/</guid><description>&lt;p&gt;If you’re looking to upskill in Generative AI (GenAI), there’s a &lt;a href="https://www.cloudskillsboost.google/journeys/118"&gt;Generative AI
Learning Path&lt;/a&gt; in Google Cloud
Skills Boost. It currently consists of 10 courses and provides a good foundation
on the theory behind Generative AI and what tools and services Google provides in
GenAI. The best part is that it’s completely free!&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/genai_learningpath_part1.png" alt="GenAI Learning Path" /&gt;
 
 &lt;figcaption&gt;GenAI Learning Path&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As I went through these courses myself, I took notes, as I learn best when I
write things down. In this part 1 of the blog series, I want to share my notes
for courses 1 to 6, in case you want to quickly read summaries of these courses.&lt;/p&gt;</description></item><item><title>New Batch connector for Workflows</title><link>https://atamel.dev/posts/2023/05-30_workflows_batch_connector/</link><pubDate>Tue, 30 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-30_workflows_batch_connector/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/batch-workflows.png" alt="Batch and Workflows" /&gt;
 
 &lt;figcaption&gt;Batch and Workflows&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt; just released a new
&lt;a href="https://cloud.google.com/workflows/docs/reference/googleapis/batch/Overview"&gt;connector&lt;/a&gt;
for &lt;a href="https://cloud.google.com/batch"&gt;Batch&lt;/a&gt; that greatly simplifies how to
create and run Batch jobs from Workflows. Let&amp;rsquo;s take a look at how you can use the
new Batch connector of Workflows.&lt;/p&gt;
&lt;h2 id="recap-batch-and-workflows"&gt;Recap: Batch and Workflows&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/batch"&gt;Batch&lt;/a&gt; is a fully managed service to schedule,
queue, and execute batch jobs on Google&amp;rsquo;s infrastructure. These batch jobs run
on Compute Engine VM instances, but they are managed by the Batch service, so you don&amp;rsquo;t
have to provision and manage VM instances yourself.&lt;/p&gt;</description></item><item><title>Google Cloud Pub/Sub + AsyncAPI</title><link>https://atamel.dev/posts/2023/05-25_asyncapi_googlepubsub/</link><pubDate>Thu, 25 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-25_asyncapi_googlepubsub/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/pubsub_asyncapi.png" alt="Cloud Pub/Sub and AsyncAPI" /&gt;
 
 &lt;figcaption&gt;Cloud Pub/Sub and AsyncAPI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been covering different aspects of &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;
in my recent blog posts. In this final post of my AsyncAPI blog post series, I
want to talk about how to document Google Cloud&amp;rsquo;s Pub/Sub using AsyncAPI.&lt;/p&gt;
&lt;p&gt;AsyncAPI has pretty good support for Google Pub/Sub, thanks to contributions
from &lt;a href="https://twitter.com/whitlockjc"&gt;Jeremy Whitlock&lt;/a&gt;, an engineer from Google,
and the flexibility baked into the AsyncAPI spec. Jeremy also has a nice &lt;a href="https://discuss.thiswith.me/posts/documenting-cloud-pubsub-using-asyncapi/#server-object"&gt;blog
post&lt;/a&gt;
on this topic that you can read for more details.&lt;/p&gt;</description></item><item><title>CloudEvents + AsyncAPI</title><link>https://atamel.dev/posts/2023/05-23_asyncapi_cloudevents/</link><pubDate>Tue, 23 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-23_asyncapi_cloudevents/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2023/cloudevents_asyncapi.png" alt="CloudEvents and AsyncAPI" /&gt;
 
 &lt;figcaption&gt;CloudEvents and AsyncAPI&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been recently talking about &lt;a href="https://cloudevents.io/"&gt;CloudEvents&lt;/a&gt; and
&lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;,two of my favorite open-source
specifications for event-driven architectures. In this blog post, I want to talk
about how you can use CloudEvents and AsyncAPI together. More specifically, I&amp;rsquo;ll
show you how to document CloudEvents enabled services using AsyncAPI, thanks to
the flexibility and openness of both projects.&lt;/p&gt;
&lt;h2 id="recap-cloudevents-and-asyncapi"&gt;Recap: CloudEvents and AsyncAPI&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first do a quick recap of CloudEvents and AsyncAPI.&lt;/p&gt;</description></item><item><title>Understanding AsyncAPI's publish &amp; subscribe semantics with an example</title><link>https://atamel.dev/posts/2023/05-18_asyncapi_publishsubscribe_refactor/</link><pubDate>Thu, 18 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-18_asyncapi_publishsubscribe_refactor/</guid><description>&lt;p&gt;In &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;, a channel can have a &lt;code&gt;publish&lt;/code&gt; and
&lt;code&gt;subscribe&lt;/code&gt; operation. This can be confusing, depending on which perspective
you&amp;rsquo;re considering (server vs. user) and what you&amp;rsquo;re comparing against (e.g.
WebSocket).&lt;/p&gt;
&lt;p&gt;In this blog post, I want to go through an example to show you how to construct
your AsyncAPI file with the right &lt;code&gt;publish&lt;/code&gt; and &lt;code&gt;subscribe&lt;/code&gt; semantics. As a
bonus, I also show you how to refactor your AsyncAPI files with common
configuration.&lt;/p&gt;</description></item><item><title>AsyncAPI Tools</title><link>https://atamel.dev/posts/2023/05-16_asyncapi_tools/</link><pubDate>Tue, 16 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-16_asyncapi_tools/</guid><description>&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2023/05-12_asyncapi_basics/"&gt;previous post&lt;/a&gt;, I
talked about the basics of &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;. In this post, I
want to get into more details on some tools around AsyncAPI.&lt;/p&gt;
&lt;p&gt;More specifically, we&amp;rsquo;ll install the AsyncAPI CLI and Generator, generate a sample AsyncAPI
definition, visualize it in AsyncAPI Studio, and generate code from it. You&amp;rsquo;ll see
how useful AsyncAPI can be in documenting and maintaining your event-driven architectures.&lt;/p&gt;
&lt;h2 id="install-asyncapi-tools"&gt;Install AsyncAPI tools&lt;/h2&gt;
&lt;p&gt;First, let&amp;rsquo;s install some of the AsyncAPI tools.&lt;/p&gt;</description></item><item><title>AsyncAPI Basics</title><link>https://atamel.dev/posts/2023/05-12_asyncapi_basics/</link><pubDate>Fri, 12 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-12_asyncapi_basics/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://avatars.githubusercontent.com/u/16401334?s=200&amp;amp;v=4" alt="AsyncAPI Logo" /&gt;
 
 &lt;figcaption&gt;AsyncAPI Logo&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Recently, I’ve been looking into &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;, an
open-source specification and set of tools for documenting and maintaining event-driven
architectures (EDAs).&lt;/p&gt;
&lt;p&gt;In this blog post, I want to summarize the basics of AsyncAPI and point to some
useful links to learn more. In future blog posts, I&amp;rsquo;ll get into more details of
AsyncAPI.&lt;/p&gt;
&lt;h2 id="asyncapi-why-what"&gt;AsyncAPI: Why? What?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt; is an open source initiative with the goal
of making event-driven APIs as easy as REST APIs. Fundamentally, it is a
&lt;strong&gt;specification&lt;/strong&gt; to define asynchronous APIs, similar to what
&lt;a href="https://www.openapis.org/"&gt;OpenAPI&lt;/a&gt; (aka Swagger) does for REST APIs.&lt;/p&gt;</description></item><item><title>Buffer HTTP requests with Cloud Tasks</title><link>https://atamel.dev/posts/2023/05-03_buffer_http_requests_cloudtasks/</link><pubDate>Thu, 04 May 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/05-03_buffer_http_requests_cloudtasks/</guid><description>&lt;p&gt;&lt;a href="https://cloud.google.com/tasks"&gt;Cloud Tasks&lt;/a&gt; is a fully-managed service that manages the execution, dispatch, and asynchronous delivery of a large number of tasks to App Engine or any arbitrary HTTP endpoint. You can also use a Cloud Tasks queue to buffer requests between services for more robust intra-service communication.  &lt;/p&gt;
&lt;p&gt;Cloud Tasks introduces two new features: queue-level routing
configuration and the BufferTask API. Together, they enable creating HTTP tasks and
adding to a queue without needing the tasks client library.&lt;/p&gt;</description></item><item><title>Workflows gets an updated JSON Schema</title><link>https://atamel.dev/posts/2023/04-11_workflows_json_schema/</link><pubDate>Tue, 11 Apr 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/04-11_workflows_json_schema/</guid><description>&lt;p&gt;If you use &lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt;, you&amp;rsquo;ve been crafting your Workflows definitions in YAML (or JSON). You&amp;rsquo;re probably painfully aware of the limited support you get in your IDE with syntax validation or auto-completion with these YAML definitions. This was due to Workflow&amp;rsquo;s schema being out of date, as I covered in &lt;a href="https://medium.com/google-cloud/auto-completion-for-workflows-json-and-yaml-on-visual-studio-code-875a5e878d5e"&gt;my previous post&lt;/a&gt; last year.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m happy to report that our team recently updated the Workflows schema with the latest syntax. With a new schema, you get a much improved syntax validation and auto-completion of Workflows definition files in your favorite IDE. &lt;/p&gt;</description></item><item><title>CloudEvents Basics</title><link>https://atamel.dev/posts/2023/04-03_cloudevents_basics/</link><pubDate>Mon, 03 Apr 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/04-03_cloudevents_basics/</guid><description>&lt;p&gt;I talked about CloudEvents in the context of event-driven architectures before.
In this post, let&amp;rsquo;s explore CloudEvents in more depth.&lt;/p&gt;
&lt;h2 id="cloudevents-why-what"&gt;CloudEvents: Why? What?&lt;/h2&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://avatars.githubusercontent.com/u/32076828?s=200&amp;amp;v=4" alt="CloudEvents Logo" /&gt;
 
 &lt;figcaption&gt;CloudEvents Logo&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cloudevents.io/"&gt;CloudEvents&lt;/a&gt; is a popular specification for describing
event data in a common way with the goal of increasing interoperability between
different event systems.&lt;/p&gt;
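&lt;p&gt;As a minimal sketch, a CloudEvent in JSON format looks roughly like this (the &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;source&lt;/code&gt;, and &lt;code&gt;data&lt;/code&gt; values are illustrative; &lt;code&gt;specversion&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt;, &lt;code&gt;source&lt;/code&gt;, and &lt;code&gt;id&lt;/code&gt; are the required context attributes):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-json" data-lang="json"&gt;{
  "specversion": "1.0",
  "type": "com.example.object.created",
  "source": "/my/service",
  "id": "1234-5678",
  "datacontenttype": "application/json",
  "data": { "message": "Hello" }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;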
&lt;p&gt;Google Cloud&amp;rsquo;s &lt;a href="https://cloud.google.com/eventarc/docs"&gt;Eventarc&lt;/a&gt;, open-source
&lt;a href="https://knative.dev/docs/"&gt;Knative&lt;/a&gt;, Azure&amp;rsquo;s Event Grid, and many more projects
rely on the CloudEvents specification to define their event formats.&lt;/p&gt;</description></item><item><title>Extending Cloud Code with custom templates</title><link>https://atamel.dev/posts/2023/03-18_extending_cloudcode_with_custom_templates/</link><pubDate>Sat, 18 Mar 2023 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2023/03-18_extending_cloudcode_with_custom_templates/</guid><description>&lt;p&gt;&lt;a href="https://cloud.google.com/code"&gt;Cloud Code&lt;/a&gt; is a set of plugins for popular IDEs that make it easier to create, deploy, and integrate applications with Google Cloud. Cloud Code provides an excellent extension mechanism through custom templates. In this post, I show you how you can create and use your own custom templates to add some features beyond those supported natively in Cloud Code, such as .NET functions, event-triggered functions, and more.&lt;/p&gt;
adopters, and iterate in a quick cycle. Anything that takes time away from
developing and iterating on features delays the launch, and that’s a serious
problem when time-to-market is crucial.&lt;/p&gt;
&lt;p&gt;Google Cloud offers products that can help you to build and run your backend
services on a fully managed serverless platform, saving time and freeing you
from the burden of provisioning and managing infrastructure needed to run those
services. Less time on infrastructure means more time implementing and iterating
on your services. Additionally, most serverless products have a pay-as-you-go
pricing model with little upfront cost at the beginning. Pricing increases in
line with the scale of your services and your startup, so you only pay for what
you need, when you need it.&lt;/p&gt;</description></item><item><title>Introducing Cloud Functions support in Cloud Code</title><link>https://atamel.dev/posts/2022/12-12_introduce_functions_in_cloud_code/</link><pubDate>Mon, 12 Dec 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/12-12_introduce_functions_in_cloud_code/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2022/cloudfunctionsincloudcode.png" alt="Cloud Code now supports Cloud Functions" /&gt;
 
 &lt;figcaption&gt;Cloud Code now supports Cloud Functions&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/code"&gt;Cloud Code&lt;/a&gt; has been providing IDE support for
the development cycle of Kubernetes and Cloud Run applications for a while now.
&lt;strong&gt;I’m happy to
&lt;a href="https://github.com/GoogleCloudPlatform/cloud-code-vscode/blob/master/CHANGELOG.md#version-1210-dec-2022"&gt;report&lt;/a&gt;
that the Dec 2022 version (1.21.0) of Cloud Code now supports Cloud Functions!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this first release of Cloud Functions support, you can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use the Cloud Functions Explorer to view your project&amp;rsquo;s Cloud Functions
properties and source code.&lt;/li&gt;
&lt;li&gt;Download your Cloud Functions to edit your code locally, then configure your
local workspace to deploy those changes directly from Cloud Code.&lt;/li&gt;
&lt;li&gt;Invoke your HTTP-triggered functions from VS Code.&lt;/li&gt;
&lt;li&gt;Use the Cloud Code Logs Viewer to view logs from your Cloud Functions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s take a closer look.&lt;/p&gt;</description></item><item><title>Workflows patterns and best practices - Part 3</title><link>https://atamel.dev/posts/2022/12-07_workflows-patterns-and-best-practices-part-3/</link><pubDate>Mon, 05 Dec 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/12-07_workflows-patterns-and-best-practices-part-3/</guid><description>&lt;p&gt;This is a three-part series of posts, in which we summarize Workflows and
service orchestration patterns. In this third and final post, we talk about
managing workflow life cycles and the benefits of using Firestore with
Workflows.&lt;/p&gt;
&lt;p&gt;If you’re not careful, the workflow definitions you create with YAML or JSON can get out of hand pretty quickly. While it is possible to use subworkflows to define snippets of a workflow that can be reused from multiple workflows, Workflows does not support importing these subworkflows. Thankfully, there are other tools, such as Terraform, that can help.&lt;/p&gt;</description></item><item><title>Workflows patterns and best practices - Part 2</title><link>https://atamel.dev/posts/2022/11-28_workflows-patterns-and-best-practices-part-2/</link><pubDate>Mon, 28 Nov 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/11-28_workflows-patterns-and-best-practices-part-2/</guid><description>&lt;p&gt;This is part 2 of a three-part series of posts, in which we summarize Workflows and service orchestration patterns. You can apply these patterns to better take advantage of Workflows and service orchestration on Google Cloud.&lt;/p&gt;
&lt;p&gt;In the first post, we introduced some general tips and tricks, as well as patterns for event-driven orchestrations, parallel steps, and connectors. This second post covers more advanced patterns.&lt;/p&gt;
&lt;p&gt;Let’s dive in!&lt;/p&gt;
&lt;h2 id="design-for-resiliency-with-retries-and-the-saga-pattern"&gt;Design for resiliency with retries and the saga pattern&lt;/h2&gt;
&lt;p&gt;
It’s easy to put together a workflow that chains a series of services, especially if you assume that those services will never fail. Assuming so is a common distributed systems fallacy, however: a service will, of course, fail at some point. The workflow step calling that service will fail, and then the whole workflow will fail. This is not what you want to see in a resilient architecture. Thankfully, Workflows has building blocks to handle both transient and permanent service failures.&lt;/p&gt;</description></item><item><title>Workflows patterns and best practices - Part 1</title><link>https://atamel.dev/posts/2022/11-22_workflows-patterns-and-best-practices-part-1/</link><pubDate>Tue, 22 Nov 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/11-22_workflows-patterns-and-best-practices-part-1/</guid><description>&lt;p&gt;For the last couple of years, we’ve been using Workflows, Google Cloud’s service orchestrator, to bring order to our serverless microservices architectures. As we used and gained more experience with Workflows and service orchestration, we shared what we had learned in conference talks, blog posts, samples, and tutorials. Along the way, some common patterns and best practices emerged.&lt;/p&gt;
&lt;p&gt;To help you take better advantage of Workflows and service orchestration on Google Cloud, we’ve summarized these proven patterns and best practices in a three-part series of blog posts.&lt;/p&gt;</description></item><item><title>.NET 7 on Cloud Run</title><link>https://atamel.dev/posts/2022/11-11_dotnet7_cloud_run/</link><pubDate>Fri, 11 Nov 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/11-11_dotnet7_cloud_run/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2022/dotnet7oncloudrun.png" alt=".NET 7 on Cloud Run" /&gt;
 
 &lt;figcaption&gt;.NET 7 on Cloud Run&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;.NET 7 was
&lt;a href="https://devblogs.microsoft.com/dotnet/announcing-dotnet-7/"&gt;released&lt;/a&gt; a few
days ago with new features and performance improvements and it&amp;rsquo;s already
supported on Cloud Run on Google Cloud!&lt;/p&gt;
&lt;p&gt;In this short blog post, I show you how to deploy a .NET 7 web app to Cloud Run.&lt;/p&gt;
&lt;h2 id="create-a-net-7-web-app"&gt;Create a .NET 7 web app&lt;/h2&gt;
&lt;p&gt;First, make sure you&amp;rsquo;re on .NET 7:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;dotnet --version
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;7.0.100
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Create a simple web app:&lt;/p&gt;</description></item><item><title>Executing commands (gcloud, kubectl) from Workflows</title><link>https://atamel.dev/posts/2022/10-17_executing_commands_from_workflows/</link><pubDate>Mon, 17 Oct 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/10-17_executing_commands_from_workflows/</guid><description>&lt;p&gt;In a &lt;a href="https://cloud.google.com/blog/topics/developers-practitioners/introducing-new-connectors-workflows"&gt;previous
post&lt;/a&gt;,
I showed how to manage the lifecycle of a virtual machine using Workflows and
the Compute Engine connector. This works well when there’s a connector for the
resource you’re trying to manage. When there is no connector, you can try to use
the API of the resource from Workflows, if there’s one. Alternatively, you can
also use my favorite command line tool to manage the resource: &lt;code&gt;gcloud&lt;/code&gt;. Or if
you’re managing a Kubernetes cluster, maybe you want to call &lt;code&gt;kubectl&lt;/code&gt; instead.&lt;/p&gt;</description></item><item><title>Workflows that pause and wait for human approvals from Google Sheets</title><link>https://atamel.dev/posts/2022/10-10_workflows_wait_approvals_sheets/</link><pubDate>Mon, 10 Oct 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/10-10_workflows_wait_approvals_sheets/</guid><description>&lt;p&gt;I’ve been writing a series of posts to showcase &lt;a href="https://workspace.google.com/"&gt;Google
Workspace&lt;/a&gt; and Google Cloud
&lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt; integration.&lt;/p&gt;
&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2022/09-09_trigger_workflows_from_sheets/"&gt;first
post&lt;/a&gt;, I
showed an IT automation use case in which a Google Sheets spreadsheet triggers a
workflow to create virtual machines in Google Cloud. In the &lt;a href="https://atamel.dev/posts/2022/09-26_writing_google_sheets_from_workflows/"&gt;second
post&lt;/a&gt;,
I showed how to feed a Google Sheets spreadsheet with data from BigQuery using a
workflow.&lt;/p&gt;
&lt;p&gt;In this third and final post of the series, I show how to design a workflow that
pauses and waits for human approvals from Google Sheets.&lt;/p&gt;</description></item><item><title>.NET 6 on Cloud Functions (2nd gen)</title><link>https://atamel.dev/posts/2022/10-03_dotnet6_cloud_functions/</link><pubDate>Mon, 03 Oct 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/10-03_dotnet6_cloud_functions/</guid><description>&lt;p&gt;Back in August, we
&lt;a href="https://cloud.google.com/blog/products/serverless/cloud-functions-2nd-generation-now-generally-available"&gt;announced&lt;/a&gt;
the 2nd generation of Cloud Functions with longer request processing times,
larger instances, new event sources with Eventarc, and more.&lt;/p&gt;
&lt;p&gt;A few weeks ago, .NET 6 support (public preview) was silently added to Cloud
Functions.&lt;/p&gt;
&lt;p&gt;Let’s see how to deploy some .NET 6 functions to Cloud Functions 2nd gen.&lt;/p&gt;
&lt;h2 id="functions-framework-for-net"&gt;Functions Framework for .NET&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/GoogleCloudPlatform/functions-framework-dotnet"&gt;Functions Framework for
.NET&lt;/a&gt; is the
easiest way to create .NET functions for consuming HTTP or CloudEvent requests.&lt;/p&gt;</description></item><item><title>Writing to Google Sheets from Workflows</title><link>https://atamel.dev/posts/2022/09-26_writing_google_sheets_from_workflows/</link><pubDate>Mon, 26 Sep 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/09-26_writing_google_sheets_from_workflows/</guid><description>&lt;p&gt;In my previous
&lt;a href="https://atamel.dev/posts/2022/09-09_trigger_workflows_from_sheets/"&gt;post&lt;/a&gt;, I
showed how to trigger a workflow in Google Cloud from a Google Sheets
spreadsheet using Apps Script. In this post, I show how to do the reverse: write
to Google Sheets from a workflow in Google Cloud.&lt;/p&gt;
&lt;h2 id="use-case"&gt;Use case&lt;/h2&gt;
&lt;p&gt;Imagine you have some dataset in BigQuery. Periodically, you want to query and
extract a subset of the dataset and save it to a Google Sheets spreadsheet. You
can implement such a process with Workflows quite easily.&lt;/p&gt;</description></item><item><title>Multi-environment service orchestrations</title><link>https://atamel.dev/posts/2022/09-20_multi-environment-service-orchestrations/</link><pubDate>Tue, 20 Sep 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/09-20_multi-environment-service-orchestrations/</guid><description>&lt;p&gt;In a previous post, I showed how to use a GitOps approach to manage the deployment lifecycle of a service orchestration. This approach makes it easy to deploy changes to a workflow in a staging environment, run tests against it, and gradually roll out these changes to the production environment.&lt;/p&gt;
&lt;p&gt;While GitOps helps to manage the deployment lifecycle, it’s not enough. Sometimes, you need to make changes to the workflow before deploying to different environments. You need to design workflows with multiple environments in mind.&lt;/p&gt;</description></item><item><title>GitOps your service orchestrations</title><link>https://atamel.dev/posts/2022/09-16_gitsops-service-orchestration/</link><pubDate>Fri, 16 Sep 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/09-16_gitsops-service-orchestration/</guid><description>&lt;p&gt;GitOps takes DevOps best practices used for application development (such as version control and CI/CD) and applies them to infrastructure automation. In GitOps, the Git repository serves as the source of truth and the CD pipeline is responsible for building, testing, and deploying the application code and the underlying infrastructure.&lt;/p&gt;
&lt;p&gt;Nowadays, an application is not just code running on infrastructure that you own and operate. It is usually a set of first-party and third-party microservices working together in an event-driven architecture or with a central service orchestrator such as Workflows.&lt;/p&gt;</description></item><item><title>Triggering Workflows from Google Sheets</title><link>https://atamel.dev/posts/2022/09-09_trigger_workflows_from_sheets/</link><pubDate>Fri, 09 Sep 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/09-09_trigger_workflows_from_sheets/</guid><description>&lt;p&gt;Is it possible to integrate &lt;a href="https://workspace.google.com/"&gt;Google Workspace&lt;/a&gt;
tools such as Calendar, Sheets, and Forms with
&lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt;? For example, can you trigger a
workflow from a Google Form or a Sheet? Turns out, this is not only possible but
also easier than you might think. Let me show you how with a sample use case.&lt;/p&gt;
&lt;h2 id="use-case"&gt;Use case&lt;/h2&gt;
&lt;p&gt;Imagine you are an administrator in charge of allocating virtual machines (VM)
in your cloud infrastructure to users. You want to capture user requests with
the specifications for the VMs, have an approval step for the request, and then
create the VM with an automated process.&lt;/p&gt;</description></item><item><title>Route Datadog monitoring alerts to Google Cloud with Eventarc</title><link>https://atamel.dev/posts/2022/08-24_route-datadog-monitoring-alerts-google-cloud-eventarc/</link><pubDate>Wed, 24 Aug 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/08-24_route-datadog-monitoring-alerts-google-cloud-eventarc/</guid><description>&lt;p&gt;A few weeks ago, we announced third-party event sources for Eventarc, with the first cohort of third-party providers by our ecosystem partners. This blog post describes how to listen for events from one of those third-party providers, Datadog, and route them to a service in Google Cloud via Eventarc.&lt;/p&gt;
&lt;p&gt;Datadog is a monitoring platform for cloud applications. It brings together end-to-end traces, metrics, and logs to make your applications and infrastructure observable. In Datadog, you can create monitors that track metrics, events, logs, integration availability, and network endpoints, and you can set alert thresholds on those monitors. Once these thresholds are reached, Datadog notifies your teams or services via email, Slack, and now a Google Cloud service via Eventarc.&lt;/p&gt;</description></item><item><title>Creating Workflows that pause and wait for events</title><link>https://atamel.dev/posts/2022/08-05_workflows_that_pause_for_events/</link><pubDate>Fri, 05 Aug 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/08-05_workflows_that_pause_for_events/</guid><description>&lt;p&gt;In &lt;a href="https://cloud.google.com/workflows"&gt;Workflows&lt;/a&gt;, it’s easy to chain various
services together into an automated workflow. For some use cases, you might need
to pause workflow execution and wait for some input. This input could be a human
approval or an external service calling back with data needed to complete the
workflow.&lt;/p&gt;
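&lt;p&gt;As a rough sketch (step and variable names are mine, not from the post), a workflow can create a callback endpoint and then block until that endpoint receives an HTTP request:&lt;/p&gt;

```yaml
# Create an HTTP callback endpoint, then pause until it is called.
- create_callback:
    call: events.create_callback_endpoint
    args:
      http_callback_method: "POST"
    result: callback_details
- log_url:
    call: sys.log
    args:
      # Share this URL with the human approver or external service.
      text: ${"Callback URL: " + callback_details.url}
- await_callback:
    call: events.await_callback
    args:
      callback: ${callback_details}
      timeout: 3600  # wait up to one hour
    result: callback_request
```

&lt;p&gt;The execution pauses at &lt;code&gt;await_callback&lt;/code&gt; and resumes when the endpoint is invoked, with details of the received request available in &lt;code&gt;callback_request&lt;/code&gt;.&lt;/p&gt;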
&lt;p&gt;With Workflows
&lt;a href="https://cloud.google.com/workflows/docs/creating-callback-endpoints"&gt;callbacks&lt;/a&gt;,
a workflow can create an HTTP endpoint and pause execution until it receives an
HTTP callback to that endpoint. This is very useful for creating
human-in-the-middle type workflows. In a previous &lt;a href="https://cloud.google.com/blog/topics/developers-practitioners/introducing-workflows-callbacks"&gt;blog
post&lt;/a&gt;,
&lt;a href="https://twitter.com/glaforge"&gt;Guillaume Laforge&lt;/a&gt; showed how to build an
automated translation workflow with human validation using callbacks.&lt;/p&gt;</description></item><item><title>Trip report - Eight in-person conferences in eight weeks</title><link>https://atamel.dev/posts/2022/07-15_trip_report_eight_conferences_eight_weeks/</link><pubDate>Fri, 15 Jul 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/07-15_trip_report_eight_conferences_eight_weeks/</guid><description>&lt;h2 id="careful-what-you-wish-for"&gt;Careful what you wish for&lt;/h2&gt;
&lt;p&gt;My last in-person conference before the pandemic was &lt;a href="https://ndclondon.com/"&gt;NDC
London&lt;/a&gt; back in January 2020. I distinctly remember
being quite tired at the beginning of 2020. For me, 2019 had been a long year
with 50+ speaking engagements at a variety of conferences all around the world.
I just wanted to take a short break in early 2020.&lt;/p&gt;
&lt;p&gt;Little did I know that the short break would turn into a 2.5-year break due
to a global pandemic. I’m extremely grateful that I was able to continue working
and speaking in many online conferences during the pandemic. However, as the
pandemic dragged on, my enthusiasm for online events diminished. I sorely missed
connecting with other speakers, meeting attendees, and exploring the city I was
speaking in.&lt;/p&gt;</description></item><item><title>Taking screenshots of web pages with Cloud Run jobs, Workflows, and Eventarc</title><link>https://atamel.dev/posts/2022/06-16_taking-screenshots-web-pages-cloud-run-jobs-workflows-and-eventarc/</link><pubDate>Thu, 16 Jun 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/06-16_taking-screenshots-web-pages-cloud-run-jobs-workflows-and-eventarc/</guid><description>&lt;p&gt;At Google Cloud I/O, we announced the public preview of Cloud Run jobs. Unlike Cloud Run services that run continuously to respond to web requests or events, Cloud Run jobs run code that performs some work and quits when the work is done. Cloud Run jobs are a good fit for administrative tasks such as database migration, scheduled work like nightly reports, or doing batch data transformation.&lt;/p&gt;
&lt;p&gt;In this post, I show you a fully serverless, event-driven application to take screenshots of web pages, powered by Cloud Run jobs, Workflows, and Eventarc.&lt;/p&gt;</description></item><item><title>Workflows state management with Firestore</title><link>https://atamel.dev/posts/2022/04-08_workflows_state_firestore/</link><pubDate>Fri, 08 Apr 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/04-08_workflows_state_firestore/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2022/workflows-firestore.png" alt="Workflows Firestore" /&gt;
 
 &lt;figcaption&gt;Workflows Firestore&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Sometimes in Workflows, you need to store some state (a key/value pair) in a
step of one execution and later read that state in a step of another execution.
There&amp;rsquo;s no intrinsic key/value store in Workflows, but you can use
Firestore as a key/value store, and that&amp;rsquo;s what I want to show you here.&lt;/p&gt;
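&lt;p&gt;As an illustrative sketch (the document path, field names, and the &lt;code&gt;project_id&lt;/code&gt; variable are my own, not from the post), one step can write a key/value pair with the Firestore connector and a later execution can read it back:&lt;/p&gt;

```yaml
# Assumes project_id was assigned in an earlier step.
# Write a value into a well-known document, keyed by field name.
- write_state:
    call: googleapis.firestore.v1.projects.databases.documents.patch
    args:
      name: ${"projects/" + project_id + "/databases/(default)/documents/workflow-state/shared"}
      updateMask:
        fieldPaths: ["lastRun"]
      body:
        fields:
          lastRun:
            stringValue: ${time.format(sys.now())}
# In another execution, read the value back.
- read_state:
    call: googleapis.firestore.v1.projects.databases.documents.get
    args:
      name: ${"projects/" + project_id + "/databases/(default)/documents/workflow-state/shared"}
    result: state_doc
- use_value:
    assign:
      - last_run: ${state_doc.fields.lastRun.stringValue}
```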
&lt;p&gt;If you want to skip to see some samples, check out
&lt;a href="https://github.com/GoogleCloudPlatform/workflows-demos/blob/master/state-management-firestore/workflow.yaml"&gt;workflow.yaml&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Introducing Eventarc triggers for Workflows</title><link>https://atamel.dev/posts/2022/04-01_introducing-eventarc-triggers-workflows/</link><pubDate>Fri, 01 Apr 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/04-01_introducing-eventarc-triggers-workflows/</guid><description>&lt;p&gt;We’re happy to announce that you can now create Eventarc triggers to directly target Workflows destinations. Available as a preview feature, it simplifies event-driven orchestrations by enabling you to route Eventarc events to Workflows without having an intermediary service.&lt;/p&gt;
&lt;h2 id="integrating-eventarc-and-workflows"&gt;Integrating Eventarc and Workflows&lt;/h2&gt;
&lt;p&gt;In a previous post, we talked about how to integrate Eventarc and Workflows. Since there was no direct integration between the two services, we had to deploy an intermediary Cloud Run service to receive events from Eventarc and then use the Workflows API to kick off a Workflows execution. Let’s take a look at a concrete example of how this worked.&lt;/p&gt;</description></item><item><title>Creating Eventarc triggers with Terraform</title><link>https://atamel.dev/posts/2022/03-18_creating-eventarc-triggers-terraform/</link><pubDate>Fri, 18 Mar 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/03-18_creating-eventarc-triggers-terraform/</guid><description>&lt;p&gt;Terraform is increasingly the preferred tool for building, changing, and versioning infrastructure in Google Cloud and across clouds. In an earlier post, I showed how to create Eventarc triggers using Google Cloud Console or via the command line with gcloud. In this post, I show how to create the same triggers with the google_eventarc_trigger Terraform resource.&lt;/p&gt;
&lt;p&gt;See eventarc-samples/terraform on GitHub for the prerequisites and main.tf for
the full Terraform configuration.&lt;/p&gt;
&lt;h2 id="define-a-cloud-run-service-as-an-event-sink"&gt;Define a Cloud Run service as an event sink&lt;/h2&gt;
&lt;p&gt;Before you can create a trigger, you need to create a Cloud Run service as an event sink for the trigger. You can use Terraform’s google_cloud_run_service resource to define a Cloud Run service:&lt;/p&gt;</description></item><item><title>Applying a path pattern when filtering in Eventarc</title><link>https://atamel.dev/posts/2022/03-02_path_patterns_eventarc/</link><pubDate>Wed, 02 Mar 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/03-02_path_patterns_eventarc/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2022/Eventarc-128-color.png" alt="Eventarc" /&gt;
 
 &lt;figcaption&gt;Eventarc&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;You can now apply a &lt;a href="https://cloud.google.com/eventarc/docs/path-patterns"&gt;path
pattern&lt;/a&gt; when filtering in
Eventarc. This is especially useful when you need to filter on &lt;a href="https://cloud.google.com/apis/design/resource_names"&gt;resource
names&lt;/a&gt; beyond exact match.
Path pattern syntax allows you to define a regex-like expression that matches
events as broadly as you like.&lt;/p&gt;
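&lt;p&gt;For instance (resource, service, and bucket names here are illustrative, not from the post), a Terraform-defined AuditLog trigger could use the &lt;code&gt;match-path-pattern&lt;/code&gt; operator to match any object created in a given bucket:&lt;/p&gt;

```hcl
resource "google_eventarc_trigger" "new_file_trigger" {
  name     = "new-file-trigger"
  location = "us-central1"

  matching_criteria {
    attribute = "type"
    value     = "google.cloud.audit.log.v1.written"
  }
  matching_criteria {
    attribute = "serviceName"
    value     = "storage.googleapis.com"
  }
  matching_criteria {
    attribute = "methodName"
    value     = "storage.objects.create"
  }
  # Path pattern: match any object in my-bucket, not just one exact resource name.
  matching_criteria {
    attribute = "resourceName"
    operator  = "match-path-pattern"
    value     = "/projects/_/buckets/my-bucket/objects/*"
  }

  destination {
    cloud_run_service {
      service = "hello"
      region  = "us-central1"
    }
  }

  service_account = "trigger-sa@my-project.iam.gserviceaccount.com"
}
```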
&lt;p&gt;Let&amp;rsquo;s take a look at a concrete example.&lt;/p&gt;
&lt;h2 id="without-path-patterns"&gt;Without path patterns&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s say you want to listen for new file creations in a Cloud Storage bucket
with an AuditLog trigger.&lt;/p&gt;</description></item><item><title>Building APIs with Cloud Functions and API Gateway</title><link>https://atamel.dev/posts/2022/02-10_cloud_functions_api_gateway/</link><pubDate>Thu, 10 Feb 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/02-10_cloud_functions_api_gateway/</guid><description>&lt;h2 id="building-apis-with-cloud-run"&gt;Building APIs with Cloud Run&lt;/h2&gt;
&lt;p&gt;If I want to build an API, I usually use Cloud Run. In Cloud Run, you run a
container and in that container, you run a web server that handles a base URL in
this format:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://&amp;lt;service-name&amp;gt;-&amp;lt;hash&amp;gt;-&amp;lt;region&amp;gt;.a.run.app
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You can then have the web server handle any path under that base URL such as:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://&amp;lt;service-name&amp;gt;-&amp;lt;hash&amp;gt;-&amp;lt;region&amp;gt;.a.run.app/hello
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;https://&amp;lt;service-name&amp;gt;-&amp;lt;hash&amp;gt;-&amp;lt;region&amp;gt;.a.run.app/bye
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="building-apis-with-cloud-functions"&gt;Building APIs with Cloud Functions&lt;/h2&gt;
&lt;p&gt;In Cloud Functions, you only have access to a function (no web server) and that
function can only handle the base path:&lt;/p&gt;</description></item><item><title>Long-running containers with Workflows and Compute Engine</title><link>https://atamel.dev/posts/2022/02-09_long-running-containers-workflows-and-compute-engine/</link><pubDate>Wed, 09 Feb 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/02-09_long-running-containers-workflows-and-compute-engine/</guid><description>&lt;p&gt;Sometimes, you need to run a piece of code for hours, days, or even weeks. Cloud Functions and Cloud Run are my default choices to run code. However, they both have limitations on how long a function or container can run. This rules out the idea of executing long-running code in a serverless way.&lt;/p&gt;
&lt;p&gt;Thanks to Workflows and Compute Engine, you can have an almost serverless experience with long running code.&lt;/p&gt;</description></item><item><title>Implementing the saga pattern in Workflows</title><link>https://atamel.dev/posts/2022/02-05_implementing-saga-pattern-workflows/</link><pubDate>Sat, 05 Feb 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/02-05_implementing-saga-pattern-workflows/</guid><description>&lt;p&gt;It’s common to have a separate database for each service in microservices-based architectures. This pattern ensures that the independently designed and deployed microservices remain independent in the data layer as well. But it also introduces a new problem: How do you implement transactions (single units of work, usually made up of multiple operations) that span multiple microservices each with their own local database?&lt;/p&gt;
&lt;p&gt;In a traditional monolith architecture, you can rely on ACID transactions (atomicity, consistency, isolation, durability) against a single database. In a microservices architecture, ensuring data consistency across multiple service-specific databases becomes more challenging. You cannot simply rely on local transactions. You need a cross-service transaction strategy. That’s where the saga pattern comes into play.&lt;/p&gt;</description></item><item><title>.NET 6 support in Google Cloud Buildpacks and Cloud Run</title><link>https://atamel.dev/posts/2022/02-04_dotnet6_buildpacks_cloudrun/</link><pubDate>Fri, 04 Feb 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/02-04_dotnet6_buildpacks_cloudrun/</guid><description>&lt;p&gt;I really like the idea of containers, a reproducible context that our apps can
rely on. However, I really dislike having to create a &lt;code&gt;Dockerfile&lt;/code&gt; to define my
container images. I always need to copy &amp;amp; paste it from somewhere and it takes me
some time to get it right.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s why I like using &lt;a href="https://github.com/GoogleCloudPlatform/buildpacks"&gt;Google Cloud
Buildpacks&lt;/a&gt; to build
containers without having to worry about a &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I&amp;rsquo;m happy to announce that, as of yesterday, Google Cloud Buildpacks now
supports .NET 6 (the latest LTS version) apps as well!&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Auto-completion for Workflows JSON and YAML on Visual Studio Code</title><link>https://atamel.dev/posts/2022/01-28_auto_complete_workflows_vscode/</link><pubDate>Fri, 28 Jan 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/01-28_auto_complete_workflows_vscode/</guid><description>&lt;p&gt;If you&amp;rsquo;re like me, you probably use VS Code to author your Workflows JSON or
YAML. You also probably expect some kind of syntax validation or auto-completion as
you work on your workflow. Unfortunately, there&amp;rsquo;s no VS Code extension for
Workflows and &lt;a href="https://cloud.google.com/code/docs/vscode"&gt;Cloud Code for VS
Code&lt;/a&gt; does not support Workflows.&lt;/p&gt;
&lt;p&gt;However, there&amp;rsquo;s a way to get &lt;strong&gt;partial&lt;/strong&gt; auto-completion for Workflows in VS Code.&lt;/p&gt;
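&lt;p&gt;As a sketch of the idea (the schema URL below is a placeholder of mine; check the JSON Schema Store for the actual Workflows entry), you can map a JSON Schema onto your workflow files in VS Code&amp;rsquo;s &lt;code&gt;settings.json&lt;/code&gt; via the YAML extension:&lt;/p&gt;

```jsonc
{
  // settings.json (VS Code with the Red Hat YAML extension; comments are allowed in JSONC)
  "yaml.schemas": {
    "https://json.schemastore.org/workflows.json": [
      "*.workflows.yaml",
      "*.workflows.json"
    ]
  }
}
```

&lt;p&gt;With a mapping like this, VS Code validates matching files against the schema and offers completions for known fields.&lt;/p&gt;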
&lt;h2 id="vs-code-and-json-schema"&gt;VS Code and JSON Schema&lt;/h2&gt;
&lt;p&gt;VS Code has the ability to display auto-complete suggestions for JSON and YAML
files out of the box. It uses &lt;a href="https://www.schemastore.org/json/"&gt;JSON Schema Store&lt;/a&gt;
which hosts &lt;a href="https://json-schema.org/"&gt;JSON Schemas&lt;/a&gt; for popular configuration
files in JSON and YAML.&lt;/p&gt;</description></item><item><title>Introducing the new Eventarc UI, Cloud Run for Anthos destinations</title><link>https://atamel.dev/posts/2022/01-15_introducing-new-eventarc-ui-cloud-run-anthos-destinations/</link><pubDate>Sat, 15 Jan 2022 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2022/01-15_introducing-new-eventarc-ui-cloud-run-anthos-destinations/</guid><description>&lt;p&gt;December was a busy month for the Eventarc team, who landed a number of features at the end of the year. Let’s take a closer look at some of these new capabilities.&lt;/p&gt;
&lt;h2 id="cloud-storage-trigger-is-ga"&gt;Cloud Storage trigger is GA&lt;/h2&gt;
&lt;p&gt;Back in September, we announced the public preview of Cloud Storage triggers as the preferred way of routing Cloud Storage events to Cloud Run targets. They are now generally available. For more details, see the documentation on how to create a Cloud Storage trigger and check out my previous blog post on Cloud Storage triggers.&lt;/p&gt;</description></item><item><title>Cross-region and cross-project event routing with Eventarc and Pub/Sub</title><link>https://atamel.dev/posts/2021/12-09_cross-region-and-cross-project-event-routing-eventarc-and-pubsub/</link><pubDate>Thu, 09 Dec 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/12-09_cross-region-and-cross-project-event-routing-eventarc-and-pubsub/</guid><description>&lt;p&gt;With event-driven architectures, it’s quite common to read events from a source in one region or project and route them to a destination in another region or another project. Let’s take a look at how you can implement cross-region and cross-project event routing in Google Cloud.&lt;/p&gt;
&lt;p&gt;Cross-region event routing is straightforward in Google Cloud, whether you’re using Pub/Sub directly or Eventarc. Pub/Sub routes messages globally. When applications hosted in any region publish messages to a topic, subscribers from any region can pull from that topic. Eventarc enables you to route events across regions by creating a trigger in the region of the event’s source and specifying a destination in a different region. For more details, take a look at my previous blog post on Eventarc locations.&lt;/p&gt;</description></item><item><title>A closer look at locations in Eventarc</title><link>https://atamel.dev/posts/2021/10-26_closer-look-locations-eventarc/</link><pubDate>Tue, 26 Oct 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/10-26_closer-look-locations-eventarc/</guid><description>&lt;h2 id="new-locations-in-eventarc"&gt;New locations in Eventarc&lt;/h2&gt;
&lt;p&gt;Back in August, we announced more Eventarc locations (17 new regions, as well as 6 new dual-region and multi-region locations to be precise). This takes the total number of locations in Eventarc to more than 30. You can see the full list in the Eventarc locations page or by running &lt;code&gt;gcloud eventarc locations list&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;What does location mean in Eventarc?
An Eventarc location usually refers to the single region that the Eventarc trigger gets created in. However, depending on the trigger type, the location can be more than a single region:&lt;/p&gt;</description></item><item><title>Trying out source-based deployment in Cloud Run</title><link>https://atamel.dev/posts/2021/10-19_trying-out-source-based-deployment-cloud-run/</link><pubDate>Tue, 19 Oct 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/10-19_trying-out-source-based-deployment-cloud-run/</guid><description>&lt;p&gt;Until recently, this is how you deployed code to Cloud Run:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Define your container-based app with a Dockerfile.&lt;/li&gt;
&lt;li&gt;Build the container image and push it to the Container Registry (typically with Cloud Build).&lt;/li&gt;
&lt;li&gt;Deploy the container image to Cloud Run.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Back in December, we announced the beta release of source-based deployments for Cloud Run. This combines steps 2 and 3 above into a single command. Perhaps more importantly, it also eliminates the need for a Dockerfile for supported language versions.&lt;/p&gt;</description></item><item><title>Analyzing Twitter sentiment with new Workflows processing capabilities</title><link>https://atamel.dev/posts/2021/10-09_analyzing-twitter-sentiment-new-workflows-processing-capabilities/</link><pubDate>Sat, 09 Oct 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/10-09_analyzing-twitter-sentiment-new-workflows-processing-capabilities/</guid><description>&lt;p&gt;The Workflows team recently announced the general availability of iteration syntax and connectors!&lt;/p&gt;
&lt;p&gt;Iteration syntax supports easier creation and better readability of workflows that process many items. You can use a for loop to iterate through a collection of data in a list or map, and keep track of the current index. If you have a specific range of numeric values to iterate through, you can also use range-based iteration.&lt;/p&gt;</description></item><item><title>Introducing the new Cloud Storage trigger in Eventarc</title><link>https://atamel.dev/posts/2021/09-21_introducing-new-cloud-storage-trigger-eventarc/</link><pubDate>Tue, 21 Sep 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/09-21_introducing-new-cloud-storage-trigger-eventarc/</guid><description>&lt;p&gt;Eventarc now supports a new Cloud Storage trigger to receive events from Cloud Storage buckets!&lt;/p&gt;
&lt;p&gt;Wait a minute. Didn’t Eventarc already support receiving Cloud Storage events? You’re absolutely right! Eventarc has long supported Cloud Storage events via the Cloud Audit Logs trigger. However, the new Cloud Storage trigger has a number of advantages and it’s now the preferred way of receiving Cloud Storage events. Let’s take a look at the details.&lt;/p&gt;</description></item><item><title>Get notified when an expensive BigQuery job executes using Eventarc and SendGrid</title><link>https://atamel.dev/posts/2021/06-08_bigquery-jobs-notifier/</link><pubDate>Tue, 08 Jun 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/06-08_bigquery-jobs-notifier/</guid><description>&lt;h2 id="events-supported-by-eventarc"&gt;Events supported by Eventarc&lt;/h2&gt;
&lt;p&gt;Last week, I put together a &lt;a href="https://github.com/GoogleCloudPlatform/eventarc-samples/tree/main/eventarc-events"&gt;list of
events&lt;/a&gt;
supported by Eventarc in our
&lt;a href="https://github.com/GoogleCloudPlatform/eventarc-samples"&gt;eventarc-samples&lt;/a&gt;
repo. Thanks to our docs team, this list is now part of our official docs under
&lt;a href="https://cloud.google.com/eventarc/docs/reference/supported-events"&gt;reference
section&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After looking at the full list, I started thinking about some use cases enabled by
these events. I want to talk about one of those use cases today: &lt;strong&gt;How to get
notified when an expensive BigQuery job executes?&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Deploying multi-YAML Workflows definitions with Terraform</title><link>https://atamel.dev/posts/2021/05-13_deploying-multi-yaml-workflows-definitions-terraform/</link><pubDate>Thu, 13 May 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/05-13_deploying-multi-yaml-workflows-definitions-terraform/</guid><description>&lt;p&gt;I’m a big fan of using Workflows to orchestrate and automate services running on Google Cloud and beyond. In Workflows, you can define a workflow in a YAML or JSON file and deploy it using gcloud or using Google Cloud Console. These approaches work but a more declarative and arguably better approach is to use Terraform.&lt;/p&gt;
&lt;p&gt;Let’s see how to use Terraform to define and deploy workflows and explore options for keeping Terraform configuration files more manageable.&lt;/p&gt;</description></item><item><title>Introducing new connectors for Workflows</title><link>https://atamel.dev/posts/2021/04-27_introducing-new-connectors-workflows/</link><pubDate>Tue, 27 Apr 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/04-27_introducing-new-connectors-workflows/</guid><description>&lt;p&gt;Workflows is a service to orchestrate not only Google Cloud services, such as Cloud Functions, Cloud Run, or machine learning APIs, but also external services. As you might expect from an orchestrator, Workflows allows you to define the flow of your business logic, as steps, in a YAML or JSON definition language, and provides an execution API and UI to trigger workflow executions. You can read more about the benefits of Workflows in our previous article.&lt;/p&gt;</description></item><item><title>Choosing the right orchestrator in Google Cloud</title><link>https://atamel.dev/posts/2021/04-22_choosing-right-orchestrator-google-cloud/</link><pubDate>Thu, 22 Apr 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/04-22_choosing-right-orchestrator-google-cloud/</guid><description>&lt;p&gt;What is orchestration?
Orchestration often refers to the automated configuration, coordination, and management of computer systems and services.&lt;/p&gt;
&lt;p&gt;In the context of service-oriented architectures, orchestration can range from simply executing a single service at a specific time and day, to a more sophisticated approach of automating and monitoring multiple services over longer periods of time, with the ability to react and handle failures as they crop up.&lt;/p&gt;
&lt;p&gt;In the data engineering context, orchestration is central to coordinating the services and workflows that prepare, ingest, and transform data. It can go beyond data processing and also involve a workflow to train a machine learning (ML) model from the data.&lt;/p&gt;</description></item><item><title>Three ways of receiving events in Cloud Run</title><link>https://atamel.dev/posts/2021/03-12_three-ways-receiving-events-cloud-run/</link><pubDate>Fri, 12 Mar 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/03-12_three-ways-receiving-events-cloud-run/</guid><description>&lt;p&gt;Cloud Run and Eventarc is a great combination for building event-driven services with different event routing options. There are two trigger types (Audit Logs and Pub/Sub) to choose from in Eventarc. Eventarc uses Pub/Sub as its underlying transport layer and provides convenience and standardization on top of it. If you wanted to, you could skip Eventarc and read messages directly from Pub/Sub in Cloud Run.&lt;/p&gt;
&lt;p&gt;This blog post details three ways of receiving events in Cloud Run and provides a decision framework on how to choose.&lt;/p&gt;</description></item><item><title>Demystifying event filters in Eventarc</title><link>https://atamel.dev/posts/2021/03-02_demystifying-event-filters-eventarc/</link><pubDate>Tue, 02 Mar 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/03-02_demystifying-event-filters-eventarc/</guid><description>&lt;p&gt;Eventarc enables you to read events from Google Cloud sources (via its Audit Logs integration) and custom sources (via its Pub/Sub integration) and then route them to Cloud Run services.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://storage.googleapis.com/gweb-cloudblog-publish/images/eventarc-architecture.max-2000x2000.png"&gt;https://storage.googleapis.com/gweb-cloudblog-publish/images/eventarc-architecture.max-2000x2000.png&lt;/a&gt;
The event routing rules are defined with a trigger. In a trigger, you specify the right event filters such as service name, method name, resource (which effectively defines the event source) and the target of the events (which can only be a Cloud Run service as of today).&lt;/p&gt;</description></item><item><title>Orchestrating the Pic-a-Daily serverless app with Workflows</title><link>https://atamel.dev/posts/2021/02-13_orchestrating-pic-daily-serverless-app-workflows/</link><pubDate>Sat, 13 Feb 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/02-13_orchestrating-pic-daily-serverless-app-workflows/</guid><description>&lt;p&gt;Over the past year, we (Mete and Guillaume) have developed a picture sharing application, named Pic-a-Daily, to showcase Google Cloud serverless technologies such as Cloud Functions, App Engine, and Cloud Run. Into the mix, we’ve thrown a pinch of Pub/Sub for interservice communication, a zest of Firestore for storing picture metadata, and a touch of machine learning for a little bit of magic.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_Shqfx7L.max-1400x1400.png"&gt;https://storage.googleapis.com/gweb-cloudblog-publish/images/1_Shqfx7L.max-1400x1400.png&lt;/a&gt;
We also created a hands-on workshop to build the application, and slides with explanations of the technologies used. The workshop consists of codelabs that you can complete at your own pace. All the code is open source and available in a GitHub repository.&lt;/p&gt;</description></item><item><title>Eventarc brings eventing to Cloud Run and is now GA</title><link>https://atamel.dev/posts/2021/01-29_eventarc-is-ga/</link><pubDate>Fri, 29 Jan 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/01-29_eventarc-is-ga/</guid><description>&lt;p&gt;Back in October, we announced the public preview of Eventarc, as new eventing functionality that lets developers route events to Cloud Run services. In a previous post, we outlined more benefits of Eventarc: a unified eventing experience in Google Cloud, centralized event routing, consistency with eventing format, libraries and an ambitious long term vision.&lt;/p&gt;
&lt;p&gt;Today, we&amp;rsquo;re happy to announce that Eventarc is now generally available. Developers can focus on writing code to handle events, while Eventarc takes care of the details of event ingestion, delivery, security, observability, and error handling.&lt;/p&gt;</description></item><item><title>Eventarc - A unified eventing experience in Google Cloud</title><link>https://atamel.dev/posts/2021/01-16_eventarc-unified-eventing-experience-google-cloud/</link><pubDate>Sat, 16 Jan 2021 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2021/01-16_eventarc-unified-eventing-experience-google-cloud/</guid><description>&lt;p&gt;I recently talked about orchestration versus choreography in connecting microservices and introduced Workflows for use cases that can benefit from a central orchestrator. I also mentioned Eventarc and Pub/Sub in the choreography camp for more loosely coupled event-driven architectures.&lt;/p&gt;
&lt;p&gt;In this blog post, I talk more about the unified eventing experience by Eventarc.&lt;/p&gt;
&lt;p&gt;What is Eventarc?
We announced Eventarc back in October as a new eventing functionality that enables you to send events to Cloud Run from more than 60 Google Cloud sources. It works by reading Audit Logs from various sources and sending them to Cloud Run services as events in CloudEvents format. It can also read events from Pub/Sub topics for custom applications.&lt;/p&gt;</description></item><item><title>Better service orchestration with Workflows</title><link>https://atamel.dev/posts/2020/12-02_better-service-orchestration-workflows/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/12-02_better-service-orchestration-workflows/</guid><description>&lt;p&gt;Going from a single monolithic application to a set of small, independent microservices has clear benefits. Microservices enable reusability, make it easier to change and scale apps on demand. At the same time, they introduce new challenges. No longer is there a single monolith with all the business logic neatly contained and services communicating with simple method calls. In the microservices world, communication has to go over the wire with REST or some kind of eventing mechanism and you need to find a way to get independent microservices to work toward a common goal.&lt;/p&gt;</description></item><item><title>Introducing Eventarc in Pic-a-Daily Serverless Workshop</title><link>https://atamel.dev/posts/2020/11-30_eventarc-serverless-workshop/</link><pubDate>Mon, 30 Nov 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/11-30_eventarc-serverless-workshop/</guid><description>&lt;h2 id="pic-a-daily-serverless-workshop"&gt;Pic-a-Daily Serverless Workshop&lt;/h2&gt;
&lt;p&gt;As you might know, &lt;a href="https://twitter.com/glaforge"&gt;Guillaume Laforge&lt;/a&gt; and I have a
&lt;a href="https://codelabs.developers.google.com/serverless-workshop/"&gt;Pic-a-Daily Serverless
Workshop&lt;/a&gt;. In this
workshop, we build a picture sharing application using Google Cloud serverless
technologies such as Cloud Functions, App Engine, Cloud Run and more.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2020/picadaily-serverless-workshop.png" alt="Pic-a-Daily Serverless Workshop" /&gt;
 
 &lt;figcaption&gt;Pic-a-Daily Serverless Workshop&lt;/figcaption&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;We recently added a new service to the workshop. In this blog post, I want to
talk about the new service. I also want to talk about
&lt;a href="https://cloud.google.com/blog/products/serverless/build-event-driven-applications-in-cloud-run"&gt;Eventarc&lt;/a&gt;
and how it helped us to get events to the new service.&lt;/p&gt;</description></item><item><title>.NET 5.0 on Google Cloud</title><link>https://atamel.dev/posts/2020/11-20_net-50-google-cloud/</link><pubDate>Fri, 20 Nov 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/11-20_net-50-google-cloud/</guid><description>&lt;p&gt;.NET 5.0 was released just a few days ago with many new features, improvements, C# 9 support, F# 5 support, and more. .NET 5.0 is the first release of the unified .NET vision that was announced last year. Going forward, there will be just one .NET targeting Windows, Linux, macOS, and more.&lt;/p&gt;
&lt;p&gt;Google Cloud already has support for different versions of .NET. You can run traditional Windows based .NET apps on Windows Servers in Compute Engine or on Windows Containers in Google Kubernetes Engine (GKE). For modern Linux based containerized .NET apps, there’s more choice with App Engine (Flex), GKE and my favorite Cloud Run. Not to mention, the .NET Core 3.1 support in Cloud Functions is currently in preview for serverless .NET functions.&lt;/p&gt;</description></item><item><title>Knative v0.18.0 update</title><link>https://atamel.dev/posts/2020/10-12_knative-v0180-update/</link><pubDate>Mon, 12 Oct 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/10-12_knative-v0180-update/</guid><description>&lt;p&gt;I got around to updating my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative
Tutorial&lt;/a&gt; from Knative &lt;code&gt;v0.16.0&lt;/code&gt;
to the latest Knative Serving &lt;code&gt;v0.18.0&lt;/code&gt;
&lt;a href="https://github.com/knative/serving/releases/tag/v0.18.0"&gt;release&lt;/a&gt; and Knative
Eventing &lt;code&gt;v0.18.1&lt;/code&gt;
&lt;a href="https://github.com/knative/eventing/releases/tag/v0.18.1"&gt;release&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this short blog post, I want to outline a couple of minor issues I encountered during
my upgrade. Note that I skipped &lt;code&gt;v0.17&lt;/code&gt; altogether, so some of these changes might
have happened in that release.&lt;/p&gt;
&lt;h2 id="istio-installation"&gt;Istio Installation&lt;/h2&gt;
&lt;p&gt;The biggest change I encountered is how Istio is installed for Knative. In
previous releases, I simply pointed to the yaml files in the latest Istio
version in the &lt;code&gt;third_party&lt;/code&gt; folder of Knative Serving.&lt;/p&gt;</description></item><item><title>Events for Cloud Run for Anthos &gt;= Knative Eventing on Kubernetes</title><link>https://atamel.dev/posts/2020/10-09_events_cloud_run_anthos_knative_eventing/</link><pubDate>Thu, 08 Oct 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/10-09_events_cloud_run_anthos_knative_eventing/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;We recently
&lt;a href="https://cloud.google.com/blog/products/serverless/cloud-run-for-anthos-adds-events"&gt;announced&lt;/a&gt;
a new feature, &lt;em&gt;Events for Cloud Run for Anthos&lt;/em&gt;, to build event driven systems on
Google Kubernetes Engine (GKE). In the announcement, we also stated that the
solution is based on open-source &lt;a href="https://knative.dev/"&gt;Knative&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this blog post, I want to further explain the relationship between this new
feature and Knative. I also want to convince you that our solution is an easier
way to deploy Knative compliant event consuming services on Google Cloud.&lt;/p&gt;</description></item><item><title>Cloud Run for Anthos brings eventing to your Kubernetes microservices</title><link>https://atamel.dev/posts/2020/09-24_cloud-run-for-anthos-adds-events/</link><pubDate>Thu, 24 Sep 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/09-24_cloud-run-for-anthos-adds-events/</guid><description>&lt;p&gt;Building microservices on Google Kubernetes Engine (GKE) provides you with maximum flexibility to build your applications, while still benefiting from the scale and toolset that Google Cloud has to offer. But with great flexibility comes great responsibility. Orchestrating microservices can be difficult, requiring non-trivial implementation, customization, and maintenance of messaging systems.&lt;/p&gt;
&lt;p&gt;Cloud Run for Anthos now includes an events feature that allows you to easily build event-driven systems on Google Cloud. Now in beta, Cloud Run for Anthos’ event feature assumes responsibility for the implementation and management of eventing infrastructure, so you don’t have to.&lt;/p&gt;</description></item><item><title>A first look at serverless orchestration with Workflows</title><link>https://atamel.dev/posts/2020/09-08_first_look_at_workflows/</link><pubDate>Tue, 08 Sep 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/09-08_first_look_at_workflows/</guid><description>&lt;h2 id="challenges-in-connecting-services"&gt;Challenges in connecting services&lt;/h2&gt;
&lt;p&gt;When I think about my recent projects, I probably spent half of my time coding
new services and the other half in connecting services. Service A calling
Service B, or Service C calling an external service and using the result to feed
into another Service D.&lt;/p&gt;
&lt;p&gt;Connecting services is one of those things that &amp;lsquo;should be easy&amp;rsquo; but in reality,
it takes a lot of time and effort. You need to figure out a common
connection format for services to use, make the connection, parse the results,
and pass the results on. I&amp;rsquo;m not even mentioning error handling, retries and
all those production-readiness features that you ultimately need.&lt;/p&gt;</description></item><item><title>Scheduled serverless dbt + BigQuery service</title><link>https://atamel.dev/posts/2020/07-29_scheduled_serverless_dbt_with_bigquery/</link><pubDate>Wed, 29 Jul 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/07-29_scheduled_serverless_dbt_with_bigquery/</guid><description>&lt;p&gt;My colleague &lt;a href="https://twitter.com/felipehoffa"&gt;Felipe Hoffa&lt;/a&gt; recently published a
blog post titled &lt;a href="https://medium.com/@hoffa/get-started-with-bigquery-and-dbt-the-easy-way-36b9d9735e35"&gt;Get started with BigQuery and dbt, the easy
way&lt;/a&gt;.
More specifically, he showed how to install &lt;a href="https://getdbt.com/"&gt;dbt&lt;/a&gt; in Google
Cloud Shell, configure it and manually run it to create a temporary dataset in
BigQuery. This is great for testing dbt + BigQuery but how do you run this
kind of setup in production?&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.getdbt.com/docs/running-a-dbt-project/running-dbt-in-production/"&gt;dbt
documentation&lt;/a&gt;
states that &lt;strong&gt;Running dbt in production simply means setting up a system to run a
dbt job on a schedule, rather than running dbt commands manually from the
command line.&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Knative v0.16.0 update</title><link>https://atamel.dev/posts/2020/07-22_knative-v0-16-0-update/</link><pubDate>Wed, 22 Jul 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/07-22_knative-v0-16-0-update/</guid><description>&lt;p&gt;I finally got around to updating my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative
Tutorial&lt;/a&gt; from Knative &lt;code&gt;v0.14.0&lt;/code&gt;
to the latest Knative
&lt;code&gt;v0.16.0&lt;/code&gt; &lt;a href="https://github.com/knative/serving/releases/tag/v0.16.0"&gt;release&lt;/a&gt;. Since I
skipped &lt;code&gt;v0.15.0&lt;/code&gt;, I&amp;rsquo;m not sure which changes are due to &lt;code&gt;v0.15.0&lt;/code&gt; vs.
&lt;code&gt;v0.16.0&lt;/code&gt;. Regardless, there have been some notable changes that I want to
outline in this blog post. This is not meant to be an exhaustive list. Feel free
to let me know in the comments if there are other notable changes that I should
be aware of.&lt;/p&gt;</description></item><item><title>Google Cloud Functions on .NET</title><link>https://atamel.dev/posts/2020/07-14_dotnet_on_cloud_functions/</link><pubDate>Tue, 14 Jul 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/07-14_dotnet_on_cloud_functions/</guid><description>&lt;h2 id="net-for-google-cloud-functions-alpha"&gt;.NET for Google Cloud Functions (Alpha)&lt;/h2&gt;
&lt;p&gt;I spoke at many .NET conferences over the last 3-4 years and one of the top
requests I always received was: When will .NET be supported on Cloud Functions?&lt;/p&gt;
&lt;p&gt;Unfortunately, I didn&amp;rsquo;t have a good answer for a while. That all changed last
month with the following tweet from &lt;a href="https://twitter.com/jonskeet"&gt;Jon Skeet&lt;/a&gt; from our C#
team:&lt;/p&gt;
&lt;blockquote class="twitter-tweet"&gt;&lt;p lang="en" dir="ltr"&gt;I&amp;#39;m thrilled that .NET support is coming to Google Cloud Functions, along with the .NET Functions Framework. Sign up for the public alpha at &lt;a href="https://t.co/nASIACFCrg"&gt;https://t.co/nASIACFCrg&lt;/a&gt;&lt;/p&gt;</description></item><item><title>.NET Core 3.1 updates in Cloud Shell and App Engine flexible environment</title><link>https://atamel.dev/posts/2020/06-29_dotnetcore31-cloudshell-appengine-flex/</link><pubDate>Mon, 29 Jun 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/06-29_dotnetcore31-cloudshell-appengine-flex/</guid><description>&lt;h2 id="net-core-31-updates-on-google-cloud"&gt;.NET Core 3.1 updates on Google Cloud&lt;/h2&gt;
&lt;p&gt;.NET Core 3.1 was released on December 3rd, 2019 and is an LTS release, supported
for three years.&lt;/p&gt;
&lt;p&gt;In Google Cloud, you could already deploy .NET Core 3.1 containers in Cloud Run
(see &lt;a href="https://github.com/meteatamel/cloud-run-dotnetcore-31"&gt;cloud-run-dotnetcore-31&lt;/a&gt;)
and also in App Engine flexible environment with a &lt;a href="https://cloud.google.com/appengine/docs/flexible/custom-runtimes"&gt;custom
runtime&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We recently extended .NET Core 3.1 support in a couple of ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Cloud Shell now supports .NET Core 3.1.&lt;/li&gt;
&lt;li&gt;App Engine flexible environment runtime now supports .NET Core 3.1.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="net-core-31-in-cloud-shell"&gt;.NET Core 3.1 in Cloud Shell&lt;/h2&gt;
&lt;p&gt;Inside Cloud Shell, you can see the latest &lt;code&gt;3.1.301&lt;/code&gt; version:&lt;/p&gt;</description></item><item><title>Daily COVID-19 cases notification Pipeline with Knative Eventing, BigQuery, Matplotlib and SendGrid</title><link>https://atamel.dev/posts/2020/06-15_daily-covid19-cases-notification-pipeline/</link><pubDate>Mon, 15 Jun 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/06-15_daily-covid19-cases-notification-pipeline/</guid><description>&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;When I started working from home in mid-March, I was totally obsessed with
COVID-19 news. I was constantly checking the number of cases and news from the UK
(where I currently live) and from Cyprus (where I&amp;rsquo;m originally from). It took me
a couple of weeks to realize how unproductive this was. I started limiting
myself to check for news once a day. This definitely helped me to regain sanity
and productivity but it was manual.&lt;/p&gt;</description></item><item><title>Event-Driven Image Processing Pipeline with Knative Eventing</title><link>https://atamel.dev/posts/2020/06-05_event-driven-image-processing-pipeline-knative/</link><pubDate>Fri, 05 Jun 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/06-05_event-driven-image-processing-pipeline-knative/</guid><description>&lt;p&gt;In this post, I want to talk about an event-driven image processing pipeline
that I built recently using &lt;a href="https://knative.dev/docs/eventing/"&gt;Knative Eventing&lt;/a&gt;.
Along the way, I&amp;rsquo;ll tell you about event sources, custom events and other
components provided by Knative that simplify the development of event-driven
architectures.&lt;/p&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first talk about the basic requirements I had for the image processing pipeline:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Users upload files to an input bucket and get processed images in an output
bucket.&lt;/li&gt;
&lt;li&gt;Uploaded images are filtered (e.g., no adult or violent images) before being sent
through the pipeline.&lt;/li&gt;
&lt;li&gt;Pipeline can contain any number of processing services that can be added or
removed as needed. For the initial pipeline, I decided to go with 3 services:
resizer, watermarker, and labeler. The resizer will resize large images. The
watermarker will add a watermark to resized images and the labeler will
extract information about images (labels) and save it.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Requirement #3 is especially important. I wanted to be able to add services to
the pipeline as I needed them or create multiple pipelines with different services
chained together.&lt;/p&gt;</description></item><item><title>Workload Identity Authentication for Knative v0.14.0 on GKE</title><link>https://atamel.dev/posts/2020/05-20_workload_identity_authentication_for_knative_on_gke/</link><pubDate>Wed, 20 May 2020 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/05-20_workload_identity_authentication_for_knative_on_gke/</guid><description>&lt;p&gt;If you ever used Knative on Google Cloud, you must have heard of
&lt;a href="https://github.com/google/knative-gcp"&gt;Knative-GCP&lt;/a&gt; project. As the name
suggests, Knative-GCP project provides a number of sources such as
&lt;code&gt;CloudPubSubSource&lt;/code&gt;, &lt;code&gt;CloudStorageSource&lt;/code&gt;, &lt;code&gt;CloudSchedulerSource&lt;/code&gt; and more to
help read events from various Google Cloud sources into your Knative cluster.&lt;/p&gt;
&lt;p&gt;I recently updated my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative
Tutorial&lt;/a&gt; to use the latest
&lt;a href="https://github.com/knative/eventing/releases/tag/v0.14.2"&gt;Knative Eventing release
v0.14.2&lt;/a&gt; and its
corresponding &lt;a href="https://github.com/google/knative-gcp/releases/tag/v0.14.0"&gt;Knative-GCP release
v0.14.0&lt;/a&gt;. I ran into
a weird authentication problem that I want to outline here.&lt;/p&gt;</description></item><item><title>Knative Eventing Delivery Methods</title><link>https://atamel.dev/posts/2020/03-12_knative-eventing-delivery-methods-79d4ebe30a68/</link><pubDate>Thu, 12 Mar 2020 16:06:51 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/03-12_knative-eventing-delivery-methods-79d4ebe30a68/</guid><description>&lt;p&gt;Knative Eventing &lt;a href="https://knative.dev/docs/eventing/"&gt;docs&lt;/a&gt; are a little
confusing when it comes to different event delivery methods it supports. It
talks about &lt;strong&gt;event brokers and triggers&lt;/strong&gt; and it also talks about &lt;strong&gt;sources&lt;/strong&gt;,
&lt;strong&gt;services&lt;/strong&gt;, &lt;strong&gt;channels&lt;/strong&gt;, and &lt;strong&gt;subscriptions&lt;/strong&gt;. What to use and when? It’s
not clear. Let’s break it down.&lt;/p&gt;
&lt;h2 id="delivery-methods"&gt;Delivery methods&lt;/h2&gt;
&lt;p&gt;There are 3 distinct methods in Knative:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Simple delivery&lt;/li&gt;
&lt;li&gt;Complex delivery with optional reply&lt;/li&gt;
&lt;li&gt;Broker and Trigger delivery&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Broker and Trigger delivery is what you should care about most of the time&lt;/strong&gt;.
However, simple and complex delivery have been in Knative for a while and are
still good to know for what’s happening under the covers.&lt;/p&gt;</description></item><item><title>An app modernization story — Part 4 (Serverless Microservices)</title><link>https://atamel.dev/posts/2020/02-24_an-app-modernization-story---part-4--serverless-microservices/</link><pubDate>Mon, 24 Feb 2020 13:51:54 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/02-24_an-app-modernization-story---part-4--serverless-microservices/</guid><description>&lt;p&gt;In &lt;a href="https://atamel.dev/posts/2020/02-12_an-app-modernization-story---part-3--containerize---redeploy/"&gt;part 3&lt;/a&gt; of the blog series, I talked about how we transformed our Windows-only .NET Framework app to a containerized multi-platform .NET Core app.&lt;/p&gt;
&lt;p&gt;This removed our dependency on Windows and enabled us to deploy to Linux-based platforms such as App Engine (Flex). On the other hand, the app still ran on VMs, it was billed per second even if nobody used it, deployments were slow and most importantly, it was a single monolith that was deployed and scaled as a single unit.&lt;/p&gt;</description></item><item><title>An app modernization story — Part 3 (Containerize &amp; Redeploy)</title><link>https://atamel.dev/posts/2020/02-12_an-app-modernization-story---part-3--containerize---redeploy/</link><pubDate>Wed, 12 Feb 2020 11:39:29 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/02-12_an-app-modernization-story---part-3--containerize---redeploy/</guid><description>&lt;p&gt;In &lt;a href="https://atamel.dev/posts/2020/01-31_an-app-modernization-story---part-1--prototype/"&gt;part 1&lt;/a&gt;, I talked about the initial app and its challenges. In &lt;a href="https://atamel.dev/posts/2020/02-06_an-app-modernization-story---part-2--lift---shift/"&gt;part 2&lt;/a&gt;, I talked about the lift &amp;amp; shift to the cloud with some unexpected benefits. In this part 3 of the series, I’ll talk about how we transformed our Windows-only .NET Framework app to a containerized multi-platform .NET Core app and the huge benefits we got along the way.&lt;/p&gt;
&lt;h4 id="why"&gt;Why?&lt;/h4&gt;
&lt;p&gt;The initial Windows VM-based cloud setup served us well with minimal issues for roughly 2 years (from early 2017 to early 2019). In early 2019, we wanted to revisit the architecture. This was mainly driven by advances in the tech scene, namely:&lt;/p&gt;</description></item><item><title>An app modernization story — Part 2 (Lift &amp; Shift)</title><link>https://atamel.dev/posts/2020/02-06_an-app-modernization-story---part-2--lift---shift/</link><pubDate>Thu, 06 Feb 2020 09:29:04 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/02-06_an-app-modernization-story---part-2--lift---shift/</guid><description>&lt;p&gt;In &lt;a href="https://atamel.dev/posts/2020/01-31_an-app-modernization-story---part-1--prototype/"&gt;part 1&lt;/a&gt; of the app modernization series, I introduced a simple news aggregator and some of the challenges in its initial architecture. In part 2, I’ll talk about the journey to the cloud and some unexpected benefits and learnings along the way.&lt;/p&gt;
&lt;h4 id="why-cloud"&gt;Why Cloud?&lt;/h4&gt;
&lt;p&gt;The initial backend had many issues that I outlined in &lt;a href="https://atamel.dev/posts/2020/01-31_an-app-modernization-story---part-1--prototype/"&gt;part 1&lt;/a&gt;. After about 1 year, in late 2016, we decided to look into moving it to a more stable home. Our main goal was &lt;strong&gt;to improve resiliency of the app&lt;/strong&gt;, as the IIS host kept crashing, but &lt;strong&gt;we didn’t want to rewrite or re-architecture&lt;/strong&gt; the app in a major way. Around the same time, I started working at Google and was learning all about Google Cloud. We decided to give it a try and see what it took to move the app there.&lt;/p&gt;</description></item><item><title>An app modernization story — Part 1 (Prototype)</title><link>https://atamel.dev/posts/2020/01-31_an-app-modernization-story---part-1--prototype/</link><pubDate>Fri, 31 Jan 2020 15:02:40 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/01-31_an-app-modernization-story---part-1--prototype/</guid><description>&lt;p&gt;We all have apps running some “legacy code” in some “legacy way”. The term “legacy” means different things in different projects but we know when we see it and we want to get the time to modernize those apps in some way.&lt;/p&gt;
&lt;p&gt;I recently went through the latest phase of modernization of a legacy app. Even though it’s a relatively small app, it taught me a number of lessons worth sharing.&lt;/p&gt;</description></item><item><title>Knative v0.12.0 update</title><link>https://atamel.dev/posts/2020/01-27_knative-v0-12-0-update/</link><pubDate>Mon, 27 Jan 2020 15:07:35 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/01-27_knative-v0-12-0-update/</guid><description>&lt;p&gt;It’s hard to keep up with Knative releases, with a new release every 6 weeks. I finally managed to update my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative Tutorial&lt;/a&gt; for the latest Knative v0.12.0. In this blog post, I want to outline some of the differences I’ve observed.&lt;/p&gt;
&lt;h4 id="knative-serving"&gt;Knative Serving&lt;/h4&gt;
&lt;p&gt;Knative Serving has been pretty stable in the recent releases and &lt;a href="https://github.com/knative/serving/releases/tag/v0.12.0"&gt;Knative Serving v0.12.0&lt;/a&gt; is no exception. I didn’t need to update my tutorial specifically for this release.&lt;/p&gt;
&lt;h4 id="knative-eventing"&gt;Knative Eventing&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/knative/eventing/releases/tag/v0.12.0"&gt;Knative Eventing v0.12.0&lt;/a&gt; changed the default yaml for Knative Eventing bundles. Now, they are under &lt;code&gt;eventing.yaml&lt;/code&gt; (previously, it was &lt;code&gt;release.yaml&lt;/code&gt;) and this is the yaml you need to point to install eventing. This makes sense as it’s more consistent with Knative Serving and its &lt;code&gt;serving.yaml&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>How to properly install Knative on GKE</title><link>https://atamel.dev/posts/2020/01-13_how-to-properly-install-knative-on-gke/</link><pubDate>Mon, 13 Jan 2020 16:50:06 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2020/01-13_how-to-properly-install-knative-on-gke/</guid><description>&lt;p&gt;The default Knative Installation &lt;a href="https://knative.dev/docs/install/knative-with-gke/"&gt;instructions&lt;/a&gt; for Google Kubernete Engine (GKE) is problematic (see bug &lt;a href="https://github.com/knative/eventing/issues/2266"&gt;2266&lt;/a&gt;). In this post, I want to outline what the problem is, tell you what I do, and also provide you the scripts that work for me until a proper solution is implemented either in gcloud or Knative.&lt;/p&gt;
&lt;h3 id="the-problem"&gt;The problem&lt;/h3&gt;
&lt;p&gt;The default Knative Installation instructions tell you to create a GKE cluster as follows:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;gcloud beta container clusters create &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --addons&lt;span class="o"&gt;=&lt;/span&gt;HorizontalPodAutoscaling,HttpLoadBalancing,Istio &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --machine-type&lt;span class="o"&gt;=&lt;/span&gt;n1-standard-4 &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --cluster-version&lt;span class="o"&gt;=&lt;/span&gt;latest --zone&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CLUSTER_ZONE&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --enable-stackdriver-kubernetes --enable-ip-alias &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --enable-autoscaling --min-nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt; --max-nodes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --enable-autorepair &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;&lt;/span&gt; --scopes cloud-platform
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Notice the Istio add-on. This command creates a Kubernetes cluster with Istio already installed. This is good because Istio is a dependency of Knative but keep reading.&lt;/p&gt;</description></item><item><title>Cluster local issue with Knative Eventing v0.9.0</title><link>https://atamel.dev/posts/2019/10-14_cluster-local-issue-with-knative-eventing-v0-9-0-a1fee2215cfe/</link><pubDate>Mon, 14 Oct 2019 14:56:33 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/10-14_cluster-local-issue-with-knative-eventing-v0-9-0-a1fee2215cfe/</guid><description>&lt;p&gt;In my previous &lt;a href="https://atamel.dev/posts/2019/09-26_knative-v0-9-0/"&gt;post&lt;/a&gt;, I talked about Knative v0.9.0 and some of the eventing changes in the latest release. I’ve been playing with Knative v0.9.0 since then to read Google Cloud Pub/Sub messages using &lt;a href="https://github.com/google/knative-gcp/blob/master/docs/pullsubscription/README.md"&gt;PullSubscription&lt;/a&gt; and I ran into a rather fundamental issue that baffled me for a while. I’d like to outline the problem and the solution here, just in case it’s useful to others.&lt;/p&gt;
&lt;h3 id="knative-services-as-eventingsinks"&gt;Knative Services as eventing sinks&lt;/h3&gt;
&lt;p&gt;In my PullSubscription, I could define Kubernetes Services as event sinks as follows:&lt;/p&gt;</description></item><item><title>How to deploy a Windows container on Google Kubernetes Engine</title><link>https://atamel.dev/posts/2019/09-28_how-to-deploy-a-windows-container-on-google-kubernetes-engine/</link><pubDate>Sat, 28 Sep 2019 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/09-28_how-to-deploy-a-windows-container-on-google-kubernetes-engine/</guid><description>&lt;p&gt;Many people who run Windows containers want to use a container management platform like Kubernetes for resiliency and scalability. In a previous post, we showed you how to run an IIS site inside a Windows container deployed to Windows Server 2019 running on Compute Engine. That’s a good start, but you can now also run Windows containers on Google Kubernetes Engine (GKE).&lt;/p&gt;
&lt;p&gt;Support for Windows containers in Kubernetes was announced earlier in the year with version 1.14, followed by GKE announcement on the same. You can sign up for early access and start testing out Windows containers on GKE.&lt;/p&gt;</description></item><item><title>Knative v0.9.0</title><link>https://atamel.dev/posts/2019/09-26_knative-v0-9-0/</link><pubDate>Thu, 26 Sep 2019 13:56:36 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/09-26_knative-v0-9-0/</guid><description>&lt;p&gt;Knative has been evolving pretty quickly. There’s a new release roughly every 6
weeks with significant changes in each release. Knative v0.7.0 was all about
changes in Knative Serving (&lt;a href="https://atamel.dev/posts/2019/07-01_knative-serving-0-7/"&gt;my
post&lt;/a&gt;).
Knative v0.8.0 was about deprecation of Knative Build in favor of Tekton
Pipelines (&lt;a href="https://atamel.dev/posts/2019/08-28_migrating-from-knative-build-to-tekton-pipelines/"&gt;my
other post&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Knative &lt;a href="https://github.com/knative/serving/releases/tag/v0.9.0"&gt;Serving v0.9.0&lt;/a&gt; and &lt;a href="https://github.com/knative/eventing/releases/tag/v0.9.0"&gt;Eventing v0.9.0&lt;/a&gt; have been released a little over a week ago. In Serving, there’s a &lt;em&gt;v1 API&lt;/em&gt; and a number of improvements on autoscaling and cold starts. In Eventing, the way events are read changed quite a bit. I want to outline some of these changes here.&lt;/p&gt;</description></item><item><title>How to deploy a Windows container on Google Compute Engine</title><link>https://atamel.dev/posts/2019/09-21_how-to-deploy-a-windows-container-on-google-compute-engine/</link><pubDate>Sat, 21 Sep 2019 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/09-21_how-to-deploy-a-windows-container-on-google-compute-engine/</guid><description>&lt;p&gt;Last year, we published a blog post and demonstrated how to deploy a Windows container running Windows Server 2016 on Google Compute Engine. Since then, there have been a number of important developments. First, Microsoft announced the availability of Windows Server 2019. Second, Kubernetes 1.14 was released with support for Windows nodes and Windows containers.&lt;/p&gt;
&lt;p&gt;Supporting Windows workloads and helping you modernize your apps using containers and Kubernetes is one of our top priorities at Google Cloud. Soon after the Microsoft and Kubernetes announcements, we added support for Windows Server 2019 in Compute Engine and Windows containers in Google Kubernetes Engine (GKE).&lt;/p&gt;</description></item><item><title>Migrating from Knative Build to Tekton Pipelines</title><link>https://atamel.dev/posts/2019/08-28_migrating-from-knative-build-to-tekton-pipelines/</link><pubDate>Wed, 28 Aug 2019 10:27:47 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/08-28_migrating-from-knative-build-to-tekton-pipelines/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__O2nIYbl3W86S__PpyiuGQMw.png" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;h3 id="knative-080-and-build-deprecation"&gt;Knative 0.8.0 and Build Deprecation&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/knative/serving/releases/tag/v0.8.0"&gt;Knative 0.8.0&lt;/a&gt; came out a couple of weeks ago with a number of fixes and improvements. One of the biggest changes in 0.8.0 is that Knative Build is now deprecated according to &lt;a href="https://knative.dev/docs/build/"&gt;docs&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/0__7d371__EPwjXfyvQA.jpg" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Knative Installation docs also only include Knative Serving and Eventing without mentioning Build:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-sh" data-lang="sh"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply 
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;\-&lt;/span&gt;f https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml &lt;span class="se"&gt;\\&lt;/span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;\-&lt;/span&gt;f https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml &lt;span class="se"&gt;\\&lt;/span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="se"&gt;\-&lt;/span&gt;f https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Good to know but there’s no explanation on why Knative Build was deprecated and any guidance on what is the replacement, if any. After a little bit of research, I have more information on deprecation and also a migration path that I’d like to share in this post.&lt;/p&gt;</description></item><item><title>Migrating from Kubernetes Deployment to Knative Serving</title><link>https://atamel.dev/posts/2019/07-31_migrating-from-kubernetes-deployment-to-knative-serving/</link><pubDate>Wed, 31 Jul 2019 02:18:22 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/07-31_migrating-from-kubernetes-deployment-to-knative-serving/</guid><description>&lt;p&gt;When I talk about Knative, I often get questions on how to migrate an app from Kubernetes Deployment (sometimes with Istio) to Knative and what are the differences between the two setups.&lt;/p&gt;
&lt;p&gt;First of all, everything you can do with a Knative Service, you can probably do with a pure Kubernetes + Istio setup and the right configuration. However, it’ll be much harder to get right. The whole point of Knative is to simplify and abstract away the details of Kubernetes and Istio for you.&lt;/p&gt;</description></item><item><title>Serverless gRPC + ASP.NET Core with Knative</title><link>https://atamel.dev/posts/2019/07-24_serverless-grpc---asp-net-core-with-knative/</link><pubDate>Wed, 24 Jul 2019 16:29:17 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/07-24_serverless-grpc---asp-net-core-with-knative/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/0__nv__x9HwUJDwaIYEg.jpg" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I was recently going through the &lt;a href="https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-core-3-0-preview-3/"&gt;ASP.NET Core updates in .NET Core 3.0 Preview 3&lt;/a&gt; post, and one section got my attention: the &lt;em&gt;gRPC template&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Apparently, .NET Core 3.0 got a new &lt;a href="https://grpc.io/"&gt;gRPC&lt;/a&gt; template for easily building gRPC services with ASP.NET Core. I had tested gRPC and .NET before and I have some samples in my &lt;a href="https://github.com/meteatamel/grpc-samples-dotnet"&gt;grpc-samples-dotnet&lt;/a&gt; repo. Even though gRPC and .NET worked before, it wasn’t that straightforward to set up. I was curious to try out the new gRPC template and see how it helped.&lt;/p&gt;
0.7&lt;/a&gt; was out last week.
One of the notable changes in this release is that Knative Serving API
progressed from &lt;code&gt;v1alpha1&lt;/code&gt; to &lt;code&gt;v1beta1&lt;/code&gt;. While you can still use the old
&lt;code&gt;v1alpha1&lt;/code&gt; API, if you want to update to &lt;code&gt;v1beta1&lt;/code&gt;, you need to rewrite your
Knative service definition files. The new API also allows named revisions,
silent latest deploys and a better traffic splitting configuration. In this
post, I want to outline some of these changes.&lt;/p&gt;</description></item><item><title>Cloud Run as an internal async worker</title><link>https://atamel.dev/posts/2019/05-28_cloud-run-as-an-internal-async-worker/</link><pubDate>Tue, 28 May 2019 12:14:20 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/05-28_cloud-run-as-an-internal-async-worker/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__w4RasvcFgRjz20RmrbDZlw.jpeg" alt="Cloud Pub/Sub &amp;#43; Cloud Run" /&gt;
 
 &lt;figcaption&gt;Cloud Pub/Sub + Cloud Run&lt;/figcaption&gt;
 
&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 id="introduction"&gt;Introduction&lt;/h4&gt;
&lt;p&gt;If you’ve heard of &lt;a href="https://cloud.google.com/run/"&gt;Cloud Run&lt;/a&gt;, you already know that it’s great for spinning up public endpoints in stateless containers to handle HTTP request/reply workloads. And the best part is that you only pay for the duration of the request!&lt;/p&gt;
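&lt;p&gt;As a rough sketch of what that looks like in practice, deploying such a container can be a single command. The service name, image path and region below are placeholders, and the exact flags vary between gcloud versions:&lt;/p&gt;

```shell
# Deploy a stateless container to Cloud Run (name, image and region are placeholders)
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --region us-central1 \
  --allow-unauthenticated
```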
&lt;p&gt;However, HTTP request/reply handling is not the only use-case for Cloud Run. Combined with Cloud Pub/Sub, Cloud Run is very well suited for internal async worker type use-cases because:&lt;/p&gt;</description></item><item><title>Knative + Buildpacks: Source code to container image without Dockerfile</title><link>https://atamel.dev/posts/2019/05-16_knative---buildpacks--source-code-to-container-image-without-dockerfile/</link><pubDate>Thu, 16 May 2019 00:02:49 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/05-16_knative---buildpacks--source-code-to-container-image-without-dockerfile/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__Sj53JniX4gStXhMQ5MkEcQ.png" alt="" /&gt;
 
&lt;/figure&gt;
&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__hUr6dlRbTCHwoGbVZg3Ivg.png" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I previously &lt;a href="https://medium.com/google-cloud/hands-on-knative-part-3-d8731ad2f23d"&gt;talked about&lt;/a&gt; &lt;a href="https://github.com/knative/build"&gt;Knative Build&lt;/a&gt; and how it enables you to go from source code to a container image in a repository. You can write your Build from scratch or you can rely on many of the &lt;a href="https://github.com/knative/build-templates"&gt;BuildTemplates&lt;/a&gt; Knative already provides. For example, in my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative Tutorial&lt;/a&gt;, I &lt;a href="https://github.com/meteatamel/knative-tutorial/blob/master/docs/10.5-kanikobuildtemplate.md"&gt;show&lt;/a&gt; how to install &lt;a href="https://github.com/knative/build-templates/tree/master/kaniko"&gt;Kaniko BuildTemplate&lt;/a&gt; and use Kaniko to build container images.&lt;/p&gt;
&lt;p&gt;You normally need to write a &lt;code&gt;Dockerfile&lt;/code&gt;, so Knative Build (or Kaniko to be more precise) knows how to build the container image. Wouldn’t it be great if there was an automatic way to build your app without having to define a &lt;code&gt;Dockerfile&lt;/code&gt;? Well, there is!&lt;/p&gt;
 &lt;img src="https://atamel.dev/img/2019/1__I0iAulfQwG__rV__xrAKEtqQ.png" alt="Cloud Run" /&gt;
 
 &lt;figcaption&gt;Cloud Run&lt;/figcaption&gt;
 
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;In my Hands on Knative series (&lt;a href="https://medium.com/google-cloud/hands-on-knative-part-1-f2d5ce89944e"&gt;part 1&lt;/a&gt;, &lt;a href="https://medium.com/google-cloud/hands-on-knative-part-2-a27729f4d756"&gt;part 2&lt;/a&gt;, &lt;a href="https://medium.com/google-cloud/hands-on-knative-part-3-d8731ad2f23d"&gt;part 3&lt;/a&gt;), I showed how to use Knative Serving, Eventing and Build on any Kubernetes cluster anywhere. This is great for portability, but with that portability comes the overhead of creating and managing a Kubernetes cluster. Not to mention the complexity of Istio, which is a dependency of Knative.&lt;/p&gt;
&lt;p&gt;Google Kubernetes Engine (GKE) helps with managing the Kubernetes cluster a little but you still need to worry about all the bells and whistles of a Kubernetes cluster. Wouldn’t it be great to have the Knative Serving experience without having to worry about the underlying infrastructure? Well, you can with &lt;a href="https://cloud.google.com/run/"&gt;Cloud Run&lt;/a&gt;!&lt;/p&gt;</description></item><item><title>Professional Cloud Architect Certification</title><link>https://atamel.dev/posts/2019/03-04_professional-cloud-architect-certification/</link><pubDate>Mon, 04 Mar 2019 09:59:54 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/03-04_professional-cloud-architect-certification/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__TvnzxcGuSS86skreK8M__fg.png" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;TL;DR: I recently went through the preparation and exam of Google Cloud’s &lt;a href="https://cloud.google.com/certification/cloud-architect"&gt;Professional Cloud Architect Certification&lt;/a&gt;. It was a great learning experience and I highly recommend it. You can &lt;a href="https://webassessor.com/wa.do?page=publicHome&amp;amp;branding=GOOGLECLOUD"&gt;register here&lt;/a&gt; to get certified yourself!&lt;/p&gt;
&lt;h3 id="why"&gt;Why?&lt;/h3&gt;
&lt;p&gt;As you might know, I’m a Googler and a Developer Advocate for Google Cloud. Why do I want to be certified by Google Cloud when I already work at Google and know a great deal about Google Cloud? I had 2 main motivations:&lt;/p&gt;</description></item><item><title>Hands on Knative — Part 3</title><link>https://atamel.dev/posts/2019/02-19_hands-on-knative---part-3/</link><pubDate>Tue, 19 Feb 2019 13:43:11 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/02-19_hands-on-knative---part-3/</guid><description>&lt;p&gt;In &lt;a href="https://atamel.dev/posts/2019/02-04_hands-on-knative---part-1/"&gt;Part 1&lt;/a&gt;, I talked about Knative Serving for rapid deployment and autoscaling of serverless containers. In &lt;a href="https://atamel.dev/posts/2019/02-11_hands-on-knative---part-2/"&gt;Part 2&lt;/a&gt;, I talked about how to connect services in a loosely coupled way with Knative Eventing.&lt;/p&gt;
&lt;p&gt;In the third and last part of the series, I want to talk about &lt;a href="https://github.com/knative/docs/tree/master/build"&gt;Knative Build&lt;/a&gt; and show a few examples from my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative Tutorial&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="what-is-knativebuild"&gt;What is Knative Build?&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/knative/build"&gt;Knative Build&lt;/a&gt; basically allows you to go from source code to a container image in a registry. For example, you can write a build to obtain your source code from a repository, build a container image, push that image to a registry and then run that image, all within the Kubernetes cluster.&lt;/p&gt;</description></item><item><title>Hands on Knative — Part 2</title><link>https://atamel.dev/posts/2019/02-11_hands-on-knative---part-2/</link><pubDate>Mon, 11 Feb 2019 18:29:59 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/02-11_hands-on-knative---part-2/</guid><description>&lt;p&gt;In my &lt;a href="https://atamel.dev/posts/2019/02-04_hands-on-knative---part-1/"&gt;previous post&lt;/a&gt;, I talked about &lt;a href="https://github.com/knative/docs/tree/master/serving"&gt;Knative Serving&lt;/a&gt; for rapid deployment and autoscaling of serverless containers. Knative Serving is great if you want your services to be synchronously triggered by HTTP calls. However, in the serverless microservices world, asynchronous triggers are more common and useful. That’s when &lt;a href="https://github.com/knative/docs/tree/master/eventing"&gt;Knative Eventing&lt;/a&gt; comes into play.&lt;/p&gt;
&lt;p&gt;In this second part of the Hands on Knative series, I want to introduce Knative Eventing and show some examples from my &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative Tutorial&lt;/a&gt; on how to integrate it with various services.&lt;/p&gt;</description></item><item><title>Hands on Knative — Part 1</title><link>https://atamel.dev/posts/2019/02-04_hands-on-knative---part-1/</link><pubDate>Mon, 04 Feb 2019 14:56:53 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/02-04_hands-on-knative---part-1/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2019/1__cTktfYF4u0fHBmZLTYjnKA.png" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I’ve been looking into &lt;a href="https://github.com/knative/docs"&gt;Knative&lt;/a&gt; recently. In this 3-part blog series, I want to explain my learnings and show some hands on examples from the &lt;a href="https://github.com/meteatamel/knative-tutorial"&gt;Knative Tutorial&lt;/a&gt; that I published on GitHub.&lt;/p&gt;
&lt;h3 id="what-is-knativeanyway"&gt;What is Knative anyway?&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/knative/docs"&gt;Knative&lt;/a&gt; is a collection of open source building blocks for serverless containers running on Kubernetes.&lt;/p&gt;
&lt;p&gt;At this point, you might be wondering: “Kubernetes, serverless, what’s going on?” But, when you think about it, it makes sense. Kubernetes is a hugely popular container management platform. Serverless is how application developers want to run their code. Knative brings the two worlds together with a set of building blocks.&lt;/p&gt;</description></item><item><title>Application metrics in Istio</title><link>https://atamel.dev/posts/2019/01-07_application-metrics-in-istio/</link><pubDate>Mon, 07 Jan 2019 10:51:38 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2019/01-07_application-metrics-in-istio/</guid><description>&lt;p&gt;The default metrics sent by Istio are useful to get an idea of how the traffic flows in your cluster. However, to understand how your application behaves, you also need application metrics.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; has &lt;a href="https://prometheus.io/docs/instrumenting/clientlibs/"&gt;client libraries&lt;/a&gt; that you can use to instrument your application and send those metrics. This is good but it raises some questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Where do you collect those metrics?&lt;/li&gt;
&lt;li&gt;Do you use Istio’s Prometheus or set up your own Prometheus?&lt;/li&gt;
&lt;li&gt;If you use Istio’s Prometheus, what configuration do you need to get those metrics scraped?&lt;/li&gt;
&lt;/ul&gt;
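&lt;p&gt;Before answering, it helps to poke at what Istio’s bundled Prometheus already collects. One quick way to do that, assuming the add-on Prometheus deployment lives in the istio-system namespace (the deployment name is the usual default, but verify it in your cluster), is to port-forward its UI:&lt;/p&gt;

```shell
# Expose Istio's add-on Prometheus UI on localhost:9090
# (assumes a "prometheus" deployment in the istio-system namespace)
kubectl -n istio-system port-forward deployment/prometheus 9090:9090
# then open http://localhost:9090 and try a query such as istio_request_count
```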
&lt;p&gt;Let’s try to answer these questions.&lt;/p&gt;</description></item><item><title>Istio Routing Basics</title><link>https://atamel.dev/posts/2018/11-02_istio-routing-basics/</link><pubDate>Fri, 02 Nov 2018 11:22:44 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/11-02_istio-routing-basics/</guid><description>&lt;p&gt;When learning a new technology like &lt;a href="http://istio.io"&gt;Istio&lt;/a&gt;, it’s always a good idea to take a look at sample apps. Istio repo has a few &lt;a href="https://github.com/istio/istio/tree/master/samples"&gt;sample&lt;/a&gt; apps but they fall short in various ways. &lt;a href="https://github.com/istio/istio/tree/master/samples/bookinfo"&gt;BookInfo&lt;/a&gt; is covered in the docs and it is a good first step. However, it is too verbose with too many services for me and the docs seem to focus on managing the BookInfo app, rather than building it from ground up. There’s a smaller &lt;a href="https://github.com/istio/istio/tree/master/samples/helloworld"&gt;helloworld&lt;/a&gt; sample but it’s more about autoscaling than anything else.&lt;/p&gt;</description></item><item><title>Dialogflow fulfillment with C# and App Engine</title><link>https://atamel.dev/posts/2018/09-24_dialogflow-fulfillment-with-c--and-app-engine/</link><pubDate>Mon, 24 Sep 2018 13:06:27 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/09-24_dialogflow-fulfillment-with-c--and-app-engine/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/1__qekeTye0yYkwNQqTrdc46g.jpeg" alt="" /&gt;
 
&lt;/figure&gt;
&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/1__Qa2RQl3VgBual__IJgh8ADQ.png" alt="" /&gt;
 
&lt;/figure&gt;
&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/1__eMkiv4EGJGTsR0H1l5WFGw.png" alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://dialogflow.com/"&gt;Dialogflow&lt;/a&gt; is a developer platform for building voice or text-based conversational apps on a number of platforms such as Google Assistant, Facebook Messenger, Twilio, Skype and more. Earlier this year, we used Dialogflow to build a Google Assistant app and extended it to use the power of Google Cloud. You can read more about it on Google Cloud blog &lt;a href="https://cloud.google.com/blog/products/gcp/google-home-meets-net-containers-using-dialogflow"&gt;here&lt;/a&gt;, see the app code on GitHub &lt;a href="https://github.com/GoogleCloudPlatform/dotnet-docs-samples/tree/master/applications/googlehome-meets-dotnetcontainers"&gt;here&lt;/a&gt; and one of my talk videos about the app is &lt;a href="https://youtu.be/dd19Gw4WDkU?list=PLQjaCpWNuxVmS_FV4q1aSrZMDuv5-U_FH"&gt;here.&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Istio 101 (1.0) on GKE</title><link>https://atamel.dev/posts/2018/08-06_istio-101--1-0--on-gke/</link><pubDate>Mon, 06 Aug 2018 18:36:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/08-06_istio-101--1-0--on-gke/</guid><description>&lt;p&gt;Istio 1.0 is finally &lt;a href="https://istio.io/blog/2018/announcing-1.0/"&gt;announced&lt;/a&gt;! In this post, I updated &lt;a href="https://meteatamel.wordpress.com/2018/06/07/istio-101-0-8-0-on-gke/"&gt;my previous Istio 101 post&lt;/a&gt; with Istio 1.0 specific instructions. Most of the instructions are the same but with a few minor differences about where things live (folder names/locations changed) and also most commands now default to kubectl instead of istioctl.&lt;/p&gt;
&lt;p&gt;For those of you who haven’t read my Istio 101 post, I show how to install Istio 1.0 on Google Kubernetes Engine (GKE), deploy the sample BookInfo app and show some of the add-ons and traffic routing.&lt;/p&gt;</description></item><item><title>Google Home meets .NET containers using Dialogflow</title><link>https://atamel.dev/posts/2018/07-13_google-home-meets-net-containers-using-dialogflow/</link><pubDate>Fri, 13 Jul 2018 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/07-13_google-home-meets-net-containers-using-dialogflow/</guid><description>&lt;p&gt;I use my Google Home all the time to check the weather before leaving home, set up alarms, listen to music, but I never considered writing an app for it. What does it take to write an app for the Google Home assistant? And can we make it smarter by leveraging Google Cloud? Those were the questions that my colleague Chris Bacon, and I were thinking about when we decided to build a demo for a conference talk.&lt;/p&gt;</description></item><item><title>.NET Days in Zurich, Shift Conf in Split</title><link>https://atamel.dev/posts/2018/06-11-net-days-in-zurich-shift-conf-in-split/</link><pubDate>Mon, 11 Jun 2018 16:00:55 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/06-11-net-days-in-zurich-shift-conf-in-split/</guid><description>&lt;p&gt;Last week was a quite interesting week in terms of travel. First I got to visit Zurich again after a while for &lt;a href="https://dotnetday.ch/"&gt;.NET Day&lt;/a&gt; and then I got to visit the Croatian coastal town Split for the first time for &lt;a href="https://shiftconf.co/"&gt;Shift Conference&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="net-days-in-zurich"&gt;.NET Days in Zurich&lt;/h2&gt;
&lt;p&gt;When I used to work at Adobe, part of my team was based in Basel, Switzerland. As a result, I used to visit Basel, Zurich and other Swiss cities quite often. Since I left Adobe, I visited Switzerland only once 2 years ago. I was naturally excited to visit Zurich again for &lt;a href="https://dotnetday.ch/"&gt;.NET Day&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Istio 101 (0.8.0) on GKE</title><link>https://atamel.dev/posts/2018/06-07_istio-101--0-8-0--on-gke/</link><pubDate>Thu, 07 Jun 2018 07:29:47 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/06-07_istio-101--0-8-0--on-gke/</guid><description>&lt;p&gt;In one of my previous &lt;a href="https://meteatamel.wordpress.com/2018/04/24/istio-101-with-minikube/"&gt;posts&lt;/a&gt;, I showed how to install Istio on minikube and deploy the sample BookInfo app. A new Istio version is out (0.8.0) &lt;a href="https://istio.io/about/notes/0.8/"&gt;with a lot of changes&lt;/a&gt;, especially changes on &lt;a href="https://istio.io/blog/2018/v1alpha3-routing/"&gt;traffic management,&lt;/a&gt; which made my steps in the previous post a little obsolete.&lt;/p&gt;
&lt;p&gt;In this post, I want to show how to install Istio 0.8.0 on Google Kubernetes Engine (GKE), deploy the sample BookInfo app and show some of the add-ons and traffic routing.&lt;/p&gt;</description></item><item><title>Codemotion in Amsterdam, Devoxx in London</title><link>https://atamel.dev/posts/2018/05-29_codemotion-in-amsterdam--devoxx-in-london/</link><pubDate>Tue, 29 May 2018 08:32:21 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/05-29_codemotion-in-amsterdam--devoxx-in-london/</guid><description>&lt;p&gt;After &lt;a href="https://meteatamel.wordpress.com/2018/05/20/istanbul-the-city-where-the-east-and-the-west-meet/"&gt;my trip in Istanbul&lt;/a&gt;, I visited my parents in Nicosia, Cyprus for a long weekend. Then, I stopped by in Amsterdam for &lt;a href="https://amsterdam2018.codemotionworld.com/"&gt;Codemotion&lt;/a&gt; before coming back to London for &lt;a href="https://www.devoxx.co.uk/"&gt;Devoxx&lt;/a&gt; 4 cities in 4 countries in 1 week was exhausting but also a lot of fun in many ways.&lt;/p&gt;
&lt;h3 id="codemotion-amsterdam"&gt;Codemotion Amsterdam&lt;/h3&gt;
&lt;p&gt;Amsterdam is almost a second home to me nowadays. There’s a great tech scene and a lot of tech events throughout the year, as a result, I end up visiting Amsterdam at least 2–3 times a year.&lt;/p&gt;</description></item><item><title>Istanbul: The city where the East and the West meet</title><link>https://atamel.dev/posts/2018/05-20-istanbul-the-city-where-the-east-and-the-west-meet/</link><pubDate>Sun, 20 May 2018 09:33:47 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/05-20-istanbul-the-city-where-the-east-and-the-west-meet/</guid><description>&lt;p&gt;Istanbul is one of those crazy dynamic cities with friendly people, amazing history, great shopping and above all, a food heaven. Naturally, I was excited to be back for &lt;a href="https://javaday.istanbul/" target="_blank" rel="noopener noreferrer"&gt;Java Day Istanbul&lt;/a&gt; conference.&lt;/p&gt;
&lt;p&gt;I came a day early to meet with a partner and visit a customer. They had lots of questions on Kubernetes and hybrid-cloud. It was quite useful for me to hear about their challenges about moving to the cloud and propose some solutions.&lt;/p&gt;</description></item><item><title>Manchester after 20 years</title><link>https://atamel.dev/posts/2018/05-11-manchester-after-20-years/</link><pubDate>Fri, 11 May 2018 15:00:40 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/05-11-manchester-after-20-years/</guid><description>&lt;p&gt;After &lt;a href="https://meteatamel.wordpress.com/2018/05/08/first-time-in-china/"&gt;my time in Beijing&lt;/a&gt;, I flew back to the UK but instead of London, I went to Manchester for &lt;a href="http://www.ipexpomanchester.com/" target="_blank" rel="noopener noreferrer"&gt;IPExpo&lt;/a&gt; conference. The first time I visited Manchester was 20 years ago. I was a 15 year old high school kid and I visited my great aunt and uncle one summer with my sister. I remember I had such a good time back then.&lt;/p&gt;
&lt;p&gt;Unfortunately, I never had a chance to visit Manchester again and it was quite nostalgic for me to be back. I was hoping that I would remember Manchester a little bit but to my surprise, it was like a brand new city to me.&lt;/p&gt;</description></item><item><title>First time in China</title><link>https://atamel.dev/posts/2018/@meteatamel/first-time-in-china-142711218529/</link><pubDate>Tue, 08 May 2018 16:58:02 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/@meteatamel/first-time-in-china-142711218529/</guid><description>&lt;p&gt;Visiting a new country is always extra special for me and a couple of weeks ago, I got to do just that and visited Beijing, China for the first time.&lt;/p&gt;
&lt;p&gt;After &lt;a href="https://meteatamel.wordpress.com/2018/04/25/trip-report-tdc-in-florianopolis/"&gt;TDC in Floripa&lt;/a&gt;, I flew to Beijing for &lt;a href="https://qconferences.com/"&gt;QCon&lt;/a&gt; conference and for a Google Developer Group (GDG) meetup. It was the first time for me in China (my 48th country!) and first time speaking at QCon, so I was naturally excited. I didn’t know what to expect in Beijing but I was pleasantly surprised. Beijing is a modern city with good public transportation and interesting sites to visit. I got the feeling that it’s also very international with many expats, definitely more than I expected. I visited the iconic sites like Tiananmen Square and the Great Wall of China. I also visited Google’s Beijing office (my 35th Google office visit!)&lt;/p&gt;</description></item><item><title>Trip Report: TDC in Florianópolis</title><link>https://atamel.dev/posts/2018/04-25_trip-report--tdc-in-florian-polis/</link><pubDate>Wed, 25 Apr 2018 16:24:51 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/04-25_trip-report--tdc-in-florian-polis/</guid><description>&lt;p&gt;Last week, I was in my favorite country, Brazil, for &lt;a href="http://www.thedevconf.com.br/"&gt;The Developer Conference&lt;/a&gt; (TDC) in Florianópolis (aka Floripa). I went to Brazil for the first time last July. Since then, I’ve been there 3 more times and I gradually fell in love with it. People are friendly, their BBQ is amazing, scenery is beautiful. I always have a good time in Brazil and this time wasn’t an exception.&lt;/p&gt;
&lt;p&gt;In all previous times, I was mainly in Sao Paulo for a conference followed by a short trip to Rio (my favorite city!) for myself. This time, I went to Florianópolis which is a city to the south of Sao Paulo. It has a similar feeling as Rio (i.e. beach town) but smaller and felt safer than Rio. It had amazing views of mountains and sea, great food and friendly people, as always the case with Brazil.&lt;/p&gt;</description></item><item><title>Istio 101 with Minikube</title><link>https://atamel.dev/posts/2018/04-24_istio-101-with-minikube/</link><pubDate>Tue, 24 Apr 2018 05:44:15 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/04-24_istio-101-with-minikube/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/0__xYvYviOaecKomFTU." alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;As part of my Istio 101 talk, I like to show demos locally (because conference Wifi can be unreliable) and &lt;a href="https://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt; is perfect for this. Minikube gives you a local Kubernetes cluster on top of which you can install Istio.&lt;/p&gt;
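&lt;p&gt;For reference, a local cluster with enough headroom for Istio can be started like this; the memory and CPU values are ballpark figures rather than official requirements:&lt;/p&gt;

```shell
# Start a local single-node cluster sized for Istio demos (values are approximate)
minikube start --memory 8192 --cpus 4
# verify the cluster is up before installing Istio
kubectl get nodes
```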
&lt;p&gt;In this post, I want to show how to do Istio 101 on Minikube. More specifically, I will show how to install Istio, deploy a sample application, install add-ons like &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt;, &lt;a href="https://zipkin.io/"&gt;Zipkin&lt;/a&gt;, &lt;a href="https://github.com/istio/istio/tree/master/addons/servicegraph"&gt;ServiceGraph&lt;/a&gt; and change traffic routes dynamically.&lt;/p&gt;</description></item><item><title>Trip Report: Codemotion in Rome</title><link>https://atamel.dev/posts/2018/04-19_trip-report--codemotion-in-rome/</link><pubDate>Thu, 19 Apr 2018 16:32:23 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/04-19_trip-report--codemotion-in-rome/</guid><description>&lt;p&gt;Last week, I was in one of my favorite cities, Rome, for &lt;a href="https://rome2018.codemotionworld.com/"&gt;Codemotion&lt;/a&gt; conference. There are many cities to see in the world and normally, I do not like to revisit cities that I’ve been before. However, Rome is a great exception. It was my fifth visit there over the years and it was still as exciting as the first time. I also found out that Google has a small office in Rome, so I paid a quick visit next morning before attending the conference.&lt;/p&gt;</description></item><item><title>How to run Windows Containers on Compute Engine</title><link>https://atamel.dev/posts/2018/04-03_how-to-run-windows-containers-on-compute-engine/</link><pubDate>Tue, 03 Apr 2018 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/04-03_how-to-run-windows-containers-on-compute-engine/</guid><description>&lt;p&gt;Container virtualization is a rapidly evolving technology that can simplify how you deploy and manage distributed applications. When people discuss containers, they usually mean Linux-based containers. 
This makes sense, because native Linux kernel features like cgroups introduced the idea of resource isolation, eventually leading to containers as we know them today.&lt;/p&gt;
&lt;p&gt;For a long time, you could only containerize Linux processes, but Microsoft introduced support for Windows-based containers in Windows Server 2016 and Windows 10. With this, you can now take an existing Windows application, containerize it using Docker, and run it as an isolated container on Windows. Microsoft supports two flavors of Windows containers: Windows Server and Hyper-V. You can build Windows containers on either the microsoft/windowsservercore or microsoft/nanoserver base images. You can read more about Windows containers in the Microsoft Windows containers documentation.&lt;/p&gt;</description></item><item><title>Istio + Kubernetes on Windows</title><link>https://atamel.dev/posts/2018/02-19_istio---kubernetes-on-windows/</link><pubDate>Mon, 19 Feb 2018 09:31:09 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/02-19_istio---kubernetes-on-windows/</guid><description>&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/0__ANZYgpIy7N6V1QXs." alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;I’ve been recently looking into &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt;, an open platform to connect and manage microservices. After Containers and Kubernetes, I believe that Istio is the next step in our microservices journey where we standardize on tools and methods on how to manage and secure microservices. Naturally, I was very excited to get my hands on Istio.&lt;/p&gt;
&lt;p&gt;While setting up Istio on Google Kubernetes Engine (GKE) is pretty straightforward, it’s always useful to have a local setup for debugging and testing. I specifically wanted to set up Istio on my local &lt;a href="https://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt; Kubernetes cluster on my Windows machine. I ran into a few minor issues that I want to outline here in case it is useful to someone out there.&lt;/p&gt;</description></item><item><title>Minikube on Windows</title><link>https://atamel.dev/posts/2018/02-14_minikube-on-windows/</link><pubDate>Wed, 14 Feb 2018 05:16:34 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2018/02-14_minikube-on-windows/</guid><description>&lt;p&gt;When I’m playing with Kubernetes, I usually get a cluster from &lt;a href="https://cloud.google.com/kubernetes-engine/"&gt;Google Kubernetes Engine&lt;/a&gt; (GKE) because it’s literally &lt;a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create"&gt;a single gcloud command&lt;/a&gt; to get a Kubernetes cluster up and running on GKE. It is sometimes useful though to have a Kubernetes cluster running locally for testing and debugging. &lt;a href="https://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt; is perfect for this.&lt;/p&gt;
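&lt;p&gt;As a rough sketch, both setups boil down to roughly one command each; the cluster name, zone, and VM driver below are placeholder assumptions, not values from the post:&lt;/p&gt;

```shell
# GKE: a single gcloud command to create a managed Kubernetes cluster
# (cluster name and zone are placeholders)
gcloud container clusters create my-cluster --zone europe-west1-b

# Minikube on Windows: one command, assuming Hyper-V is enabled
# (--vm-driver was the flag name around the time of this post)
minikube start --vm-driver hyperv
```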
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2018/0__CUxIJLODCN__MLSa8." alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Minikube runs a single-node Kubernetes cluster inside a VM on your laptop. There are instructions on how to install it on Linux, Mac and Windows. Unfortunately, the &lt;a href="https://github.com/kubernetes/minikube#windows"&gt;instructions for Windows&lt;/a&gt; are a little lacking, so I want to document how I got Minikube up and running on my Windows 10 machine.&lt;/p&gt;</description></item><item><title>Little Mermaid and the Balkans</title><link>https://atamel.dev/posts/2017/10-30_little-mermaid-and-the-balkans/</link><pubDate>Mon, 30 Oct 2017 19:09:46 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/10-30_little-mermaid-and-the-balkans/</guid><description>&lt;p&gt;I don’t usually get to visit this many new places in such a short amount of time, but last week I got to visit 4 cities in 4 countries. The amazing thing was that I had never been to any of these cities or countries before!&lt;/p&gt;
&lt;p&gt;My journey started in Copenhagen, Denmark on Monday. I had been in all the countries around Denmark but not in Denmark itself, so I was happy to finally add Denmark to the list of visited countries. I had to work on Monday, so I paid a visit to the Google office in Copenhagen. This was the 27th Google office I’d ever visited 🙂&lt;/p&gt;</description></item><item><title>Ada Lovelace Day in London, Unter den Linden in Berlin and DevFest in beautiful Lviv</title><link>https://atamel.dev/posts/2017/10-16_ada-lovelace-day-in-london--unter-den-linden-in-berlin-and-devfest-in-beautiful-lviv/</link><pubDate>Mon, 16 Oct 2017 20:38:30 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/10-16_ada-lovelace-day-in-london--unter-den-linden-in-berlin-and-devfest-in-beautiful-lviv/</guid><description>&lt;p&gt;October 10 was &lt;a href="https://findingada.com/"&gt;Ada Lovelace Day&lt;/a&gt;, a special day to celebrate women in science, technology, engineering and maths. Unfortunately, there are not enough women in software engineering and technology in general. Programs like &lt;a href="https://www.womentechmakers.com/"&gt;Women Techmakers&lt;/a&gt; do a good job of encouraging more participation by women in technology with meetups, conferences and hackathons. One of those conferences, Tech(k)now Day, happened in London on Ada Lovelace Day and I was happy that Google Cloud was a sponsor. We had a booth and I was there with other Googlers answering questions. 
I also gave a talk on Containers and Kubernetes to a small group of 30+ people.&lt;/p&gt;</description></item><item><title>Deploying ASP.NET Core apps on Kubernetes/Container Engine</title><link>https://atamel.dev/posts/2017/09-11_deploying-asp-net-core-apps-on-kubernetes-container-engine/</link><pubDate>Mon, 11 Sep 2017 10:27:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/09-11_deploying-asp-net-core-apps-on-kubernetes-container-engine/</guid><description>&lt;p&gt;In my previous &lt;a href="https://meteatamel.wordpress.com/2017/08/15/deploying-asp-net-core-apps-on-app-engine/"&gt;post&lt;/a&gt;, I talked about how to deploy a containerised ASP.NET Core app to App Engine (flex) on Google Cloud. App Engine (flex) is an easy way to run containers in production: Just send your container and let Google Cloud figure out how to run it at scale. It comes with some nice default features such as versioning, traffic splitting, dashboards and autoscaling. However, it doesn’t give you much control.&lt;/p&gt;</description></item><item><title>Deploying ASP.NET Core apps on App Engine</title><link>https://atamel.dev/posts/2017/08-15_deploying-asp-net-core-apps-on-app-engine/</link><pubDate>Tue, 15 Aug 2017 04:32:34 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/08-15_deploying-asp-net-core-apps-on-app-engine/</guid><description>&lt;p&gt;I love how easy it is to deploy and run containerized ASP.NET Core apps on App Engine (flex). So much so that, I created a Cloud Minute recently to show you how, here it is.&lt;/p&gt;
&lt;p&gt;It basically involves 3 steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create your ASP.NET Core app using &lt;em&gt;dotnet&lt;/em&gt; command line tool inside Cloud Shell and publish your app to get a self-contained DLL.&lt;/li&gt;
&lt;li&gt;Containerize your app by creating a &lt;em&gt;Dockerfile,&lt;/em&gt; relying on the official App Engine image and pointing to the self-contained DLL of your app.&lt;/li&gt;
&lt;li&gt;Create an &lt;em&gt;app.yaml&lt;/em&gt; file for App Engine and use &lt;em&gt;gcloud&lt;/em&gt; to deploy to App Engine.&lt;/li&gt;
&lt;/ol&gt;
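&lt;p&gt;A rough sketch of the three steps above; the published DLL name (&lt;em&gt;HelloWorld.dll&lt;/em&gt;) and the base image tag are placeholder assumptions, not taken from the codelab:&lt;/p&gt;

```shell
# 1. Publish the app to get a self-contained set of binaries
dotnet publish -c Release

# 2. Dockerfile relying on the official App Engine ASP.NET Core base image
#    (the DLL name is a placeholder for your app's published DLL)
cat > Dockerfile <<'EOF'
FROM gcr.io/google-appengine/aspnetcore:2.0
COPY . /app
WORKDIR /app
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
EOF

# 3. app.yaml for the App Engine flexible environment, then deploy with gcloud
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF
gcloud app deploy
```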
&lt;p&gt;That’s it! If you want to go through these steps yourself, we also have a codelab for you that you can access &lt;a href="https://codelabs.developers.google.com/codelabs/cloud-app-engine-aspnetcore"&gt;here.&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Putting gRPC multi-language support to the test</title><link>https://atamel.dev/posts/2017/05-08_putting-grpc-multi-language-support-to-the-test/</link><pubDate>Mon, 08 May 2017 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/05-08_putting-grpc-multi-language-support-to-the-test/</guid><description>&lt;p&gt;gRPC is an RPC framework developed and open-sourced by Google. There are many benefits to gRPC, such as efficient connectivity with HTTP/2, efficient data serialization with Protobuf, bi-directional streaming and more, but one of the biggest benefits is often overlooked: multi-language support.&lt;/p&gt;
&lt;p&gt;Out of the box, gRPC supports multiple programming languages: C#, Java, Go, Node.js, Python, PHP, Ruby and more. In the new microservices world, the multi-language support provides the flexibility you need to implement services in whatever language and framework you like and let gRPC handle the low-level connectivity and data transfer between microservices in an efficient and consistent way.&lt;/p&gt;</description></item><item><title>Windows and .NET on Google Cloud Platform</title><link>https://atamel.dev/posts/2017/03-20_windows-and--net-on-google-cloud-platform/</link><pubDate>Mon, 20 Mar 2017 09:25:05 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/03-20_windows-and--net-on-google-cloud-platform/</guid><description>&lt;p&gt;Originally published in &lt;a href="https://www.sdn.nl/MAGAZINE/ID/1261/SDN-Magazine-131"&gt;SDN Magazine 131&lt;/a&gt; in February 2017.&lt;/p&gt;
&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Until recently, there were two distinct camps in the software world: the Windows (A.K.A. closed) world and the Linux (A.K.A. open) world. In the Linux world, we had tools like the bash shell, Java programming language, Eclipse IDE, MySQL database, and many other open-source projects by Apache. In the Windows world, we had similar, yet distinct tools mainly developed by Microsoft, such as the C# programming language, Visual Studio IDE, SQL Server and PowerShell.&lt;/p&gt;</description></item><item><title>Windows and .NET Codelabs - an overview</title><link>https://atamel.dev/posts/2017/02-07_windows-and-net-codelabs-an-overview/</link><pubDate>Tue, 07 Feb 2017 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/02-07_windows-and-net-codelabs-an-overview/</guid><description>&lt;p&gt;Google Developers Codelabs provide guided coding exercises to get hands-on experience with a wide range of topics such as Android Wear, Firebase and Web. Google Cloud Platform (GCP) has its own section, with codelabs for Google Compute Engine, Google App Engine, Kubernetes and many more.&lt;/p&gt;
&lt;p&gt;We’re always working to create new content, and I’m happy to announce that we now have new codelabs for running Windows and .NET apps on GCP, with their own dedicated page. Here’s an overview to help you get started.&lt;/p&gt;</description></item><item><title>From the Monolith to Microservices</title><link>https://atamel.dev/posts/2017/01-23_from-the-monolith-to-microservices/</link><pubDate>Mon, 23 Jan 2017 11:26:15 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/01-23_from-the-monolith-to-microservices/</guid><description>&lt;p&gt;I remember the old days where we used to package all our modules into a single app (aka the Monolith), deployed everything all at once and called it an enterprise app. I have to admit, the first time I heard the term enterprise app, it felt special. Suddenly, my little module was not so little anymore. It was part of something bigger and more important, at least that’s what I thought. There was a lot of convention and overhead that came with working in this enterprise app model but it was a small price to pay for consistency, right?&lt;/p&gt;</description></item><item><title>Google Cloud Next’17</title><link>https://atamel.dev/posts/2017/01-19_google-cloud-next-17-2a9b5b990c2d/</link><pubDate>Thu, 19 Jan 2017 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2017/01-19_google-cloud-next-17-2a9b5b990c2d/</guid><description>&lt;p&gt;In my previous &lt;a href="https://meteatamel.wordpress.com/2017/01/17/one-year-on/"&gt;post&lt;/a&gt;, I promised to talk about some good conferences I’m attending or speaking over the coming months. One of those conferences that I’m most excited about is &lt;a href="https://cloudnext.withgoogle.com/"&gt;Google Cloud Next’17:&lt;/a&gt; Google’s main cloud conference happening March 8–10 in San Francisco.&lt;/p&gt;
&lt;p&gt;&lt;figure&gt;
 &lt;img src="https://atamel.dev/img/2017/0__85B4CXcvF7Mi__d1F." alt="" /&gt;
 
&lt;/figure&gt;&lt;/p&gt;
&lt;p&gt;Last year, I attended that conference as a Noogler. There were a lot of developers and great technical content. This year’s &lt;a href="https://cloudnext.withgoogle.com/schedule"&gt;schedule&lt;/a&gt; has just been published and it looks even more exciting, especially if you’re a .NET developer!&lt;/p&gt;</description></item><item><title>How to build and launch an ASP.NET Core app from Google Cloud Shell — without ever leaving the browser</title><link>https://atamel.dev/posts/2016/11-11_how-to-build-and-launch-an-aspnet-core-app-from-google-cloud-shell/</link><pubDate>Fri, 11 Nov 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/11-11_how-to-build-and-launch-an-aspnet-core-app-from-google-cloud-shell/</guid><description>&lt;p&gt;Google Cloud Shell, my favorite development tool for Google Cloud Platform, just got more awesome with two new features.&lt;/p&gt;
&lt;p&gt;First, we recently integrated Eclipse Orion, an online code editor, with Cloud Shell. If you&amp;rsquo;re not a Vim or Emacs fan, Orion is a welcome addition to Cloud Shell. It enables you to edit code right inside the browser with basic syntax highlighting and minimal effort.&lt;/p&gt;
&lt;p&gt;Second, we added .NET Core command line interface tools to Cloud Shell. This is a new cross-platform toolchain for developing .NET Core applications and it can be used to create, run and publish .NET Core apps from the command line.&lt;/p&gt;</description></item><item><title>Managing containerized ASP.NET Core apps with Kubernetes</title><link>https://atamel.dev/posts/2016/10-14_managing-containerized-aspnet-core-apps-with-kubernetes/</link><pubDate>Fri, 14 Oct 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/10-14_managing-containerized-aspnet-core-apps-with-kubernetes/</guid><description>&lt;p&gt;One of our goals here on the Google Cloud Platform team is to support the broadest possible array of platforms and operating systems. That’s why we’re so excited about the ASP.NET Core, the next generation of the open source ASP.NET web framework built on .NET Core. With it, .NET developers can run their apps cross-platform on Windows, Mac and Linux.&lt;/p&gt;
&lt;p&gt;One thing that ASP.NET Core does is allow .NET applications to run in Docker containers. All of a sudden, we’ve gone from Windows-only web apps to lean cross-platform web apps running in containers. This has been great to see!&lt;/p&gt;</description></item><item><title>Running Powershell on Google Cloud SDK</title><link>https://atamel.dev/posts/2016/09-13_running-powershell-on-google-cloud-sdk/</link><pubDate>Tue, 13 Sep 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/09-13_running-powershell-on-google-cloud-sdk/</guid><description>&lt;p&gt;It’s exciting to see so many options for .NET developers to manage their cloud resources on Google Cloud Platform. Apart from the usual Google Cloud Console, there&amp;rsquo;s Cloud Tools for Visual Studio, and the subject of this post: Cloud Tools for PowerShell.&lt;/p&gt;
&lt;p&gt;PowerShell is a command-line shell and associated scripting language built on the .NET Framework. It&amp;rsquo;s the default task automation and configuration management tool used in the Windows world. A PowerShell cmdlet is a lightweight command invoked within PowerShell.&lt;/p&gt;</description></item><item><title>Getting started with Cloud Tools for Visual Studio</title><link>https://atamel.dev/posts/2016/09-08_getting-started-with-cloud-tools-for-visual-studio/</link><pubDate>Thu, 08 Sep 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/09-08_getting-started-with-cloud-tools-for-visual-studio/</guid><description>&lt;p&gt;If you&amp;rsquo;re a .NET developer, you&amp;rsquo;re used to managing cloud resources right inside Visual Studio. With the recent release of our Cloud Tools for Visual Studio, you can also manage your Google Cloud Platform resources from Visual Studio.&lt;/p&gt;
&lt;p&gt;Cloud Tools is a Visual Studio extension. It has a quickstart page with detailed information on how to install the extension, how to add your credentials and how to browse and manage your cloud resources. In this post, I want to highlight some of the main features and refer to individual how-to pages for details.&lt;/p&gt;</description></item><item><title>Getting started with Google Cloud Client Libraries for .NET</title><link>https://atamel.dev/posts/2016/08-26_getting-started-with-google-cloud-client-libraries-for-net/</link><pubDate>Fri, 26 Aug 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/08-26_getting-started-with-google-cloud-client-libraries-for-net/</guid><description>&lt;p&gt;Last week, we introduced new tools and client libraries for .NET developers to integrate with Google Cloud Platform, including Google Cloud Client Libraries for .NET, a set of new client libraries that provide an idiomatic way for .NET developers to interact with GCP services. In this post, we&amp;rsquo;ll explain what it takes to install the new client libraries for .NET in your project.&lt;/p&gt;</description></item><item><title>Scheduling Dataflow pipelines using App Engine Cron Service or Cloud Functions</title><link>https://atamel.dev/posts/2016/04-12_scheduling-dataflow-pipelines-using-app-engine-cron-service-or-cloud-functions/</link><pubDate>Tue, 12 Apr 2016 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/posts/2016/04-12_scheduling-dataflow-pipelines-using-app-engine-cron-service-or-cloud-functions/</guid><description>&lt;p&gt;Google Cloud Dataflow provides a unified programming model for batch and stream data processing along with a managed service to execute parallel data processing pipelines on Google Cloud Platform.&lt;/p&gt;
&lt;p&gt;Once a Dataflow pipeline is created, it can be tested locally using DirectPipelineRunner, and if everything looks good, it can be manually executed as a job in Dataflow Service by triggering DataflowPipelineRunner or BlockingDataflowPipelineRunner with Apache Maven or Dataflow Eclipse Plugin. You can monitor the progress of your submitted job with Dataflow Monitoring Interface from Cloud Platform Console or Dataflow Command-line Interface from gcloud.&lt;/p&gt;</description></item><item><title>My knee journey</title><link>https://atamel.dev/my-knee-journey/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/my-knee-journey/</guid><description>&lt;h2 id="table-of-contents"&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#why-this-page"&gt;Why this page?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#first-and-second-injuries-and-first-surgery-in-2009"&gt;First and second injuries and first surgery in 2009&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#third-injury-in-march-2023"&gt;Third injury in March 2023&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#second-surgery-on-april-25-2023"&gt;Second surgery on April 25, 2023&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-1"&gt;Week 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-2"&gt;Week 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-3"&gt;Week 3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-4"&gt;Week 4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-5"&gt;Week 5&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-6"&gt;Week 6&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-7"&gt;Week 7&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#week-8"&gt;Week 8&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-3"&gt;Month 3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-4"&gt;Month 4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-7"&gt;Month 7&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-9"&gt;Month 9&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-12"&gt;Month 12&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#month-20"&gt;Month 20&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="why-this-page"&gt;Why this page?&lt;/h2&gt;
&lt;p&gt;In March 2023, I had a knee injury that required knee surgery on April 25,
2023. I created this page to document my surgery and recovery progress in detail
for myself. I also found it incredibly useful to read about other people&amp;rsquo;s
experiences with knee surgery and recovery, so I hope this also helps someone
else out there today or in the future.&lt;/p&gt;</description></item><item><title>Speaker Profile</title><link>https://atamel.dev/speaker-profile/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/speaker-profile/</guid><description>&lt;h2 id="about"&gt;About&lt;/h2&gt;
&lt;p&gt;I’m a Software Engineer and a Developer Advocate at Google in London. I build
tools, demos, tutorials, and give talks to educate developers and help them be
successful on Google Cloud.&lt;/p&gt;
&lt;h2 id="asking-me-to-speak-at-your-conference"&gt;Asking me to speak at your conference&lt;/h2&gt;
&lt;p&gt;If you want to invite me to speak at your tech conference, please send me a
message on LinkedIn or Twitter with details of the event:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What&amp;rsquo;s the name, date, and history of the event?&lt;/li&gt;
&lt;li&gt;Is it an online, hybrid, or in-person event?&lt;/li&gt;
&lt;li&gt;What&amp;rsquo;s the attendee profile (software engineers, architects, C-level, etc.)?&lt;/li&gt;
&lt;li&gt;What&amp;rsquo;s the expected attendee number (overall and per session)?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="speaking-history"&gt;Speaking history&lt;/h2&gt;
&lt;p&gt;Here&amp;rsquo;s some info on my talks and workshops:&lt;/p&gt;</description></item><item><title>Talks List</title><link>https://atamel.dev/talks-list/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>atamel@gmail.com (Mete Atamel)</author><guid>https://atamel.dev/talks-list/</guid><description>&lt;p&gt;I&amp;rsquo;ve given 423 talks/workshops since 2016.&lt;/p&gt;
&lt;h2 id="2026"&gt;2026&lt;/h2&gt;
&lt;p&gt;I gave 8 talks/workshops.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Event&lt;/th&gt;
 &lt;th&gt;Date&lt;/th&gt;
 &lt;th&gt;Topic&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Cloud &amp;amp; AI Train the Trainer - Istanbul&lt;/td&gt;
 &lt;td&gt;2026-03-13&lt;/td&gt;
 &lt;td&gt;Gemini for developers workshop&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cloud &amp;amp; AI Train the Trainer - Istanbul&lt;/td&gt;
 &lt;td&gt;2026-03-13&lt;/td&gt;
 &lt;td&gt;AI workshop masterclass&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Builders Day Istanbul - Online&lt;/td&gt;
 &lt;td&gt;2026-03-12&lt;/td&gt;
 &lt;td&gt;Gemini for developers&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Startup Sprint Day - Online&lt;/td&gt;
 &lt;td&gt;2026-03-05&lt;/td&gt;
 &lt;td&gt;Gemini for developers&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cloud &amp;amp; AI Train the Trainer - London&lt;/td&gt;
 &lt;td&gt;2026-03-04&lt;/td&gt;
 &lt;td&gt;AI workshop masterclass&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cloud &amp;amp; AI Train the Trainer - Berlin&lt;/td&gt;
 &lt;td&gt;2026-02-25&lt;/td&gt;
 &lt;td&gt;AI workshop masterclass&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;AgentCon - Istanbul&lt;/td&gt;
 &lt;td&gt;2026-02-07&lt;/td&gt;
 &lt;td&gt;Agent Protocols: MCP, A2A, and ADK in Action&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cloud Labs - Limassol&lt;/td&gt;
 &lt;td&gt;2026-01-28&lt;/td&gt;
 &lt;td&gt;Gemini for developers &amp;amp; Google Antigravity&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="2025"&gt;2025&lt;/h2&gt;
&lt;p&gt;I gave 29 talks/workshops.&lt;/p&gt;</description></item></channel></rss>