About
I’m a Software Engineer and a Developer Advocate at Google in London. I build tools, demos, and tutorials, and give talks to educate developers and help them succeed on Google Cloud.
Asking me to speak at your conference
If you want to invite me to speak at your tech conference, please send me a message on LinkedIn or Twitter with details of the event:
- What’s the name, date, and history of the event?
- Is it an online, hybrid, or in-person event?
- What’s the attendee profile (software engineers, architects, C-level, etc.)?
- What’s the expected attendee number (overall and per session)?
Speaking history
Here’s some info on my talks and workshops:
- Regular speaker at events since 2016 (full list).
- Recorded talks: Playlist with some of my recorded talks.
- Slides: Speaker Deck profile with some of my talk slides.
Photo
Social
- Blog - atamel.dev
- Medium - meteatamel
- Twitter - @meteatamel
- Mastodon - mastodon.online/@atamel
- LinkedIn - meteatamel
- GitHub - meteatamel
- Facebook - mete.atamel
- Instagram - meteatamel
- Sessionize - meteatamel
- Speaker Deck - meteatamel
Current Talks (2024)
Avoid common LLM pitfalls
It’s easy to generate content with a Large Language Model (LLM), but the output often suffers from hallucinations (fake content), outdated information (not based on the latest data), and reliance on public data only (no private data). Additionally, the output format can be chaotic, often littered with harmful or personally identifiable information (PII), and using a large context window can become expensive—making LLMs less than ideal for real-world applications.
In this talk, we’ll begin with a quick overview of the latest advancements in LLMs. We’ll then explore various techniques to overcome common LLM challenges: grounding and Retrieval-Augmented Generation (RAG) to enhance prompts with relevant data; function calling to provide LLMs with more recent information; batching and context caching to control costs; frameworks for evaluating and security testing your LLMs and more!
By the end of this session, you’ll have a solid understanding of how LLMs can fail and what you can do to address these issues.
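As a small taste of the grounding and RAG techniques this talk covers, here is a minimal retrieval sketch. It is illustrative only: the bag-of-words "embedding" and the toy documents are stand-ins for a real embedding model and corpus.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real app would call an embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents, top_k=1):
    # Rank documents by similarity to the query; return the best matches.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Cloud Run deploys containers serverlessly",
    "Gemini is a multimodal large language model",
    "Kubernetes orchestrates container clusters",
]
best = retrieve("which model is multimodal?", docs)[0]
# Ground the prompt with the retrieved context instead of asking the bare question.
prompt = f"Answer using only this context: {best}\nQuestion: which model is multimodal?"
```

The point is the shape of the pipeline: retrieve relevant context first, then inject it into the prompt so the model answers from your data rather than hallucinating.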
Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications
When you change prompts or modify the Retrieval-Augmented Generation (RAG) pipeline in your LLM applications, how do you know it’s making a difference? You don’t—until you measure. But what should you measure, and how? Similarly, how can you ensure your LLM app is resilient against prompt injections or avoids providing harmful responses? More robust guardrails on inputs and outputs are needed beyond basic safety settings.
In this talk, we’ll explore various evaluation frameworks such as Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM outputs, understand the types of metrics they offer, and how these metrics are useful. We’ll also dive into testing and security frameworks like LLM Guard to ensure your LLM apps are safe and limited to precisely what you need.
Lessons learned building a GenAI powered app
Everyone’s excited about AI and justifiably so, but can it help us build better apps? This session will focus on a case study: a GenAI powered interactive trivia quiz app running in the Cloud. We’ll explore the challenges we faced while building the app and how GenAI proved to be a game changer. Join me for a fun and educational session featuring a live demo with audience participation, and some valuable lessons learned.
Improve Your Development Workflow with an AI assistant
In this session, you’ll learn how an AI assistant can speed up your development workflow. We’ll start with an empty workspace and design, code, and test an application with Google’s Gemini-powered AI assistant. Once the app is ready, we’ll deploy it to the cloud and analyze its logs, again with Gemini’s help. Along the way, you’ll also learn best practices for getting the most out of AI assistants in your development workflow.
Multi-modal LLMs for application developers
Multi-modal large language models (LLMs) can understand text, images, or videos, and with their ever-increasing context size, they open up interesting use cases for application developers. In this talk, we’ll take a tour of Gemini, Google’s multi-modal LLM, and its open source version Gemma, showing what’s possible and how to integrate them in your applications. We’ll also explore techniques such as RAG, function calling, and grounding to supply LLMs with more up-to-date and relevant data and minimize hallucinations.
Workshop: Hands-on Gemini with LangChain on Vertex AI
In this hands-on workshop, you’ll learn how to utilize Large Language Models (LLMs). You’ll start by familiarizing yourself with Gemini, Google’s multimodal LLM. You’ll then use Gemini on Google Cloud’s Vertex AI in various use cases: chatting with and exploring multimodal Gemini, extracting data from unstructured text, classifying text with few-shot prompting, searching your own private documents with RAG, supplementing the model with function calls to external APIs, and more. You’ll be using the LangChain framework. Come prepared with your laptop and we’ll provide the rest in Google Cloud!
Previous Talks
These are abstracts of some of my previous talks.
Open standards for building event-driven applications in the cloud
AsyncAPI is an open-source specification for describing and documenting asynchronous APIs, similar to the OpenAPI specification for documenting RESTful APIs. CloudEvents is a specification for event data in the cloud. Together, they enable developers to design, document, and test event-driven APIs and to easily share and consume event data across different cloud platforms and ecosystems.
In this session, we will explore the benefits of using AsyncAPI and CloudEvents in your tech stack, and how they can help you build asynchronous, event-driven applications that are well-documented and easy to maintain.
Tags: event-driven architecture, open source tools, microservices architecture
Elevator pitch: In this talk, we explore two major open standards for asynchronous applications: AsyncAPI and CloudEvents. Anyone wanting to build event-driven applications will find this talk useful.
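To make the CloudEvents side of this concrete, here is a minimal sketch of building and validating a CloudEvents v1.0 JSON envelope. The event type, source URI, and payload are hypothetical examples, not from any real system.

```python
import json
import uuid
from datetime import datetime, timezone

# The four required context attributes per the CloudEvents v1.0 specification.
REQUIRED = {"specversion", "id", "source", "type"}

def make_cloudevent(event_type, source, data):
    # Build a CloudEvents v1.0 envelope in JSON event format.
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),  # optional attribute
        "datacontenttype": "application/json",
        "data": data,
    }

def validate(event):
    # Check that all required context attributes are present and non-empty.
    missing = [attr for attr in REQUIRED if not event.get(attr)]
    if missing:
        raise ValueError(f"missing required attributes: {missing}")
    return True

event = make_cloudevent(
    event_type="com.example.order.created",  # hypothetical event type
    source="/orders/service",                # hypothetical source URI
    data={"orderId": 1234},
)
validate(event)
payload = json.dumps(event)  # ready to publish over HTTP, Pub/Sub, etc.
```

Because every producer and consumer agrees on this envelope, the same event can flow between different brokers and cloud platforms without translation glue.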
WebAssembly beyond the browser
WebAssembly (Wasm) allows you to compile native code and run it in a secure and performant way in the browser. The WebAssembly System Interface (WASI) started enabling Wasm to run outside the web browser in environments such as edge computing and cloud microservices. Docker has also recently announced support for Wasm, allowing it to be used as a lightweight alternative to Linux containers.
Whether Wasm will replace containers remains to be seen, but it’s definitely worth learning more about. In this talk, I’ll introduce Wasm, the terminology and landscape around it, and its current state as a server-side technology. We’ll also look at some demos and tools for working with Wasm on the server side.
Tags: webassembly, new tools, cutting edge
Elevator pitch: The talk will provide everything a developer needs to know about this new exciting technology called Wasm.
Service orchestration patterns
Once you have a fleet of services, you need to decide how to organize them and get them to cooperate toward a common goal. Should they communicate with direct calls? Or should they communicate indirectly, in a loosely coupled way, via events? Maybe a central orchestrator should direct the communication? What do you do when a service fails? When should you use a simple retry versus a more sophisticated Saga pattern? In this talk, we’ll look at orchestration patterns and techniques to get your services to cooperate in a resilient way.
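The simple-retry end of that spectrum can be sketched in a few lines. This is an illustrative retry-with-exponential-backoff helper; the flaky service below is a simulated stand-in, and the sleep function is injectable so the demo doesn’t actually wait.

```python
import random
import time

def call_with_retry(fn, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    # Retry a flaky service call with exponential backoff and a little jitter.
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up; a Saga would now run compensating actions
            delay = base_delay * 2 ** (attempt - 1) * (1 + random.random() * 0.1)
            sleep(delay)

# Simulated flaky downstream service: fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retry(flaky_service, sleep=lambda _: None)  # skip real waiting
```

A retry like this only covers transient failures of a single call; once a multi-step business process partially completes, you need compensating actions, which is where the Saga pattern comes in.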
Serverless beyond functions
Serverless is much more than simple HTTP triggered functions. You can run containers and whole apps serverlessly, group functions behind an API gateway, coordinate services with a central orchestrator or let them communicate indirectly with events. Serverless can be scheduled or made more resilient with task queues. You can even combine serverless orchestration with serverful services. In this talk, we’ll go through the history of serverless, explore how the serverless landscape evolved over the years and see where we are today. We’ll also look at some sample applications with flexible and resilient serverless patterns beyond simple functions.
Choreography vs Orchestration in serverless microservices
We went from a single monolith to a set of microservices that are small, lightweight, and easy to implement. Microservices enable reusability, make it easier to change and scale apps on demand but they also introduce new problems. How do microservices interact with each other toward a common goal? How do you figure out what went wrong when a business process composed of several microservices fails? Should there be a central orchestrator controlling all interactions between services or should each service work independently, in a loosely coupled way, and only interact through shared events? In this talk, we’ll explore the Choreography vs Orchestration question and see demos of some of the tools that can help.
Event-driven serverless architectures using Knative and Cloud Run
When you combine the efficiency of containers, agility of serverless and flexibility of event-driven services, you end up with a more reusable, interoperable and scalable architecture with minimal management overhead.
In this talk, we’ll explore open-source Knative Eventing and its managed counterpart on Cloud Run, see what they provide for event-driven serverless containers, and deep dive into some real-world reference architectures.
By the end of this session, you’ll have a solid understanding of how Knative and Cloud Run can power your event-driven apps.
Serverless Containers with Knative and Cloud Run
When you build an app, you typically have to choose between the agility of serverless and the flexibility of containers, but not both. Why does it have to be that way? Wouldn’t it be nice to have the best of both worlds?
In this talk, we’ll explore the open-source project Knative and its managed version, Cloud Run. Through a series of demos, we’ll see how these projects enable you to deploy and manage containers in a serverless way wherever you want, on-prem or in the cloud.
An app modernization story
Back in 2016, I deployed an ASP.NET monolith app to IIS on Windows. It worked, but it was clunky in every sense of the word. Over the years, the app was freed from Windows (thanks to .NET Core), containerized to run consistently in different environments (thanks to Docker), and decomposed into a set of loosely coupled, event-driven microservices (thanks to Knative and Cloud Run). The end result is a simpler, portable serverless architecture that’s easier and cheaper to run and maintain. In this talk, we’ll go through the modernization journey, explore the decision points, and deep dive into the final architecture.
Eventing with Knative and Cloud Run: From basics to advanced
Knative Eventing provides composable primitives to connect event sources to event consumers on Kubernetes. Cloud Run is its managed version on Google Cloud. In this demo-driven talk, we’ll start with the basics of Knative and Cloud Run Eventing, then continue with more advanced topics and see how Knative and Cloud Run can help you build event-driven serverless architectures.
Cloud Native on Google Cloud
In this session, we cover cloud-native technologies such as containers, Kubernetes, Istio, and Knative, and what it means to run them on Google Cloud.
Running multi-regional apps on Google Cloud
There are many reasons why you might want to deploy your app in multiple regions. Maybe you’re looking for extra resiliency and redundancy, or you want to minimize latency for your globally distributed users. Either way, running code in multiple regions can be challenging. There is a lot of complexity to uncover and limitations to work around. In this session, we first look at the different options for running code on Google Cloud. Then, we figure out what it takes to run that code in multiple regions. Along the way, we explore the pros and cons of each approach. At the end of the session, you’ll have a concrete decision tree of the available options.
Google Assistant powered by Containers, Machine Learning and .NET on Google Cloud
What does it take to connect Google Assistant to the cloud? Surprisingly, not much! In this talk, we will create a Dialogflow app to get Google Assistant to talk to a container managed by Kubernetes/App Engine. In the container, we’ll use some of the Machine Learning APIs and BigQuery and see how they can elevate our Google Assistant app to the next level. We’ll also integrate with Stackdriver and see how its HTTP tracing and live debugging features can give you more insight into your app.
Stop reinventing the wheel with Istio
Containers provide a consistent and reproducible environment to run our services. Orchestration systems like Kubernetes help us to manage and scale our container cluster with a consistent API. This is a good start for a loosely coupled microservices architecture but it is not enough. How do you control the flow of traffic and enforce policies between services? How do you visualize service dependencies and quickly identify issues? How can you provide verifiable service identities, handle and test for failures? You can implement your own custom solutions or you can rely on Istio, an open platform to connect, manage and secure microservices. In this talk, we will take a look at some of the key capabilities of Istio and see how it can help with your microservices network.
Kubernetes & Istio: the efficient approach to well-managed infrastructure
Google’s Kubernetes management system for container clusters has taken the IT world by storm. Learn how you can slash the time it takes to get a change into production and enable zero-downtime deployments. You’ll see how Google Kubernetes Engine takes away the administrative burden of running infrastructure so your IT teams can spend their time innovating. You’ll also get a taste of Istio, the new open system for securely managing networks of microservices.
Containers, Kubernetes and Google Cloud
Creating a single microservice is a well understood problem. Creating a cluster of load-balanced microservices that are resilient and self-healing is not so easy. Managing that cluster with rollouts and rollbacks, scaling individual services on demand, securely sharing secrets and configuration among services is even harder. Kubernetes, an open-source container management system, can help with this. In this talk, we will start with a simple microservice, containerize it using Docker, and scale it to a cluster of resilient microservices managed by Kubernetes. Along the way, we will learn what makes Kubernetes a great system for automating deployment, operations, and scaling of containerized applications.
Guided Tour of Google Cloud
Come along with us on a whirlwind tour of the Google Cloud Platform! We’ll cover computing, storage, data processing, networking, and machine learning, with a focus on use cases and live demos. You can expect to gain a high level of understanding of the capabilities available, and hopefully be inspired to build something great!
.NET apps on Google Cloud
With high-performance virtual machines (VMs) and networking, blazing-fast VM provisioning and autoscaling, and a rich set of services, Google Cloud is a great platform to deploy and run your traditional ASP.NET and new containerised ASP.NET Core applications. In this session, we will cover:
- How to run traditional Windows and SQL Server based ASP.NET apps on Compute Engine.
- How to run the new Linux-based containerised ASP.NET Core apps on App Engine and Kubernetes/Container Engine.
- How to integrate with Google Cloud services such as Cloud Storage and use machine learning APIs such as Vision API and Speech API.
- How to use Google Cloud PowerShell cmdlets and the Visual Studio extension to manage your projects.
This is your opportunity to learn about what Google Cloud offers for your .NET apps!
Google Cloud Serverless Workshop
In this workshop, you will discover the various serverless options offered by Google Cloud Platform, such as Cloud Functions (functions as a service), App Engine (application as a service), and Cloud Run (container as a service). You will create an application that lets users upload, analyze, and share pictures. Data will be stored in Cloud Storage (images) and Cloud Firestore (structured data). Along the way, additional services will be used, such as Vision API (to analyze pictures), Cloud Logging (to track interesting events), and Cloud Scheduler (to invoke workloads on a schedule).
Serverless with Knative Workshop
When you build a serverless app, you either tie yourself to a cloud provider, or you end up building your own serverless stack. Knative provides a better choice. Knative extends Kubernetes to provide a set of middleware components (build, serving, events) for modern, source-centric, and container-based apps that can run anywhere. In this workshop, we’ll go through the Knative components (Build, Eventing, Serving) and see how they can help you build serverless pipelines with open-source technologies.
Kubernetes from Basics to Advanced Workshop
In this workshop, we’ll take a look at the basics of containers and Kubernetes, such as pods, volumes, services, labels/selectors, and replica sets. Then, we’ll get into more advanced details of scaling, namespaces, and all the other great features of Kubernetes.