Announcing Workflows execution steps history

As you orchestrate more services with Workflows, the workflow gets more complicated, with more steps, jumps, iterations, and parallel branches. When the workflow execution inevitably fails at some point, you need to debug it and figure out which step failed and why. Until now, you only had an execution summary with inputs/outputs and logs to rely on when debugging executions. While this was good enough for basic workflows, it didn’t provide step-level debugging information. Read More ↗︎

C# library and samples for GenAI in Vertex AI

In my previous post, I talked about multi-language libraries and samples for GenAI. In this post, I want to zoom in on some C#-specific information for GenAI in Vertex AI. If you want to skip this blog post and just jump into code, there’s a collection of C# GenAI samples for Vertex AI. These samples show how to invoke GenAI from C# for different use cases such as text classification, extraction, summarization, sentiment analysis, and more using the C# client library. Read More →

Deploy and manage Kubernetes applications with Workflows

Workflows is a versatile service for orchestrating and automating a wide range of use cases: microservices, business processes, data and ML pipelines, IT operations, and more. It can also be used to automate the deployment of containerized applications on Kubernetes Engine (GKE), and this got even easier with the newly released (in preview) Kubernetes API connector. The new Kubernetes API connector enables access to GKE services from Workflows, which in turn enables Kubernetes-based resource management and orchestration, scheduled Kubernetes jobs, and more. Read More ↗︎
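Under the hood, the connector wraps calls to the Kubernetes API from workflow steps. Purely as a hedged illustration of what such a call does (not the connector itself), here is a Python sketch that creates a batch/v1 Job directly with the official Kubernetes client; the cluster access method, namespace, and image names are placeholders.

```python
# Rough Python equivalent of the Kubernetes API call a workflow step could
# make through the connector: create a batch/v1 Job. Not the connector itself;
# assumes kubeconfig (or in-cluster) access, and all names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="nightly-report"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="report",
                        image="us-docker.pkg.dev/my-project/images/report:latest",
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```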

Multi-language libraries and samples for GenAI in Vertex AI

You might think that you need to know Python to be able to use GenAI with Vertex AI. While Python is the dominant language in GenAI (and Vertex AI is no exception in that regard), you can actually use GenAI in Vertex AI from other languages such as Java, C#, Node.js, Go, and more. Let’s take a look at the details. The official SDK for Vertex AI is the Vertex AI SDK for Python and, as expected, it’s in Python. Read More →
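For reference, a call through the Python SDK looks roughly like the following sketch; the project, location, and the text-bison model name are assumptions for illustration, not taken from the post.

```python
# Minimal sketch using the Vertex AI SDK for Python; the project, location,
# and "text-bison" model name are illustrative assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict("Classify the sentiment of this review: 'Great phone, terrible battery.'")
print(response.text)
```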

Introducing a new Eventarc destination - internal HTTP endpoint in a VPC network

Eventarc helps users build event-driven architectures without having to implement, customize, or maintain the underlying infrastructure. Eventarc has added support (in public preview) for delivering events to internal HTTP endpoints in a Virtual Private Cloud (VPC) network. Customers, especially large enterprises, often run compute (typically GKE or GCE) on VPC-private IPs, frequently behind internal load balancers. This launch enables these services to consume Eventarc events. An internal HTTP endpoint can be an internal IP address or a fully qualified domain name (FQDN) for any HTTP endpoint in the VPC network. Read More →
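On the receiving side, the endpoint is simply an HTTP service reachable inside the VPC. As a hedged sketch (not from the post), a minimal handler for the CloudEvents-formatted POST that Eventarc delivers might look like this; the Flask framework choice and route are assumptions.

```python
# Minimal sketch of an internal HTTP endpoint consuming an Eventarc event.
# Eventarc delivers events as HTTP POSTs in CloudEvents format; the ce-*
# headers carry event metadata. The Flask framework choice is an assumption.
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    event_type = request.headers.get("ce-type")
    event_source = request.headers.get("ce-source")
    payload = request.get_json(silent=True)
    print(f"Received {event_type} from {event_source}: {payload}")
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```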

What languages are supported in WebAssembly outside the browser?

What languages are supported in WebAssembly running outside the browser? This is a question I often hear people ask. It has a complicated answer: WebAssembly outside the browser needs WASI, and not all languages have WASI support in their toolchain. Even if a language supports WASI well, WASI has its own limitations that you need to take into account. In short, you can’t take any code written in any language and expect to compile and run it as a Wasm+WASI module right now. Read More →

Adding HTTP around Wasm with Wagi

In my previous posts, I talked about how you can run WebAssembly (Wasm) outside the browser with WASI and run it in a Docker container with runwasi. The WASI specification gives Wasm modules access to things like the filesystem and environment variables (and I showed how in this blog post), but networking and threading are not implemented yet. This is severely limiting if you want to run HTTP-based microservices on Wasm, for example. Read More →
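To make the capability model concrete: a WASI host decides which directories and environment variables a module can see, and there is simply no comparable knob for networking, which is the gap Wagi works around with a CGI-style model over stdin/stdout. Here is a hedged sketch of hosting a Wasm+WASI module from Python with the wasmtime package; the module name and paths are placeholders.

```python
# Hedged sketch: run a Wasm+WASI module with the wasmtime Python package,
# granting it environment variables and a preopened directory. There is no
# equivalent switch for networking. Module name and paths are placeholders.
from wasmtime import Engine, Store, Module, Linker, WasiConfig

engine = Engine()
store = Store(engine)

wasi = WasiConfig()
wasi.argv = ["hello.wasm"]
wasi.env = [("GREETING", "hello")]
wasi.preopen_dir(".", "/data")  # host dir "." is visible to the module as /data
wasi.inherit_stdout()
store.set_wasi(wasi)

linker = Linker(engine)
linker.define_wasi()

module = Module.from_file(engine, "hello.wasm")
instance = linker.instantiate(store, module)
instance.exports(store)["_start"](store)
```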

Buffer workflow executions with a Cloud Tasks queue

In my previous post, I talked about how you can use a parent workflow to execute child workflows in parallel for faster overall processing time and easier detection of errors. Another useful pattern is to use a Cloud Tasks queue to create Workflows executions, and that’s the topic of this post. When your application experiences a sudden surge of traffic, it’s natural to want to handle the increased load by creating a high number of concurrent workflow executions. Read More →
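As a hedged sketch of the buffering pattern (all project, queue, workflow, and service account names are placeholders), enqueuing a Cloud Tasks HTTP task that calls the Workflows Executions API could look like this; the queue’s rate and concurrency limits then throttle how fast executions are actually created.

```python
# Hedged sketch of the buffering pattern: enqueue an HTTP task that calls the
# Workflows Executions API, letting the queue's rate/concurrency limits smooth
# out traffic spikes. All names and IDs below are placeholders.
import json
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
queue = client.queue_path("my-project", "us-central1", "workflow-buffer")

url = ("https://workflowexecutions.googleapis.com/v1/projects/my-project/"
       "locations/us-central1/workflows/my-workflow/executions")

task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": url,
        "headers": {"Content-Type": "application/json"},
        # The execution argument is itself a JSON string inside the request body.
        "body": json.dumps({"argument": json.dumps({"orderId": 42})}).encode(),
        "oauth_token": {
            "service_account_email": "workflow-invoker@my-project.iam.gserviceaccount.com"
        },
    }
}

client.create_task(request={"parent": queue, "task": task})
```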

Workflows executing other parallel workflows: A practical guide

There are numerous scenarios where you might want to execute tasks in parallel. One common use case involves dividing data into batches, processing each batch in parallel, and combining the results in the end. This approach not only speeds up the overall processing but also allows for easier error detection in smaller tasks. On the other hand, setting up parallel tasks, monitoring them, handling errors in each task, and combining the results in the end is not trivial. Read More →
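The post implements this with a parent workflow. Purely as a hedged approximation of the same fan-out idea from client code (not the post’s approach), here is a Python sketch that starts one child execution per batch with the Workflows Executions API client; all names are placeholders.

```python
# Hedged approximation of the fan-out pattern from client code (the post uses
# a parent workflow instead): start one Workflows execution per batch and
# collect the execution names. Project, location, and workflow are placeholders.
import json
from concurrent.futures import ThreadPoolExecutor
from google.cloud.workflows import executions_v1

client = executions_v1.ExecutionsClient()
parent = "projects/my-project/locations/us-central1/workflows/process-batch"

def start_batch(batch):
    execution = executions_v1.Execution(argument=json.dumps({"items": batch}))
    return client.create_execution(parent=parent, execution=execution)

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
with ThreadPoolExecutor() as pool:
    executions = list(pool.map(start_batch, batches))

for execution in executions:
    print(execution.name)  # each child execution can then be polled for completion
```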

Generative AI Short Courses by DeepLearning.AI

In my previous couple of posts (post1, post2), I shared my detailed notes on the Generative AI Learning Path in Google Cloud’s Skills Boost. It’s a great collection of courses to get started in GenAI, especially on the theory underpinning GenAI. Since then, I discovered another great resource to learn more about GenAI: Learn Generative AI Short Courses by DeepLearning.AI from Andrew Ng. In this post, I summarize what each course teaches you to help you decide which course to take. Read More →