Orchestrate Vertex AI’s PaLM and Gemini APIs with Workflows

Everyone is excited about generative AI (gen AI) nowadays, and rightfully so. You might be generating text with PaLM 2 or Gemini Pro, generating images with Imagen 2, translating code from one language to another with Codey, or describing images and videos with Gemini Pro Vision. No matter how you’re using gen AI, at the end of the day you’re calling an endpoint, either with an SDK or library or via a REST API. Read More ↗︎
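For context, here is a rough Go sketch of what such a REST call looks like when made directly (rather than from a Workflows step, which the post covers). The project ID, region, and prompt are placeholders, and it assumes Application Default Credentials are configured:

```go
// Sketch: calling the Gemini generateContent REST endpoint directly from Go.
// YOUR_PROJECT, the region, and the prompt are placeholders.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()

	// Authenticated HTTP client using Application Default Credentials.
	client, err := google.DefaultClient(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatal(err)
	}

	// The same kind of endpoint a Workflows http.post step would target.
	url := "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/" +
		"locations/us-central1/publishers/google/models/gemini-pro:generateContent"

	body := []byte(`{"contents":[{"role":"user","parts":[{"text":"Say hello"}]}]}`)

	resp, err := client.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```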

Announcing Workflows execution steps history

As you orchestrate more services with Workflows, the workflow gets more complicated, with more steps, jumps, iterations, and parallel branches. When a workflow execution inevitably fails at some point, you need to debug it and figure out which step failed and why. Until now, you only had an execution summary with inputs/outputs, plus logs, to rely on when debugging an execution. While this was good enough for basic workflows, it didn’t provide step-level debugging information. Read More ↗︎

Deploy and manage Kubernetes applications with Workflows

Workflows is a versatile service for orchestrating and automating a wide range of use cases: microservices, business processes, data and ML pipelines, IT operations, and more. It can also be used to automate the deployment of containerized applications on Google Kubernetes Engine (GKE), and this got even easier with the newly released (in preview) Kubernetes API connector. The new connector enables access to GKE services from Workflows, which in turn enables Kubernetes-based resource management and orchestration, scheduled Kubernetes jobs, and more. Read More ↗︎

Introducing a new Eventarc destination - internal HTTP endpoint in a VPC network

Eventarc helps users build event-driven architectures without having to implement, customize, or maintain the underlying infrastructure. Eventarc has added support (in public preview) for delivering events to internal HTTP endpoints in a Virtual Private Cloud (VPC) network. Customers, especially large enterprises, often run compute (typically GKE or GCE) on VPC-private IPs, frequently behind internal load balancers. This launch enables those services to consume Eventarc events. An internal HTTP endpoint can be an internal IP address or a fully qualified domain name (FQDN) for any HTTP endpoint in the VPC network. Read More →
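As a rough illustration of the receiving side, here is a minimal sketch of such an internal HTTP endpoint in Go, assuming the event arrives in the CloudEvents HTTP binding (metadata in ce-* headers); the port and log format are arbitrary choices, not from the post:

```go
// Sketch: a minimal internal HTTP endpoint that logs incoming Eventarc events.
// Assumes CloudEvents binary mode, where metadata is carried in ce-* headers.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		log.Printf("type=%s source=%s id=%s",
			r.Header.Get("ce-type"), r.Header.Get("ce-source"), r.Header.Get("ce-id"))
		log.Printf("payload: %s", body)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```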

What languages are supported in WebAssembly outside the browser?

What languages are supported in WebAssembly running outside the browser? This is a question I often hear people ask. It has a complicated answer: WebAssembly outside the browser needs WASI, and not all languages have WASI support in their toolchains. Even if WASI is well supported in a language, WASI has its own limitations that you need to take into account. In short, you can’t take any code written in any language and expect to compile and run it as a Wasm+Wasi module right now. Read More →

Adding HTTP around Wasm with Wagi

In my previous posts, I talked about how you can run WebAssembly (Wasm) outside the browser with Wasi and run it in a Docker container with runwasi. The Wasi specification gives Wasm modules access to things like the filesystem and environment variables (and I showed how in this blog post), but networking and threading are not implemented yet. This is severely limiting if you want to run HTTP-based microservices on Wasm, for example. Read More →
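To make Wagi's CGI-style model concrete, here is a minimal sketch of what a handler could look like in Go, assuming a toolchain that can target WASI (for example TinyGo's wasi target, or Go 1.21's wasip1 port); the response text and the use of QUERY_STRING are illustrative:

```go
// Sketch: a Wagi-style handler. Wagi follows the CGI model, so the module
// writes HTTP headers, a blank line, and then the body to stdout, while
// request metadata arrives via environment variables.
package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Println("Content-Type: text/plain")
	fmt.Println()
	fmt.Println("Hello from Wasm behind HTTP!")
	// QUERY_STRING is one of the standard CGI variables.
	fmt.Println("query:", os.Getenv("QUERY_STRING"))
}
```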

Buffer workflow executions with a Cloud Tasks queue

In my previous post, I talked about how you can use a parent workflow to execute child workflows in parallel for faster overall processing and easier detection of errors. Another useful pattern is to use a Cloud Tasks queue to create Workflows executions, and that’s the topic of this post. When your application experiences a sudden surge of traffic, it’s natural to want to handle the increased load by creating a high number of concurrent workflow executions. Read More →
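As a sketch of the buffering idea (not necessarily the exact setup from the post), this is roughly how you could enqueue a Cloud Tasks HTTP task that calls the Workflows Executions API, assuming the Cloud Tasks Go client; the project, location, queue, workflow, and service account names are placeholders:

```go
// Sketch: enqueue a Cloud Tasks HTTP task targeting the Workflows Executions
// API, so the queue's rate and concurrency limits buffer the executions.
package main

import (
	"context"
	"log"

	cloudtasks "cloud.google.com/go/cloudtasks/apiv2"
	taskspb "cloud.google.com/go/cloudtasks/apiv2/cloudtaskspb"
)

func main() {
	ctx := context.Background()

	client, err := cloudtasks.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	queuePath := "projects/YOUR_PROJECT/locations/us-central1/queues/workflow-buffer"
	executionsURL := "https://workflowexecutions.googleapis.com/v1/projects/YOUR_PROJECT/" +
		"locations/us-central1/workflows/my-workflow/executions"

	req := &taskspb.CreateTaskRequest{
		Parent: queuePath,
		Task: &taskspb.Task{
			MessageType: &taskspb.Task_HttpRequest{
				HttpRequest: &taskspb.HttpRequest{
					HttpMethod: taskspb.HttpMethod_POST,
					Url:        executionsURL,
					// Google APIs such as Workflows expect an OAuth token.
					AuthorizationHeader: &taskspb.HttpRequest_OauthToken{
						OauthToken: &taskspb.OauthToken{
							ServiceAccountEmail: "workflow-invoker@YOUR_PROJECT.iam.gserviceaccount.com",
						},
					},
				},
			},
		},
	}

	task, err := client.CreateTask(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created task: %s", task.GetName())
}
```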

Workflows executing other parallel workflows: A practical guide

There are numerous scenarios where you might want to execute tasks in parallel. One common use case involves dividing data into batches, processing each batch in parallel, and combining the results at the end. This approach not only speeds up overall processing but also allows for easier error detection in smaller tasks. On the other hand, setting up parallel tasks, monitoring them, handling errors in each task, and combining the results at the end is not trivial. Read More →

Running Wasm in a container

Docker recently announced experimental support for running Wasm modules (see Announcing Docker+Wasm Technical Preview 2). In this blog post, I explain what this means and how to run a Wasm module in Docker. Why run Wasm in a container? In my Exploring WebAssembly outside the browser post, I mentioned how Wasm is faster, smaller, more secure, and more portable than a container. You might be wondering: Why take something faster, smaller, more secure, and more portable and run it in a container? Read More →

Compile Rust & Go to a Wasm+Wasi module and run in a Wasm runtime

In my Exploring WebAssembly outside the browser post, I talked about how the WebAssembly System Interface (WASI) enables Wasm modules to run outside the browser and interact with the host in the limited set of use cases that WASI supports (see the WASI proposals). In this blog post, let’s look into the details of how to compile code to a Wasm+Wasi module and then run it in a Wasm runtime. Notice that I use Wasm+Wasi module deliberately (instead of just Wasm), because some languages have Wasm support and can run perfectly fine in the browser but have no or limited WASI support to run outside the browser. Read More →
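As a minimal Go illustration of that flow (the post may use different toolchains; Go 1.21+ added the wasip1 port, and TinyGo offers a WASI target for earlier setups):

```go
// Sketch: a minimal program compiled to a Wasm+WASI module and run in a
// Wasm runtime, assuming Go 1.21+ and wasmtime installed locally:
//
//   GOOS=wasip1 GOARCH=wasm go build -o hello.wasm main.go
//   wasmtime hello.wasm
//
// Earlier toolchains can use TinyGo instead:
//
//   tinygo build -target=wasi -o hello.wasm main.go
package main

import "fmt"

func main() {
	// WASI gives the module access to stdout, so printing works outside the browser.
	fmt.Println("Hello from a Wasm+WASI module!")
}
```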