Applying a path pattern when filtering in Eventarc

You can now apply a path pattern when filtering in Eventarc. This is especially useful when you need to filter on resource names beyond an exact match. The path pattern syntax lets you define a regex-like expression that matches events as broadly or as narrowly as you like.
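As a quick sketch of the syntax, a trigger with a path pattern might look like this on the CLI (the trigger, bucket, and service names are hypothetical, and the exact flags may have changed since this was written):

```shell
# Hypothetical AuditLog trigger: instead of matching one exact resource
# name, the path pattern matches object creations in any bucket whose
# name starts with "my-".
gcloud eventarc triggers create storage-create-trigger \
  --location=us-central1 \
  --destination-run-service=my-service \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=storage.googleapis.com" \
  --event-filters="methodName=storage.objects.create" \
  --event-filters-path-pattern="resourceName=/projects/_/buckets/my-*/objects/**"
```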

Let’s take a look at a concrete example.

Without path patterns

Let’s say you want to listen for new file creations in a Cloud Storage bucket with an AuditLog trigger.

Read More →

Building APIs with Cloud Functions and API Gateway

Building APIs with Cloud Run

If I want to build an API, I usually use Cloud Run. With Cloud Run, you run a container, and in that container a web server handles requests to a base URL in this format:

https://<service-name>-<hash>-<region>.a.run.app

You can then have the web server handle any path under that base URL such as:

https://<service-name>-<hash>-<region>.a.run.app/hello
https://<service-name>-<hash>-<region>.a.run.app/bye
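This is the Cloud Run model in a nutshell: one container, one web server, and the server decides what every path under the base URL returns. Here is a minimal sketch of such a server using only the Python standard library (the routes and responses are hypothetical; a real service would read its port from the PORT environment variable that Cloud Run injects):

```python
# Sketch of a Cloud Run-style service: a single web server that handles
# every path under the service's base URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTES = {
    "/hello": b"Hello!",
    "/bye": b"Bye!",
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ROUTES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request logging quiet for this sketch.
        pass

def serve(port: int = 8080) -> HTTPServer:
    # Cloud Run would supply the port via the PORT env var; here it is a
    # plain parameter. Passing 0 binds an ephemeral port.
    return HTTPServer(("127.0.0.1", port), Handler)
```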

Building APIs with Cloud Functions

In Cloud Functions, you only have access to a function (no web server) and that function can only handle the base path:

Read More →

Long-running containers with Workflows and Compute Engine

Sometimes, you need to run a piece of code for hours, days, or even weeks. Cloud Functions and Cloud Run are my default choices for running code. However, they both limit how long a function or container can run, which seems to rule out executing long-running code in a serverless way.

Thanks to Workflows and Compute Engine, you can have an almost serverless experience with long running code.
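The shape of the idea, as a hedged Workflows YAML sketch (project, zone, and instance names are hypothetical, and the real post's workflow is more elaborate, e.g. polling the VM for completion instead of sleeping):

```yaml
main:
  steps:
    - create_vm:
        call: googleapis.compute.v1.instances.insert
        args:
          project: my-project
          zone: us-central1-a
          body:
            name: long-runner
            # machineType, disks, networkInterfaces, and the startup
            # script that runs the long-lived code are omitted for brevity.
    - wait_for_work:
        # In practice you would poll the VM or wait for a callback here.
        call: sys.sleep
        args:
          seconds: 3600
    - delete_vm:
        call: googleapis.compute.v1.instances.delete
        args:
          project: my-project
          zone: us-central1-a
          instance: long-runner
```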

Read More ↗︎

Implementing the saga pattern in Workflows

It’s common to have a separate database for each service in microservices-based architectures. This pattern ensures that independently designed and deployed microservices remain independent in the data layer as well. But it also introduces a new problem: how do you implement transactions (single units of work, usually made up of multiple operations) that span multiple microservices, each with its own local database?

In a traditional monolith architecture, you can rely on ACID transactions (atomicity, consistency, isolation, durability) against a single database. In a microservices architecture, ensuring data consistency across multiple service-specific databases becomes more challenging. You cannot simply rely on local transactions. You need a cross-service transaction strategy. That’s where the saga pattern comes into play.
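In a saga, each local transaction is paired with a compensating action that undoes it if a later step fails. A hedged sketch of what that can look like in Workflows YAML, using its try/except construct (the services and URLs are hypothetical, not from the post):

```yaml
main:
  steps:
    - charge_payment:
        call: http.post
        args:
          url: https://payments.example.com/charge
    - reserve_inventory:
        try:
          call: http.post
          args:
            url: https://inventory.example.com/reserve
        except:
          as: e
          steps:
            # Compensating transaction: undo the earlier charge, then
            # surface the original error.
            - refund_payment:
                call: http.post
                args:
                  url: https://payments.example.com/refund
            - reraise:
                raise: ${e}
```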

.NET 6 support in Google Cloud Buildpacks and Cloud Run

I really like the idea of containers: a reproducible context that our apps can rely on. However, I really dislike having to create a Dockerfile to define my container images. I always end up copying and pasting one from somewhere, and it takes me a while to get it right.

That’s why I like using Google Cloud Buildpacks to build containers without having to worry about a Dockerfile.

I’m happy to announce that, as of yesterday, Google Cloud Buildpacks now supports .NET 6 (the latest LTS version) apps as well!
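For a rough idea of the Dockerfile-free flow (the app name is hypothetical, and the builder image tag may have changed since this was written):

```shell
# Build a container image for a .NET app with Google Cloud Buildpacks,
# no Dockerfile required.
pack build my-dotnet-app --builder gcr.io/buildpacks/builder:v1

# Or let Cloud Run build and deploy straight from source, which uses
# Buildpacks under the hood.
gcloud run deploy my-dotnet-app --source .
```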

Read More →

Auto-completion for Workflows JSON and YAML on Visual Studio Code

If you’re like me, you probably use VS Code to author your Workflows JSON or YAML. You probably also expect some kind of syntax validation or auto-completion as you work on your workflow. Unfortunately, there’s no VS Code extension for Workflows, and Cloud Code for VS Code does not support Workflows.

However, there’s a way to get partial auto-completion for Workflows in VS Code.

VS Code and JSON Schema

VS Code can display auto-complete suggestions for JSON and YAML files out of the box. It uses the JSON Schema Store, which hosts JSON Schemas for popular configuration files in JSON and YAML.
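For YAML files, the mapping typically goes in your VS Code settings.json (this requires the YAML extension; the schema URL and file glob below are assumptions, so check the JSON Schema Store for the current entry):

```json
{
  "yaml.schemas": {
    "https://json.schemastore.org/workflows.json": "*.workflows.yaml"
  }
}
```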

Read More →

Introducing the new Eventarc UI, Cloud Run for Anthos destinations

December was a busy month for the Eventarc team, who landed a number of features at the end of the year. Let’s take a closer look at some of these new capabilities.

Cloud Storage trigger is GA

Back in September, we announced the public preview of Cloud Storage triggers as the preferred way of routing Cloud Storage events to Cloud Run targets. They are now generally available. For more details, see the documentation on how to create a Cloud Storage trigger and check out my previous blog post on Cloud Storage triggers.

Read More ↗︎

Cross-region and cross-project event routing with Eventarc and Pub/Sub

With event-driven architectures, it’s quite common to read events from a source in one region or project and route them to a destination in another region or another project. Let’s take a look at how you can implement cross-region and cross-project event routing in Google Cloud.

Cross-region event routing is straightforward in Google Cloud, whether you’re using Pub/Sub directly or Eventarc. Pub/Sub routes messages globally. When applications hosted in any region publish messages to a topic, subscribers from any region can pull from that topic. Eventarc enables you to route events across regions by creating a trigger in the region of the event’s source and specifying a destination in a different region. For more details, take a look at my previous blog post on Eventarc locations.
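As a hedged sketch of the cross-region case, a trigger can live in the source's region while pointing at a Cloud Run service in another region (the names below are hypothetical, and the flags may have changed):

```shell
# Trigger created in us-central1, delivering Cloud Storage events to a
# Cloud Run service deployed in europe-west1.
gcloud eventarc triggers create cross-region-trigger \
  --location=us-central1 \
  --destination-run-service=my-service \
  --destination-run-region=europe-west1 \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=my-bucket"
```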

A closer look at locations in Eventarc

New locations in Eventarc

Back in August, we announced more Eventarc locations (17 new regions, as well as 6 new dual-region and multi-region locations, to be precise). This brings the total number of Eventarc locations to more than 30. You can see the full list on the Eventarc locations page or by running gcloud eventarc locations list.

What does location mean in Eventarc?

An Eventarc location usually refers to the single region the Eventarc trigger is created in. However, depending on the trigger type, the location can span more than a single region:

Read More ↗︎

Trying out source-based deployment in Cloud Run

Until recently, this is how you deployed code to Cloud Run:

1. Define your container-based app with a Dockerfile.
2. Build the container image and push it to Container Registry (typically with Cloud Build).
3. Deploy the container image to Cloud Run.

Back in December, we announced the beta release of source-based deployments for Cloud Run. This combines steps 2 and 3 above into a single command. Perhaps more importantly, it also eliminates the need for a Dockerfile for supported language versions.
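With source-based deployment, the build-and-deploy steps collapse into a single command run from your source directory (the service name and region below are hypothetical):

```shell
gcloud run deploy my-service --source . --region us-central1
```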

Read More ↗︎