Executing commands (gcloud, kubectl) from Workflows


In a previous post, I showed how to manage the lifecycle of a virtual machine using Workflows and the Compute Engine connector. This works well when there’s a connector for the resource you’re trying to manage. When there’s no connector, you can call the resource’s API directly from Workflows, if it has one. Alternatively, you can use my favorite command-line tool to manage the resource: gcloud. Or, if you’re managing a Kubernetes cluster, maybe you want to call kubectl instead.

At this point, you might ask: How do you execute a command line tool such as gcloud or kubectl from Workflows?

Your intuition is right: there’s no direct way to call gcloud or kubectl from Workflows. However, Cloud Build provides container images with gcloud and kubectl included, and you can run gcloud and kubectl commands in those containers from a Cloud Build step. You can also trigger such a build from a Workflows step using the Cloud Build connector.
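To make this concrete, here’s roughly what a standalone Cloud Build config running a gcloud command looks like (a minimal sketch for illustration, not part of the sample):

steps:
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: gcloud
  args: ['workflows', 'list']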

You see where I’m going with this? Let’s look at the details.

Create a workflow for gcloud

Let’s first create a workflow to run gcloud commands, for example, gcloud workflows list.

Here’s a workflow-gcloud.yaml file with an execute_command step:

main:
  steps:
  - execute_command:
      call: gcloud
      args:
          args: "workflows list"
      result: result
  - return_result:
      return: ${result}

The execute_command step calls a sub-workflow named gcloud, passing the command you want to run (workflows list) as arguments. The sub-workflow is defined as follows:

gcloud:
  params: [args]
  steps:
  - create_build:
      call: googleapis.cloudbuild.v1.projects.builds.create
      args:
        projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
        parent: ${"projects/" + sys.get_env("GOOGLE_CLOUD_PROJECT_ID") + "/locations/global"}
        body:
          serviceAccount: ${sys.get_env("GOOGLE_CLOUD_SERVICE_ACCOUNT_NAME")}
          options:
            logging: CLOUD_LOGGING_ONLY
          steps:
          - name: gcr.io/google.com/cloudsdktool/cloud-sdk
            entrypoint: /bin/bash
            args: ${["-c", "gcloud " + args + " > $$BUILDER_OUTPUT/output"]}
      result: result_builds_create
  - return_gcloud_result:
      return: ${text.split(text.decode(base64.decode(result_builds_create.metadata.build.results.buildStepOutputs[0])), "\n")}

The sub-workflow uses the Cloud Build connector to create a build with a single step. In that step, you use the gcr.io/google.com/cloudsdktool/cloud-sdk image to run the gcloud command with the supplied arguments. You also capture the command’s output by redirecting it to $BUILDER_OUTPUT/output, which Cloud Build returns as a base64-encoded multiline string in buildStepOutputs. In the last step, you access that output, decode it, and split it by newlines to return an array of lines.

We’re basically calling gcloud commands with the help of Cloud Build, capturing the output, and returning it as an array of lines!
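Running a different gcloud command is just a matter of changing the arguments passed to the sub-workflow. For example, a step listing Compute Engine instances would look like this (a sketch; this particular command isn’t part of the sample):

  - list_instances:
      call: gcloud
      args:
          args: "compute instances list"
      result: instances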

Create a workflow for kubectl

Let’s see how to do the same with kubectl commands.

Here’s a workflow-kubectl.yaml file that executes the kubectl --help command, with an execute_command step calling the kubectl sub-workflow:

main:
  steps:
  - execute_command:
      call: kubectl
      args:
          args: "--help"
      result: result
  - return_result:
      return: ${result}

The kubectl sub-workflow is defined as follows:

kubectl:
  params: [args]
  steps:
  - create_build:
      call: googleapis.cloudbuild.v1.projects.builds.create
      args:
        projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
        parent: ${"projects/" + sys.get_env("GOOGLE_CLOUD_PROJECT_ID") + "/locations/global"}
        body:
          serviceAccount: ${sys.get_env("GOOGLE_CLOUD_SERVICE_ACCOUNT_NAME")}
          options:
            logging: CLOUD_LOGGING_ONLY
          steps:
          - name: gcr.io/cloud-builders/kubectl
            entrypoint: /bin/bash
            args: ${["-c", "kubectl " + args + " > $$BUILDER_OUTPUT/output"]}
      result: result_builds_create
  - return_build_result:
      return: ${text.split(text.decode(base64.decode(result_builds_create.metadata.build.results.buildStepOutputs[0])), "\n")}

Notice that the sub-workflow is very similar to the previous one, except that it relies on the gcr.io/cloud-builders/kubectl image to run the kubectl command.

Admittedly, this is a very simple sample, as kubectl --help does not talk to a real cluster. You’d need to figure out how to authenticate kubectl against a real cluster, but that’s not the point of this blog post.
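That said, if you do want to point kubectl at a real GKE cluster, one possible approach is to fetch cluster credentials with gcloud before running kubectl in the same build step. This is only a sketch: the cluster name and zone are hypothetical, it assumes gcloud is available in the image, and the build’s service account would need permission to get cluster credentials.

          steps:
          - name: gcr.io/cloud-builders/kubectl
            entrypoint: /bin/bash
            args: ${["-c", "gcloud container clusters get-credentials my-cluster --zone us-central1-a && kubectl " + args + " > $$BUILDER_OUTPUT/output"]}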

Deploy the workflows

Make sure you have a Google Cloud project and that its project ID is set in gcloud:

PROJECT_ID=your-project-id
gcloud config set project $PROJECT_ID

Run setup.sh to enable required services, assign necessary roles, and deploy both workflows.
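If you’re curious what the script does, it boils down to something along these lines (a rough sketch; the exact roles and service account setup are assumptions, so check setup.sh itself for the details):

# Enable the services used by the sample
gcloud services enable workflows.googleapis.com cloudbuild.googleapis.com

# Allow the service account used by the workflows to create builds
# (member and role below are assumptions)
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.editor"

# Deploy both workflows
gcloud workflows deploy workflow-gcloud --source=workflow-gcloud.yaml
gcloud workflows deploy workflow-kubectl --source=workflow-kubectl.yaml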

Run the workflow for gcloud

Run the workflow from Google Cloud Console or gcloud:

gcloud workflows run workflow-gcloud

You should see a new build successfully running gcloud in Cloud Build:

Build in Cloud Build
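You can also check the build from the command line (assuming it’s the most recent build in the project):

gcloud builds list --limit=1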

You should also see the output of the gcloud command when the workflow succeeds:

result: '["NAME                                                                                        STATE   REVISION_ID  UPDATE_TIME",
"projects/atamel-workflows-gcloud/locations/us-central1/workflows/workflow-gcloud            ACTIVE  000001-7fa   2022-10-07T09:26:27.470230358Z",
"projects/atamel-workflows-gcloud/locations/us-central1/workflows/workflow-kubectl           ACTIVE  000001-215   2022-10-07T09:26:32.511757767Z",
""]'

Run the workflow for kubectl

Run the workflow from Google Cloud Console or gcloud:

gcloud workflows run workflow-kubectl

You should see a new build successfully running kubectl in Cloud Build:

Build in Cloud Build

You should also see the output of the kubectl command when the workflow succeeds:

result: '["kubectl controls the Kubernetes cluster manager.",""," Find more information
  at: https://kubernetes.io/docs/reference/kubectl/overview/","","Basic Commands (Beginner):","  create        Create
  a resource from a file or from stdin","  expose        Take a replication controller,
  service, deployment or pod and expose it as a new Kubernetes service","  run           Run
  a particular image on the cluster","  set           Set specific features on objects","","Basic
  Commands (Intermediate):","  explain       Get documentation for a resource","  get           Display
  one or many resources","  edit          Edit a resource on the server","  delete        Delete
  resources by file names, stdin, resources and names, or by resources and label selector","","Deploy
  Commands:","  rollout       Manage the rollout of a resource","  scale         Set
  a new size for a deployment, replica set, or replication controller","  autoscale     Auto-scale
  a deployment, replica set, stateful set, or replication controller","","Cluster
  Management Commands:","  certificate   Modify certificate resources.","  cluster-info  Display
  cluster information","  top           Display resource (CPU/memory) usage","  cordon        Mark
  node as unschedulable","  uncordon      Mark node as schedulable","  drain         Drain
  node in preparation for maintenance","  taint         Update the taints on one or
  more nodes","","Troubleshooting and Debugging Commands:","  describe      Show details
  of a specific resource or group of resources","  logs          Print the logs for
  a container in a pod","  attach        Attach to a running container","  exec          Execute
  a command in a container","  port-forward  Forward one or more local ports to a
  pod","  proxy         Run a proxy to the Kubernetes API server","  cp            Copy
  files and directories to and from containers","  auth          Inspect authorization","  debug         Create
  debugging sessions for troubleshooting workloads and nodes","","Advanced Commands:","  diff          Diff
  the live version against a would-be applied version","  apply         Apply a configuration
  to a resource by file name or stdin","  patch         Update fields of a resource","  replace       Replace
  a resource by file name or stdin","  wait          Experimental: Wait for a specific
  condition on one or many resources","  kustomize     Build a kustomization target
  from a directory or URL.","","Settings Commands:","  label         Update the labels
  on a resource","  annotate      Update the annotations on a resource","  completion    Output
  shell completion code for the specified shell (bash or zsh)","","Other Commands:","  api-resources
  Print the supported API resources on the server","  api-versions  Print the supported
  API versions on the server, in the form of \"group/version\"","  config        Modify
  kubeconfig files","  plugin        Provides utilities for interacting with plugins","  version       Print
  the client and server version information","","Usage:","  kubectl [flags] [options]","","Use
  \"kubectl \u003ccommand\u003e --help\" for more information about a given command.","Use
  \"kubectl options\" for a list of global command-line options (applies to all commands).",""]'

This is an example of how to run gcloud and kubectl commands from Workflows. However, there’s a bigger pattern here. Any tool you can run in a container can potentially be invoked from Workflows via a Cloud Build step. This is huge and opens up a lot of opportunities!
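For instance, a hypothetical build step running Terraform (just an illustration of the pattern, not part of this sample) would only change the image and the command; everything else stays the same:

          steps:
          - name: hashicorp/terraform
            entrypoint: /bin/sh
            args: ${["-c", "terraform " + args + " > $$BUILDER_OUTPUT/output"]}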

Thanks to our Workflows Product Manager, Kris Braun, for the idea and the initial prototype of the sample. For questions or feedback, feel free to reach out to me on Twitter @meteatamel.


See also