Migrating from Kubernetes Deployment to Knative Serving


When I talk about Knative, I often get questions about how to migrate an app from a Kubernetes Deployment (sometimes with Istio) to Knative, and what the differences are between the two setups.

First of all, everything you can do with a Knative Service, you can probably do with a pure Kubernetes + Istio setup and the right configuration. However, it’ll be much harder to get right. The whole point of Knative is to simplify and abstract away the details of Kubernetes and Istio for you.

In this blog post, I want to answer the question in a different way: I want to start with a Knative Service and show how to set up the same service with Kubernetes + Istio the ‘hard way’.

Knative Service

In my previous post, I showed how to deploy an autoscaled, gRPC-enabled, ASP.NET Core service with Knative. This was the Knative service definition yaml file:

apiVersion: serving.knative.dev/v1beta1  
kind: Service  
metadata:  
  name: grpc-greeter  
  namespace: default  
spec:  
  template:  
    spec:  
      containers:  
        - image: docker.io/meteatamel/grpc-greeter:v1  
          ports:  
          - name: h2c  
            containerPort: 8080

Notice the simplicity of the yaml file: it had little more than the container image and the port info (the h2c port name signals HTTP/2 over cleartext on port 8080). Once deployed, Knative Serving took care of all the details: deploying the container in a Kubernetes pod, exposing that pod to the outside world via the Istio ingress, and setting up autoscaling.

What does it take to deploy the same service in a Kubernetes + Istio cluster without Knative? Let’s take a look.

Kubernetes Deployment

First, we need a Kubernetes Deployment to encapsulate the container in a pod. This is what the deployment yaml looks like:

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: grpc-greeter  
spec:  
  selector:  
    matchLabels:  
      app: grpc-greeter  
  template:  
    metadata:  
      labels:  
        app: grpc-greeter  
    spec:  
      containers:  
      - name: grpc-greeter  
        image: docker.io/meteatamel/grpc-greeter:v1  
        ports:  
        - name: h2c  
          containerPort: 8080

This is already more verbose than a Knative service definition. Once deployed, we’ll have a pod running the container.

Kubernetes Service

The next step is to expose the pod behind a Kubernetes Service:

apiVersion: v1  
kind: Service  
metadata:  
  name: grpc-greeter-service  
spec:  
  ports:  
  - name: http2  
    port: 80  
    targetPort: h2c  
  selector:  
    app: grpc-greeter

This will expose the pod on port 80 inside the cluster. However, it’s not publicly accessible until we set up networking in Istio.

Istio Gateway and VirtualService

In an Istio cluster, we first need to set up a Gateway to enable external traffic on a given port and protocol. In our case, our app requires HTTP on port 80. This is the Gateway definition we need:

apiVersion: networking.istio.io/v1alpha3  
kind: Gateway  
metadata:  
  name: grpc-gateway  
spec:  
  selector:  
    istio: ingressgateway # use istio default controller  
  servers:  
  - port:  
      number: 80  
      name: http  
      protocol: HTTP  
    hosts:  
    - "*"

We now have traffic enabled on port 80, but we still need to map that traffic to the Kubernetes Service we created earlier. That’s done via a VirtualService:

apiVersion: networking.istio.io/v1alpha3  
kind: VirtualService  
metadata:  
  name: grpc-virtualservice  
spec:  
  hosts:  
  - "*"  
  gateways:  
  - grpc-gateway  
  http:  
  - route:  
    - destination:  
        host: grpc-greeter-service

Our pod is finally publicly accessible. You can use the GrpcGreeterClient from my previous blog post, pointed at the Istio ingress gateway IP, and you should see a response from our service:

> dotnet run   
Greeting: Hello GreeterClient   
Press any key to exit...

Phew! That’s a lot of steps to deploy a publicly accessible container without Knative. We’d still need to set up autoscaling of pods to get parity with Knative Serving, but I’ll leave that as an exercise for the reader.
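As a starting point for that exercise, a minimal HorizontalPodAutoscaler targeting the Deployment might look like the sketch below. The name and thresholds are illustrative, and note the difference from Knative: this scales on CPU utilization rather than on request load, and it cannot scale to zero.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: grpc-greeter-hpa   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: grpc-greeter     # the Deployment we created above
  minReplicas: 1           # cannot go to zero, unlike Knative
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80  # illustrative threshold
```

This also assumes the cluster has a metrics source (such as metrics-server) installed, which Knative’s request-based autoscaler does not require.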

I hope it’s clear now that Knative makes it easier to deploy autoscaled containers with much less configuration. Knative’s higher-level APIs let you focus on the code in your container rather than on the underlying details of how that container is deployed and how its traffic is managed with Kubernetes and Istio.

Thanks to Matt Moore from the Knative team for giving me the idea for the blog post.
