Deploying a "Canary" Pod to an Existing Kubernetes Cluster

Harry Whitehouse 65 Reputation points
2024-12-11T23:57:01.67+00:00

I have 3 pods running an address verification process. The pods accept JSON requests on port 8356. As you can see below, I'm encountering restarts and I don't know why.

avmscpp-897c6f4cb-4lt78                                     1/1     Running     1 (25d ago)     27d
avmscpp-897c6f4cb-7sgcb                                     1/1     Running     4 (4d3h ago)    27d
avmscpp-897c6f4cb-9ztkw                                     1/1     Running     0               27d

The requests come from other containers in the same cluster. Those containers control communication to and from the outside world and then communicate "internally" with other containers like avmscpp. Here are the pods that field external requests:

rbapi-649fb78844-296m2                                      1/1     Running     0               155m
rbapi-649fb78844-544h5                                      1/1     Running     0               158m
rbapi-649fb78844-58mgl                                      1/1     Running     0               158m

I have created an updated, "instrumented" version of the avmscpp application that I'd like to deploy as a single pod alongside the existing avmscpp pods servicing address verification requests. I've also created (with Copilot's help) a deployment YAML file like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: avmscpp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: avmscpp-canary
  template:
    metadata:
      labels:
        app: avmscpp-canary
    spec:
      containers:
      - name: avmscpp
        image: intlbridge.azurecr.io/avmscpp-0.5.7
        ports:
        - containerPort: 8356
---
apiVersion: v1
kind: Service
metadata:
  name: avmscpp-canary-service
spec:
  selector:
    app: avmscpp-canary
  ports:
  - protocol: TCP
    port: 8356
    targetPort: 8356
  type: ClusterIP

My question: Is this the right approach? Once deployed, I'd like the cluster to send 1 out of every 4 requests to my Canary pod. Will this happen automatically given the deployment script? Or do I need something else so that the new pod will share in the processing? Should I be using the name "avmscpp-canary" in my script or simply "avmscpp"?

Azure Kubernetes Service

Accepted answer
  1. Prrudram-MSFT 28,201 Reputation points Moderator
    2024-12-12T05:55:06.2+00:00

    Hello @Harry Whitehouse

    Your approach is mostly correct, but there are a few additional steps and considerations to ensure that your new pod will share in the processing as intended.

    First, let's address your deployment script. The deployment YAML file you provided is correctly set up to deploy a single canary pod. However, to achieve the desired traffic distribution (1 out of every 4 requests to the canary pod), you will need to configure a load balancer or an ingress controller that supports traffic splitting.

    Here are some key points to consider:

    Service Configuration: Ensure that your service is correctly configured to route traffic to both the stable and canary pods. You might need to update your service definition to include both sets of pods.

    Traffic Splitting: To achieve the 1:4 traffic split, you can use an ingress controller like NGINX or Istio, which supports traffic splitting. You will need to define an ingress rule that specifies the traffic distribution between the stable and canary pods.

    Label Selector: Ensure that your service selector matches the labels of both the stable and canary pods. This will allow the service to route traffic to both sets of pods.

    Monitoring and Rollback: Monitor the performance and behavior of the canary pod closely. If any issues arise, be prepared to roll back to the stable version.
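    To make the "Label Selector" point concrete, here is a minimal sketch of the shared-label approach. It assumes the existing stable Service selects pods on `app: avmscpp` (an assumption — verify your actual Service with `kubectl get svc <name> -o yaml`). Giving the canary pod that same `app` label, plus an extra `track` label to distinguish it, lets the Service's endpoints include the canary, so kube-proxy spreads requests roughly evenly across all four pods (~1 in 4 to the canary) with no ingress changes:

    ```yaml
    # Hypothetical canary Deployment that joins the existing Service's rotation.
    # Assumes the stable Service's selector is `app: avmscpp` -- verify first.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: avmscpp-canary
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: avmscpp
          track: canary          # extra label distinguishes canary pods
      template:
        metadata:
          labels:
            app: avmscpp         # matches the stable Service's selector
            track: canary
        spec:
          containers:
          - name: avmscpp
            image: intlbridge.azurecr.io/avmscpp-0.5.7
            ports:
            - containerPort: 8356
    ```

    Because the traffic in question is pod-to-pod inside the cluster, this Service-level approach is the one that affects it directly; the split ratio is determined purely by the replica counts (3 stable + 1 canary), so it is approximate and not independently tunable.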

    Here's an example of how you might configure an ingress rule for traffic splitting using NGINX:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: avmscpp-ingress
    spec:
      rules:
      - host: avmscpp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: avmscpp-service
                port:
                  number: 8356
      - host: avmscpp-canary.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: avmscpp-canary-service
                port:
                  number: 8356
    

    In this example, clients choose the stable or the canary service by the hostname they call; host-based routing on its own does not produce a weighted 1-in-4 split on a single endpoint. To get a true weighted split, use an ingress controller feature designed for it, such as the ingress-nginx canary annotations or an Istio VirtualService with weighted routes.
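    For a weighted split on a single hostname, the ingress-nginx controller supports canary annotations. A sketch, assuming ingress-nginx is installed and that a stable Ingress already serves the same host (the hostname is illustrative), sending roughly 25% of requests to the canary service:

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: avmscpp-canary-ingress
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"       # marks this as the canary of the stable Ingress for the same host/path
        nginx.ingress.kubernetes.io/canary-weight: "25"  # ~1 in 4 requests routed here
    spec:
      ingressClassName: nginx
      rules:
      - host: avmscpp.example.com        # same host as the stable Ingress
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: avmscpp-canary-service
                port:
                  number: 8356
    ```

    Note that an Ingress only affects traffic that enters through the ingress controller; requests made directly from other pods to a ClusterIP Service, as described in the question, bypass it entirely.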

    For more detailed guidance on canary deployments in Kubernetes, you can refer to the Azure Pipelines tutorial on canary deployments.

    If you have any further questions or need additional assistance, feel free to ask!

    If I have answered your question, please accept this as the answer as a token of appreciation, and don't forget to give a thumbs up for "Was it helpful"!

