Kubernetes 1.33: In-Place Pod Vertical Scaling Is Finally Here! 🎉

Kubernetes 1.33 has been released, and it includes a feature many people have long been waiting for: in-place pod vertical scaling.

Here is a short demo showing it in action. (Pod settings were verified through the Kubernetes API on GKE (Google Kubernetes Engine) running Kubernetes 1.33 on the Rapid channel.)

1. Create a Resource-Monitoring Pod
Start by creating a Pod that continuously monitors its own resource allocations:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: resource-watcher
    image: ubuntu:22.04
    command:
    - "/bin/bash"
    - "-c"
    - |
      apt-get update && apt-get install -y procps bc
      echo "=== Pod Started: $(date) ==="

      # Functions to read container resource limits
      get_cpu_limit() {
        if [ -f /sys/fs/cgroup/cpu.max ]; then
          # cgroup v2
          local cpu_data=$(cat /sys/fs/cgroup/cpu.max)
          local quota=$(echo $cpu_data | awk '{print $1}')
          local period=$(echo $cpu_data | awk '{print $2}')

          if [ "$quota" = "max" ]; then
            echo "unlimited"
          else
            echo "$(echo "scale=3; $quota / $period" | bc) cores"
          fi
        else
          # cgroup v1
          local quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
          local period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)

          if [ "$quota" = "-1" ]; then
            echo "unlimited"
          else
            echo "$(echo "scale=3; $quota / $period" | bc) cores"
          fi
        fi
      }

      get_memory_limit() {
        if [ -f /sys/fs/cgroup/memory.max ]; then
          # cgroup v2
          local mem=$(cat /sys/fs/cgroup/memory.max)
          if [ "$mem" = "max" ]; then
            echo "unlimited"
          else
            echo "$((mem / 1048576)) MiB"
          fi
        else
          # cgroup v1
          local mem=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
          echo "$((mem / 1048576)) MiB"
        fi
      }

      # Print resource info every 5 seconds
      while true; do
        echo "---------- Resource Check: $(date) ----------"
        echo "CPU limit: $(get_cpu_limit)"
        echo "Memory limit: $(get_memory_limit)"
        echo "Available memory: $(free -h | grep Mem | awk '{print $7}')"
        sleep 5
      done
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
EOF
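Before reading the logs, it may help to wait until the Pod is actually ready, since the image pull and the apt-get install at startup can take a minute:

```shell
# Block until the Pod reports Ready (requires a running cluster)
kubectl wait --for=condition=Ready pod/resize-demo --timeout=120s
```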

2. Explore the Pod’s Initial State
Let’s look at the Pod’s resources from the Kubernetes API perspective:

kubectl describe pod resize-demo | grep -A8 Limits:
You’ll see output like:

    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi

Now, let’s see what the Pod itself thinks about its resources:

kubectl logs resize-demo --tail=8
You should see output including CPU and memory limits from the container’s perspective.
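You can also read the cgroup files directly with kubectl exec and compare them with what the logs report (this assumes the node uses cgroup v2; on cgroup v1 the paths differ, as handled in the script above):

```shell
# Inspect the live cgroup limits from outside the container
kubectl exec resize-demo -- cat /sys/fs/cgroup/cpu.max     # e.g. "10000 100000" for 100m
kubectl exec resize-demo -- cat /sys/fs/cgroup/memory.max  # e.g. "134217728" for 128Mi
```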

3. Resize CPU Seamlessly
Let’s double the CPU without any restart:

kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"resource-watcher", "resources":{"requests":{"cpu":"200m"}, "limits":{"cpu":"200m"}}}]}}'

Check the resize status:

kubectl get pod resize-demo -o jsonpath='{.status.conditions[?(@.type=="PodResizeInProgress")]}'

Note: On GKE with Kubernetes 1.33, you might not see the PodResizeInProgress condition reported in the Pod status, even though the resize operation works correctly. Don’t worry if kubectl get pod resize-demo -o jsonpath='{.status.conditions}' doesn’t show resize information - check the actual resources instead.
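As the note suggests, the most reliable check is the actually-applied resources, which Kubernetes 1.33 reports in the container status:

```shell
# .status.containerStatuses[].resources shows what is currently applied,
# which can lag behind .spec.containers[].resources while a resize is in progress
kubectl get pod resize-demo \
  -o jsonpath='{.status.containerStatuses[0].resources}'
```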

Once the resize has completed (give it a few seconds), check the updated resources from the Kubernetes API:

kubectl describe pod resize-demo | grep -A8 Limits:

And verify the Pod now sees the new CPU limit:

kubectl logs resize-demo --tail=8

Notice how the CPU limit doubled from 100m to 200m without the Pod restarting! The Pod’s logs will show the cgroup CPU quota/period changed from 10000/100000 to 20000/100000 (representing 100m to 200m CPU).
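Those 10000/100000 and 20000/100000 numbers are just the CFS quota and period, and converting them back to millicores is simple arithmetic. A minimal sketch using the post-resize values (hardcoded here for illustration; on a live pod you would read them from /sys/fs/cgroup/cpu.max):

```shell
# cgroup v2 cpu.max fields after resizing to 200m (hardcoded for illustration)
quota=20000    # microseconds of CPU time allowed per period
period=100000  # length of each period in microseconds
# millicores = quota / period * 1000
echo "$(( quota * 1000 / period ))m"
```

Running this prints `200m`, matching the value set in the patch.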

4. Resize Memory Without Drama
Now, let’s double the memory allocation:

kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"resource-watcher", "resources":{"requests":{"memory":"256Mi"}, "limits":{"memory":"256Mi"}}}]}}'

After a moment, verify from the API:

kubectl describe pod resize-demo | grep -A8 Limits:

And from inside the Pod:

kubectl logs resize-demo --tail=8

You’ll see the memory limit changed from 128Mi to 256Mi without any container restart!
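Likewise, the raw memory.max value maps back to MiB by dividing by 1048576 (2^20), which is exactly what the get_memory_limit function in the Pod does. A sketch with the post-resize value hardcoded:

```shell
# memory.max after resizing to 256Mi (hardcoded; read it live via kubectl exec)
mem_bytes=268435456
echo "$(( mem_bytes / 1048576 )) MiB"
```

This prints `256 MiB`.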

5. Verify No Container Restarts Occurred
Confirm the container never restarted during our resize operations:

kubectl get pod resize-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

Should output 0 - proving we achieved the impossible dream of resource adjustment without service interruption.

6. Cleanup
When you’re done experimenting:

kubectl delete pod resize-demo

[Source] https://medium.com/itnext/kubernetes-1-33-resizing-pods-without-the-drama-finally-88e4791be8d1


The feature I’ve been waiting for is finally here. Thanks for sharing.
