Kubernetes v1.36, codenamed Haru, ships with 70 enhancements: 18 Stable, 25 Beta, 25 Alpha. The logo reimagines Hokusai’s Fine Wind, Clear Morning.
The release lands three changes platform teams should care about: workload-aware scheduling, fine-grained kubelet API authorization, and Pod-level hardware health reporting. Each has been building across multiple releases.
Workload-aware scheduling
In v1.35, native gang scheduling (KEP-4671) acknowledged what ML training and HPC orchestration share: they don’t fit the one-pod-at-a-time scheduling model Kubernetes was born with.
v1.36 goes further. Workload Aware Scheduling enters alpha, with a decoupled PodGroup API (KEP-5832) and native Job controller integration. Related pods become a single logical entity: the group is admitted or rejected atomically rather than each pod winning or losing placement on its own.
The direction matters more than the specific API shape, which will still evolve. Kubernetes is retrofitting HPC semantics into a scheduler designed for stateless web services. Over the next two or three releases, expect scheduler primitives that handle training groups, inference fleets, and multi-node jobs as first-class citizens.
Where workloads span cluster boundaries — the default in any multi-tenant or multi-region operator — workload-aware scheduling is more valuable paired with a control plane that can place the whole workload, not just its pods. This is what kcp has been modelling at the control-plane layer, and why Kubermatic has bet on it. If you’re building for AI or HPC-adjacent workloads, evaluate your scheduling stack this cycle.
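To make the shape concrete, here is a hypothetical sketch of a decoupled PodGroup, loosely modelled on the scheduler-plugins coscheduling API. Every field name is illustrative; the alpha API under KEP-5832 may differ, so check the current docs before relying on it:

```yaml
# Hypothetical sketch only — not the final KEP-5832 API shape.
apiVersion: scheduling.x-k8s.io/v1alpha1   # group/version borrowed from scheduler-plugins
kind: PodGroup
metadata:
  name: llama-finetune
spec:
  minMember: 8                 # place all 8 workers together, or none of them
  scheduleTimeoutSeconds: 300  # requeue the group if it cannot be placed in time
```

In this model, pods opt into the group via a label or field reference, and the scheduler evaluates placement for the whole set at once.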
Fine-grained kubelet authorization
Fine-grained kubelet API Authorization (KEP-2862) graduates to GA. The nodes/proxy permission, the catch-all that monitoring and diagnostic tools have historically used to talk to the kubelet API, can now be split into per-endpoint authorization. Prometheus gets only the endpoints it scrapes; log collectors get only the endpoints they consume. The capability has been beta and default-on since v1.33, so most clusters can already enable it; v1.36 adds the stability commitment.
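As a minimal sketch, a ClusterRole that grants a metrics scraper only the kubelet metrics endpoints instead of the blanket nodes/proxy. The role name is a placeholder; verify the per-endpoint subresource list for your Kubernetes version before rolling this out:

```yaml
# Grant only the kubelet metrics endpoints, not the nodes/proxy catch-all.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-kubelet-metrics   # placeholder name
rules:
  - apiGroups: [""]
    resources: ["nodes/metrics"]     # covers /metrics, /metrics/cadvisor, /metrics/resource
    verbs: ["get"]
```

A log collector would get an analogous role scoped to nodes/log, and nothing else.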
Pair this with v1.35’s Pod certificates for workload identity (KEP-4317) and the pattern is clear: the authorization surface is being broken into smaller, auditable grants, with state and rotation logic moved closer to the components that need them.
The timing matters. DORA’s March 2026 reporting deadline already shifted how European financial institutions think about third-party authorization. NIS2 transposition is uneven but accelerating, and the EU AI Act’s high-risk requirements activate in August. An auditor who sees every monitoring component holding broad nodes/proxy will ask why least privilege was optional. In managed environments like Kubermatic Kubernetes Platform, fine-grained kubelet authz can be a platform default rather than a per-tenant audit exercise.
Resource health status for Pods
Resource Health Status for Pods (KEP-4680) enters beta. A new allocatedResourcesStatus field on Pod .status reports the health of allocated devices, surfaced through kubectl describe pod, and works across both the older Device Plugins framework and the newer Dynamic Resource Allocation (DRA) pipeline.
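An abbreviated status excerpt showing the shape of the field. The device ID below is invented, and the exact layout should be checked against the v1.36 API reference:

```yaml
# Excerpt of Pod .status — illustrative values only.
status:
  containerStatuses:
    - name: trainer
      ready: true                      # process is up...
      allocatedResourcesStatus:
        - name: nvidia.com/gpu         # resource as requested by the container
          resources:
            - resourceID: GPU-0f2a9c41 # invented device ID
              health: Unhealthy        # ...but the device behind it has degraded
```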
For most of Kubernetes’ history, a Pod was considered ready by its process alone. The hardware behind it — the GPU under the CUDA library, the SmartNIC behind the VF, the FPGA — was invisible to the scheduler and to the Pod’s own status. If a device degraded, the Pod kept reporting Ready while its workload failed.
1.36 recognises that where heterogeneous compute is the default, Pod readiness is incomplete unless the devices a Pod depends on are healthy. Expect this field to become the foundation for automated remediation: evicting pods whose GPUs have entered a degraded state, rescheduling to a node whose accelerators are alive, or surfacing the real failure mode in kubectl describe rather than in vendor driver logs.
Deprecations and removals
The biggest items for platform teams:
- Service .spec.externalIPs is deprecated. A long-standing security weak point (CVE-2020-8554). Removal is scheduled for v1.43; migrate to LoadBalancer, NodePort, or Gateway API. (KEP-5707.)
- The gitRepo volume driver is permanently disabled. Deprecated since v1.11; a security issue allowed code execution as root on the node. Replace it with init containers or a git-sync sidecar. (KEP-5040.)
- The Portworx in-tree volume plugin migration to CSI graduates to GA. CSI migration is mandatory.
- Ingress NGINX is retired. As of March 24, 2026, SIG Network ended releases, bugfixes, and security updates. Existing deployments continue but are unmaintained. Our KubeLB-based migration walkthrough is the operational playbook.
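For teams replacing gitRepo volumes, a minimal sketch of the init-container pattern: clone into an emptyDir shared with the main container. The image and repository URL are placeholders:

```yaml
# Replacement for a gitRepo volume — clone in an init container, share via emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  initContainers:
    - name: clone
      image: alpine/git                # placeholder image
      args: ["clone", "--depth=1", "https://example.com/repo.git", "/work"]
      volumeMounts:
        - name: src
          mountPath: /work
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: src
          mountPath: /usr/share/nginx/html
  volumes:
    - name: src
      emptyDir: {}
```

For content that must stay current after startup, a git-sync sidecar that pulls on an interval is the usual alternative.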
Review the v1.36 changelog before upgrading, and pay attention to anything you rely on that has been beta or deprecated for more than two releases.
What this means for Kubermatic users
Ingress NGINX → Gateway API, made operational. If you’re running Ingress NGINX on KKP or anywhere else, the retirement is now in effect. KubeLB v1.3 ships an automated Ingress-to-Gateway-API converter that audits your cluster, previews the Gateway + HTTPRoute resources it will generate, handles TLS secret movement, and tracks conversion status per resource. Our earlier writeup Ingress NGINX is Retiring: Use KubeLB to Transition to Gateway API walks the migration end to end.
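As a rough illustration of the target shape, here is the Gateway API equivalent of a simple Ingress host/path rule. Resource names and the hostname are placeholders, and the exact output of KubeLB's converter may differ:

```yaml
# Gateway API equivalent of a basic Ingress rule — illustrative names only.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: shared-gateway        # Gateway provisioned by the platform
  hostnames: ["app.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-svc           # backing Service
          port: 80
```

TLS moves to the Gateway's listener configuration rather than living on each route, which is one of the things the converter's secret-movement step handles.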
Fine-grained kubelet authz, now stable. The feature has been beta and default-on since Kubernetes v1.33, so KKP customers running supported Kubernetes versions (1.33+) can adopt it today. v1.36 promotes it to GA — turning the beta promise into a stability commitment. For operators running regulated workloads under DORA, NIS2, or the EU AI Act, that shortens the path to a defensible authorization posture.
Community
The v1.36 cycle ran 15 weeks, January 12 to April 22, 2026, under release lead Ryota Sawada. Contributions to Kubernetes core came from 491 individuals across 106 companies; in the wider cloud-native ecosystem, 2,235 contributors from 370 companies took part. To everyone who landed a line of code, reviewed a PR, fixed a doc, drafted a KEP, or shadowed a release-team role: thank you. The full release team roster is in the release-1.36 repository.
Looking forward
Three things from 1.36 belong on this quarter’s review agenda.
Audit your RBAC for components ever granted nodes/proxy. Most don't need it. With fine-grained kubelet authz now GA, closing the gap between how your observability stack works today and how an auditor will expect it to work is a small configuration change.
Take stock of your scheduling posture. If your platform will run AI, ML, or HPC-adjacent workloads in the next 12 months, the scheduler evolution that started in 1.35 and accelerated in 1.36 isn’t going to stop. Gang scheduling, workload-aware scheduling, and the settling of the PodGroup API will reshape capacity planning and multi-cluster placement.
Extend your observability stack to include device health. allocatedResourcesStatus makes the hardware layer visible alongside Pod status. Adopt it before you need it.