kcp v0.31: Kubernetes 1.35 Rebase, Cross-Shard Identity, and a Load Testing Framework

The kcp community shipped v0.31.0 on April 13. This was a coordinated release: kcp-operator (v0.7.0), api-syncagent (v0.6.0), multicluster-provider (v0.6.0), init-agent (v0.3.0), and updated Helm charts all shipped the same week. It’s a substantial release that touches the foundation (Kubernetes 1.35 rebase), addresses a long-standing operational limitation (cross-shard service accounts), and lays groundwork for external extensibility and performance validation.

Here’s what changed and why it matters.

Kubernetes 1.35.1 Rebase

kcp now tracks Kubernetes 1.35.1 with Go 1.25.7. This is not a simple version bump. A Kubernetes rebase in kcp touches the entire codebase: API types, generated clients, admission plugins, controller behavior, and test fixtures all need to adapt. The fact that the community keeps pace with upstream Kubernetes releases is what makes kcp a credible foundation for production platforms.

For operators, this means kcp workspaces now behave consistently with Kubernetes 1.35 semantics, including any new API fields, admission changes, and deprecation removals that came with the upstream release.

Cross-Shard Service Account Lookup

Until now, service account validation in kcp was limited to the shard where the account was created. If a controller on shard A needed to authenticate with a service account from shard B, it would fail. This was a real operational limitation for anyone running multi-shard deployments.

v0.31 introduces a TTL-based cache that enables service account validation across shard boundaries. The GlobalServiceAccount feature gate has been removed because the capability is now always on. This is a quiet but significant change: it removes a class of authentication failures that made multi-shard topologies harder to operate.

APIResourceSchema Virtual Workspace

A new Virtual Workspace type gives API providers access to the APIResourceSchemas of consumer workspaces. This is particularly relevant for the kube-bind integration, where a provider needs to understand the schema of the resources it is serving to consumers.

In practical terms, if you are building a service that exposes APIs to multiple kcp workspaces, you can now inspect and react to the schemas your consumers are using, without requiring direct access to their workspaces.

Virtual Workspace Framework Extraction

The Virtual Workspace framework (pkg/virtual/framework and pkg/virtual/options) has been moved to a dedicated staging repository. This is an architectural change aimed at external developers: if you are building a custom Virtual Workspace, you no longer need to vendor the entire kcp codebase. You can import just the framework.

This reduces the dependency footprint for external VW projects and makes it easier for the ecosystem to build on kcp’s extensibility model without tracking kcp’s full dependency tree.

Load Testing Framework

kcp now has a load testing framework inspired by Kubernetes’ clusterloader2. It supports scenario definitions (e.g., “create 10,000 empty workspaces”), P99 latency statistics, and structured reporting. The framework ships with three components: a scenario runner, a metrics collector, and a report generator.

For the project, this is about proving that kcp’s multi-tenant architecture scales. For operators evaluating kcp, it means the project now has a way to publish reproducible performance benchmarks, not just anecdotal claims.

Security and Data Integrity Fixes

Two critical fixes deserve attention:

Etcd key poisoning. Unresolved workspace paths could corrupt etcd keys with malformed cluster names. This is a data integrity issue: once a bad key lands in etcd, it can affect lookups for other workspaces. Fixed in v0.31.

Virtual Workspace proxy impersonation isolation. A concurrency issue caused impersonation headers to leak between requests in the VW proxy. In a multi-tenant system, impersonation header leakage means one request could briefly carry the identity of another. This has been resolved with per-request isolation.

Both fixes address the kind of issue that matters most in production multi-tenant environments, where data isolation and identity integrity are non-negotiable.

Several dependency updates also address published CVEs.

API and CLI Improvements

The kcp CLI gains claims accept and claims reject subcommands, along with --accept-all-permission-claims and --reject-all-permission-claims flags. This makes PermissionClaim management scriptable, which is important for automation workflows where API bindings need to be approved or denied programmatically.
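As a hedged usage sketch, assuming the new subcommands hang off the `kubectl kcp` plugin and using an illustrative binding name:

```shell
# Approve or deny pending claims on an existing APIBinding
# ("widgets" is an illustrative name):
kubectl kcp claims accept widgets
kubectl kcp claims reject widgets

# Or accept every claim up front when creating a binding:
kubectl kcp bind apiexport root:providers:widgets --accept-all-permission-claims
```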

On the API side, a new defaultSelector field on PermissionClaim in APIExport lets providers specify default permission claim selectors that are automatically applied when APIBindings are created via WorkspaceType. This reduces the manual configuration needed when onboarding consumers to an API export.
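As a sketch of the shape this might take, assuming defaultSelector sits alongside each entry in spec.permissionClaims (the surrounding field layout follows the existing APIExport API; the selector body is illustrative):

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: widgets.example.com
spec:
  permissionClaims:
    - group: ""
      resource: configmaps
      # Assumed shape: pre-populates the claim selector on APIBindings
      # created via WorkspaceType, so consumers need not set it by hand.
      defaultSelector:
        matchAll: true
```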

Also worth noting: parallel resource installation during startup cuts cold-start time by roughly 5 seconds, and stripping debug symbols reduces the container image size by about 25%.

Ecosystem Releases

The v0.31 release is coordinated across the kcp ecosystem. Here’s what shipped alongside the core:

kcp-operator v0.7.0

  • Support for topologySpreadConstraints for better pod distribution
  • Multi-replica CacheServer deployments for scalability
  • Initial support for external Virtual Workspaces
  • CEL validation for Virtual Workspace configuration
  • Go 1.26.2, gRPC CVE fix

api-syncagent v0.6.0

  • Watch and sync changes to related resources (not just primary objects)
  • Fix cleanup of related resources on primary object deletion
  • Fix APIResourceSchema agent annotations and labels
  • Security updates for OpenTelemetry SDK CVEs and gRPC CVE

multicluster-provider v0.6.0

  • Fix for factory managing multiple providers
  • Fix WildcardCache.Start with multiple Providers
  • Cluster filtering and engagement of only Ready APIBindings
  • Separate client module for cleaner dependency management
  • Adapted to multicluster-runtime v0.23.3 ClusterName type change

init-agent v0.3.0

  • Virtual Workspace URL path support in --config-workspace
  • Support for multiple InitTargets for the same WorkspaceType

Helm Charts

All corresponding charts have been updated. The proxy, shard, certificates, and cache charts are now deprecated in favor of the kcp-operator.

Deprecations

  • --external-hostname flag: now determined automatically from --shard-base-url or --bind-address
  • --shard-external-url flag for Virtual Workspaces: marked unused
  • Legacy MachineAnnotations API: removed
  • Helm charts for proxy, shard, certificates, and cache: deprecated, use kcp-operator instead

Contributors

Thanks to everyone who contributed to kcp v0.31.0 across the core, operator, syncagent, multicluster-provider, and init-agent repositories. The coordinated release reflects a maturing project with a growing contributor base.

If you want to get involved, the kcp-dev GitHub organization is the place to start. Join #kcp-dev on Kubernetes Slack to connect with the community.

Build Your Internal Developer Platform on kcp with KDP

kcp is powerful, but turning it into a production platform takes work: workspace lifecycle management, API publishing workflows, identity integration, day-2 operations, and the glue that makes it usable for your developers.

That’s exactly what the Kubermatic Developer Platform (KDP) is built for. KDP is a fully managed internal developer platform built on kcp, designed to give platform teams a productized path from zero to a multi-tenant, API-driven developer experience, without having to assemble and operate the stack yourself.

With KDP you get:

  • Multi-tenant workspaces with role-based access and policy guardrails
  • API publishing and consumption patterns built on kcp’s APIExport/APIBinding model
  • Day-2 operations, upgrades, backups, observability, and multi-shard scaling handled for you
  • Enterprise support from the team that contributes to kcp upstream

If you are evaluating kcp for an internal developer platform, talk to us.

Abubakar Siddiq Ango

Senior Developer Advocate
