Canonical Kubernetes 1.29 is now generally available

A new upstream Kubernetes release, 1.29, is generally available, with significant new features and bugfixes. Canonical closely follows upstream development, harmonising our releases to deliver timely and up-to-date enhancements backed by our commitment to security and support – which means that MicroK8s 1.29 is now generally available as well and Charmed Kubernetes 1.29 will join shortly.

What’s new in Canonical Kubernetes 1.29

Canonical Kubernetes distributions, MicroK8s and Charmed Kubernetes, provide all the features available in upstream Kubernetes 1.29. We’ve also added a number of new capabilities. For the complete list of changes and enhancements, please refer to the MicroK8s and Charmed Kubernetes release notes.

MicroK8s 1.29 highlights 

AI/ML at scale with NVIDIA integrations

We have included the NVIDIA GPU and network operators in the new nvidia addon. The NVIDIA GPU Operator automates the management of all the NVIDIA software components needed to provision GPUs, such as kernel drivers and the NVIDIA Container Toolkit. The Network Operator works in tandem with the GPU Operator to enable GPUDirect RDMA on compatible systems.
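If you want to try it, enabling the addon takes a single command. The verification steps below are a sketch: the `gpu-operator-resources` namespace and the JSONPath query are assumptions that may differ on your cluster.

```shell
# Enable the NVIDIA addon; it detects pre-installed host drivers or
# lets the GPU operator manage them for you.
microk8s enable nvidia

# Check that the operator pods come up (namespace name is an assumption).
microk8s kubectl get pods -n gpu-operator-resources

# Confirm the node now advertises an allocatable GPU resource.
microk8s kubectl get node -o \
  jsonpath='{.items[0].status.allocatable.nvidia\.com/gpu}'
```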

For more information please read the following blog post: Canonical Kubernetes enhances AI/ML development capabilities with NVIDIA integrations

Usability and performance improvements for Dqlite

Much of the MicroK8s team’s recent focus has been on improving the stability and efficiency of Dqlite, the default datastore shipped with our Kubernetes distribution. Among other changes, this MicroK8s version includes:

  • Dqlite node role reassignment when failure domains change or become unavailable
  • Optional admission control to protect the performance of the datastore
  • Graceful handling of out-of-disk conditions
  • Performance improvements from statically linking Dqlite and from SQL query preparation

Growing community and partner ecosystem

We welcome the addition of three new addons offered by Canonical partners and community members:

  • Falco: a cloud-native security tool that applies custom rules to kernel events in order to provide real-time alerts
  • CloudNative PG Operator: leveraging cloud-native Postgres, EDB Postgres for Kubernetes adds speed, efficiency and protection to your infrastructure modernisation
  • ngrok: an ingress controller that instantly adds connectivity, load balancing, authentication and observability to your services
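Partner and community addons live in the community addon repository, which needs to be enabled first. A sketch of the workflow follows; the individual addon names are assumptions, so run `microk8s status` after enabling the community repository to see the exact names available.

```shell
# Community and partner addons come from a separate repository.
microk8s enable community

# List everything now available, including the new addons.
microk8s status

# Enable an addon by name (names below are assumptions).
microk8s enable falco
```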

Charmed Kubernetes 1.29 highlights

Charmed Operator Framework (Ops)

We’re pleased to announce the completion of the Charmed Kubernetes refactor that began earlier this year. Charms have moved from the reactive and pod-spec styles to the ops framework in order to enable access to common charm libraries, better Juju support, and a more consistent charming experience for community engagement.

Out of the box monitoring enhancements

The Canonical Observability Stack (COS) gathers, processes, visualises and alerts on telemetry signals generated by workloads running both within and outside of Juju. COS provides an out of the box observability suite relying on the best-in-class open-source observability tools.

This release expands our COS integration so that it includes rich monitoring for the control plane and worker node components of Charmed Kubernetes.

Container networking enhancements

Kube-OVN 1.12

Charmed Kubernetes continues its commitment to advanced container networking with support for the Kube-OVN CNI. This release includes a Kube-OVN upgrade to v1.12. You can find more information about features and fixes in the upstream release notes.

Tigera Calico Enterprise

The calico-enterprise charm debuts as a new container networking option for Charmed Kubernetes in this release. It brings advanced Calico networking and network policy support, and is offered as an alternative to the default Calico CNI.

Component upgrades and fixes

For a full list of component upgrades, features, and bug fixes for the Charmed Kubernetes 1.29 release go to the Launchpad milestone page.

Notable changes in upstream Kubernetes 1.29

You can read the full changelog for details of the features, deprecations and bug fixes included in the 1.29 release. Here are the most significant changes.

Sidecar containers go beta, enabled by default

The hugely popular sidecar container pattern goes beta, slowly but surely becoming a first-class citizen. With explicitly defined sidecar containers you can, for example, start a log-collecting sidecar before your main application container. No need to worry about service mesh availability at application startup, or about a lingering sidecar blocking pod termination for your Jobs – sidecar containers have you covered. The feature enters beta in 1.29 and is enabled by default.
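In 1.29 a sidecar is expressed as an init container with `restartPolicy: Always`: it starts before the main containers and keeps running alongside them. A minimal sketch of the log-collecting case described above (image and paths are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  initContainers:
  - name: log-shipper
    image: busybox:1.36
    # restartPolicy: Always is what makes this init container a sidecar:
    # it starts first and is restarted for the life of the pod.
    restartPolicy: Always
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
EOF
```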

Common Expression Language (CEL) for Admission Control improvements

Admission validation policies use the Common Expression Language (CEL) to declare admission policies for Kubernetes resources through simple expressions (for example, do not allow creating pods without a required label, or pods with privileged host path mounts). They are highly configurable and enable policy authors to define policies that can be parameterised and scoped to resources as needed by cluster administrators. CEL for Admission Control has been available since 1.26. It remains disabled by default and is available behind the ValidatingAdmissionPolicy feature gate.
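On a cluster with the feature gate and the `admissionregistration.k8s.io/v1beta1` API enabled, a policy enforcing the required-label example above might look like the following sketch. The policy name, match rules and label key are illustrative.

```shell
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  # CEL expression evaluated against the incoming object.
  - expression: "'team' in object.metadata.labels"
    message: "All deployments must carry a 'team' label."
---
# A binding is required to put the policy into effect.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-team-label-binding
spec:
  policyName: require-team-label
  validationActions: [Deny]
EOF
```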

CRI-full container and pod stats go alpha

Monitoring workloads is one of the most crucial aspects of running a cluster in production – after all, how else can you know what your containers’ and pods’ resource usage is? Today this information comes from both the CRI and cAdvisor, which leads to duplicated work and a sometimes unclear origin of metrics. The goal of this enhancement is to extend the CRI API and its implementations so that they can provide all the metrics needed for proper observability of containers and pods. You can enable this feature with the PodAndContainerStatsFromCRI feature gate.
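As a sketch, on a MicroK8s node the kubelet feature gate can be turned on by appending it to the kubelet arguments file. The file path and service name below are MicroK8s-specific assumptions; on other distributions you would set the gate through your usual kubelet configuration mechanism.

```shell
# Append the alpha feature gate to the kubelet arguments
# (path is specific to the MicroK8s snap layout).
echo '--feature-gates=PodAndContainerStatsFromCRI=true' | \
  sudo tee -a /var/snap/microk8s/current/args/kubelet

# Restart the combined MicroK8s service so the kubelet picks it up.
sudo snap restart microk8s.daemon-kubelite
```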

Improvements for supporting User Namespaces in pods

Currently, a container process runs with the same user ID (UID) and group ID (GID) inside the pod and on the host. This creates a particular security challenge: if such a process manages to break out of the pod onto the host, it keeps the same UID/GID, so a rogue process could interfere with any other container running under the same identity. In the worst-case scenario, a process running as root inside the pod would still be root on the host. This enhancement adds support for user namespaces, which enable containers inside pods to run with different user and group IDs than on the host. User namespace support is still alpha in Kubernetes 1.29 and is available behind the UserNamespacesSupport feature gate.
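With the feature gate enabled on the cluster (and a container runtime that supports it), opting a pod into its own user namespace is a single field, `hostUsers: false`. A minimal sketch:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  # false = give this pod a dedicated user namespace, so UID 0 inside
  # the container maps to an unprivileged UID on the host.
  hostUsers: false
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sh", "-c", "id; sleep 3600"]
EOF
```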

Learn more about Canonical Kubernetes or talk to our team

What is Kubernetes?

Kubernetes, or K8s for short, is an open source platform pioneered by Google, which started as a simple container orchestration tool but has grown into a platform for deploying, monitoring and managing apps and services across clouds.

Learn more about Kubernetes ›
