Announcing the Charmed Kafka beta

robgibbon

on 12 December 2023

Tags: Kafka

Charmed Kafka is a complete solution to manage the full lifecycle of Apache Kafka.

The Canonical Data Fabric team is pleased to announce the first beta release of Charmed Kafka, our solution for Apache Kafka®.

Apache Kafka® is a free, open source distributed event streaming platform built for event processing at massive scale. Kafka is ideal for building streaming applications, including data hubs where timely access to information is a necessity. It can also serve as the backbone of your microservices architecture and as a data processing engine in its own right.
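To give a flavour of what building on Kafka looks like, here is a minimal producer-side sketch. The encoding helper is self-contained; the commented publishing step assumes the third-party kafka-python client, a broker at localhost:9092 and an "orders" topic, none of which are part of Charmed Kafka itself:

```python
import json


def encode_event(key: str, value: dict) -> tuple[bytes, bytes]:
    """Kafka transports keys and values as raw bytes; a common
    convention is a UTF-8 key and a JSON-encoded value."""
    return key.encode("utf-8"), json.dumps(value).encode("utf-8")


# Publishing with the kafka-python client (illustrative only; assumes
# a reachable broker and an existing "orders" topic):
#
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   key, value = encode_event("order-42", {"status": "shipped"})
#   producer.send("orders", key=key, value=value)
#   producer.flush()
```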

To help enterprises with the deployment, operation and long-term security maintenance of their Kafka clusters, Canonical is introducing Charmed Kafka, which is now entering public beta.

A comprehensive solution for Apache Kafka®

Canonical’s Charmed Kafka is an advanced, fully supported solution for Kafka, designed to run either directly on cloud virtual machines or on Kubernetes clusters, as users prefer. This beta release is the first preview on the road to building a comprehensive solution for Kafka users, delivering additional automation capabilities and support beyond what is available upstream.

The beta release includes features for:

  • Canonical-maintained distributions of Kafka 3.5 and ZooKeeper 3.6
  • Deploying, configuring and clustering Kafka brokers on VMs and on K8s
  • Deploying, configuring and clustering ZooKeeper on VMs and on K8s
  • Optimising the underlying OS configuration for Kafka brokers and ZooKeeper on VMs
  • Securing Kafka with TLS, mTLS and SCRAM authentication
  • Horizontally scaling Kafka and ZooKeeper on VMs and on K8s
  • In-place minor upgrades of Kafka and ZooKeeper on VMs
  • Integration with the Canonical Observability Stack for centralised logging, monitoring and alerting
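Because Charmed Kafka is operated through Juju, a deployment follows the usual charm workflow. The sketch below is illustrative only; the charm names and channel shown are assumptions, so consult the quickstart tutorial for the exact commands:

```shell
# Illustrative Juju workflow (charm names and channel are assumptions):
# create a model, deploy the Kafka and ZooKeeper charms, and relate them.
juju add-model kafka-demo
juju deploy zookeeper --channel edge
juju deploy kafka --channel edge
juju relate kafka zookeeper

# Watch the deployment converge
juju status --watch 2s
```

Once the units report active status, client credentials and TLS can be configured through the charm's actions and relations as described in the documentation.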

Charmed Kafka is part of Canonical Data Fabric, a set of solutions for data processing that includes Charmed Spark and Charmed MongoDB, with additional solutions to be announced. The Data Fabric suite enables users to flexibly build, maintain and operate a comprehensive data processing environment founded on best-of-breed open source software. Appropriate solutions can be deployed for data processing at any scale on a range of cloud infrastructure.

Kubernetes users can deploy Charmed Kafka to MicroK8s, Charmed Kubernetes and AWS Elastic Kubernetes Service (EKS).

Cloud infrastructure users can deploy Charmed Kafka to Charmed OpenStack, VMware, AWS EC2 and Azure VMs.

Share your feedback

At Canonical, we always value the community’s feedback about our products. We invite you to try out Canonical’s Charmed Kafka and send us your comments, bug reports and general feedback so we can incorporate them into future releases.

To get started, head over to the Data Fabric documentation pages and follow the Kafka quickstart tutorial.

Chat with us on our chat server, or file bug reports and feature requests on GitHub.

Be the first to know: Join Canonical Data Fabric Beta Program

Canonical is building a suite of advanced, open source system solutions for data management applications, including Charmed Kafka, Charmed Spark and Charmed MongoDB.

We would like to invite you to join our Data Fabric beta programme and be among the first to try out our new solutions.

