Apache Kafka Monitoring & Management
The Apache Kafka UI for developers, data scientists, and operations teams.
Kafka Monitoring starts with data
Kafka monitoring starts with the question of what data an application actually writes to its streams, continues with questions about data quality and the domain-specific correctness of that data, and ends with analyses built on top of it, for example reports on critical values or events.
- Direct insight into all data in Kafka
- Access to extensive filtering and data transformations, up to custom JavaScript processors (see the sketch after this list for what such filtering looks like with the plain consumer API)
- Create new data, e.g. for testing or correction purposes, with a few clicks
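For comparison, here is a minimal sketch of that kind of record inspection and filtering done directly with the Kafka Java consumer API instead of a UI. The broker address, topic name, and filter predicate are illustrative assumptions, not part of the product.

```java
// Illustrative only: read records from a topic and filter them by value,
// using the plain Kafka consumer. Topic name and filter are hypothetical.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TopicInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "inspection-" + System.currentTimeMillis());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));              // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                // Simple value filter; a UI lets you express this interactively instead.
                if (record.value().contains("ERROR")) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```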
Detect consumer & producer errors
Spot failed data deliveries or gaps in a data stream at a glance, or check consumer performance via offset lag, i.e. the number of records a consumer still has to read before it catches up with the end of the partition.
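For reference, offset lag can also be computed directly with the Kafka AdminClient: per partition it is the latest (log end) offset minus the group's committed offset. The broker address and consumer group name below are assumptions used for illustration.

```java
// Minimal sketch: compute consumer lag per partition with the AdminClient.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class ConsumerLag {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets of the consumer group (group id is hypothetical).
            Map<TopicPartition, OffsetAndMetadata> committed = admin
                    .listConsumerGroupOffsets("billing-service")
                    .partitionsToOffsetAndMetadata().get();

            // Latest (log end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> request = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(request).all().get();

            // Lag = log end offset - committed offset, per partition.
            committed.forEach((tp, offset) -> {
                long lag = latest.get(tp).offset() - offset.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```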
Test your applications
Create records in a topic from scratch or based on existing records, control the position from which a consumer starts consuming records, and empty or delete the topic when testing is complete.
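Done by hand against the plain Kafka APIs, that test workflow looks roughly like the sketch below: produce a test record, rewind a consumer to the position you want to start from, and delete the topic afterwards. Topic name, group id, and broker address are illustrative.

```java
// Sketch of a manual test workflow: produce, seek, and clean up a topic.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TopicTestHarness {
    public static void main(String[] args) throws Exception {
        String topic = "orders-test"; // hypothetical topic

        // 1. Produce a test record from scratch.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "order-42", "{\"status\":\"NEW\"}")).get();
        }

        // 2. Control where a consumer starts: assign a partition and rewind to offset 0.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-run");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            TopicPartition partition = new TopicPartition(topic, 0);
            consumer.assign(List.of(partition));
            consumer.seek(partition, 0L); // the next poll starts from the beginning
            consumer.poll(Duration.ofSeconds(2)).forEach(r -> System.out.println(r.value()));
        }

        // 3. Clean up: delete the topic once the test is finished.
        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(adminProps)) {
            admin.deleteTopics(List.of(topic)).all().get();
        }
    }
}
```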
Control the whole Kafka ecosystem
Of course, a complete Apache Kafka UI also covers the management of topics, ACLs, Kafka Connect clusters, and schemas (e.g. in Avro, JSON Schema, or Protobuf format) in the associated schema registry.
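For orientation, the corresponding programmatic admin operations might look like the following sketch, which creates a topic and grants a read ACL via the Kafka AdminClient. Topic name, principal, and partition settings are assumptions.

```java
// Sketch: create a topic and an ACL with the Kafka AdminClient.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

import java.util.List;
import java.util.Properties;

public class ClusterAdmin {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and replication factor 1 (illustrative values).
            admin.createTopics(List.of(new NewTopic("orders", 3, (short) 1))).all().get();

            // Allow a hypothetical service principal to read the topic.
            AclBinding readAcl = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "orders", PatternType.LITERAL),
                    new AccessControlEntry("User:billing-service", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(readAcl)).all().get();
        }
    }
}
```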
Rights management (Web UI only)
With KaDeck Web you get a central web service for managing and monitoring Apache Kafka and all related components. Define namespaces, roles, and groups (RBAC), anonymize personal information (GDPR), and stay on the safe side with the built-in audit log.