Announcing support for Confluent Kafka metadata

Announcing our newest metadata collector: Confluent Kafka! We know how important it is to have the most up-to-date streaming data, so we’ve built this collector to let you easily monitor and collect Kafka metadata from your Confluent streaming platform.

With your Kafka metadata collected, you and your teams can:

  • Easily discover and monitor streaming metadata for real-time applications
  • Understand what is being streamed from both on-prem and cloud Confluent deployments
  • Have a single source of truth for your Confluent schemas for better discovery and governance

The Confluent Collector is actually two collectors: one for Confluent Platform (on-prem) and one for Confluent Cloud. With these collectors, you can capture, store, and analyze metadata including Cluster, Consumer, Producer, Broker, Partition, Schema, Consumer Group, Topic, and Environment (Confluent Cloud only). The collectors can optionally harvest metadata from Avro, JSON Schema, and Protobuf schemas stored in Confluent Schema Registry.
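To make the schema harvesting concrete, here is a minimal sketch of the kind of field-level metadata a catalog could surface from an Avro schema stored in Schema Registry. The schema, the record and field names, and the `schema_metadata` helper are all hypothetical and illustrative; this is not the collector's actual implementation:

```python
import json

# Hypothetical Avro schema of the kind stored in Confluent Schema Registry.
# The subject, record, and field names are illustrative only.
AVRO_SCHEMA = json.loads("""
{
  "type": "record",
  "name": "OrderPlaced",
  "namespace": "com.example.orders",
  "doc": "Event emitted when a customer places an order",
  "fields": [
    {"name": "order_id",    "type": "string", "doc": "Unique order identifier"},
    {"name": "total_cents", "type": "long",   "doc": "Order total in cents"}
  ]
}
""")

def schema_metadata(schema: dict) -> dict:
    """Flatten the record- and field-level metadata a catalog might index."""
    return {
        "record": f"{schema['namespace']}.{schema['name']}",
        "fields": [
            {"name": f["name"], "type": f["type"], "doc": f.get("doc", "")}
            for f in schema["fields"]
        ],
    }

print(schema_metadata(AVRO_SCHEMA)["record"])  # com.example.orders.OrderPlaced
```

Because Avro schemas carry names, types, and `doc` strings, even a simple flattening like this yields metadata that is useful for discovery and governance.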

These collectors are Tier 2 for Enterprise customers. You can read the full documentation for Confluent Platform here and for Confluent Cloud here.

Avro Schema example metadata

An example of the metadata collected for an Avro Schema in Confluent Platform