This article shows you how to add a Confluent Cloud for Apache Kafka source to an eventstream.
Confluent Cloud for Apache Kafka is a streaming platform offering powerful data streaming and processing functionalities using Apache Kafka. By integrating Confluent Cloud for Apache Kafka as a source within your eventstream, you can seamlessly process real-time data streams before routing them to multiple destinations within Fabric.
Note
This source is not supported in the following regions of your workspace capacity: West US3, Switzerland West.
Prerequisites
- Access to a workspace in Fabric capacity license mode or Trial license mode, with Contributor or higher permissions.
- A Confluent Cloud for Apache Kafka cluster and an API Key.
- Your Confluent Cloud for Apache Kafka cluster must be publicly accessible and not be behind a firewall or secured in a virtual network.
- If you don't have an eventstream, create an eventstream.
Launch the Select a data source wizard
If you haven't added any source to your eventstream yet, select the Use external source tile.
If you're adding the source to an already published eventstream, switch to Edit mode, select Add source on the ribbon, and then select External sources.
Configure and connect to Confluent Cloud for Apache Kafka
On the Select a data source page, select Confluent Cloud for Apache Kafka.
To create a connection to the Confluent Cloud for Apache Kafka source, select New connection.
In the Connection settings section, enter Confluent Bootstrap Server. Navigate to your Confluent Cloud home page, select Cluster Settings, and copy the address to your Bootstrap Server.
In the Connection credentials section, if you have an existing connection to the Confluent cluster, select it from the dropdown list for Connection. Otherwise, follow these steps:
- For Connection name, enter a name for the connection.
- For Authentication kind, confirm that Confluent Cloud Key is selected.
- For API Key and API Key Secret:
Navigate to your Confluent Cloud.
Select API Keys on the side menu.
Select the Add key button to create a new API key.
Copy the API Key and Secret.
Paste those values into the API Key and API Key Secret fields.
Select Connect.
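The connection settings above correspond to standard Kafka client security properties. A minimal sketch in Python, using a plain dictionary rather than a specific Kafka client library; the server address, key, and secret are placeholders you replace with your own cluster details:

```python
# Hypothetical placeholder values -- copy the real ones from Confluent Cloud.
BOOTSTRAP_SERVER = "pkc-xxxxx.region.provider.confluent.cloud:9092"  # Cluster Settings
API_KEY = "YOUR_API_KEY"          # Confluent Cloud > API Keys
API_SECRET = "YOUR_API_SECRET"

def build_connection_config(bootstrap_server, api_key, api_secret):
    """Kafka client properties for connecting to a Confluent Cloud cluster."""
    return {
        "bootstrap.servers": bootstrap_server,
        "security.protocol": "SASL_SSL",   # Confluent Cloud requires TLS
        "sasl.mechanisms": "PLAIN",        # API key/secret auth uses SASL/PLAIN
        "sasl.username": api_key,          # the API Key
        "sasl.password": api_secret,       # the API Key Secret
    }

config = build_connection_config(BOOTSTRAP_SERVER, API_KEY, API_SECRET)
```

The same property names are accepted by common Kafka clients (for example, `confluent-kafka-python`), which is how you can verify outside of Fabric that the cluster is reachable and the key pair is valid.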
Scroll down to the Configure Confluent Cloud for Apache Kafka data source section, and enter the information to complete the configuration of the Confluent data source.
- For Topic name, enter a topic name from your Confluent Cloud. You can create or manage your topic in the Confluent Cloud Console.
- For Consumer group, enter a consumer group from your Confluent Cloud. The eventstream uses this dedicated consumer group to read events from the Confluent Cloud cluster.
- For Reset auto offset setting, select one of the following values:
Earliest – the earliest data available from your Confluent cluster
Latest – the latest available data
None – don't automatically set the offset.
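The topic, consumer group, and offset settings above map onto standard Kafka consumer properties. A small sketch of that mapping, with hypothetical group and server names; the topic itself is not a config property in Kafka clients but is passed to the subscribe call:

```python
# Mapping from the eventstream UI options to Kafka's auto.offset.reset values.
UI_TO_KAFKA = {"Earliest": "earliest", "Latest": "latest", "None": "none"}

def build_consumer_config(connection, group_id, ui_offset_setting):
    """Merge connection properties with consumer-specific settings.

    ui_offset_setting is the value chosen in the eventstream UI; it maps
    onto Kafka's auto.offset.reset property.
    """
    if ui_offset_setting not in UI_TO_KAFKA:
        raise ValueError(f"Unknown offset setting: {ui_offset_setting}")
    return {
        **connection,
        "group.id": group_id,                           # dedicated consumer group
        "auto.offset.reset": UI_TO_KAFKA[ui_offset_setting],
    }

consumer_config = build_consumer_config(
    {"bootstrap.servers": "pkc-xxxxx.region.provider.confluent.cloud:9092"},
    group_id="fabric-eventstream-cg",   # hypothetical consumer group name
    ui_offset_setting="Earliest",
)
```

With `auto.offset.reset` set to `none`, a consumer that has no committed offset fails instead of picking a starting position, which matches the "don't automatically set the offset" behavior.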
Depending on whether your data is encoded using Confluent Schema Registry:
- If not encoded, select Next. On the Review and create screen, review the summary, and then select Add to complete the setup.
- If encoded, proceed to the next step: Connect to Confluent schema registry to decode data (preview)
Connect to Confluent schema registry to decode data (preview)
Eventstream's Confluent Cloud for Apache Kafka streaming connector can decode data produced with the Confluent serializer and its Schema Registry in Confluent Cloud. Data encoded with this serializer requires schema retrieval from the Confluent Schema Registry for decoding. Without access to the schema, Eventstream can't preview, process, or route the incoming data.
You can expand Advanced settings to configure the Confluent Schema Registry connection:
Define and serialize data: Selecting Yes allows you to serialize the data into a standardized format. Selecting No keeps the data in its original format and passes it through without modification.
If your data is encoded using a schema registry, select Yes when choosing whether the data is encoded with a schema registry. Then, select New connection to configure access to your Confluent Schema Registry:
- Schema Registry URL: The public endpoint of your schema registry.
- API Key and API Key Secret: Navigate to Confluent Cloud Environment's Schema Registry to copy the API Key and API Secret. Ensure the account used to create this API key has DeveloperRead or higher permission on the schema.
- Privacy Level: Choose from None, Private, Organizational, or Public.
JSON output decimal format: Specifies the JSON serialization format for Decimal logical type values in the data from the source.
- NUMERIC: Serialize as numbers.
- BASE64: Serialize as base64 encoded data.
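The schema registry settings above can be sketched the same way. The property names below are illustrative mappings of the UI fields (the URL, key, and secret are placeholders), not guaranteed client property names:

```python
def build_schema_registry_config(url, api_key, api_secret, decimal_format="NUMERIC"):
    """Settings needed to decode Confluent-serialized payloads.

    decimal_format controls how Decimal logical-type values appear in the
    JSON output: "NUMERIC" -> plain numbers, "BASE64" -> base64-encoded data.
    """
    if decimal_format not in ("NUMERIC", "BASE64"):
        raise ValueError("decimal_format must be 'NUMERIC' or 'BASE64'")
    return {
        "schema.registry.url": url,                          # public endpoint
        "basic.auth.user.info": f"{api_key}:{api_secret}",   # key:secret pair
        "json.output.decimal.format": decimal_format,
    }

registry_config = build_schema_registry_config(
    "https://psrc-xxxxx.region.provider.confluent.cloud",  # hypothetical URL
    "SR_API_KEY",
    "SR_API_SECRET",
)
```

Remember that the API key here is a Schema Registry key, created in the Confluent Cloud environment's Schema Registry page, and the account behind it needs DeveloperRead or higher permission on the schema.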
Select Next. On the Review and create screen, review the summary, and then select Add to complete the setup.
You see that the Confluent Cloud for Apache Kafka source is added to your eventstream on the canvas in Edit mode. To implement this newly added Confluent Cloud for Apache Kafka source, select Publish on the ribbon.
After you complete these steps, the Confluent Cloud for Apache Kafka source is available for visualization in Live view.
Limitations
- Confluent Cloud for Apache Kafka with JSON and Avro formats, using schema registry, is currently not supported.
- Decoding data from Confluent Cloud for Apache Kafka using the Confluent Schema Registry is currently not supported.
Related content
Other connectors: