Collect Confluent Cloud on Microsoft Azure
Container Observability (CO) lets you collect and monitor metrics from your Confluent Cloud clusters. To do this, you connect Confluent Cloud to the Prometheus instance in CO and add the required configuration to your CO deployment. Once set up, Prometheus periodically scrapes cluster metrics, giving you real-time insight into cluster health and performance.
Prerequisites
Ensure you have the following:
Access to the Confluent Cloud UI.
An existing Confluent Cloud cluster to monitor.
The Virtana CO South Deployment installed via Helm, or ready to be deployed.
Access to the opscruise-values.yaml configuration file for your CO deployment.
Appropriate permissions to generate API credentials in Confluent Cloud.
Helm CLI installed and configured.
Generate Prometheus configuration in Confluent Cloud
To authenticate Prometheus with Confluent Cloud’s telemetry endpoint, create API credentials and download the configuration snippet.
Steps
Log in to your Confluent Cloud account.
Open the environment that contains the cluster you want to monitor.
Select the cluster from the list to view its Cluster Overview.
In the Cluster Overview section, click Explore Metrics.
In the Integrate with your monitoring service section, select Prometheus.
Generate Cloud API credentials to authenticate the scraper:
Click Generate a Cloud API Key (or use an existing key).
Copy and securely store the generated API Key (Username) and Secret (Password).
Warning
Store these credentials securely. You won't be able to retrieve the secret again.
In the Resources dropdown, select All resources to ensure the configuration includes parameters for all available resource types, for example, Kafka, Schema Registry, ksqlDB, and others.
Copy the entire Prometheus configuration snippet displayed on the screen.
Example: configuration snippet
The copied snippet looks similar to the following, with placeholders for your API credentials and cluster IDs:
- job_name: Confluent Cloud
  scrape_interval: 1m
  scrape_timeout: 1m
  # Set 'honor_timestamps' to false to use the current time
  # for scraped metrics (recommended for Confluent Cloud)
  honor_timestamps: false
  static_configs:
    - targets:
        - api.telemetry.confluent.cloud
  scheme: https
  basic_auth:
    username: <YourCloudAPIKey>
    password: <YourCloudAPISecret>
  metrics_path: /v2/metrics/cloud/export
  params:
    "resource.kafka.id":
      - <YourKafkaClusterID>
    "resource.schema_registry.id":
      - <YourSchemaRegistryID>
    "resource.ksql.id":
      - <YourKsqlClusterID>
    "resource.compute_pool.id":
      - <YourComputePoolID>
    "resource.connector.id":
      - <YourKafkaConnectorID>
Update the snippet by replacing <YourCloudAPIKey> and <YourCloudAPISecret> with the credentials you generated earlier.
Note
Confluent Cloud metrics often have a slight delay. Setting honor_timestamps: false instructs Prometheus to use the scraping time instead of the metric timestamp, which improves data correlation in Container Observability.
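If you prefer to script the placeholder substitution described above, the following sketch fills in a minimal copy of the snippet with sed. The key, secret, and cluster ID values shown are hypothetical; substitute your real credentials and IDs:

```shell
# Hypothetical values for illustration only; use your real credentials.
API_KEY="ABC123KEY"
API_SECRET="s3cr3tValue"
KAFKA_ID="lkc-12345"

# Write a minimal copy of the snippet with placeholders still in place.
cat > confluent-scrape.yaml <<'EOF'
basic_auth:
  username: <YourCloudAPIKey>
  password: <YourCloudAPISecret>
params:
  "resource.kafka.id":
    - <YourKafkaClusterID>
EOF

# Replace the placeholders in place.
sed -i \
  -e "s/<YourCloudAPIKey>/${API_KEY}/" \
  -e "s/<YourCloudAPISecret>/${API_SECRET}/" \
  -e "s/<YourKafkaClusterID>/${KAFKA_ID}/" \
  confluent-scrape.yaml

# Show the result.
cat confluent-scrape.yaml
```

Keep the resulting file out of version control, since it contains your API secret in plain text.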
Configure the Container Observability South Deployment
You can integrate the configuration into your CO South Deployment using either the embedded Prometheus instance or the OpenTelemetry (Otel) metric collector.
Option A: Configure Prometheus scraping
Use this option if you're leveraging the built-in Prometheus instance in your CO deployment.
Open your opscruise-values.yaml file.
Locate the prometheus: section.
Add the configuration under nonIstioConfigMap.additionalScrapeConfigs:.
Paste your updated Confluent Cloud Prometheus configuration.
##### prometheus configs #####
prometheus:
  nonIstioConfigMap:
    additionalScrapeConfigs:
      - job_name: Confluent Cloud
        scrape_interval: 1m
        scrape_timeout: 1m
        honor_timestamps: false
        static_configs:
          - targets:
              - api.telemetry.confluent.cloud
        scheme: https
        basic_auth:
          username: <YourCloudAPIKey>     # Your actual API Key
          password: <YourCloudAPISecret>  # Your actual API Secret
        metrics_path: /v2/metrics/cloud/export
        params:
          "resource.kafka.id":
            - <YourKafkaClusterID>
          # Include other 'resource.id' parameters as needed
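To monitor more than one resource of a given type, list additional IDs under the same params key; the telemetry endpoint accepts repeated resource ID parameters. The IDs below are hypothetical placeholders:

```yaml
params:
  "resource.kafka.id":
    - lkc-12345   # first Kafka cluster (hypothetical ID)
    - lkc-67890   # second Kafka cluster (hypothetical ID)
  "resource.schema_registry.id":
    - lsrc-abc12  # Schema Registry (hypothetical ID)
```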
Option B: Configure Open Telemetry (Otel) metric collector
Use this option if you're using the OpenTelemetry (Otel) metric collector within your CO deployment.
Open your opscruise-values.yaml file.
Locate the otel-metric-collector: section.
Add the configuration under additional_receivers_configs:.
Paste your updated Confluent Cloud Prometheus configuration.
##### OTEL Metric Collector #####
otel-metric-collector:
  additional_receivers_configs:
    prometheus:
      config:
        scrape_configs:
          - job_name: Confluent Cloud
            scrape_interval: 1m
            scrape_timeout: 1m
            honor_timestamps: false
            static_configs:
              - targets:
                  - api.telemetry.confluent.cloud
            scheme: https
            basic_auth:
              username: <YourCloudAPIKey>     # Your actual API Key
              password: <YourCloudAPISecret>  # Your actual API Secret
            metrics_path: /v2/metrics/cloud/export
            params:
              "resource.kafka.id":
                - <YourKafkaClusterID>
              # Include other 'resource.id' parameters as needed
Deploy the Container Observability South Deployment
After updating the opscruise-values.yaml file with the Confluent Cloud scrape configuration, use Helm to deploy or upgrade your CO South Deployment.
To find the exact Helm command for your environment, go to Container Observability and select a cluster. On the page that opens, click System Status and select South Deployment Guide from the dropdown menu.
Example
helm upgrade --install opscruise-bundle virtana-repo/virtana-co --namespace opscruise \
  --create-namespace -f <ORG_ID>-<CLUSTER_NAME>-opscruise-values.yaml \
  --version <LATEST_VERSION>
Verify the deployment
After deployment, the embedded collector (Prometheus or Otel) begins scraping metrics from your Confluent Cloud cluster using the provided configuration.
Check the pod status:
kubectl get pods -n opscruise