Kubernetes deployment process for Virtana IO

The Kubernetes deployment process for Virtana IO now supports profile-based resource configuration. Administrators can select a deployment profile (small, medium, or large), which automatically adjusts:

  • CPU and memory requests/limits

  • Vertica tuning values

  • Service-level resource allocations

This is handled through the global.deploymentSize parameter in the Helm values file or via the --set global.deploymentSize flag during deployment.
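
As an illustration only, a selected profile ultimately resolves to per-service resource blocks such as the one below; the service and the numbers shown are placeholders, not the chart's actual presets:

# Illustrative values only; actual presets come from the selected profile
resources:
  requests:
    cpu: "2"
    memory: 8Gi
  limits:
    cpu: "4"
    memory: 16Gi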

Selecting a Deployment Profile

The deployment profile is defined at installation time by passing the deploymentSize value to Helm.

Using Helm Command

Pass the profile with the --set flag when installing or upgrading the releases:

helm upgrade --install virtana-io-infra virtana-io \
   --namespace virtana-io --create-namespace \
   -f io-values.yaml \
   --set global.deploymentSize=small \
   --set tags.infra=true \
   --version 2025.9.1

helm upgrade --install virtana-io-dbs virtana-io \
   --namespace virtana-io --create-namespace \
   -f io-values.yaml \
   --set global.deploymentSize=small \
   --set tags.dbs=true \
   --version 2025.9.1
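
To confirm which profile a release was installed with, the user-supplied values can be read back with a standard Helm command (release and namespace names as in the example above):

# Prints the user-supplied values for the release, including global.deploymentSize
helm get values virtana-io-infra --namespace virtana-io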

Using values.yaml

You can also define the deployment profile directly inside values.yaml:

global:
  deploymentSize: medium
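
With the profile defined in the values file, the --set global.deploymentSize flag can be dropped from the commands above (a --set value would otherwise take precedence over the file). A minimal sketch for the infra release:

# deploymentSize is read from io-values.yaml; no --set override needed
helm upgrade --install virtana-io-infra virtana-io \
   --namespace virtana-io --create-namespace \
   -f io-values.yaml \
   --set tags.infra=true \
   --version 2025.9.1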

Profile Definitions

Each profile provides preset CPU and memory values for all IO services.

  • Small Profile: Use the Small profile for lower-capacity environments, POCs, or resource-constrained clusters. This profile requires 160 GB of memory, 24 CPU cores, and up to 85 pods.

  • Medium Profile: Use the Medium profile for standard production deployments requiring balanced performance. This profile requires 180 GB of memory, 26 CPU cores, and up to 85 pods.

  • Large Profile: Use the Large profile for high-scale production environments with heavy data requirements. This profile requires 256 GB of memory, 32 CPU cores, and up to 85 pods.
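
To preview the presets a profile would apply before installing, you can render the chart locally with helm template and inspect the resources sections; the grep filter below is just one convenient way to narrow the output:

# Renders the manifests locally without installing anything
helm template virtana-io-infra virtana-io \
   -f io-values.yaml \
   --set global.deploymentSize=small \
   --set tags.infra=true \
   --version 2025.9.1 | grep -A6 "resources:"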

How It Works

  1. Helm reads the deployment profile from the deploymentSize value.

  2. Based on the selected profile, the IO chart loads predefined CPU/memory settings.

  3. Resources are applied automatically to each IO service, ensuring optimized cluster usage.
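
After deployment, you can confirm that the profile's requests and limits were applied to the running pods with standard kubectl commands; the pod name below is a placeholder:

kubectl get pods --namespace virtana-io
# Replace <io-service-pod-name> with a pod name from the output above
kubectl describe pod <io-service-pod-name> --namespace virtana-io | grep -A4 "Requests"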