Ambari Kubernetes Manager View (Tech Preview)
This feature will be included in ODP 1.3.2.0 as a Tech Preview, currently in qualification. It is available for early enterprise testing.
Interested in early access? Contact our team to join the enterprise early access program.
The Ambari Kubernetes Manager View is an Ambari plugin that extends cluster management to Kubernetes workloads. It provides a unified interface for deploying, configuring, monitoring, and managing the full lifecycle of Helm-based applications running on a connected Kubernetes or OpenShift cluster — all within the same Ambari UI used for managing HDFS, YARN, Hive, and other cluster services.
How the Kubernetes View Works
The Kubernetes View operates as a server-side plugin within Ambari. When you deploy an application through the View, Ambari:
- Reads the current cluster configuration (Hive Metastore URI, Kerberos realm, Ranger REST URL, LDAP settings, etc.)
- Generates the appropriate Helm values, merging cluster-derived settings with any user-supplied overrides
- Executes the Helm install or upgrade against the configured Kubernetes cluster
- Tracks the deployment operation asynchronously and reports progress through the Ambari background operations interface
- Monitors the Helm release via Flux for ongoing status
This approach ensures that configuration values derived from the ODP cluster (URIs, hostnames, security parameters) are always consistent with the actual cluster state, rather than being duplicated and potentially drifting.
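As an illustration of this merge step, the generated values might look like the following sketch. The key names and structure here are assumptions for illustration, not the actual chart schema:

```yaml
# Illustrative only: merged Helm values for a hypothetical Trino deployment.
# --- derived automatically from Ambari cluster configuration ---
hive:
  metastoreUri: thrift://metastore1.example.com:9083
kerberos:
  realm: EXAMPLE.COM
  kdc: kdc1.example.com
ranger:
  restUrl: https://ranger.example.com:6182
# --- user-supplied overrides from the deployment wizard ---
coordinator:
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
worker:
  replicas: 3
```

Because the top section is regenerated from live Ambari configuration on every install or upgrade, a change such as a Hive Metastore host move is picked up on the next deployment without editing any values files by hand.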
Installing the Kubernetes View Plugin
The Kubernetes Manager View is bundled with Ambari 2.8.2.0, so no separate download or installation step is required.
To activate the view:
- Log into Ambari as a cluster administrator.
- Navigate to Admin > Views.
- Locate KUBERNETES_MANAGER in the list of available views.
- Click Create Instance and provide:
  - Instance Name: a label for this view instance (e.g., `k8s-prod`)
  - Display Name: the label shown in the Ambari UI sidebar
  - Description: an optional description
Once the instance is created, the view appears in the Ambari Views menu and is accessible to users with the appropriate Ambari role.
Connecting Ambari to a Kubernetes Cluster
Before deploying workloads, you must configure the connection between Ambari and your Kubernetes cluster.
Service Account Setup
Create a dedicated service account in your Kubernetes cluster for Ambari to use:
```shell
# Create a namespace for ODP-managed apps
kubectl create namespace odp-apps

# Create the service account Ambari will use
kubectl create serviceaccount ambari-manager -n odp-apps

# Bind a ClusterRole (or a namespace-scoped Role) with the required permissions
kubectl create clusterrolebinding ambari-manager-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=odp-apps:ambari-manager
```
The cluster-admin binding above is used for simplicity. In production, restrict the role to the specific API groups and resources required: apps/deployments, core/services, core/configmaps, core/secrets, core/persistentvolumeclaims, and batch/jobs.
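A tighter, namespace-scoped alternative could look like the sketch below, covering the resource types listed above. Real charts may need a few additional resources (for example, pods for status checks), so treat this as a starting point rather than a complete policy:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ambari-manager-role
  namespace: odp-apps
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps", "secrets", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ambari-manager-binding
  namespace: odp-apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ambari-manager-role
subjects:
  - kind: ServiceAccount
    name: ambari-manager
    namespace: odp-apps
```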
Kubeconfig Configuration
In the Kubernetes View settings, provide the kubeconfig or connection parameters:
| Parameter | Description |
|---|---|
| Kubernetes API URL | The API server endpoint (e.g., https://k8s-api.example.com:6443) |
| CA Certificate | The cluster CA certificate (PEM format) |
| Service Account Token | Token for the ambari-manager service account |
| Namespace | Target namespace for deployments (e.g., odp-apps) |
| Helm Binary Path | Path to the Helm 3 binary on the Ambari server |
Ambari stores the service account token encrypted in the Ambari database.
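The parameters in the table map onto a standard kubeconfig. A minimal sketch, with placeholder values you would replace with your own cluster details:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: odp-k8s
    cluster:
      server: https://k8s-api.example.com:6443
      certificate-authority-data: <base64-encoded CA certificate>
contexts:
  - name: ambari@odp-k8s
    context:
      cluster: odp-k8s
      user: ambari-manager
      namespace: odp-apps
current-context: ambari@odp-k8s
users:
  - name: ambari-manager
    user:
      token: <service account token>
```

On recent Kubernetes versions (1.24+), a token for the service account can be issued with `kubectl create token ambari-manager -n odp-apps`.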
Verifying the Connection
After saving the connection parameters, click Test Connection in the Kubernetes View. Ambari will attempt to list resources in the configured namespace. A successful test confirms that the API URL, credentials, and network connectivity are all working.
The Kubernetes Manager UI
Once connected, the Kubernetes View provides the following management flows:
Application Catalog
The main screen lists the applications available for deployment: currently Trino and Apache Superset. Each entry shows:
- Application name and version
- Deployment status (Not Deployed / Deployed / Upgrading / Failed)
- Helm release name
- Last operation timestamp
Deploy Flow
- Select an application from the catalog.
- Click Deploy.
- The configuration wizard presents grouped settings:
  - General: replica counts, resource requests and limits
  - Security: Kerberos settings (pre-populated from the cluster), OIDC parameters
  - Connectivity: connector URIs (pre-populated from the cluster)
  - Advanced: raw Helm values override (YAML editor)
- Click Deploy to submit. Ambari creates a background operation.
Upgrade Flow
When a new chart version is available:
- Select the deployed application.
- Click Upgrade.
- Review the configuration diff between the current and new version.
- Confirm and submit. Ambari executes `helm upgrade` and tracks the rollout.
Rollback
If an upgrade fails or produces issues:
- Select the deployed application.
- Click Rollback.
- Select the target revision from the Helm release history.
- Confirm. Ambari executes `helm rollback` and returns the release to the selected revision.
Uninstall
To remove a deployed application:
- Select the deployed application.
- Click Uninstall.
- Confirm. Ambari executes `helm uninstall` and removes all Kubernetes resources created by the chart.
Background Operations and Progress Tracking
All Helm operations (install, upgrade, rollback, uninstall) run as background operations in Ambari. This means:
- You do not need to keep the browser window open for the operation to complete.
- Progress is visible in the Ambari Background Operations panel (the clock icon in the Ambari toolbar).
- Each operation produces a structured log that shows Helm output and any errors.
- Operations have a configurable timeout (default: 10 minutes).
If an operation fails, the background operation log contains the full error output from Helm, which is essential for troubleshooting.
GitOps and Flux Release Status Monitoring
The Kubernetes View integrates with Flux (GitOps toolkit) to provide ongoing release status monitoring. When Flux is configured in your Kubernetes cluster and managing the Helm releases deployed by Ambari, the View displays:
- Flux HelmRelease status: whether the release is reconciled, pending, or in error
- Last reconcile time: when Flux last checked the release against the desired state
- Drift detection: if manual changes have been made to Kubernetes resources outside of Ambari/Flux, the status reflects the drift
This is particularly useful in environments where infrastructure changes go through a GitOps review process. Ambari's Helm install creates or updates the Flux HelmRelease custom resource; Flux handles the actual reconciliation from the Git repository.
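The HelmRelease custom resource that Ambari would create or update might look like the following sketch. The release name, chart name, repository reference, and values are illustrative assumptions:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: trino
  namespace: odp-apps
spec:
  interval: 5m          # how often Flux reconciles the release
  chart:
    spec:
      chart: trino
      version: "x.y.z"  # pinned to the catalog entry's chart version
      sourceRef:
        kind: HelmRepository
        name: odp-charts
        namespace: odp-apps
  values:
    worker:
      replicas: 3
```

Flux then reconciles this resource on the configured interval, which is what surfaces the reconcile status, last reconcile time, and drift information shown in the View.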
To use Flux integration, install Flux in your Kubernetes cluster before connecting it to Ambari:
```shell
flux install
```
The Kubernetes View will automatically detect Flux if the Flux CRDs are present in the cluster.
Kerberos Keytab Delegation
For applications that need to authenticate to ODP services (Hive Metastore, HDFS, Ranger), Ambari handles keytab provisioning:
- Ambari generates or retrieves a service keytab from the cluster's Kerberos infrastructure (FreeIPA or MIT KDC).
- The keytab is stored as a Kubernetes `Secret` in the application namespace.
- The Helm chart mounts the keytab secret into the application containers.
- Application configuration (e.g., Trino's `core-site.xml`) references the keytab path.
The service principal used for each application is configurable in the deployment wizard. The default naming convention follows `<service>/<hostname>@<REALM>`.
Keytab rotation: when the keytab is renewed in Kerberos, re-triggering the Helm upgrade from Ambari will update the Kubernetes Secret with the new keytab.
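The Secret-and-mount arrangement described above can be sketched as follows. Secret name, key, and mount path are illustrative assumptions, not the actual chart's conventions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: trino-keytab
  namespace: odp-apps
type: Opaque
data:
  trino.keytab: <base64-encoded keytab>
---
# Fragment of a pod template mounting the Secret read-only
spec:
  containers:
    - name: trino
      volumeMounts:
        - name: keytab
          mountPath: /etc/security/keytabs
          readOnly: true
  volumes:
    - name: keytab
      secret:
        secretName: trino-keytab
```

Because the keytab reaches the container as a mounted Secret, re-running the Helm upgrade after a Kerberos renewal replaces the Secret contents, which is why rotation currently requires a re-deploy.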
OIDC Authentication Integration
For workloads that expose a web UI (Apache Superset), Ambari supports configuring OIDC (OpenID Connect) authentication:
| Parameter | Description |
|---|---|
| OIDC Provider URL | The OIDC provider's issuer URL (e.g., your Keycloak or Dex instance) |
| Client ID | The OAuth2 client ID registered for this application |
| Client Secret | The OAuth2 client secret (stored encrypted in Ambari) |
| Allowed Groups | LDAP/AD groups whose members are permitted to access the application |
| Admin Groups | Groups granted administrator access within the application |
When OIDC is configured, the Helm chart is deployed with the OIDC proxy sidecar or native OIDC configuration (depending on the application). Users accessing Superset are redirected to the OIDC provider for authentication.
OIDC and Kerberos are complementary in this architecture: Kerberos secures backend service-to-service communication, while OIDC secures user-facing web interfaces.
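As a rough sketch of how the wizard's OIDC parameters could be rendered into Helm values (the key names here are assumptions, not the actual chart schema):

```yaml
oidc:
  issuerUrl: https://keycloak.example.com/realms/odp   # OIDC Provider URL
  clientId: superset                                    # registered OAuth2 client
  clientSecretRef:          # injected from Ambari's encrypted store,
    name: superset-oidc     # not written into plain-text values
    key: client-secret
  allowedGroups:            # groups permitted to log in
    - analysts
    - data-engineers
  adminGroups:              # groups granted Superset admin
    - superset-admins
```

Referencing the client secret through a Kubernetes Secret rather than inlining it keeps the sensitive value out of the rendered values and release history.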
Ranger and LDAP Configuration Materialization
One of the key benefits of the Kubernetes View is that it reads existing ODP security configuration and injects it into Helm values automatically. At deployment time, Ambari materializes:
| ODP Config Source | Materialized Into |
|---|---|
| Ranger REST URL and admin credentials | Trino Ranger plugin configuration |
| Hive Metastore URIs (from Hive config) | Trino Hive catalog hive.metastore.uri |
| Kerberos realm and KDC address | krb5.conf configmap in Kubernetes |
| LDAP/AD server URL and bind DN | Superset auth configuration |
| HDFS NameNode URI | Trino HDFS config |
This eliminates the need to manually copy configuration values from your Ambari configs into Helm values files — a process that is error-prone and often leads to misconfiguration.
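Putting the table together, the materialized values for a deployment might resemble the sketch below. Hostnames and key names are illustrative assumptions; only `hive.metastore.uri` is a real Trino catalog property:

```yaml
trino:
  catalogs:
    hive: |
      connector.name=hive
      hive.metastore.uri=thrift://metastore1.example.com:9083
  ranger:
    restUrl: https://ranger.example.com:6182
kerberos:
  krb5Conf:                 # rendered into a krb5.conf ConfigMap
    realm: EXAMPLE.COM
    kdc: kdc1.example.com
superset:
  ldap:
    serverUrl: ldaps://ldap.example.com:636
    bindDn: cn=superset,ou=services,dc=example,dc=com
```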
Known Limitations (Tech Preview)
| Limitation | Notes |
|---|---|
| No YARN integration | Trino resource management is independent of YARN queues |
| No Atlas lineage for Trino | Queries through Trino are not captured in Atlas in this release |
| Superset HA not configured via Ambari | Multiple Superset replicas require manual Helm override |
| Keytab rotation requires manual re-deploy | No automatic keytab renewal trigger yet |
| Limited to one Kubernetes cluster per Ambari View instance | Multi-cluster support is planned |
| OpenShift Security Context Constraints | May require additional SCC configuration for some charts on OpenShift |
These limitations will be addressed in future ODP releases as the Kubernetes integration moves from Tech Preview toward general availability.