
Getting Started

Quickstart (One Command)

If you want to get Ilum running as fast as possible with zero decisions, use ilum quickstart. It detects your cluster (or creates a local minikube), configures a default profile, and installs Ilum with all default modules — all in a single command.

$ ilum quickstart
╭─ ilum quickstart ────────────────────────────────────────╮
│ Ilum Quickstart │
│ │
│ One-command setup: preflight checks, cluster detection, │
│ and install. │
╰──────────────────────────────────────────────────────────╯

ℹ Step 1/4: Checking prerequisites...
✓ helm 3.16.3
✓ kubectl 1.31.2
✓ docker 27.3.1
✓ All prerequisites met.

ℹ Step 2/4: Detecting Kubernetes cluster...
✓ Cluster reachable (context: minikube)

ℹ Step 3/4: Configuring profile...
✓ Profile 'default' configured with 7 modules.

ℹ Step 4/4: Installing Ilum...
Command: helm install ilum ilum/ilum --namespace default ...
⠋ Installing Ilum (this may take several minutes)...
✓ Ilum installed successfully!

ℹ Next steps:
ilum status Show release and pod status
ilum module enable Enable additional modules
ilum logs core View Ilum Core logs
ilum upgrade Upgrade to a newer version

If no cluster is reachable, quickstart automatically creates a local minikube cluster with the Dev preset (4 CPUs, 8 GB RAM):

$ ilum quickstart
...
ℹ Step 2/4: Detecting Kubernetes cluster...
ℹ No reachable cluster found. Creating a local cluster...
ℹ Creating minikube cluster 'ilum-dev'...
✓ Cluster 'ilum-dev' created (context: ilum-dev)
...

If a required tool is missing, quickstart offers to install it automatically (interactive mode only -- see Section 1.5):

$ ilum quickstart
...
ℹ Step 1/4: Checking prerequisites...
✓ helm 3.16.3
✗ kubectl not found
? Install kubectl now? Yes
ℹ Installing kubectl via direct download...
✓ kubectl installed successfully
✓ docker 27.3.1
✓ All prerequisites met.
...

You can override the provider with --provider:

$ ilum quickstart --provider k3d

To enable extra modules beyond the defaults, pass -m:

$ ilum quickstart -m sql -m airflow
tip

When to use quickstart vs. init + install: Use ilum quickstart when you want the fastest path to a working Ilum cluster with no customization. Use ilum init followed by ilum install when you need to pick specific modules, configure a custom namespace, choose a release name, or fine-tune the deployment before installing.

Deployment Presets

Deployment presets are curated module bundles designed for common scenarios. Instead of hand-picking individual modules, you select a preset that includes a tested combination of modules and Helm flags appropriate for your use case.

The four built-in presets:

development — Lightweight local setup for development and testing. Modules (7): core, ui, mongodb, postgresql, minio, jupyter, gitea
production — Hardened setup with security, monitoring, and replication. Modules (10): core, ui, mongodb, postgresql, minio, jupyter, gitea, monitoring, loki, openldap
data-engineering — Full data stack with SQL, orchestration, and BI tooling. Modules (12): core, ui, mongodb, postgresql, minio, jupyter, gitea, sql, airflow, superset, trino, hive-metastore
air-gapped — Offline deployment with a private container registry. Modules (7): core, ui, mongodb, postgresql, minio, jupyter, gitea

The production preset also sets global.security.enabled=true and ilum-core.replicaCount=2. The air-gapped preset requires you to provide global.imageRegistry at install time (see Air-Gapped Deployments).

List all available presets:

$ ilum preset list
╭─ Deployment Presets ─────────────────────────────────────────────────────────╮
│ Name Description Modules │
├──────────────────────────────────────────────────────────────────────────────┤
│ development Lightweight local setup for development and ... 7 │
│ production Hardened setup with security, monitoring, an... 10 │
│ data-engineering Full data stack with SQL, orchestration, and... 12 │
│ air-gapped Offline deployment with a private container ... 7 │
╰──────────────────────────────────────────────────────────────────────────────╯

Use a preset with ilum install:

$ ilum install --preset production --yes
╭─ Install Summary ────────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version latest │
│ Preset production │
│ Modules core, gitea, jupyter, loki, minio, mongodb, │
│ monitoring, openldap, postgresql, ui │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

Command: helm install ilum ilum/ilum \
--namespace default \
--timeout 10m \
--atomic \
--set ilum-core.enabled=true \
--set ilum-ui.enabled=true \
--set mongodb.enabled=true \
--set postgresql.enabled=true \
--set minio.enabled=true \
--set ilum-jupyter.enabled=true \
--set gitea.enabled=true \
--set kube-prometheus-stack.enabled=true \
--set global.logAggregation.enabled=true \
--set global.logAggregation.loki.enabled=true \
--set global.logAggregation.promtail.enabled=true \
--set openldap.enabled=true \
--set global.security.enabled=true \
--set ilum-core.replicaCount=2

⠋ Installing Ilum...
✓ Ilum installed successfully (release: ilum).

Use a preset with ilum quickstart:

$ ilum quickstart --preset data-engineering

This deploys Ilum with the full data engineering stack (SQL, Airflow, Superset, Trino, Hive Metastore) in a single command.

Presets in the ilum init wizard. When you run ilum init, the wizard presents a preset selection step before module selection. Choosing a preset pre-populates the module list; choosing "Custom" drops you into the manual module picker:

$ ilum init
...
? Choose a deployment preset:
development — Lightweight local setup for development and testing (7 modules)
production — Hardened setup with security, monitoring, and replication (10 modules)
data-engineering — Full data stack with SQL, orchestration, and BI tooling (12 modules)
air-gapped — Offline deployment with a private container registry (7 modules)
> Custom

Combining presets with extra modules. Presets define a base set of modules. You can layer additional modules on top with the -m flag:

$ ilum install --preset production -m airflow -m superset --yes

This installs the production preset (10 modules) plus Airflow and Superset, for a total of 12 modules. Dependencies are resolved automatically as usual.

Air-gapped preset. The air-gapped preset requires global.imageRegistry to be set to your private registry URL. You must provide it via --set at install time:

$ ilum install --preset air-gapped --set global.imageRegistry=registry.local:5000 --yes

If you omit the --set flag, the CLI reports the placeholder and prompts you to provide the value. See Air-Gapped Deployments for the full air-gapped workflow.

Air-Gapped Deployments

An air-gapped environment is a network that has no outbound internet access. In these environments, you cannot pull container images from public registries or download Helm charts at install time. The ilum airgap command group provides a workflow for exporting everything you need on a connected machine, transferring the bundle to the isolated network, and importing it into a private registry.

The full workflow has five steps:

Step 1: List required images.

On a machine with internet access, use ilum airgap images to discover every container image needed for your module selection. The command runs helm template under the hood and parses all image: references from the rendered manifests.

$ ilum airgap images --preset air-gapped
╭─ Container Images (23) ──────────────────────────────────────────────╮
│ Image Reference │
├──────────────────────────────────────────────────────────────────────┤
│ docker.io/ilumcloud/ilum-core:6.7.0 │
│ docker.io/ilumcloud/ilum-ui:6.7.0 │
│ docker.io/bitnami/mongodb:7.0 │
│ docker.io/bitnami/postgresql:16 │
│ docker.io/minio/minio:latest │
│ docker.io/ilumcloud/ilum-jupyter:6.7.0 │
│ docker.io/gitea/gitea:1.21 │
│ ... │
╰──────────────────────────────────────────────────────────────────────╯

You can also use --module to discover images for a custom module set, or combine --preset with -m for additional modules:

$ ilum airgap images --preset air-gapped -m sql -m airflow

For scripting, use --format plain to get one image reference per line with no table formatting:

$ ilum airgap images --preset air-gapped --format plain
docker.io/ilumcloud/ilum-core:6.7.0
docker.io/ilumcloud/ilum-ui:6.7.0
docker.io/bitnami/mongodb:7.0
...
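For example, the plain output can be piped into a small loop to pre-pull every image on the connected machine (a sketch; pull_all is a hypothetical helper, and ilum and docker are assumed to be on PATH):

```shell
# pull_all reads one image reference per line on stdin and pulls each,
# stopping on the first failure.
pull_all() {
  while read -r image; do
    docker pull "$image" || return 1
  done
}

# Usage (on the connected machine):
#   ilum airgap images --preset air-gapped --format plain | pull_all
```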

Step 2: Export the bundle.

The ilum airgap export command pulls every image, saves each as a Docker tar archive, and writes a manifest.json describing the bundle contents.

$ ilum airgap export /tmp/bundle --preset air-gapped
ℹ Pulling docker.io/ilumcloud/ilum-core:6.7.0...
ℹ Saving docker.io/ilumcloud/ilum-core:6.7.0 → ilumcloud_ilum-core_6.7.0.tar
ℹ Pulling docker.io/ilumcloud/ilum-ui:6.7.0...
ℹ Saving docker.io/ilumcloud/ilum-ui:6.7.0 → ilumcloud_ilum-ui_6.7.0.tar
ℹ Pulling docker.io/bitnami/mongodb:7.0...
ℹ Saving docker.io/bitnami/mongodb:7.0 → bitnami_mongodb_7.0.tar
...
✓ Bundle exported to /tmp/bundle (23 images, 7 modules)

The resulting bundle directory looks like this:

/tmp/bundle/
├── manifest.json
└── images/
├── ilumcloud_ilum-core_6.7.0.tar
├── ilumcloud_ilum-ui_6.7.0.tar
├── bitnami_mongodb_7.0.tar
├── bitnami_postgresql_16.tar
└── ...

Step 3: Transfer the bundle.

Copy the entire /tmp/bundle directory to the air-gapped environment using whatever transfer mechanism your organization allows (USB drive, secure file transfer, sneakernet). The bundle is fully self-contained.

# Example: rsync to a bastion host inside the air-gapped network
$ rsync -avz /tmp/bundle/ bastion.internal:/opt/ilum-bundle/
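Before transferring, it is worth recording checksums so the copy can be verified on the receiving side (a sketch; checksum_bundle and verify_bundle are hypothetical helpers, and sha256sum is assumed to be available on both machines):

```shell
# checksum_bundle records a checksum for every file in the bundle.
checksum_bundle() {
  ( cd "$1" && sha256sum manifest.json images/*.tar > SHA256SUMS )
}

# verify_bundle checks the copied files against the recorded checksums;
# it exits nonzero if anything was corrupted in transit.
verify_bundle() {
  ( cd "$1" && sha256sum -c SHA256SUMS )
}

# Usage:
#   checksum_bundle /tmp/bundle         # before transfer
#   verify_bundle /opt/ilum-bundle      # after transfer
```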

Step 4: Import into the private registry.

On a machine inside the air-gapped network that has Docker and access to your private registry, run ilum airgap import. This loads each tar archive into Docker, re-tags it for your registry, and pushes it.

$ ilum airgap import /tmp/bundle --registry registry.local:5000
ℹ Loading ilumcloud_ilum-core_6.7.0.tar...
ℹ Pushing registry.local:5000/ilumcloud/ilum-core:6.7.0...
✓ docker.io/ilumcloud/ilum-core:6.7.0 → registry.local:5000/ilumcloud/ilum-core:6.7.0
ℹ Loading ilumcloud_ilum-ui_6.7.0.tar...
ℹ Pushing registry.local:5000/ilumcloud/ilum-ui:6.7.0...
✓ docker.io/ilumcloud/ilum-ui:6.7.0 → registry.local:5000/ilumcloud/ilum-ui:6.7.0
ℹ Loading bitnami_mongodb_7.0.tar...
ℹ Pushing registry.local:5000/bitnami/mongodb:7.0...
✓ docker.io/bitnami/mongodb:7.0 → registry.local:5000/bitnami/mongodb:7.0
...
✓ All 23 images imported to registry.local:5000

ℹ To install with the private registry, use:
ℹ ilum install --set global.imageRegistry=registry.local:5000

The --registry flag is required. It specifies the hostname (and optional port) of your private Docker registry. Each image is re-tagged by replacing the original registry prefix with the target registry while preserving the repository path and tag.
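The re-tagging rule can be sketched in plain shell (an illustration of the documented behavior, not the CLI's actual implementation; retag is a hypothetical helper):

```shell
# retag drops the source registry prefix (everything up to the first '/')
# and prepends the target registry, keeping the repository path and tag.
retag() {
  src="$1"; registry="$2"
  echo "${registry}/${src#*/}"
}

retag docker.io/bitnami/mongodb:7.0 registry.local:5000
# → registry.local:5000/bitnami/mongodb:7.0
```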

Step 5: Install Ilum.

With all images available in the private registry, install Ilum using the air-gapped preset and point global.imageRegistry at your registry:

$ ilum install --preset air-gapped --set global.imageRegistry=registry.local:5000 --yes
╭─ Install Summary ────────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version latest │
│ Preset air-gapped │
│ Modules core, gitea, jupyter, minio, mongodb, │
│ postgresql, ui │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

⠋ Installing Ilum...
✓ Ilum installed successfully (release: ilum).

Kubernetes pulls all images from registry.local:5000 instead of the public internet. No outbound connectivity is required.

tip

You can also use the air-gapped workflow with any preset or custom module set -- it is not limited to the air-gapped preset. For example, to prepare an air-gapped bundle for the production preset, run ilum airgap export /tmp/bundle --preset production.

Running ilum init

The ilum init command launches an interactive setup wizard that walks you through five steps: preflight checks, cluster selection, profile settings, module selection, and a confirmation summary.

$ ilum init
╭─ ilum init ──────────────────────────────────────────────╮
│ Welcome to the Ilum CLI Setup Wizard │
│ │
│ This wizard will help you configure your Ilum deployment.│
╰──────────────────────────────────────────────────────────╯

ℹ Checking prerequisites...
✓ helm 3.16.3
✓ kubectl 1.31.2
✓ docker 27.3.1
✓ All prerequisites met.

? Select a Kubernetes context:
k3d-ilum-dev
minikube
arn:aws:eks:eu-central-1:123456:cluster/staging
> >> Create a new local cluster

? Cluster provider: k3d
? Resource preset: dev (CPUs: 4, Memory: 8g)
ℹ Creating k3d cluster 'ilum-dev'...
✓ Cluster 'ilum-dev' created (context: k3d-ilum-dev)

? Profile name: default
? Helm release name: ilum
? Kubernetes namespace: default

? Select modules to enable:
(↑↓ move, space toggle, a toggle all, d defaults, enter confirm)
── CORE ──
◉ core - Ilum backend API (Spring Boot on Spark) [required]
◉ ui - Ilum web frontend (React + Nginx reverse proxy) [required]
○ livy-proxy - Livy-compatible Spark session proxy
── NOTEBOOK ──
◉ jupyter - Jupyter notebook server with Sparkmagic
○ jupyterhub - Multi-user JupyterHub (Kubernetes >= 1.28)
○ zeppelin - Apache Zeppelin notebook server
── INFRASTRUCTURE ──
◉ mongodb - MongoDB document store for metadata [required]
◉ postgresql - PostgreSQL relational database
◉ gitea - Gitea self-hosted Git service
── STORAGE ──
◉ minio - MinIO S3-compatible object storage
...

? Selected 7 modules. What would you like to do?
> Continue with this selection
Reset to defaults
Re-select modules

╭─ Configuration Summary ─────────────────────────────────╮
│ Profile default │
│ Release Name ilum │
│ Kubecontext k3d-ilum-dev │
│ Namespace default │
│ Modules core, ui, mongodb, postgresql, minio │
│ ... +3 more │
╰──────────────────────────────────────────────────────────╯

? Save this configuration? Yes
ℹ Creating new profile 'default'
✓ Configuration saved to /home/user/.config/ilum/config.yaml
✓ Setup complete!
ℹ Active profile: default
ℹ Run 'ilum install' to deploy Ilum.

Automatic prerequisite installation. If any required tool is missing or outdated, the wizard offers to install it before continuing (see Prerequisites):

$ ilum init
╭─ ilum init ──────────────────────────────────────────────╮
│ Welcome to the Ilum CLI Setup Wizard │
╰──────────────────────────────────────────────────────────╯

ℹ Checking prerequisites...
✓ helm 3.16.3
✗ kubectl not found
? Install kubectl now? Yes
ℹ Installing kubectl via direct download...
✓ kubectl installed successfully
✗ docker not found
? Install docker now? Yes
ℹ Installing Docker via official install script (may require sudo)...
ℹ Adding user 'dev' to the docker group...
✓ docker installed successfully
⚠ You may need to log out and back in (or run 'newgrp docker')
for Docker group permissions to take effect.
✓ All prerequisites met.
...

If you decline the installation or it fails, the wizard exits with an error message and a link to the tool's manual installation page.

The wizard enforces that the three required modules -- core, ui, and mongodb -- are always selected. If you pick a module that has dependencies (for example, sql requires postgresql and core), those dependencies are automatically added and reported.

Fast path with defaults:

If you want to skip all prompts and accept every default, pass the --yes flag:

$ ilum init --yes

This uses the current kubeconfig context, creates a profile named default with release name ilum in the default namespace, and enables the default module set (core, ui, mongodb, postgresql, minio, jupyter, gitea).

You can also name the profile explicitly:

$ ilum init --profile staging --yes

Creating a Local Cluster

If you do not already have a Kubernetes cluster, the ilum cluster create command provisions one locally. The default provider is k3d with the Dev preset (4 CPUs, 8 GB memory).

$ ilum cluster create
ℹ Creating k3d cluster 'ilum-dev'...
✓ Cluster 'ilum-dev' created (context: k3d-ilum-dev)
ℹ Kubecontext: k3d-ilum-dev
ℹ Run 'ilum init' or 'ilum install' to deploy Ilum on this cluster.

You can switch the provider and preset:

$ ilum cluster create --provider minikube --preset full
ℹ Creating minikube cluster 'ilum'...
✓ Cluster 'ilum' created (context: ilum)
ℹ Kubecontext: ilum
ℹ Run 'ilum init' or 'ilum install' to deploy Ilum on this cluster.

The two available presets are:

Preset   CPUs   Memory   Default Name
dev      4      8 GB     ilum-dev
full     8      18 GB    ilum

The three supported providers are k3d, minikube, and kind. Each produces a different kubecontext name format: k3d uses k3d-<name>, kind uses kind-<name>, and minikube uses the cluster name directly.
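The naming rule can be sketched as a small shell helper (illustrative only; context_for is a hypothetical function, not part of the CLI):

```shell
# context_for maps a provider and cluster name to the resulting
# kubecontext name, per the rules described above.
context_for() {
  case "$1" in
    k3d)      echo "k3d-$2" ;;
    kind)     echo "kind-$2" ;;
    minikube) echo "$2" ;;
    *)        echo "unknown provider: $1" >&2; return 1 ;;
  esac
}

context_for k3d ilum-dev   # → k3d-ilum-dev
context_for minikube ilum  # → ilum
```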

To see your tracked clusters:

$ ilum cluster list
╭─ Local Clusters ─────────────────────────────────────────────────────╮
│ Name Provider Context Created │
├──────────────────────────────────────────────────────────────────────┤
│ ilum-dev k3d k3d-ilum-dev 2026-02-14T09:31:42 │
╰──────────────────────────────────────────────────────────────────────╯

Installing Ilum

With a cluster running and a profile configured, run ilum install to deploy the Ilum platform via Helm. The command reads your active profile for the release name, namespace, and context, then presents a summary for confirmation.

$ ilum install --yes
╭─ Install Summary ────────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version latest │
│ Modules core, gitea, jupyter, minio, mongodb, │
│ postgresql, ui │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

Command: helm install ilum ilum/ilum \
--namespace default \
--timeout 10m \
--atomic \
--set ilum-core.enabled=true \
--set ilum-ui.enabled=true \
--set mongodb.enabled=true \
--set postgresql.enabled=true \
--set minio.enabled=true \
--set ilum-jupyter.enabled=true \
--set gitea.enabled=true

⠋ Installing Ilum...
✓ Ilum installed successfully (release: ilum).

Dry-run mode lets you preview the full Helm command without executing it:

$ ilum install --dry-run
╭─ Install Summary ────────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version latest │
│ Modules core, gitea, jupyter, minio, mongodb, │
│ postgresql, ui │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

Command: helm install ilum ilum/ilum \
--namespace default \
--timeout 10m \
--atomic \
--set ilum-core.enabled=true \
--set ilum-ui.enabled=true \
--set mongodb.enabled=true \
--set postgresql.enabled=true \
--set minio.enabled=true \
--set ilum-jupyter.enabled=true \
--set gitea.enabled=true

ℹ Dry-run mode — no changes applied.

Enable specific modules at install time with the -m flag (repeatable):

$ ilum install -m sql -m airflow --yes
╭─ Install Summary ────────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version latest │
│ Modules sql, airflow │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

Command: helm install ilum ilum/ilum \
--namespace default \
--timeout 10m \
--atomic \
--set ilum-sql.enabled=true \
--set ilum-core.sql.enabled=true \
--set airflow.enabled=true \
--set postgresql.enabled=true \
--set ilum-core.enabled=true

⠋ Installing Ilum...
✓ Ilum installed successfully (release: ilum).

Notice that the SQL module's dependencies (postgresql and core) are automatically resolved and included.

Other useful install flags include --version to pin a chart version, --values/-f to supply a custom values file, --set for individual Helm overrides, --namespace/-n to target a specific namespace, --release/-r to set the Helm release name, and --timeout to adjust the Helm timeout (default: 10m). See Command Reference for the complete flag listing.
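For instance, a custom values file passed via -f might look like the following (the keys mirror the --set flags shown in the install output above, but the exact chart structure is an assumption):

```shell
# Write a hypothetical override file for 'ilum install -f values-custom.yaml'
cat > values-custom.yaml <<'EOF'
ilum-core:
  replicaCount: 2
ilum-jupyter:
  enabled: true
EOF
# Then: ilum install -f values-custom.yaml --yes
```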

Checking Status

The ilum status command shows three panels: Release Info, Enabled Modules, and Pod Status.

$ ilum status
╭─ Release Info ───────────────────────────────────────────╮
│ Release: ilum │
│ Namespace: default │
│ Status: deployed │
│ Chart: ilum-6.7.0 │
│ Revision: 1 │
│ Deployed: 2026-02-14 09:35:12 │
╰──────────────────────────────────────────────────────────╯

╭─ Enabled Modules (Live) ────────────────────────────────╮
│ Module │
├──────────────────────────────────────────────────────────┤
│ core │
│ gitea │
│ jupyter │
│ minio │
│ mongodb │
│ postgresql │
│ ui │
╰──────────────────────────────────────────────────────────╯

╭─ Pod Status ─────────────────────────────────────────────────────────╮
│ Name Phase Ready Restarts │
├──────────────────────────────────────────────────────────────────────┤
│ ilum-core-6f8b4c7d9-xk2pl Running ✓ 0 │
│ ilum-ui-5c9d3a1b8-mn4qr Running ✓ 0 │
│ ilum-mongodb-0 Running ✓ 0 │
│ ilum-postgresql-0 Running ✓ 0 │
│ ilum-minio-7b2e5f9a1-jt8ws Running ✓ 0 │
│ ilum-jupyter-4d6c8e0f2-rv3hp Running ✓ 0 │
│ ilum-gitea-0 Running ✓ 0 │
╰──────────────────────────────────────────────────────────────────────╯

The status command also performs config drift detection. If the modules tracked in your local CLI config diverge from what is actually enabled on the cluster (for example, someone ran a manual helm upgrade), you see warnings like:

⚠ Config drift: 'airflow' enabled on cluster but not tracked in config

You can suppress sections with flags:

$ ilum status --no-pods       # Hide pod readiness table
$ ilum status --no-modules # Hide the enabled modules panel

Wait for all pods to be ready with --wait:

The --wait flag polls pod readiness until all pods reach the Running state (or a timeout is hit). This is useful in CI/CD pipelines where you need to gate subsequent steps on a fully-ready cluster:

$ ilum status --wait
⠋ Waiting for pods to be ready...
Pods: 5/7 ready
Pods: 7/7 ready
✓ All pods are ready.

╭─ Pod Status ─────────────────────────────────────────────────────────╮
│ Name Phase Ready Restarts │
├──────────────────────────────────────────────────────────────────────┤
│ ilum-core-6f8b4c7d9-xk2pl Running ✓ 0 │
│ ilum-ui-5c9d3a1b8-mn4qr Running ✓ 0 │
│ ... │
╰──────────────────────────────────────────────────────────────────────╯

The default timeout is 300 seconds (5 minutes). Override it with --wait-timeout:

$ ilum status --wait --wait-timeout 600

If pods do not become ready within the timeout, the command exits with code 1.
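Because the exit code reflects readiness, a CI pipeline can gate on it directly (a sketch; check_ready is a hypothetical wrapper around any readiness command):

```shell
# check_ready runs a readiness command and converts its exit status into
# a clear pipeline message; returning 1 fails the CI step on timeout.
check_ready() {
  if "$@"; then
    echo "cluster ready"
  else
    echo "cluster not ready within timeout" >&2
    return 1
  fi
}

# Usage in CI:
#   check_ready ilum status --wait --wait-timeout 600
```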

Show recent Kubernetes events with --events:

The --events flag appends an Events panel showing recent cluster events for the namespace. This surfaces scheduling failures, image pull errors, and resource issues without requiring a separate kubectl get events call:

$ ilum status --events
...
╭─ Events ─────────────────────────────────────────────────────────────────────────╮
│ Type Reason Object Message Count Age │
├──────────────────────────────────────────────────────────────────────────────────┤
│ Normal Scheduled pod/ilum-core-6f8b... Successfully 1 3m │
│ Normal Pulled pod/ilum-core-6f8b... Container im 1 3m │
│ Warning BackOff pod/ilum-airflow-wo... Back-off res 4 1m │
╰──────────────────────────────────────────────────────────────────────────────────╯

Filter events by type or age:

$ ilum status --events --events-type Warning
$ ilum status --events --events-since 30m
$ ilum status --events --events-type Warning --events-since 1h

Accessing the UI

The Ilum web UI is exposed by default on NodePort 31777. After installation, open it in your browser:

http://localhost:31777

If you are running on a remote cluster, replace localhost with the node IP address. You can find the node address with:

$ kubectl get nodes -o wide
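One way to extract a node address and build the UI URL in a script (a sketch using kubectl's JSONPath output; ui_url is a hypothetical helper, and it assumes the first node's InternalIP is reachable from your machine):

```shell
# ui_url prints the web UI address for the first node's InternalIP,
# using the default NodePort 31777.
ui_url() {
  ip=$(kubectl get nodes \
    -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
  echo "http://${ip}:31777"
}

# Usage:
#   ui_url
```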
note

If NodePort 31777 is already in use by another service on the cluster, the ilum install command detects the conflict automatically. In interactive mode it prompts you to accept a suggested free port or enter a custom one. In non-interactive mode (--yes) it auto-assigns the next available port in the 30000-32767 range. You can also override the port explicitly with --set ilum-ui.service.nodePort=31780.

Running Doctor

The ilum doctor command runs a suite of 13 health checks against your environment and cluster. It validates tool versions, Helm repo configuration, cluster connectivity, namespace existence, pod health, PVC status, RBAC permissions, release state, Kubernetes version compatibility, service endpoints, and health endpoints.

All checks passing:

$ ilum doctor
╭─ ilum doctor ────────────────────────────────────────────────────────╮
│ Status Check Message │
├──────────────────────────────────────────────────────────────────────┤
│ ✓ helm helm 3.16.3 │
│ ✓ kubectl kubectl 1.31.2 │
│ ✓ docker docker 27.3.1 │
│ ✓ helm-repo ilum Helm repo configured │
│ ✓ cluster Connected to Kubernetes v1.31.2 │
│ ✓ namespace Namespace 'default' exists │
│ ✓ pods All pods are healthy │
│ ✓ pvcs 7 PVCs all bound │
│ ✓ rbac Required RBAC permissions granted │
│ ✓ release Release 'ilum' status: deployed │
│ ✓ compatibility Kubernetes v1.31.2 is compatible │
│ ✓ service-endpoints All services have endpoints │
│ ✓ health-endpoints 7 health endpoints responding │
╰──────────────────────────────────────────────────────────────────────╯

A failure example with suggestion:

$ ilum doctor
╭─ ilum doctor ────────────────────────────────────────────────────────╮
│ Status Check Message │
├──────────────────────────────────────────────────────────────────────┤
│ ✓ helm helm 3.16.3 │
│ ✗ kubectl kubectl 1.26.5 < required 1.28 │
│ ✓ docker docker 27.3.1 │
│ ✓ helm-repo ilum Helm repo configured │
│ ✗ cluster Cannot connect to Kubernetes cluster │
│ – namespace Skipped (cluster unreachable) │
│ – pods Skipped (cluster unreachable) │
│ – pvcs Skipped (cluster unreachable) │
│ – rbac Skipped (cluster unreachable) │
│ – release Skipped (cluster unreachable) │
│ – compatibility Skipped (cluster unreachable) │
│ – service-endpoints Skipped (cluster unreachable) │
│ – health-endpoints Skipped (cluster unreachable) │
╰──────────────────────────────────────────────────────────────────────╯
kubectl: Upgrade kubectl to at least 1.28
cluster: Check your kubeconfig: kubectl cluster-info

When the cluster connectivity check fails, all cluster-dependent checks are automatically skipped to avoid cascading errors.

You can also run a single check by name:

$ ilum doctor --check pods
╭─ ilum doctor ────────────────────────────────────────────────────────╮
│ Status Check Message │
├──────────────────────────────────────────────────────────────────────┤
│ ! pods Unhealthy pods: ilum-airflow-worker-0 │
╰──────────────────────────────────────────────────────────────────────╯
pods: Check pod status: kubectl get pods

The available check names are: helm, kubectl, docker, helm-repo, cluster, namespace, pods, pvcs, rbac, release, compatibility, service-endpoints, and health-endpoints.