Everything in Rancher 2.6 is a Kubernetes resource. For example, downstream Kubernetes clusters are themselves resources, so you can run:
kubectl get clusters -A
and you'll see the clusters that Rancher manages:
NAMESPACE       NAME                READY   KUBECONFIG
fleet-default   hci00               true    hci00-kubeconfig
fleet-default   svcs-internal-k3s   true    svcs-internal-k3s-kubeconfig
fleet-local     local               true    local-kubeconfig
This extends to Rancher management resources like node drivers, which control provisioning and templating of Kubernetes cluster nodes on various IaaS providers like AWS and Linode, or on-prem solutions like OpenStack and VMware.
You can also import drivers for custom backends not included with Rancher by default. Such a driver is typically a Docker Machine driver plus a UI component (forms for configuring the driver's connection to a backend: things like an endpoint URL and credentials). This is what I'd like to demonstrate: in addition to configuring drivers through the Rancher UI, you can manage these resources as YAML, which can then be generated dynamically or applied as static files.
For example, let's look at a NodeDriver YAML definition:
apiVersion: management.cattle.io/v3
kind: NodeDriver
metadata:
  annotations:
    lifecycle.cattle.io/create.node-driver-controller: "true"
    privateCredentialFields: password
    publicCredentialFields: endpoint,username,port
  name: nutanix-3.0.0
spec:
  active: false
  addCloudCredential: false
  builtin: false
  checksum: e8f4f2e7ae7e927534884b5a3a45a38a5bd2c2872de1d65375f6e009bed75dba
  description: ""
  displayName: nutanix
  externalId: ""
  uiUrl: https://nutanix.github.io/rancher-ui-driver/v3.0.0/component.js
  url: https://github.com/nutanix/docker-machine/releases/download/v3.0.0/docker-machine-driver-nutanix_v3.0.0_linux
  whitelistDomains:
  - nutanix.github.io
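The long `checksum` value above is the SHA-256 digest of the driver binary, which Rancher uses to verify the download from `url`. You can produce it with `sha256sum`; the sketch below runs against a stand-in file, but in practice you'd point it at the downloaded `docker-machine-driver-nutanix_v3.0.0_linux` release asset:

```shell
# Stand-in for the downloaded driver binary; in practice this would be
# the release asset fetched from the `url` field above
printf 'hello' > driver-binary

# The digest is what goes into the NodeDriver spec's `checksum` field
sha256sum driver-binary | awk '{print $1}'
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```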
In Rancher 2.6.4, Nutanix's node driver version 3.1.0 is built in, but let's say I'm on an earlier 2.6 release of Rancher, or I want to use an earlier version of the driver (3.0.0, for example). I could use the UI and plug in the above values, but if I want to define the driver in YAML and have it applied to subsequent Rancher clusters, or make it part of a playbook for configuring a Rancher installation, this manifest lets me do that.
Now, if I wanted to manage this as part of a Helm chart, for example, I might do something like this:
checksum: {{ .Values.driver_checksum }}
uiUrl: https://nutanix.github.io/rancher-ui-driver/{{ .Values.driver_version }}/component.js
url: https://github.com/nutanix/docker-machine/releases/download/{{ .Values.driver_version }}/docker-machine-driver-nutanix_{{ .Values.driver_version }}_linux
in the above block as part of a chart template. Or, if these manifests are generated via Terraform, you might use a template file like:
checksum: ${DRIVER_CHECKSUM}
uiUrl: https://nutanix.github.io/rancher-ui-driver/${DRIVER_VERSION}/component.js
url: https://github.com/nutanix/docker-machine/releases/download/${DRIVER_VERSION}/docker-machine-driver-nutanix_${DRIVER_VERSION}_linux
and then in my Terraform plan:
data "template_file" "controller-peer" {
  template = file("${path.module}/templates/node-drivers.yaml.tpl")
  vars = {
    DRIVER_VERSION  = "v3.0.0"
    DRIVER_CHECKSUM = "e8f4f2e7ae7e927534884b5a3a45a38a5bd2c2872de1d65375f6e009bed75dba" # Normally these values should be in TF_VARs
  }
}
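As an aside, on Terraform 0.12+ the same rendering can be done with the built-in `templatefile()` function, which avoids the now-archived `hashicorp/template` provider. A sketch, using the same template file and placeholder values as above:

```hcl
# templatefile() renders the template in-process; no provider needed.
# The variable names match the ${...} references in the .tpl file.
locals {
  node_driver_yaml = templatefile("${path.module}/templates/node-drivers.yaml.tpl", {
    DRIVER_VERSION  = "v3.0.0"
    DRIVER_CHECKSUM = "e8f4f2e7ae7e927534884b5a3a45a38a5bd2c2872de1d65375f6e009bed75dba"
  })
}
```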
and then apply the rendered result via some other mechanism (the Kubernetes Terraform provider, a GitOps flow, etc.).
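For instance, a minimal sketch using the Kubernetes Terraform provider's `kubernetes_manifest` resource (assuming the provider is already configured against the Rancher management cluster, and reusing the `template_file` data source from above):

```hcl
resource "kubernetes_manifest" "nutanix_node_driver" {
  # Decode the rendered YAML into the object form kubernetes_manifest expects
  manifest = yamldecode(data.template_file.controller-peer.rendered)
}
```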