Spec Templates & Building k8s clusters

Hi,

Does anyone have an idea about what's wrong? I am building my own cluster layouts and want to add a spec file to a k8s cluster.

However, the spec file fails to deploy. A regular kubectl apply on the same file works fine, but Morpheus won't deploy it.

It doesn’t look that exotic compared to the other spec files in the library (especially the Prometheus 0.9 operator node exporter v1 spec file).

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.5.25.60
            - name: NFS_PATH
              value: /srv/nfs/kubedata/morpheus
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.5.25.60
            path: /srv/nfs/kubedata/morpheus

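For reference, here is a rough stdlib-only way to sanity-check the multi-document structure of a spec like the one above (this is a sketch, not a real YAML parser; it only checks that each document carries a top-level kind and apiVersion):

```python
import re

def lint_multidoc(text):
    """Split a multi-document YAML string on standalone '---' lines and
    report any document missing a top-level kind or apiVersion."""
    problems = []
    docs = [d for d in re.split(r'(?m)^---\s*$', text) if d.strip()]
    for i, doc in enumerate(docs, start=1):
        # Collect top-level keys: lines with no leading indent that contain ':'
        top_keys = {line.split(':', 1)[0] for line in doc.splitlines()
                    if line and not line[0].isspace() and ':' in line}
        for required in ('kind', 'apiVersion'):
            if required not in top_keys:
                problems.append('document %d: missing %s' % (i, required))
    return problems

# Two of the documents from the spec above, as a quick self-check
spec = """\
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
"""
print(lint_multidoc(spec))  # → []
```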
Morpheus errors out with a NullPointerException:

2022-05-10_21:56:39.24637 [2022-05-10 21:56:39,315] [appJobHigh-13] ERROR c.m.h.KubernetesHostService - applyTemplate error: java.lang.NullPointerException: Cannot get property 'template' on null object
2022-05-10_21:56:39.31557 java.lang.NullPointerException: Cannot get property 'template' on null object

I appreciate you sharing the code excerpt. It gives me something to work from and test. Just so I’m clear: you are placing this entirely into a single spec and then trying to deploy, correct?

Yes, I am placing this code in one spec file. I did also test it split into three parts, and the issue remains the same.