I'm trying to install Kubeflow with Minikube on an Ubuntu 20.04 desktop. Following the official docs, I ran kubectl create -f bootstrapper.yaml and got the following errors:
Error from server (AlreadyExists): error when creating "bootstrapper.yaml": namespaces "kubeflow-admin" already exists
Error from server (AlreadyExists): error when creating "bootstrapper.yaml": persistentvolumeclaims "kubeflow-ksonnet-pvc" already exists
[unable to recognize "bootstrapper.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1",
unable to recognize "bootstrapper.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"]
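Since the namespace and the PVC already existed, I deleted them first; roughly:

# remove the leftover PVC (it lives in the kubeflow-admin namespace), then the namespace itself
kubectl delete pvc kubeflow-ksonnet-pvc -n kubeflow-admin
kubectl delete namespace kubeflow-admin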
Then I ran the same command again, but this time I got version-related errors:
namespace/kubeflow-admin created
persistentvolumeclaim/kubeflow-ksonnet-pvc created
unable to recognize "bootstrapper.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "bootstrapper.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
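I take the "no matches for kind ... in version ..." messages to mean that my cluster no longer serves those beta API groups. The versions it does serve can be listed with standard kubectl commands (the grep filters are just for readability):

kubectl api-versions | grep -E 'rbac|apps'                           # group/versions served by the API server
kubectl api-resources | grep -iE 'clusterrolebinding|statefulset'    # which group/version each kind belongs to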
So, referring to the official docs, I changed the apiVersion of the ClusterRoleBinding and the StatefulSet to the v1 variants.
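The only lines I edited were the two apiVersion fields, everything else stayed the same:

apiVersion: rbac.authorization.k8s.io/v1    # ClusterRoleBinding (was rbac.authorization.k8s.io/v1beta1)
apiVersion: apps/v1                         # StatefulSet (was apps/v1beta2)

With that change, kubectl create -f bootstrapper.yaml got further, but hit a different error: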
persistentvolumeclaim/kubeflow-ksonnet-pvc created
statefulset.apps/kubeflow-bootstrapper created
Error from server (AlreadyExists): error when creating "bootstrapper.yaml": clusterrolebindings.rbac.authorization.k8s.io "kubeflow-cluster-admin" already exists
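The binding named in the error already existed in the cluster, so I deleted it (the name is taken straight from the error message):

kubectl delete clusterrolebinding kubeflow-cluster-admin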
Then I ran kubectl create -f bootstrapper.yaml once more, and this time everything was created:
namespace/kubeflow-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-cluster-admin created
persistentvolumeclaim/kubeflow-ksonnet-pvc created
statefulset.apps/kubeflow-bootstrapper created
Afterwards I checked the namespaces with kubectl get ns:
NAME                   STATUS   AGE
default                Active   8h
kube-node-lease        Active   8h
kube-public            Active   8h
kube-system            Active   8h
kubeflow-admin         Active   60s
kubernetes-dashboard   Active   8h
However, kubectl -n kubeflow get svc only returns "No resources found in kubeflow namespace." There are no services at all.
I've already gone through the related posts and waited quite a while, but nothing ever showed up.
I also checked the local images with docker images, and the gcr.io/kubeflow-images-public/bootstrapper:v0.2.0 image is not there. Judging from that, the bootstrap itself seems to have failed.
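I assume the next thing to look at is the bootstrapper pod that the StatefulSet runs in the kubeflow-admin namespace; its status and logs should be visible with something like the following (kubeflow-bootstrapper-0 is my guess at the pod name, based on the usual StatefulSet naming):

kubectl -n kubeflow-admin get pods                                  # is the pod Running, Pending, or ImagePullBackOff?
kubectl -n kubeflow-admin describe pod kubeflow-bootstrapper-0      # events, e.g. image pull or PVC binding problems
kubectl -n kubeflow-admin logs kubeflow-bootstrapper-0              # bootstrapper output, if the container started

For reference, here is the bootstrapper.yaml I started from (before the apiVersion edits):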
---
# Namespace for bootstrapper
apiVersion: v1
kind: Namespace
metadata:
  name: kubeflow-admin
---
# Make kubeflow-admin admin
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubeflow-cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kubeflow-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Store ksonnet apps
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubeflow-ksonnet-pvc
  namespace: kubeflow-admin
  labels:
    app: kubeflow-ksonnet
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: kubeflow-bootstrapper
  namespace: kubeflow-admin
spec:
  selector:
    matchLabels:
      app: kubeflow-bootstrapper
  serviceName: kubeflow-bootstrapper
  template:
    metadata:
      name: kubeflow-bootstrapper
      labels:
        app: kubeflow-bootstrapper
    spec:
      containers:
      - name: kubeflow-bootstrapper
        image: gcr.io/kubeflow-images-public/bootstrapper:v0.2.0
        workingDir: /opt/bootstrap
        command: [ "/opt/kubeflow/bootstrapper"]
        args: [
          "--in-cluster",
          "--namespace=kubeflow",
          "--apply",
          # change config here if you want to use customized config.
          # "--config=/opt/kubeflow/default.yaml"
          # app-dir: path to store your ks apps in pod's PersistentVolume
          "--app-dir=/opt/bootstrap/default"
        ]
        volumeMounts:
        - name: kubeflow-ksonnet-pvc
          mountPath: /opt/bootstrap
      volumes:
      - name: kubeflow-ksonnet-pvc
        persistentVolumeClaim:
          claimName: kubeflow-ksonnet-pvc