Configure the Ceph Cluster

All of the following commands are run on the Ceph server.

Create and initialize the RBD pool:

docker exec -it ceph-mon ceph osd pool create kubernetes
docker exec -it ceph-mon rbd pool init kubernetes
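
Optionally, confirm that the pool now exists (a quick sanity check, not part of the original procedure):

docker exec -it ceph-mon ceph osd pool ls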

Set up client authentication:

docker exec -it ceph-mon ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
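
The command prints the newly created key. The output should look similar to the following (this is the key used in the csi-rbd-secret later in this guide):

[client.kubernetes]
    key = AQAThd5hqkTvJBAArvGCELrYwLPW9FBc9jfCBg==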

Get the Ceph cluster ID (fsid):

docker exec -it ceph-mon ceph mon dump
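
The clusterID used in the manifests below is the fsid line of this dump, and the monitor address comes from the mon entries. For example (illustrative output matching the values used throughout this guide):

docker exec -it ceph-mon ceph mon dump | grep fsid
fsid 96b1ddaf-887a-4915-b155-28b9a341feab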

Configure the Kubernetes Cluster

All of the following commands are run on the Kubernetes master node.

Create the ceph-csi ConfigMap manifest:

cat <<EOF > csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "96b1ddaf-887a-4915-b155-28b9a341feab",
        "monitors": [
          "172.22.66.195:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF

Note: the clusterID in the manifest above is the fsid of the Ceph cluster obtained with ceph mon dump!

Apply the ceph-csi ConfigMap:

kubectl apply -f csi-config-map.yaml

Create the KMS ConfigMap manifest:

cat <<EOF > csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
EOF

Apply the KMS ConfigMap:

kubectl apply -f csi-kms-config-map.yaml

Create the Ceph authentication ConfigMap manifest:

cat <<EOF > ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
EOF

Apply the Ceph authentication ConfigMap:

kubectl apply -f ceph-config-map.yaml
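
At this point all three ConfigMaps should exist. A quick check (illustrative output; ages will differ):

kubectl get configmap ceph-csi-config ceph-csi-encryption-kms-config ceph-config

NAME                             DATA   AGE
ceph-csi-config                  1      2m
ceph-csi-encryption-kms-config   1      1m
ceph-config                      2      10s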

Create the Secret with the cephx key:

cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQAThd5hqkTvJBAArvGCELrYwLPW9FBc9jfCBg==
EOF

Note: the userKey in the manifest above is the client authentication key created for client.kubernetes on the Ceph cluster!

Apply the cephx Secret:

kubectl apply -f csi-rbd-secret.yaml
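
Verify that the Secret was created (illustrative output):

kubectl get secret csi-rbd-secret

NAME             TYPE     DATA   AGE
csi-rbd-secret   Opaque   2      5s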

Set up RBAC for the ceph-csi plugin:

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
kubectl apply -f csi-provisioner-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
kubectl apply -f csi-nodeplugin-rbac.yaml

Deploy the ceph-csi provisioner and node plugin:

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
kubectl apply -f csi-rbdplugin.yaml
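
Before moving on, make sure the ceph-csi pods come up; every csi-rbdplugin-provisioner and csi-rbdplugin pod should reach the Running state (pod name suffixes vary, and the DaemonSet creates one csi-rbdplugin pod per node):

kubectl get pods | grep csi-rbdplugin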

Create the StorageClass:

cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 96b1ddaf-887a-4915-b155-28b9a341feab
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF

Note: the clusterID in the manifest above is, again, the fsid of the Ceph cluster!

Apply the StorageClass:

kubectl apply -f csi-rbd-sc.yaml

Create a PVC for a raw block device:

cat <<EOF > raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF

Apply the raw block device PVC:

kubectl apply -f raw-block-pvc.yaml

Create a Pod that uses the raw block device:

cat <<EOF > raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
EOF

Apply the raw block device Pod:

kubectl apply -f raw-block-pod.yaml
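
Once the pod is Running, the RBD image should appear inside the container as the raw block device configured above (a quick check, not part of the original procedure):

kubectl exec -it pod-with-raw-block-volume -- ls -l /dev/xvda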

Create a filesystem-based PVC:

cat <<EOF > pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF

Apply the filesystem-based PVC:

kubectl apply -f pvc.yaml

Create a Pod that uses the filesystem-based PVC:

cat <<EOF > pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
EOF

Apply the filesystem-based Pod:

kubectl apply -f pod.yaml
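
Once the pod is Running, the RBD-backed filesystem should be mounted at the path configured above (a quick check, not part of the original procedure):

kubectl exec -it csi-rbd-demo-pod -- df -h /var/lib/www/html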

Check the StorageClass:

kubectl get storageclass

Check the PVCs:

kubectl get pvc
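
Both claims should show a STATUS of Bound, for example (illustrative output; volume names and ages will differ):

NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWO            csi-rbd-sc     5m
rbd-pvc         Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWO            csi-rbd-sc     2m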

As you can see, the PVCs have successfully been allocated storage from Ceph!
