Adding a Worker Node with Kubespray and Deploying OpenEBS lvm-localpv

SuKai November 29, 2024

  1. Add the new host to inventory.ini

  2. Run the Ansible playbooks

  3. Taint the new node so that only the intended workloads are scheduled on it

  4. Configure Rook Ceph to install the Ceph RBD CSI plugin on the new node

  5. Deploy OpenEBS lvm-localpv to provision local storage

  1. Add the new host to inventory.ini

sukai@r1-m54:~/kubespray-2.26.0/inventory/cluster$ more inventory.ini
[all]
r4-w58 ansible_host=19.18.136.68 ip=19.18.136.68

[kube_node]
r4-w58
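
For reference, the snippet above shows only the entry that was added; a full Kubespray inventory also lists the existing control plane and etcd hosts. A minimal sketch, assuming a single control-plane node (only r4-w58 and r1-m54 with their IPs come from this article, the group names follow the standard Kubespray layout):

[all]
r1-m54 ansible_host=19.18.136.34 ip=19.18.136.34
r4-w58 ansible_host=19.18.136.68 ip=19.18.136.68

[kube_control_plane]
r1-m54

[etcd]
r1-m54

[kube_node]
r4-w58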
  2. Run the Ansible playbooks
sukai@r1-m54:~/kubespray-2.26.0$ ansible-playbook -i inventory/cluster/inventory.ini --become playbooks/facts.yml
PLAY RECAP ********************************************************************************************************************************************************************************************
r1-m54             : ok=12   changed=0    unreachable=0    failed=0    skipped=14   rescued=0    ignored=0
r4-w58             : ok=12   changed=2    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0

sukai@r1-m54:~/kubespray-2.26.0$ ansible-playbook -i inventory/cluster/inventory.ini --become scale.yml --limit=r4-w58

PLAY RECAP ********************************************************************************************************************************************************************************************
r4-w58             : ok=372  changed=38   unreachable=0    failed=0    skipped=635  rescued=0    ignored=1


sukai@r1-m54:~$ kubectl get nodes -o wide
NAME              STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
r1-m54    Ready    control-plane   74d   v1.30.4   19.18.136.34    <none>        Ubuntu 24.04 LTS     6.8.0-44-generic   containerd://1.7.21

r4-w58    Ready    <none>          16h   v1.30.4   19.18.136.68    <none>        Ubuntu 24.04.1 LTS   6.8.0-49-generic   containerd://1.7.21

sukai@r1-m54:~$
  3. Taint the new node so that only the intended workloads are scheduled on it
kubectl taint nodes r4-w58 storage-node/database=with_ssd:NoSchedule
kubectl label nodes r4-w58 node.kubernetes.io/storage=lvm
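
With this taint and label in place, only pods that explicitly tolerate storage-node/database=with_ssd can land on r4-w58, and workloads meant for the node can target it via the node.kubernetes.io/storage=lvm label. As an illustration (the pod name and image are hypothetical), such a workload would carry a matching toleration and node selector:

apiVersion: v1
kind: Pod
metadata:
  name: demo-db             # hypothetical example pod
spec:
  nodeSelector:
    node.kubernetes.io/storage: lvm
  tolerations:
  - key: "storage-node/database"
    operator: "Equal"
    value: "with_ssd"
    effect: "NoSchedule"
  containers:
  - name: app
    image: busybox          # placeholder image
    command: ["sleep", "3600"]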
  4. Configure Rook Ceph to install the Ceph RBD CSI plugin on the new node so that Ceph storage can also be used there
sukai@r1-m54:~$ kubectl -n rook-ceph edit ds csi-rbdplugin
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "3"
  creationTimestamp: "2024-09-19T02:42:31Z"
  generation: 3
  name: csi-rbdplugin
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: false
    controller: true
    kind: Deployment
    name: rook-ceph-operator
    uid: 445a6422-d80e-44a9-b3fd-ad87cb9d88db
  resourceVersion: "62296852"
  uid: 3d485e59-f737-4f5d-b3ef-325da2d902fb
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: csi-rbdplugin
        contains: csi-rbdplugin-metrics
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node.kubernetes.io/storage
                operator: In
                values:
                - rook
                - lvm
      tolerations:
      - effect: NoSchedule
        key: storage-node/database
        operator: Equal
        value: with_ssd
        
sukai@r4-w58:~$ sudo ls /var/lib/kubelet/plugins
kubernetes.io  rook-ceph.rbd.csi.ceph.com
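
Note that editing the csi-rbdplugin DaemonSet directly may be reverted the next time the Rook operator reconciles its CSI resources. A more durable option is to set the same scheduling rules through the rook-ceph-operator-config ConfigMap; a rough sketch using Rook's CSI plugin settings (the exact affinity expression syntax should be checked against the Rook documentation for the installed version):

kubectl -n rook-ceph edit configmap rook-ceph-operator-config

data:
  CSI_PLUGIN_NODE_AFFINITY: "node.kubernetes.io/storage=rook,lvm"
  CSI_PLUGIN_TOLERATIONS: |
    - key: storage-node/database
      operator: Equal
      value: with_ssd
      effect: NoSchedule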
  5. Deploy OpenEBS lvm-localpv to provision local storage
// Inspect the existing volume groups and block devices
sukai@r1-m54:~/lvm-localpv$ for ip in {58..65};do ssh 19.18.136.$ip "sudo vgs";done
  VG        #PV #LV #SN Attr   VSize    VFree
  ubuntu-vg   1   1   0 wz--n- <892.25g <792.25g

sukai@r1-m54:~/lvm-localpv$ for ip in {58..65};do ssh 19.18.136.$ip "sudo lsblk";done
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0 894.3G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0 892.3G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   100G  0 lvm  /
sdb                         8:16   0 894.3G  0 disk

// Create a physical volume and a volume group for lvm-localpv
sukai@r1-m54:~/lvm-localpv$ for ip in {58..65};do ssh 19.18.136.$ip "sudo pvcreate /dev/sdb";done
  Physical volume "/dev/sdb" successfully created.

sukai@r1-m54:~/lvm-localpv$
sukai@r1-m54:~/lvm-localpv$ for ip in {58..65};do ssh 19.18.136.$ip "sudo vgcreate lvmvg /dev/sdb";done
  Volume group "lvmvg" successfully created

sukai@r4-w58:~$ sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  lvmvg       1   0   0 wz--n-  894.25g  894.25g
  ubuntu-vg   1   1   0 wz--n- <892.25g <792.25g
sukai@r4-w58:~$


// Adjust the Helm chart values: the volumeSnapshots CRDs conflict with the ones already installed by Rook Ceph, so disable them
sukai@r1-m54:~/lvm-localpv$ vi values.yaml
imagePullSecrets:
storageCapacity: true
rbac:
  pspEnabled: false
lvmNode:
  componentName: openebs-lvm-node
  driverRegistrar:
    name: "csi-node-driver-registrar"
    image:
      registry: reg.nscloud.com:7443/
      repository: sig-storage/csi-node-driver-registrar
      pullPolicy: IfNotPresent
      tag: v2.8.0
  updateStrategy:
    type: RollingUpdate
  annotations: {}
  podAnnotations: {}
  kubeletDir: "/var/lib/kubelet/"
  resources: {}
  podLabels:
    app: openebs-lvm-node
  nodeSelector:
    node.kubernetes.io/storage: lvm
  tolerations:
  - key: "storage-node/database"
    operator: "Equal"
    value: "with_ssd"
    effect: "NoSchedule"
  securityContext: {}
  labels: {}
  priorityClass:
    create: true
    name: lvm-localpv-csi-node-critical
  logLevel: 5
  kubeClientRateLimiter:
    qps: 0
    burst: 0
  hostNetwork: false
lvmController:
  componentName: openebs-lvm-controller
  replicas: 1
  logLevel: 5
  resizer:
    name: "csi-resizer"
    image:
      registry: reg.nscloud.com:7443/
      repository: sig-storage/csi-resizer
      pullPolicy: IfNotPresent
      tag: v1.8.0
  snapshotter:
    name: "csi-snapshotter"
    image:
      registry: reg.nscloud.com:7443/
      repository: sig-storage/csi-snapshotter
      pullPolicy: IfNotPresent
      tag: v6.2.2
  snapshotController:
    name: "snapshot-controller"
    image:
      registry: reg.nscloud.com:7443/
      repository: sig-storage/snapshot-controller
      pullPolicy: IfNotPresent
      tag: v6.2.2
  provisioner:
    name: "csi-provisioner"
    image:
      registry: reg.nscloud.com:7443/
      repository: sig-storage/csi-provisioner
      pullPolicy: IfNotPresent
      tag: v3.5.0
  updateStrategy:
    type: RollingUpdate
  annotations: {}
  podAnnotations: {}
  resources: {}
  podLabels:
    name: openebs-lvm-controller
  nodeSelector:
    node.kubernetes.io/storage: lvm
  tolerations:
  - key: "storage-node/database"
    operator: "Equal"
    value: "with_ssd"
    effect: "NoSchedule"
  topologySpreadConstraints: []
  securityContext: {}
  priorityClass:
    create: true
    name: lvm-localpv-csi-controller-critical
  kubeClientRateLimiter:
    qps: 0
    burst: 0
lvmPlugin:
  name: "openebs-lvm-plugin"
  image:
    registry: reg.nscloud.com:7443/
    repository: openebs/lvm-driver
    pullPolicy: IfNotPresent
    tag: 1.6.1
  ioLimits:
    enabled: false
    containerRuntime: containerd
    readIopsPerGB: ""
    writeIopsPerGB: ""
  metricsPort: 9500
  allowedTopologies: "kubernetes.io/hostname,"
role: openebs-lvm
serviceAccount:
  lvmController:
    create: true
    name: openebs-lvm-controller-sa
  lvmNode:
    create: true
    name: openebs-lvm-node-sa
analytics:
  enabled: true
crds:
  lvmLocalPv:
    enabled: true
    keep: true
  csi:
    volumeSnapshots:
      enabled: false
      keep: true
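
The values above are edits to an unpacked local copy of the chart. One way to obtain it, assuming the upstream OpenEBS lvm-localpv chart repository is reachable (substitute a private mirror if needed):

helm repo add openebs-lvmlocalpv https://openebs.github.io/lvm-localpv
helm repo update
helm pull openebs-lvmlocalpv/lvm-localpv --untar

// helm pull --untar unpacks the chart into ./lvm-localpv, the directory passed to helm install below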


sukai@r1-m54:~$ helm install openebs --namespace openebs lvm-localpv --create-namespace
NAME: openebs
LAST DEPLOYED: Thu Nov 28 02:00:09 2024
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The OpenEBS LVM LocalPV has been installed. Check its status by running:
$ kubectl get pods -n openebs -l role=openebs-lvm

For more information, visit our Slack at https://openebs.io/community or view
the documentation online at http://docs.openebs.io/.

sukai@r1-m54:~$ kubectl get pods -n openebs -l role=openebs-lvm -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP              NODE             NOMINATED NODE   READINESS GATES
openebs-lvm-localpv-controller-577595b84d-6g5qk   5/5     Running   0          23s   10.233.68.236   rack4-worker60   <none>           <none>
openebs-lvm-localpv-node-44k8d                    2/2     Running   0          23s   10.233.68.229   rack4-worker62   <none>           <none>
openebs-lvm-localpv-node-75q6t                    2/2     Running   0          23s   10.233.68.230   rack4-worker64   <none>           <none>
openebs-lvm-localpv-node-87sds                    2/2     Running   0          23s   10.233.68.232   r4-w58   <none>           <none>
openebs-lvm-localpv-node-cblk7                    2/2     Running   0          23s   10.233.68.234   rack4-worker60   <none>           <none>
openebs-lvm-localpv-node-dhdnt                    2/2     Running   0          23s   10.233.68.235   rack4-worker61   <none>           <none>
openebs-lvm-localpv-node-h4l7x                    2/2     Running   0          23s   10.233.68.233   rack4-worker63   <none>           <none>
openebs-lvm-localpv-node-m5hwk                    2/2     Running   0          23s   10.233.68.228   rack4-worker59   <none>           <none>
openebs-lvm-localpv-node-vz2np                    2/2     Running   0          23s   10.233.68.231   rack4-worker65   <none>           <none>
sukai@r1-m54:~$
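
Besides the pod listing, the node plugin records the volume groups it discovered in an lvmnode custom resource per node; a quick way to confirm that lvmvg was picked up (the exact output shape may vary with the driver version):

kubectl get lvmnodes -n openebs
kubectl get lvmnode r4-w58 -n openebs -o yaml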


sukai@r1-m54:~/lvm-localpv$ vi storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - r4-w58

sukai@r1-m54:~/lvm-localpv$ kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/openebs-lvmpv created
sukai@r1-m54:~/lvm-localpv$
sukai@r1-m54:~/lvm-localpv$ kubectl get sc
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
openebs-lvmpv               local.csi.openebs.io         Delete          Immediate           true                   29s
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   69d


sukai@r4-w58:~$ sudo ls /var/lib/kubelet/plugins
kubernetes.io  lvm-localpv  rook-ceph.rbd.csi.ceph.com
sukai@r4-w58:~$
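
To validate the StorageClass end to end, a small test claim and pod can be applied (the names below are hypothetical); because the StorageClass binds immediately and is restricted to r4-w58, the volume is carved out of lvmvg on that node:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-lvm-pvc        # hypothetical test claim
spec:
  storageClassName: openebs-lvmpv
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-lvm-pod        # hypothetical test pod
spec:
  nodeSelector:
    node.kubernetes.io/storage: lvm
  tolerations:              # required because of the taint set in step 3
  - key: "storage-node/database"
    operator: "Equal"
    value: "with_ssd"
    effect: "NoSchedule"
  containers:
  - name: app
    image: busybox          # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-lvm-pvc

Once the pod is Running, sudo lvs lvmvg on r4-w58 should list the newly provisioned logical volume.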