Kubernetes Notes 12 | Volumes (Part 3): Dynamic Provisioning with Longhorn and StorageClass

Preface: The PV/PVC mechanism covered earlier decouples storage backends from Pod mounts: developers consume PVCs directly, while the PVs backing the storage system are maintained by administrators. This brings a problem, though: before a developer can use a PVC, an administrator must create a matching PV by hand. As Pods multiply, that work becomes repetitive and tedious, and static provisioning runs against the trend toward automated operations. StorageClass-based dynamic provisioning emerged in response: a StorageClass serves as a template for PVs, enabling automatic PV creation along with extended functionality such as backups.
StorageClass overview: abbreviated SC; both PVs and PVCs can belong to a particular SC.
A StorageClass is a template for creating PVs: a storage service can be associated with an SC, and the service's management interface is made available to the SC, allowing it to perform CRUD (Create, Read, Update, Delete) operations on storage units in that service. Consequently, when a PVC is declared against an SC and no existing PV matches, the SC can call the management interface to create a PV that satisfies the PVC's request on the fly. This mechanism of supplying PVs is called dynamic provisioning.
Specifically, a StorageClass defines the following two things:

  1. The attributes of the PV, such as its size and type;
  2. The storage plugin used to create such PVs, such as Ceph.
With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass and then call the storage plugin that the StorageClass declares to create the required PV.
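As a minimal sketch of the user-facing side, a PVC only needs to name a StorageClass to trigger this flow. The class name `example-sc` below is a placeholder, not a class from this article's cluster:

```yaml
# Hypothetical PVC: naming a StorageClass is all a developer does.
# "example-sc" is a placeholder; the matching PV is created automatically
# by that class's provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: example-sc
```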
Note that not every storage system supports StorageClass: the storage system must expose an interface that the controller can call, passing in the PVC's parameters, in order to create the PV.
Common fields:
The desired state of a StorageClass resource is defined at the same level as apiVersion, kind, and metadata rather than nested under a spec field. The supported fields include:
  • allowVolumeExpansion <boolean>: whether volumes of this class support expansion;
  • allowedTopologies <[]Object>: the node topologies in which volumes may be dynamically provisioned; only used on clusters with volume scheduling enabled. Each volume plugin has its own supported topology specification; an empty topology selector means no topology restriction;
  • provisioner <string>: required field naming the storage provisioner; the storage class relies on this value to select the storage plugin that adapts to the target storage system. Kubernetes has many built-in provisioners, all prefixed with kubernetes.io/, for example kubernetes.io/glusterfs;
  • parameters <map[string]string>: provisioner-specific parameters passed to the storage plugin; the accepted keys vary by backend;
  • reclaimPolicy <string>: the reclaim policy applied to PVs created by this class, either Delete or Retain; defaults to Delete;
  • volumeBindingMode <string>: defines how provisioning and binding are performed for PVCs, defaulting to Immediate; this field only takes effect when volume scheduling is enabled;
  • mountOptions <[]string>: the default list of mount options for PVs dynamically created by this class.
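To make the fields above concrete, here is a hypothetical StorageClass that sets each of them. The provisioner and parameter values are illustrative only; real parameter keys depend on the backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                          # hypothetical class name
provisioner: kubernetes.io/glusterfs        # required: selects the storage plugin
parameters:                                 # provisioner-specific; keys vary by backend
  resturl: "http://heketi.example.com:8080" # illustrative value
reclaimPolicy: Retain                       # Delete (default) or Retain for dynamically created PVs
allowVolumeExpansion: true                  # allow PVCs of this class to be resized
volumeBindingMode: WaitForFirstConsumer     # delay binding until a Pod uses the PVC (vs. Immediate)
mountOptions:                               # default mount options for created PVs
  - noatime
```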
Template example:
```yaml
[root@k8s-master storage]# cat Storageclass-rdb-demo.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
# The parameters template differs per storage backend -- consult the official docs.
# This one is for Ceph RBD; note that not every storage type supports StorageClass.
provisioner: kubernetes.io/rbd
parameters:
  monitors: ceph01.ilinux.io:6789,ceph02.ilinux.io:6789,ceph03.ilinux.io:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
reclaimPolicy: Retain   # reclaim policy
```

  • Since the test environment has no Ceph cluster, this is not demonstrated here.
Longhorn StorageClass plugin. Official documentation: https://longhorn.io
Longhorn greatly improves the productivity of developers and IT ops: persistent storage is available with a single click, with no expensive proprietary solution to pay for. Beyond that, Longhorn reduces the resources needed to manage data and operate the environment, helping organizations focus on delivering code and applications faster.
True to Rancher's 100% open-source philosophy, Longhorn is a distributed block storage project built on microservices. Longhorn released its beta in 2019 and was donated to the CNCF as a Sandbox project in October of that year. It has attracted wide attention from developers, with thousands of users stress-testing it and providing valuable feedback.
The GA release of Longhorn provides a rich set of enterprise-grade storage features, including:
  • Automated provisioning, snapshots, backup, and restore
  • Zero-downtime volume expansion
  • Cross-cluster disaster-recovery volumes with defined RTO and RPO
  • Live upgrades without affecting volumes
  • Full-featured Kubernetes CLI integration and a standalone UI

Installing Longhorn
Installation prerequisites: at least three nodes, since a leader must be elected.
```shell
[root@k8s-master storage]# yum -y install iscsi-initiator-utils   # the iSCSI initiator must be installed first
[root@k8s-master storage]# kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/deploy/longhorn.yaml   # install; pulling the images takes a while
[root@k8s-master storage]# kubectl get pods -n longhorn-system --watch   # wait until all pods are ready
[root@k8s-master storage]# kubectl get pods --namespace longhorn-system -o wide   # all pods ready
NAME                                        READY   STATUS    RESTARTS   AGE     IP             NODE
csi-attacher-54c7586574-fvv4p               1/1     Running   0          37m     10.244.3.16    k8s-node3
csi-attacher-54c7586574-swdsr               1/1     Running   0          43m     10.244.2.111   k8s-node2
csi-attacher-54c7586574-zkzrg               1/1     Running   0          43m     10.244.3.10    k8s-node3
csi-provisioner-5ff5bd6b88-bs687            1/1     Running   0          37m     10.244.3.17    k8s-node3
csi-provisioner-5ff5bd6b88-gl4xn            1/1     Running   0          43m     10.244.2.112   k8s-node2
csi-provisioner-5ff5bd6b88-qkzt4            1/1     Running   0          43m     10.244.3.11    k8s-node3
csi-resizer-7699cdfc4-4w49w                 1/1     Running   0          37m     10.244.3.15    k8s-node3
csi-resizer-7699cdfc4-l2j49                 1/1     Running   0          43m     10.244.3.12    k8s-node3
csi-resizer-7699cdfc4-sndlm                 1/1     Running   0          43m     10.244.2.113   k8s-node2
csi-snapshotter-8f58f46b4-6s89m             1/1     Running   0          37m     10.244.2.119   k8s-node2
csi-snapshotter-8f58f46b4-qgv5r             1/1     Running   0          43m     10.244.3.13    k8s-node3
csi-snapshotter-8f58f46b4-tf5ls             1/1     Running   0          43m     10.244.2.115   k8s-node2
engine-image-ei-a5a44787-5ntlm              1/1     Running   0          44m     10.244.1.146   k8s-node1
engine-image-ei-a5a44787-h45hr              1/1     Running   0          44m     10.244.3.6     k8s-node3
engine-image-ei-a5a44787-phnjf              1/1     Running   0          44m     10.244.2.108   k8s-node2
instance-manager-e-4384d6f1                 1/1     Running   0          44m     10.244.2.110   k8s-node2
instance-manager-e-54f46256                 1/1     Running   0          34m     10.244.1.148   k8s-node1
instance-manager-e-e008dd8a                 1/1     Running   0          44m     10.244.3.7     k8s-node3
instance-manager-r-0ad3175d                 1/1     Running   0          44m     10.244.3.8     k8s-node3
instance-manager-r-61277092                 1/1     Running   0          44m     10.244.2.109   k8s-node2
instance-manager-r-d8a9eb0e                 1/1     Running   0          34m     10.244.1.149   k8s-node1
longhorn-csi-plugin-5htsd                   2/2     Running   0          7m41s   10.244.2.123   k8s-node2
longhorn-csi-plugin-hpjgl                   2/2     Running   0          16s     10.244.1.151   k8s-node1
longhorn-csi-plugin-wtkcj                   2/2     Running   0          43m     10.244.3.14    k8s-node3
longhorn-driver-deployer-5479f45d86-l4fpq   1/1     Running   0          57m     10.244.3.4     k8s-node3
longhorn-manager-dgk4d                      1/1     Running   1          57m     10.244.1.145   k8s-node1
longhorn-manager-hb7cl                      1/1     Running   0          57m     10.244.2.107   k8s-node2
longhorn-manager-xrxll                      1/1     Running   0          57m     10.244.3.3     k8s-node3
longhorn-ui-79f8976fbf-sb79r                1/1     Running   0          57m     10.244.3.5     k8s-node3
```

Example 1: create a PVC and let the PV be created automatically
```yaml
[root@k8s-master storage]# cat pvc-dyn-longhorn-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim          # resource type
metadata:
  name: pvc-dyn-longhorn-demo
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi                   # the new PV is created with the requested (minimum) capacity
    limits:
      storage: 10Gi
  storageClassName: longhorn         # provision through the longhorn StorageClass
[root@k8s-master storage]# kubectl apply -f pvc-dyn-longhorn-demo.yaml
```

  • Adjust the reclaim policy and replica count (optional; change to suit your needs)
```shell
[root@k8s-master storage]# kubectl get sc
NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn   driver.longhorn.io   Delete          Immediate           true                   48s
# note the reclaim policy: Delete carries the risk of data loss
[root@k8s-master storage]# vim longhorn.yaml   # change the reclaim policy in the manifest
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-storageclass
  namespace: longhorn-system
data:
  storageclass.yaml: |
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn
    # driver.longhorn.io is Longhorn's own provisioner; data is stored on local
    # disks. If you use networked storage, adjust to your needs.
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "3"        # replica count: more replicas mean safer data, but also higher disk-capacity and performance demands
      staleReplicaTimeout: "2880"
      fromBackup: ""
    reclaimPolicy: Retain          # changed from Delete to Retain
---
[root@k8s-master storage]# kubectl apply -f longhorn.yaml                # re-apply the configuration
[root@k8s-master storage]# kubectl delete -f pvc-dyn-longhorn-demo.yaml
[root@k8s-master storage]# kubectl apply -f pvc-dyn-longhorn-demo.yaml   # recreate the PVC
[root@k8s-master storage]# kubectl get sc                                # the change took effect
NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn   driver.longhorn.io   Retain          Immediate           true                   10m
[root@k8s-master storage]# kubectl get pv    # the PV was created automatically
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON   AGE
pv-nfs-demo002                             10Gi       RWX            Retain           Available                                                           2d5h
pv-nfs-demo003                             1Gi        RWO            Retain           Available                                                           2d5h
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            Retain           Bound       default/pvc-dyn-longhorn-demo   longhorn                118s
[root@k8s-master storage]# kubectl get pvc   # the PVC is bound to the new PV
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-dyn-longhorn-demo   Bound    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            longhorn       2m19s
```
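For orientation, the dynamically created PV is an ordinary PersistentVolume object backed by the Longhorn CSI driver. Abridged and partly reconstructed, it looks roughly like the sketch below; fields beyond those visible in the listings above are illustrative assumptions:

```yaml
# Rough shape of the dynamically provisioned PV (abridged sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   # name derived from the PVC's UID
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # inherited from the StorageClass at creation time
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io            # the Longhorn CSI driver
    volumeHandle: pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58
  claimRef:                               # back-reference to the bound PVC
    namespace: default
    name: pvc-dyn-longhorn-demo
```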

    Edit the Service to expose the Longhorn UI
```shell
[root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE   SELECTOR
csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP   17h   app=csi-attacher
csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP   17h   app=csi-provisioner
csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP   17h   app=csi-resizer
csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP   17h   app=csi-snapshotter
longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP    17h   app=longhorn-manager
longhorn-frontend   ClusterIP   10.111.219.113   <none>        80/TCP      17h   app=longhorn-ui
# change the longhorn-frontend Service type to NodePort
[root@k8s-master ~]# kubectl edit svc longhorn-frontend --namespace longhorn-system
service/longhorn-frontend edited
[root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP      17h   app=csi-attacher
csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP      17h   app=csi-provisioner
csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP      17h   app=csi-resizer
csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP      17h   app=csi-snapshotter
longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP       17h   app=longhorn-manager
longhorn-frontend   NodePort    10.111.219.113   <none>        80:30745/TCP   17h   app=longhorn-ui
# open the UI at <node-IP>:30745
```
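What `kubectl edit` changes boils down to the Service spec below. This is a sketch of the relevant fragment: the `targetPort` name is an assumption, and the explicit `nodePort` is optional, here pinned to the port the cluster happened to allocate:

```yaml
# Fragment of the longhorn-frontend Service after editing
apiVersion: v1
kind: Service
metadata:
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  type: NodePort        # was ClusterIP
  selector:
    app: longhorn-ui
  ports:
  - port: 80
    targetPort: http    # assumption: the UI container's named port
    nodePort: 30745     # optional; omit to let Kubernetes pick a port
```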

    In the UI you can view existing volumes, create new volumes, inspect node information, manage backups, and more.

  • Create a redis Pod bound to the PVC
```shell
[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
  - name: redis-data-vol
    persistentVolumeClaim:
      claimName: pvc-dyn-longhorn-demo   # the PVC provisioned through the StorageClass
[root@k8s-master storage]# kubectl apply -f volumes-pvc-longhorn-demo.yaml
[root@k8s-master storage]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE
centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h     10.244.2.117   k8s-node2
my-grafana-7d788c5479-bpztz          1/1     Running   0          18h     10.244.2.120   k8s-node2
volumes-pvc-longhorn-demo            1/1     Running   0          2m30s   10.244.1.172   k8s-node1
[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set mykey www.qq.com
OK
127.0.0.1:6379> bgsabe
(error) ERR unknown command `bgsabe`, with args beginning with:
127.0.0.1:6379> bgsave
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb      lost+found
/data # exit
[root@k8s-master storage]# kubectl delete -f volumes-pvc-longhorn-demo.yaml
pod "volumes-pvc-longhorn-demo" deleted
[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  nodeName: k8s-node2   # pin the Pod to a specific node
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
  - name: redis-data-vol
    persistentVolumeClaim:
      claimName: pvc-dyn-longhorn-demo
[root@k8s-master storage]# kubectl apply -f volumes-pvc-longhorn-demo.yaml
[root@k8s-master storage]# kubectl get pod -o wide -w
NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE
centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h   10.244.2.117   k8s-node2
my-grafana-7d788c5479-bpztz          1/1     Running   0          18h   10.244.2.120   k8s-node2
volumes-pvc-longhorn-demo            1/1     Running   0          68s   10.244.2.124   k8s-node2
[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh   # check the PVC data
/data # redis-cli
127.0.0.1:6379> get mykey
"www.qq.com"
127.0.0.1:6379> exit
/data # exit
```
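Dynamic provisioning pays off most with workload controllers. As a sketch not taken from this article's cluster, a StatefulSet can stamp out one Longhorn-backed PVC per replica through volumeClaimTemplates, with no administrator pre-creating PVs:

```yaml
# Hypothetical StatefulSet: each replica gets its own PVC, and the longhorn
# StorageClass provisions a dedicated PV for each one automatically.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis            # hypothetical workload name
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels: {app: redis}
  template:
    metadata:
      labels: {app: redis}
    spec:
      containers:
      - name: redis
        image: redis:alpine
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:  # one PVC (and thus one Longhorn volume) per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: longhorn
      resources:
        requests:
          storage: 2Gi
```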

Longhorn data layout and custom resources
  • Because the replica count was left at the default of 3, each of the three nodes stores a copy of the data.
```shell
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/   # default data path
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c
[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0
[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-master storage]# kubectl api-resources --api-group=longhorn.io   # Longhorn's custom resource types
NAME                   SHORTNAMES   APIGROUP      NAMESPACED   KIND
backingimagemanagers   lhbim        longhorn.io   true         BackingImageManager
backingimages          lhbi         longhorn.io   true         BackingImage
engineimages           lhei         longhorn.io   true         EngineImage
engines                lhe          longhorn.io   true         Engine
instancemanagers       lhim         longhorn.io   true         InstanceManager
nodes                  lhn          longhorn.io   true         Node
replicas               lhr          longhorn.io   true         Replica
settings               lhs          longhorn.io   true         Setting
sharemanagers          lhsm         longhorn.io   true         ShareManager
volumes                lhv          longhorn.io   true         Volume
[root@k8s-master storage]# kubectl get replicas -n longhorn-system
NAME                                                  STATE     NODE        DISK                                   INSTANCEMANAGER               IMAGE                               AGE
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-08f28c21   running   k8s-node3   49846d34-b5a8-4a86-96f4-f0d7ca191f2a   instance-manager-r-0ad3175d   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-6694fc48   running   k8s-node2   ce1ed80b-43c9-4fc9-8266-cedb736bacaa   instance-manager-r-61277092   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-86a35cd3   running   k8s-node1   3d40b18a-c0e9-459c-b37e-d878152d1261   instance-manager-r-d8a9eb0e   longhornio/longhorn-engine:v1.1.2   3h54m
```
