Environment
```
$ kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   50m   v1.19.7
```
Installing NFS
Stop the firewall and disable it from starting at boot:
```
$ sudo systemctl stop firewalld.service
$ sudo systemctl disable firewalld.service
$ sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
```
Install the NFS packages:
```
$ sudo yum -y install nfs-utils rpcbind
```
Create the /data/k8s/ directory:
```
$ sudo mkdir -p /data/k8s/
$ sudo chmod 755 /data/k8s/
```
Configure NFS. Its default configuration file is /etc/exports; add the following entry to it:
```
$ sudo vim /etc/exports
/data/k8s *(rw,sync,no_root_squash)
```
Configuration notes:
- /data/k8s: the directory being shared
- *: any host may connect; this can also be restricted to a subnet, a single IP, or a domain name
- rw: read and write access
- sync: data is written to disk as well as to memory
- no_root_squash: when the user accessing the share is root, root privileges are preserved on the share; by contrast, the default root_squash option maps root to the anonymous user, typically with the UID and GID of nobody
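If you change /etc/exports later, the export table can be reloaded without restarting the service. A minimal sketch using standard exportfs flags:

```bash
$ sudo exportfs -ra   # re-export every directory listed in /etc/exports
$ sudo exportfs -v    # verbose list of current exports and their options
```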
Starting the NFS service
The NFS service has to register itself with rpcbind. If rpcbind restarts, those registrations are lost and every service registered with it must be restarted, so mind the startup order: start rpcbind first, then NFS.
Start rpcbind:
```
$ sudo systemctl start rpcbind.service
$ sudo systemctl enable rpcbind
$ sudo systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2021-07-11 22:31:08 CST; 33s ago
 Main PID: 4392 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─4392 /sbin/rpcbind -w

Jul 11 22:31:08 VM-8-5-centos systemd[1]: Starting RPC bind service...
Jul 11 22:31:08 VM-8-5-centos systemd[1]: Started RPC bind service.
```
Start NFS:
```
$ sudo systemctl start nfs.service
$ sudo systemctl enable nfs
$ sudo systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Sun 2021-07-11 22:32:35 CST; 53s ago
 Main PID: 4642 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Jul 11 22:32:35 VM-8-5-centos systemd[1]: Starting NFS server and services...
Jul 11 22:32:35 VM-8-5-centos systemd[1]: Started NFS server and services.
```
Check the export permissions:
```
$ cat /var/lib/nfs/etab
/data/k8s	*(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
```
Check that NFS is exporting the shared directory:
```
$ showmount -e 10.0.8.5
Export list for 10.0.8.5:
/data/k8s *
```
Mount the export to $HOME/k8s/data on a local client:
```
$ mkdir -p $HOME/k8s/data
$ mount -t nfs 10.0.8.5:/data/k8s $HOME/k8s/data
$ cd $HOME/k8s/data/
$ echo "hello world" | sudo tee -a test.txt
```

(The original `sudo echo "hello world" >> test.txt` applies sudo only to echo, not to the redirection, so tee is used here instead.)
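To make the client mount survive a reboot, an /etc/fstab entry can be added. A hedged sketch, assuming the mount point is /root/k8s/data (fstab cannot expand $HOME):

```
# /etc/fstab — mount the NFS export at boot
10.0.8.5:/data/k8s  /root/k8s/data  nfs  defaults  0  0
```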
Creating a PV
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 10.0.8.5
```
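Assuming the manifest above is saved as pv-nfs.yaml (the filename is my own choice), it is applied like the other manifests:

```bash
$ kubectl apply -f pv-nfs.yaml
```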
Capacity (storage size)
Generally, every PV object specifies a storage capacity via its capacity attribute. Currently only the storage size can be set — the storage: 2Gi above — though attributes such as IOPS and throughput may be added in the future.
AccessModes (access modes)
AccessModes sets the PV's access modes, describing how user applications may access the storage resource. The following modes are available:
- ReadWriteOnce (RWO): read-write, but mountable by only a single node
- ReadOnlyMany (ROX): read-only, mountable by multiple nodes
- ReadWriteMany (RWX): read-write, mountable by multiple nodes
Note: a PV may support multiple access modes, but only one of them can be used for a given mount; multiple modes do not take effect at the same time.
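For illustration, a hypothetical PV fragment declaring two supported modes; any single mount still uses exactly one of them:

```yaml
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany   # supported alternatives; a given mount picks one mode
```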
persistentVolumeReclaimPolicy (reclaim policy)
The PV here specifies the Recycle reclaim policy. PVs currently support three policies:
- Retain: keep the data; an administrator must clean it up manually
- Recycle: scrub the data in the volume, equivalent to running rm -rf /thevolume/*
- Delete: the backing storage deletes the volume; this is typical of cloud providers' storage services such as AWS EBS
Note, however, that currently only NFS and HostPath support the Recycle policy. In general, Retain is the safer choice.
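The reclaim policy of an existing PV can also be changed in place with kubectl patch; a sketch switching the pv-nfs volume above to Retain:

```bash
$ kubectl patch pv pv-nfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```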
Status
During its lifecycle, a PV can be in one of four phases:
- Available: the volume is available and not yet bound to any PVC
- Bound: the volume has been bound to a PVC
- Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the volume failed
```
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    1Gi        RWO            Recycle          Available                                   3m
```
StorageClass
The setup below follows https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy.
Creating the ServiceAccount and related permissions
Create rbac-nfs.yaml:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
Apply it:
```
$ kubectl apply -f rbac-nfs.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
```
Creating the NFS provisioner
Create the provisioner file provisioner-nfs.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default  # must match the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage  # provisioner name; must match the provisioner in sc-nfs.yaml
            - name: NFS_SERVER
              value: 81.71.154.47  # NFS server IP
            - name: NFS_PATH
              value: /data/k8s  # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 81.71.154.47  # NFS server IP
            path: /data/k8s  # NFS export path
```
Apply it:
```
$ kubectl apply -f provisioner-nfs.yaml
deployment.apps/nfs-client-provisioner created
```
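Before continuing, it is worth checking that the provisioner pod is actually running, using the app label from the Deployment:

```bash
$ kubectl get pods -l app=nfs-client-provisioner
```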
Creating a StorageClass for the NFS resource
Create the StorageClass file sc-nfs.yaml. The provisioner name here must match the PROVISIONER_NAME environment variable in the provisioner manifest:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
```
Apply it:
```
$ kubectl apply -f sc-nfs.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
```
Verification
Declare the PVC file test-claim.yaml:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
Some guides set the storage class through the volume.beta.kubernetes.io/storage-class annotation instead; I suggest abandoning that approach, as the official documentation already marks it for deprecation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
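For reference, the soon-to-be-deprecated annotation form looks like the fragment below; prefer spec.storageClassName as used in test-claim.yaml:

```yaml
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage  # legacy form, avoid
```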
Make managed-nfs-storage the default StorageClass:
```
$ kubectl get sc
NAME                  PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
hostpath (default)    docker.io/hostpath   Delete          Immediate           false                  19m
managed-nfs-storage   nfs-storage          Delete          Immediate           false                  11s
$ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
$ kubectl patch storageclass hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/hostpath patched
```
Apply the PVC:
```
$ kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-050e4039-935c-4f18-8623-8580d1295e3a   1Mi        RWX            managed-nfs-storage   7s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-050e4039-935c-4f18-8623-8580d1295e3a   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32s
```
Create the pod file test-pod.yaml:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"  # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  # must match the PVC name
```
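Apply the pod, then check the export on the NFS server. The nfs-client provisioner creates one subdirectory per claim, named after the namespace, PVC name, and PV name, so the exact path below is illustrative and will differ in your cluster:

```bash
$ kubectl apply -f test-pod.yaml
# on the NFS server, after the pod completes:
$ ls /data/k8s/default-test-claim-pvc-050e4039-935c-4f18-8623-8580d1295e3a/
SUCCESS
```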
PVC
Declare the pvc-nfs.yaml configuration file:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Create the PVC:
```
$ kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
```
Check the PVC:
```
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pvc-aec6f5cf-c3a6-422f-9063-82ce0cdbf53a   1Gi        RWO            hostpath       13s
```
Check the PVs:
```
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
pv-nfs                                     1Gi        RWO            Recycle          Available                                             85s
pvc-b9de43c1-ecd7-4e58-94d7-f5c24092ad3c   1Gi        RWO            Delete           Bound       default/pvc-nfs   hostpath                61s
```

Note that pvc-nfs sets no storageClassName, so it was dynamically provisioned by the hostpath StorageClass (the default at the time this output was captured) rather than binding to the statically created pv-nfs, which remains Available.
Troubleshooting
If the provisioner logs the following error:

```
1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
```

this is because the legacy nfs-client provisioner still depends on the selfLink field, which is disabled by default since Kubernetes 1.20. Re-enable it by adding the RemoveSelfLink=false feature gate to the kube-apiserver manifest:
```
[root@master ~]# grep -B 5 'feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false    # added line
```

kube-apiserver runs as a static pod, so the kubelet restarts it automatically once the manifest is saved.
References
https://www.jianshu.com/p/b860d26f2951
https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner
https://www.kococ.cn/20210119/cid=670.html
https://blog.csdn.net/ag1942/article/details/115371793
