[Kubernetes Storage] Dynamically Provisioning PVs with a StorageClass
When you install zookeeper or redis with helm, the downloaded charts all expose a StorageClass parameter by default, and if no StorageClass exists in your cluster the application cannot be deployed. This post walks through the simplest way to create a StorageClass so that helm deployments can dynamically create PVs and PVCs.
StorageClass Hands-On Demo
1. Set up the NFS server
# nfs-utils must be installed on all node machines
[root@k8s-node1 ~]# yum -y install nfs-utils
# Create the shared directory and grant permissions
[root@k8s-node1 ~]# mkdir -p /data/nfs_root
[root@k8s-node1 ~]# chmod -R 777 /data/nfs_root
[root@k8s-node1 ~]# cat /etc/exports
/data/nfs_root *(rw,no_root_squash)
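The export grants read-write access to any client, and no_root_squash keeps root on the clients mapped to root on the export. If you ever edit /etc/exports while the server is already running, the export table can be reloaded without a restart:
[root@k8s-node1 ~]# exportfs -rav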
# Start and enable the NFS service
[root@k8s-node1 ~]# systemctl enable nfs --now
# Verify the NFS export is visible (run from another node)
[root@master1 ~]# showmount -e 192.168.21.103
Export list for 192.168.21.103:
/data/nfs_root *
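As an optional sanity check, the export can be mounted by hand from a client node to confirm it is writable before handing it to Kubernetes (using /mnt as a throwaway mount point):
[root@master1 ~]# mount -t nfs 192.168.21.103:/data/nfs_root /mnt
[root@master1 ~]# touch /mnt/testfile   # should succeed thanks to rw and the 777 mode
[root@master1 ~]# rm /mnt/testfile && umount /mnt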
2. Deploy nfs-subdir-external-provisioner with helm
# Add the helm repository
[root@master1 ~]# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
"nfs-subdir-external-provisioner" has been added to your repositories
# Install the provisioner; replace nfs.server with your NFS server's address and nfs.path with your shared path
[root@master1 ~]# helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --namespace=nfs-provisioner \
> --create-namespace \
> --set image.repository=willdockerhub/nfs-subdir-external-provisioner \
> --set replicaCount=2 \
> --set storageClass.name=nfs-client \
> --set storageClass.defaultClass=true \
> --set nfs.server=192.168.21.103 \
> --set nfs.path=/data/nfs_root
# A successful install brings up two pods (we set replicaCount=2)
[root@master1 ~]# kubectl get pods -n nfs-provisioner
NAME READY STATUS RESTARTS AGE
nfs-subdir-external-provisioner-6cf4b446c6-cbl75 1/1 Running 0 92s
nfs-subdir-external-provisioner-6cf4b446c6-g8sjr 1/1 Running 0 92s
# You can see a StorageClass named nfs-client was created and set as the default
[root@master1 ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 2m27s
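Before deploying a real application, dynamic provisioning can be verified with a throwaway PVC (a minimal sketch; the name test-claim is made up for illustration):
[root@master1 ~]# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# The provisioner should bind the PVC and create a matching PV within a few seconds
[root@master1 ~]# kubectl get pvc test-claim
# Clean up the test claim when done
[root@master1 ~]# kubectl delete pvc test-claim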
3. Verify the nfs-client StorageClass by deploying zookeeper with helm
# Add the bitnami helm repository
[root@master1 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@master1 ~]# helm search repo zookeeper
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/zookeeper 12.3.3 3.9.1 Apache ZooKeeper provides a reliable, centraliz...
bitnami/dataplatform-bp2 12.0.5 1.0.1 DEPRECATED This Helm chart can be used for the ...
bitnami/kafka 26.4.2 3.6.0 Apache Kafka is a distributed streaming platfor...
bitnami/schema-registry 16.2.3 7.5.2 Confluent Schema Registry provides a RESTful in...
bitnami/solr 8.3.2 9.4.0 Apache Solr is an extremely powerful, open sour...
[root@master1 ~]# helm pull bitnami/zookeeper
# Extract the downloaded chart package
[root@master1 ~]# tar -xf zookeeper-12.3.3.tgz
[root@master1 ~]# kubectl create ns app
namespace/app created
[root@master1 ~]# cd zookeeper/
[root@master1 zookeeper]# ll
total 116
-rw-r--r-- 1 root root 226 Nov 22 08:11 Chart.lock
drwxr-xr-x 3 root root 20 Nov 25 20:41 charts
-rw-r--r-- 1 root root 845 Nov 22 08:11 Chart.yaml
-rw-r--r-- 1 root root 62570 Nov 22 08:11 README.md
drwxr-xr-x 2 root root 4096 Nov 25 20:41 templates
-rw-r--r-- 1 root root 37273 Nov 22 08:11 values.yaml
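Because nfs-client was installed as the default StorageClass, the chart needs no storage-related flags at all. If it were not the default, you would pin it at install time instead, roughly like this (persistence.storageClass per the bitnami chart's values.yaml; verify against your chart version):
helm install zookeeper . -n app --set persistence.storageClass=nfs-client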
[root@master1 zookeeper]# helm install zookeeper . -n app
NAME: zookeeper
LAST DEPLOYED: Sat Nov 25 20:42:24 2023
NAMESPACE: app
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 12.3.3
APP VERSION: 3.9.1
** Please be patient while the chart is being deployed **
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
zookeeper.app.svc.cluster.local
To connect to your ZooKeeper server run the following commands:
export POD_NAME=$(kubectl get pods --namespace app -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- zkCli.sh
To connect to your ZooKeeper server from outside the cluster execute the following commands:
kubectl port-forward --namespace app svc/zookeeper 2181:2181 &
zkCli.sh 127.0.0.1:2181
# zookeeper is now deployed via helm
[root@master1 ~]# kubectl get pods -n app -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
zookeeper-0 1/1 Running 0 7m45s 10.10.36.113 k8s-node1 <none> <none>
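To confirm the storage really was provisioned dynamically, check the PVC the chart created and its backing PV; nfs-subdir-external-provisioner also creates one subdirectory per volume (named after the namespace, PVC, and PV) under the NFS export:
[root@master1 ~]# kubectl get pvc -n app
[root@master1 ~]# kubectl get pv
# On the NFS server, the dynamically created data directory is visible
[root@k8s-node1 ~]# ls /data/nfs_root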