Kubernetes (17): Dynamic Storage Provisioning Based on NFS
Published: 2019-05-24


Deploying nfs-provisioner

  1. Create the working directory

```
$ mkdir -p /opt/k8s/nfs/data
```
  2. Pull the nfs-provisioner image and push it to your private registry

```
$ docker pull fishchen/nfs-provisioner:v2.2.2
$ docker tag fishchen/nfs-provisioner:v2.2.2 192.168.0.107/k8s/nfs-provisioner:v2.2.2
$ docker push 192.168.0.107/k8s/nfs-provisioner:v2.2.2
```
  3. Create the deploy.yml file that starts nfs-provisioner

```
$ cd /opt/k8s/nfs
$ cat > deploy.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: 192.168.0.107/k8s/nfs-provisioner:v2.2.2
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=myprovisioner.kubernetes.io/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /opt/k8s/nfs/data
EOF
```
    • volumes.hostPath points at the data directory created above. It serves as the NFS export directory and can be any directory on the node (see the note after this list)
    • args: - "-provisioner=myprovisioner.kubernetes.io/nfs" sets the provisioner name, which must match the provisioner field of the StorageClass created later
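    Note: hostPath storage is node-local, so /opt/k8s/nfs/data must exist on whichever node the provisioner pod is scheduled to. On a multi-node cluster, one common way to guarantee this is to pin the pod with a nodeSelector; a minimal sketch (the hostname value is a placeholder, not part of the original manifest):

```
# Hypothetical addition to the Deployment's pod template spec above:
# pin the provisioner to the node that actually has /opt/k8s/nfs/data.
nodeSelector:
  kubernetes.io/hostname: <node-with-data-dir>
```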
  4. Create the RBAC file that allows automatic PV creation

```
$ cd /opt/k8s/nfs
$ cat > rbac.yml << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
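    Both ServiceAccount subjects above hardcode namespace: default (see the inline comments). If the provisioner is deployed into another namespace, update those fields before applying; one illustrative way to do it in bulk (the target namespace is a placeholder):

```
$ sed -i 's/namespace: default/namespace: <your-namespace>/g' rbac.yml
```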
  5. Create the file that defines the StorageClass

```
$ cd /opt/k8s/nfs
$ cat > class.yml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: myprovisioner.kubernetes.io/nfs
mountOptions:
  - vers=4.1
EOF
```
    • The provisioner value must match the one configured in deploy.yml above
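    Optionally, once the class exists (after step 6), it can also be marked as the cluster default so that PVCs that omit storageClassName fall back to it. This uses the standard Kubernetes annotation and is not part of the original setup:

```
$ kubectl annotate storageclass example-nfs storageclass.kubernetes.io/is-default-class=true
```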
  6. Start nfs-provisioner

```
$ kubectl create -f deploy.yml -f rbac.yml -f class.yml
```
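    Before moving on, it is worth confirming that everything registered correctly; a minimal sanity check (the exact output will vary by cluster):

```
$ kubectl get pod -l app=nfs-provisioner     # the pod should reach Running
$ kubectl get svc nfs-provisioner            # the Service with the four ports above
$ kubectl get storageclass example-nfs       # the class should be listed
```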

Verifying and using nfs-provisioner

The following simple example verifies the nfs-provisioner we just created. It involves two applications, a busybox and a web app, both mounting the same PVC: busybox writes content into the shared storage, while the web app reads that content and serves it as a page.

  1. Create the PVC file

```
$ cd /opt/k8s/nfs
$ cat > claim.yml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
```
    • storageClassName: the name of the StorageClass created above
    • accessModes: ReadWriteMany allows the volume to be mounted read-write by multiple nodes
    • The claim requests 100Mi of storage
  2. Create the PVC and check whether a matching PV is created automatically

```
$ kubectl create -f claim.yml
$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs    Bound    pvc-10a1a98c-2d0f-4324-8617-618cf03944fe   100Mi      RWX            example-nfs    11s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
pvc-10a1a98c-2d0f-4324-8617-618cf03944fe   100Mi      RWX            Delete           Bound    default/nfs   example-nfs             18s
```
    • A PV named pvc-10a1a98c-2d0f-4324-8617-618cf03944fe was created for us automatically; its STORAGECLASS is example-nfs and it is bound to the claim default/nfs
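    • To see where the generated PV actually points, kubectl describe prints the NFS server IP and export path in its Source section (using the PV name from the output above):

```
$ kubectl describe pv pvc-10a1a98c-2d0f-4324-8617-618cf03944fe
```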
  3. Start a busybox application that mounts the shared directory and writes data into it

    1. Edit the manifest

```
$ cd /opt/k8s/nfs
$ cat > deploy-busybox.yml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 20; done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
EOF
```
      • volumes.persistentVolumeClaim.claimName is set to the PVC just created
    2. Start busybox

```
$ cd /opt/k8s/nfs
$ kubectl create -f deploy-busybox.yml
```

      Check whether index.html was generated under the corresponding PV directory:

```
$ cd /opt/k8s/nfs
$ ls data/pvc-10a1a98c-2d0f-4324-8617-618cf03944fe/
index.html
$ cat data/pvc-10a1a98c-2d0f-4324-8617-618cf03944fe/index.html
Sun Mar 1 12:51:30 UTC 2020
nfs-busybox-6b677d655f-qcg5c
```

      • The file was generated under the corresponding PV directory and its content was written correctly
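      The same check can be run from inside the cluster, which exercises the NFS mount itself rather than the hostPath directory behind it (kubectl exec against a Deployment picks one of its pods):

```
$ kubectl exec deploy/nfs-busybox -- cat /mnt/index.html
```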
  4. Start a web application (nginx) that reads the content of the shared mount

    1. Edit the manifest

```
$ cd /opt/k8s/nfs
$ cat > deploy-web.yml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nfs-web
spec:
  type: NodePort
  selector:
    role: web-frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 8086
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    matchLabels:
      role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.9.1
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
EOF
```
      • volumes.persistentVolumeClaim.claimName is set to the PVC just created
    2. Start the web application

```
$ cd /opt/k8s/nfs
$ kubectl create -f deploy-web.yml
```
    3. Access the page

      • The content is read correctly, and if you keep watching, the timestamp on the page refreshes roughly every 20 seconds
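      From outside the cluster, the page can be fetched through the NodePort declared in the Service (8086 here, which assumes the cluster's NodePort range was widened beyond the default 30000-32767). <node-ip> is a placeholder for any node's address; the response should match the index.html written by busybox:

```
$ curl http://<node-ip>:8086
Sun Mar 1 12:51:30 UTC 2020
nfs-busybox-6b677d655f-qcg5c
```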

Problems encountered

Following the steps from the project's GitHub page, no PV was created after the PVC was submitted. The nfs-provisioner logs showed this error:

```
error syncing claim "20eddcd8-1771-44dc-b185-b1225e060c9d": failed to provision volume with StorageClass "example-nfs": error getting NFS server IP for volume: service SERVICE_NAME=nfs-provisioner is not valid; check that it has for ports map[{111 TCP}:true {111 UDP}:true {2049 TCP}:true {20048 TCP}:true] exactly one endpoint, this pod's IP POD_IP=172.30.22.3
```

Cause: the Service declared more ports than the provisioner's validation allows. Keeping only the ports listed in the error (2049/TCP, 20048/TCP, 111/TCP and 111/UDP) and removing the others fixed the problem; the deploy.yml above already contains exactly these four ports.
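The check in the error message also requires the Service to have exactly one endpoint, so if trimming the ports does not resolve it, verify that the Service selector actually matches the provisioner pod:

```
$ kubectl get endpoints nfs-provisioner
```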

