Filesystem storage (also known as a shared filesystem) can be mounted with read/write permission from multiple Pods at the same time, which makes it a good fit for applications that need shared storage, such as FTP servers, web servers, and other shared-storage services.
Creating the Filesystem
A filesystem is created by specifying the desired metadata pool, data pools, and metadata server in a CephFilesystem CRD. An example filesystem definition:
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    # The pool is created as <fsName>-<name>, i.e. myfs-replicated here
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
Apply it, then verify that the MDS pods are up:
[vagrant@master01 examples]$ kubectl apply -f myfs.yaml
cephfilesystem.ceph.rook.io/myfs created
[vagrant@master01 examples]$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME READY STATUS RESTARTS AGE
rook-ceph-mds-myfs-a-5949f8b48-nn9cm 2/2 Running 0 18s
rook-ceph-mds-myfs-b-fd5c79f5d-2sgwq 2/2 Running 0 17s
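The CephFilesystem resource itself also reports status once the operator has reconciled it; as an optional sanity check (output omitted here):

kubectl -n rook-ceph get cephfilesystem myfs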
Verify from the toolbox:
[vagrant@master01 examples]$ kubectl exec -it rook-ceph-tools-66b77b8df5-x97q4 -n rook-ceph -- /bin/ceph status
  cluster:
    id:     f8bdb7b9-12c8-4814-b4a2-6122366ddd1a
    health: HEALTH_WARN
            1 daemons have recently crashed

  services:
    mon: 3 daemons, quorum a,b,c (age 8m)
    mgr: a(active, since 6m), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 7m), 3 in (since 4w)

  data:
    volumes: 1/1 healthy
    pools:   7 pools, 177 pgs
    objects: 32 objects, 463 KiB
    usage:   82 MiB used, 60 GiB / 60 GiB avail
    pgs:     177 active+clean

  io:
    client:   973 B/s rd, 1 op/s rd, 0 op/s wr

  progress:
[vagrant@master01 examples]$ kubectl exec -it rook-ceph-tools-66b77b8df5-x97q4 -n rook-ceph -- /bin/ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-replicated ]
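Because the filesystem was created with activeStandby: true, one MDS daemon serves rank 0 while the second follows it in standby-replay mode (the "1 hot standby" in the status output above). A hedged way to see which daemon holds which role, from the same toolbox pod:

kubectl exec -it rook-ceph-tools-66b77b8df5-x97q4 -n rook-ceph -- /bin/ceph fs status myfs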
Creating a Storage Class
Create a storage class that uses the myfs filesystem. An example definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-replicated

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
Apply it:
[vagrant@master01 examples]$ kubectl apply -f storageclass-myfs.yaml
storageclass.storage.k8s.io/rook-cephfs created
[vagrant@master01 examples]$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 6h56m
rook-ceph-block-ec rook-ceph.rbd.csi.ceph.com Delete Immediate true 6h42m
rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate false 3s
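Note that ALLOWVOLUMEEXPANSION is false for rook-cephfs, unlike the two RBD classes, so PVCs created from this class cannot be resized. If you need resizable volumes, a minimal sketch (assuming the manifest above) is to add the top-level allowVolumeExpansion field and re-apply; the controller-expand secret parameters already present are what the CSI driver uses to perform the resize:

# Top-level StorageClass field, alongside reclaimPolicy;
# this field may be changed on an existing StorageClass.
allowVolumeExpansion: true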
Using the Filesystem Storage Class
First, create a PVC that uses the rook-cephfs storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  # Created in the same namespace as the Pod that will use it
  namespace: cephfs-demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
Apply and check:
[vagrant@master01 rook-ceph]$ kubectl apply -f nginx-pvc.yaml
persistentvolumeclaim/nginx-pvc created
[vagrant@master01 rook-ceph]$ kubectl get pvc -n cephfs-demo
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
nginx-pvc Bound pvc-a5fea40e-9055-4304-ab8b-782082184a3a 1Gi RWX rook-cephfs <unset> 8s
Create an Nginx application that uses nginx-pvc:
apiVersion: v1
kind: Pod
metadata:
  name: test-cephfs-pod
  namespace: cephfs-demo
spec:
  volumes:
    - name: nginx-pvc
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pvc
Apply and verify:
[vagrant@master01 rook-ceph]$ kubectl apply -f test-cephfs-pod.yaml
pod/test-cephfs-pod created
[vagrant@master01 rook-ceph]$ kubectl get pods -n cephfs-demo
NAME READY STATUS RESTARTS AGE
test-cephfs-pod 1/1 Running 0 98s
[vagrant@master01 rook-ceph]$
[vagrant@master01 rook-ceph]$ kubectl -n cephfs-demo exec -it test-cephfs-pod -- /bin/mount | grep nginx
10.101.27.111:6789,10.106.190.243:6789,10.107.18.191:6789:/volumes/csi/csi-vol-0435565e-561a-45d6-99d6-2ce34f5b59c0/a840ec39-e187-4095-b8c9-7ccb1c792626 on /usr/share/nginx/html type ceph (rw,relatime,name=csi-cephfs-node,secret=<hidden>,fsid=00000000-0000-0000-0000-000000000000,acl,mds_namespace=myfs)
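The mount output confirms a CephFS mount (type ceph) served by the three monitors. Because the PVC was requested with ReadWriteMany, the same volume can be mounted by several Pods at once. A minimal sketch to demonstrate this, assuming a second Pod with the illustrative name test-cephfs-pod-2 that mounts the same claim:

apiVersion: v1
kind: Pod
metadata:
  # Hypothetical second consumer of the same PVC
  name: test-cephfs-pod-2
  namespace: cephfs-demo
spec:
  volumes:
    - name: nginx-pvc
      persistentVolumeClaim:
        claimName: nginx-pvc   # same claim used by test-cephfs-pod
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pvc

Once both Pods are running, a file written from one should be readable from the other:

kubectl -n cephfs-demo exec test-cephfs-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl -n cephfs-demo exec test-cephfs-pod-2 -- cat /usr/share/nginx/html/index.html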