NFS Shared Storage
1. Install and Configure NFS
1.1 Install NFS
yum -y install nfs-utils rpcbind
1.2 Create the Shared Directory
[ -d /data/nfs ] || mkdir -p /data/nfs && chown nfsnobody.nfsnobody /data/nfs
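On some newer distributions the nfsnobody account has been removed and the anonymous NFS user is simply nobody (UID/GID 65534). A hedged variant of the ownership step that falls back accordingly:
id nfsnobody &>/dev/null && chown nfsnobody.nfsnobody /data/nfs || chown nobody.nobody /data/nfs   # use nobody where nfsnobody does not exist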
1.3 Edit the Configuration File
cat > /etc/exports << EOF
/data/nfs 192.168.1.0/24(rw,sync,no_root_squash)
EOF
Parameter descriptions
Parameter | Description |
---|---|
/data/nfs | Directory exported (shared) by the NFS server |
192.168.1.0/24 | Client network that is allowed to mount the export |
rw | Read-write access |
sync | Data is committed to stable storage (disk) before the write call returns, preventing data loss if the server crashes |
no_root_squash | When the user accessing the share from a client is root, root privileges are kept on the server instead of being squashed; with the default root_squash, root would be mapped to the anonymous user (usually UID and GID 65534, the nfsnobody/nobody account) |
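If /etc/exports is edited again later while the NFS service is already running, the export list can be reloaded without a restart; a minimal sketch using the standard exportfs tool, useful once the service from step 1.4 is running:
exportfs -rav   # re-export everything listed in /etc/exports
exportfs -v     # show the currently active exports and their options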
1.4 Start NFS
systemctl start rpcbind nfs-server && systemctl enable rpcbind nfs-server
1.5 Check Service Status
$ systemctl status rpcbind nfs-server
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-11-14 21:31:52 CST; 55s ago
Main PID: 21841 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─21841 /sbin/rpcbind -w
Nov 14 21:31:52 ctyun systemd[1]: Starting RPC bind service...
Nov 14 21:31:52 ctyun systemd[1]: Started RPC bind service.
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Mon 2022-11-14 21:31:52 CST; 55s ago
Main PID: 21874 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Nov 14 21:31:52 ctyun systemd[1]: Starting NFS server and services...
Nov 14 21:31:52 ctyun systemd[1]: Started NFS server and services.
1.6 View the NFS Export Records
After NFS starts, the exported shares are recorded in the /var/lib/nfs/etab file:
$ cat /var/lib/nfs/etab
/data/nfs 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
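Before consuming the export from Kubernetes, it can be worth verifying it from a plain NFS client, for example one of the k8s nodes (each node also needs nfs-utils installed so the kubelet can mount NFS volumes). A quick sketch, assuming the server address 192.168.1.96 and an arbitrary mount point /mnt/nfs-test:
showmount -e 192.168.1.96                           # list the exports published by the server
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.1.96:/data/nfs /mnt/nfs-test
touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test    # confirm read-write access
umount /mnt/nfs-test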
2. Use NFS in a Kubernetes Cluster
2.1 Manually Create a PV
2.1.1 Edit the YAML File
:::tip Note
What a workload actually consumes is the PVC, and before a PVC can be used it must first be bound one-to-one to a PV that meets its requirements: the storage capacity and access modes must be compatible, and the storageClassName fields of the PVC and PV must match. Once the PVC is bound to a PV, the PVC object can be referenced directly.
:::
cat > nfs-volume.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfs      # NFS export path
    server: 192.168.1.96 # NFS server address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
2.1.2 Create the PV and PVC
$ kubectl apply -f nfs-volume.yaml
persistentvolume/nfs-pv created
persistentvolumeclaim/nfs-pvc created
2.1.3 View the PV and PVC
View the PV
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 1Gi RWO Retain Bound test/nfs-pvc manual 30s
View the PVC
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound nfs-pv 1Gi RWO manual 29s
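If a PVC ever fails to bind and stays Pending, the binding conditions described in the tip above can be inspected with describe; the Events section reports why no PV matched (output omitted here):
kubectl describe pvc nfs-pvc
kubectl describe pv nfs-pv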
2.1.4 Create a Pod That References the PVC
Edit the YAML file
cat > nfs-pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-volumes
spec:
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfs
          subPath: test-volumes               # mount the test-volumes subdirectory of the export instead of its root
          mountPath: "/usr/share/nginx/html"
EOF
Create the Pod
$ kubectl apply -f nfs-pod.yaml
pod/test-volumes created
Check the Pod
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-volumes 1/1 Running 0 31s 100.108.251.8 ctyun <none> <none>
Because the PV is empty at this point, the mount hides the original contents of the nginx container's /usr/share/nginx/html directory, so there is nothing to serve when the application is accessed:
$ curl 100.108.251.8
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
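This can be confirmed directly: the mounted directory inside the container is empty, and because of subPath the test-volumes subdirectory has been created under the export on the NFS server. A quick check (the container name web and the paths come from the manifests above):
kubectl exec test-volumes -c web -- ls -A /usr/share/nginx/html   # empty, so nginx has nothing to serve
ls -ld /data/nfs/test-volumes                                     # run this on the NFS server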
Write a file into the PV directory on the NFS server
echo 'test-volumes' > /data/nfs/test-volumes/index.html
Access the application again and the newly written content is returned
$ curl 100.108.251.8
test-volumes
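When the experiment is finished, the resources can be cleaned up in the usual way. Because the PV uses persistentVolumeReclaimPolicy: Retain, deleting only the PVC would leave the PV in the Released state, and in either case the files under /data/nfs/test-volumes remain on the NFS server; a sketch of the cleanup:
kubectl delete -f nfs-pod.yaml
kubectl delete -f nfs-volume.yaml   # removes the PVC and PV objects; data on the NFS server is retained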