"Cloud computing is often far more secure than traditional computing, because companies like Google and Amazon can attract and retain cyber-security personnel of a higher quality than many governmental agencies." (Vivek Kundra, Executive Vice President at Salesforce.com)

Inspiration:


https://schroederdennis.de/allgemein/ist-gluster-unbrauchbar-im-docker-swarm/
https://github.com/BryceAshey/raspberry-pi-kubernetes-cluster/blob/master/docs/gluster-setup.md
https://www.mytinydc.com/en/provisionner-filer-glusterfs/
https://de.wikipedia.org/wiki/GlusterFS
https://www.howtoforge.com/tutorial/high-availability-storage-with-glusterfs-on-ubuntu-1804/
https://www.gopeedesignstudio.com/2018/07/13/glusterfs-on-arm/
https://kubernetes.io/docs/concepts/storage/storage-classes/
https://github.com/heketi/heketi
https://medium.com/searce/glusterfs-dynamic-provisioning-using-heketi-as-external-storage-with-gke-bd9af17434e5
https://rpi4cluster.com/k3s/k3s-storage-setting/
https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/
https://ralph.blog.imixs.com/2020/03/03/kubernetes-and-glusterfs/
https://faun.pub/configuring-ha-kubernetes-cluster-on-bare-metal-servers-with-glusterfs-metallb-2-3-c9e0b705aa3d


GlusterFS is a distributed storage system. Its individual components, the "bricks", are spread across the cluster. There are several kinds of replication and fault tolerance; see the links above for details.
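
For illustration only (the volume names and brick paths here are placeholders and not part of the setup below): a replicated volume keeps a full copy of the data on every brick, while the dispersed volume type used later in this article stripes data plus parity across the bricks.

sudo gluster volume create demo-repl replica 3 transport tcp pc1:/bricks/repl pc2:/bricks/repl pc3:/bricks/repl     # three full copies, 1/3 of the raw space usable
sudo gluster volume create demo-disp disperse 3 redundancy 1 transport tcp pc1:/bricks/disp pc2:/bricks/disp pc3:/bricks/disp     # data + parity, 2/3 of the raw space usable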

In this example GlusterFS is installed and we end up with static volumes that can be addressed via PVCs. There is no dynamic provisioning of disks.

First we install the software:

ansible pc -m shell -a 'sudo apt-get update'
ansible pc -m shell -a 'sudo apt-get -y upgrade'
ansible pc -m shell -a 'sudo modprobe fuse'
ansible pc -m shell -a 'sudo apt-get install -y xfsprogs glusterfs-server'
ansible pc -m shell -a 'sudo systemctl start glusterd'
ansible pc -m shell -a 'sudo systemctl enable glusterd'
ansible pc -m shell -a 'sudo systemctl status glusterd'
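
The modprobe above only loads the fuse module until the next reboot. If it should be loaded permanently, one option (an assumption, not part of the original steps; on many kernels fuse is loaded automatically anyway) is an entry in /etc/modules:

ansible pc -m shell -a 'grep -qx fuse /etc/modules || echo fuse | sudo tee -a /etc/modules'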

From one node, connect all the other nodes as peers:

ansible pc1 -m shell -a 'sudo gluster peer probe pc2'
ansible pc1 -m shell -a 'sudo gluster peer probe pc3'
ansible pc1 -m shell -a 'sudo gluster peer probe pc4'
ansible pc1 -m shell -a 'sudo gluster peer probe pc5'
ansible pc1 -m shell -a 'sudo gluster peer status'
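
The peer probe assumes that the names pc2 to pc5 resolve on pc1. If they are not in DNS, one option is a static /etc/hosts entry on every node; treat this as a sketch rather than a required step, and note that the IPs are taken from the endpoint definitions further below (pcN = 192.168.0.20N):

ansible pc -m shell -a 'printf "192.168.0.201 pc1\n192.168.0.202 pc2\n192.168.0.203 pc3\n192.168.0.204 pc4\n192.168.0.205 pc5\n" | sudo tee -a /etc/hosts'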

Then the disks are formatted. The USB sticks show up as /dev/sda on all machines (otherwise the proper Ansible variables should be used; see the sketch after the formatting commands below).

ansible pc -m shell -a 'sudo wipefs -a /dev/sda'
ansible pc -m shell -a '(
echo g # Create a new empty GPT partition table
echo n # Add a new partition
echo   # Just press enter to accept the default
echo   # Just press enter to accept the default
echo   # Just press enter to accept the default
echo w # Write changes
) | sudo fdisk -w auto /dev/sda'
ansible pc -m shell -a 'sudo mkfs.xfs -f -L pcvol-brick1 /dev/sda1'
ansible pc -m shell -a 'sudo printf $(sudo blkid -o export /dev/sda1|grep PARTUUID)" /data/glusterfs/pcvol/brick1 xfs defaults,noatime 1 2\n" | sudo tee -a /etc/fstab'
ansible pc -m shell -a 'sudo cat /etc/fstab'
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/'
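
If the device name is not /dev/sda on every host, a host variable can replace the hard-coded path. A minimal sketch, assuming a per-host variable gluster_device (e.g. set in host_vars/pc2.yml); the variable name is an assumption and not part of the original inventory:

# host_vars/pc2.yml (assumption):  gluster_device: /dev/sdb
ansible pc -m shell -a 'sudo wipefs -a {{ gluster_device | default("/dev/sda") }}'
ansible pc -m shell -a 'sudo mkfs.xfs -f -L pcvol-brick1 {{ gluster_device | default("/dev/sda") }}1'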


Now we mount the disk on all nodes and configure GlusterFS.

ansible pc -m shell -a 'sudo mount /data/glusterfs/pcvol/brick1'
ansible pc -m shell -a 'df -h | grep -i sda'

For our cluster we need several volumes, so we first make a plan:

          PC1   PC2   PC3   PC4   PC5   Size  Total
web        x     x     x                   5     10
k8s        x           x     x            14     28
db         x           x           x       9     18
minio            x           x     x      15     30
spare      x     x                 x       1      2
Sum       29    21    28    29    25     132     88
Capacity  29    29    29    29    29     145    145
Free       0     8     1     0     4      13     57

(All values in GB. "Size" is the brick size per node, "Total" the net volume size. In the Sum row, 132 GB is the gross space used by all bricks and 88 GB the resulting net capacity; in the Free row, 13 GB is unused space on the sticks and 57 GB the raw capacity minus the net capacity.)

Five volumes are planned: web, k8s, db, minio and spare, with a total net capacity of 88 GB. They are spread across the nodes as "dispersed volumes", i.e. for every two data bricks there is one additional brick holding parity information in case a USB stick is lost. This yields a net capacity of 88 GB out of a gross capacity of 145 GB (of which 13 GB remain as unused free disk space).
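
The arithmetic behind the table, written out:

# Net capacity of a dispersed volume = (bricks - redundancy) x brick size
#   web:   (3 - 1) x  5 GB = 10 GB net (15 GB gross)
#   k8s:   (3 - 1) x 14 GB = 28 GB net (42 GB gross)
#   db:    (3 - 1) x  9 GB = 18 GB net (27 GB gross)
#   minio: (3 - 1) x 15 GB = 30 GB net (45 GB gross)
#   spare: (3 - 1) x  1 GB =  2 GB net ( 3 GB gross)
# Total: 88 GB net, 132 GB gross, on 5 x 29 GB = 145 GB of raw disk.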

We now create the volumes according to the plan:

# Dispersed Volume
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/web'
ansible pc1 -m shell -a 'sudo gluster volume create web disperse 3 redundancy 1 transport tcp pc1:/data/glusterfs/pcvol/brick1/web pc2:/data/glusterfs/pcvol/brick1/web pc3:/data/glusterfs/pcvol/brick1/web'
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/k8s'
ansible pc1 -m shell -a 'sudo gluster volume create k8s disperse 3 redundancy 1 transport tcp pc1:/data/glusterfs/pcvol/brick1/k8s pc3:/data/glusterfs/pcvol/brick1/k8s pc4:/data/glusterfs/pcvol/brick1/k8s'
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/db'
ansible pc1 -m shell -a 'sudo gluster volume create db disperse 3 redundancy 1 transport tcp pc1:/data/glusterfs/pcvol/brick1/db pc3:/data/glusterfs/pcvol/brick1/db pc5:/data/glusterfs/pcvol/brick1/db'
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/minio'
ansible pc1 -m shell -a 'sudo gluster volume create minio disperse 3 redundancy 1 transport tcp pc2:/data/glusterfs/pcvol/brick1/minio pc4:/data/glusterfs/pcvol/brick1/minio pc5:/data/glusterfs/pcvol/brick1/minio'
ansible pc -m shell -a 'sudo mkdir -p /data/glusterfs/pcvol/brick1/spare'
ansible pc1 -m shell -a 'sudo gluster volume create spare disperse 3 redundancy 1 transport tcp pc1:/data/glusterfs/pcvol/brick1/spare pc2:/data/glusterfs/pcvol/brick1/spare pc5:/data/glusterfs/pcvol/brick1/spare'

Now we start the volumes.

# Start the volumes
ansible pc1 -m shell -a 'sudo gluster volume start web'
ansible pc1 -m shell -a 'sudo gluster volume start k8s'
ansible pc1 -m shell -a 'sudo gluster volume start db'
ansible pc1 -m shell -a 'sudo gluster volume start minio'
ansible pc1 -m shell -a 'sudo gluster volume start spare'
# Check the volumes and their distribution
ansible pc1 -m shell -a 'sudo gluster pool list'
ansible pc1 -m shell -a 'sudo gluster volume info'
ansible pc -m shell -a 'sudo tree -a /data/glusterfs'

Find out which ports the bricks are using.

ansible@monitoring:~$ ansible pc -m shell -a 'service glusterd status | grep brick-port'
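
The same information can also be read from Gluster itself; the "TCP Port" shown per brick is what has to go into the Endpoints objects below (compare the volume status detail output near the end of this article):

ansible pc1 -m shell -a 'sudo gluster volume status'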


Create the Endpoints in K8s.

ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
#
# Define all Endpoints for GlusterFS
#
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-web
  namespace: default
subsets:
- addresses:              
  - ip: 192.168.0.201
  ports:                  
  - port: 49155
    protocol: TCP
- addresses:              
  - ip: 192.168.0.202
  ports:                  
  - port: 49155
    protocol: TCP
- addresses:              
  - ip: 192.168.0.203
  ports:                  
  - port: 49155
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-k8s
  namespace: default
subsets:
- addresses:              
  - ip: 192.168.0.201
  ports:                  
  - port: 49156
    protocol: TCP
- addresses:              
  - ip: 192.168.0.203
  ports:                  
  - port: 49156
    protocol: TCP
- addresses:              
  - ip: 192.168.0.204
  ports:                  
  - port: 49155
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-db
  namespace: default
subsets:
- addresses:              
  - ip: 192.168.0.201
  ports:                  
  - port: 49157
    protocol: TCP
- addresses:              
  - ip: 192.168.0.203
  ports:                  
  - port: 49157
    protocol: TCP
- addresses:              
  - ip: 192.168.0.205
  ports:                  
  - port: 49155
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-minio
  namespace: default
subsets:
- addresses:              
  - ip: 192.168.0.202
  ports:                  
  - port: 49156
    protocol: TCP
- addresses:              
  - ip: 192.168.0.204
  ports:                  
  - port: 49156
    protocol: TCP
- addresses:              
  - ip: 192.168.0.205
  ports:                  
  - port: 49156
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-spare
  namespace: default
subsets:
- addresses:              
  - ip: 192.168.0.201
  ports:                  
  - port: 49158
    protocol: TCP
- addresses:              
  - ip: 192.168.0.202
  ports:                  
  - port: 49157
    protocol: TCP
- addresses:              
  - ip: 192.168.0.205
  ports:                  
  - port: 49157
    protocol: TCP
---
EOF
'

ansible@monitoring:~$ ansible pc1 -m shell -a 'microk8s kubectl get endpoints --all-namespaces '
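
The upstream Kubernetes GlusterFS example additionally creates a Service with the same name as each Endpoints object so that the endpoints persist. This is optional and was not part of the setup above; a sketch for the web volume only (the port number is arbitrary for a selector-less service):

ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: gluster-web
  namespace: default
spec:
  ports:
  - port: 1
EOF
'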


Create the Persistent Volumes.

ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
#
# Create the Persistent Volumes for GlusterFS
#
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster-web
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-web
    path: /web
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster-k8s
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 14Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-k8s
    path: /k8s
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster-db
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 9Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-db
    path: /db
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster-minio
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-minio
    path: /minio
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster-spare
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-spare
    path: /spare
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
EOF
'
ansible pc1 -m shell -a 'microk8s kubectl get pv --all-namespaces '

Create the matching Persistent Volume Claims.

ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
#
# Create the Persistent Volume Claims for GlusterFS
#
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-web
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 5Gi   
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-k8s
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 14Gi   
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-db
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 9Gi   
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-minio
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 15Gi   
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-spare
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi   
---
EOF
'
ansible pc1 -m shell -a 'microk8s kubectl get pvc --all-namespaces '
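
Binding here relies only on matching capacity and access mode. If a claim has to be tied to one specific PV, it can be pinned with spec.volumeName; a sketch for the web claim as an alternative to the definition above (it would have to be created this way from the start, since a bound claim cannot be changed afterwards):

ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-web
spec:
  accessModes:
  - ReadWriteMany
  volumeName: pv-gluster-web
  resources:
    requests:
      storage: 5Gi
EOF
'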

Now we can deploy the busybox.

alfred@pc1:~/pc1_app$ cat busybox_glusterfs.yaml
#
# Test for all of the mounts :)
#
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: busybox-glusterfs
 namespace: default
 labels:
   app: busybox-glusterfs
spec:
 replicas: 1
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: busybox-glusterfs
 template:
   metadata:
     labels:
       app: busybox-glusterfs
   spec:
     volumes:
     - name: web
       persistentVolumeClaim:
         claimName: pvc-gluster-web
     - name: k8s
       persistentVolumeClaim:
         claimName: pvc-gluster-k8s
     - name: db
       persistentVolumeClaim:
         claimName: pvc-gluster-db
     - name: minio
       persistentVolumeClaim:
         claimName: pvc-gluster-minio
     - name: spare
       persistentVolumeClaim:
         claimName: pvc-gluster-spare
     containers:
     - name: busybox-glusterfs
       image: busybox
       command:
          - sh
          - -c
          - 'while true; do echo "`date` [`hostname`] Hello."; sleep 5; done'
       imagePullPolicy: IfNotPresent
       ports:
        - containerPort: 443
        - containerPort: 80
       volumeMounts:
        - mountPath: /web
          name: web
        - mountPath: /k8s
          name: k8s
        - mountPath: /db
          name: db
        - mountPath: /minio
          name: minio
        - mountPath: /spare
          name: spare
---
apiVersion: v1
kind: Service
metadata:
 name: busybox-glusterfs
 namespace: default
 labels:
   name: busybox-glusterfs
spec:
 ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
 selector:
     app: busybox-glusterfs

alfred@pc1:~/pc1_app$ k apply -f busybox_glusterfs.yaml
deployment.apps/busybox-glusterfs created
service/busybox-glusterfs created
alfred@pc1:~/pc1_app$

Testing the container.

alfred@pc1:~/pc1_app$ k get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/busybox-glusterfs-559d9bd968-7mng5   1/1     Running   0          32s

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kubernetes          ClusterIP   10.152.183.1     <none>        443/TCP          28h
service/busybox-glusterfs   ClusterIP   10.152.183.201   <none>        80/TCP,443/TCP   33s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox-glusterfs   1/1     1            1           33s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-glusterfs-559d9bd968   1         1         1       32s
alfred@pc1:~/pc1_app$ k exec --stdin --tty busybox-glusterfs-559d9bd968-7mng5 -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.3G      5.9G     49.9G  11% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.8G         0      3.8G   0% /sys/fs/cgroup
192.168.0.202:/web       57.8G      1.0G     56.7G   2% /web
192.168.0.201:/k8s       57.8G      1.0G     56.7G   2% /k8s
192.168.0.205:/db        57.8G      1.0G     56.7G   2% /db
192.168.0.202:/minio     57.8G      1.0G     56.7G   2% /minio
192.168.0.201:/spare     57.8G      1.0G     56.7G   2% /spare
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/hosts
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /dev/termination-log
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/hostname
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.8G     12.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     3.8G         0      3.8G   0% /proc/scsi
tmpfs                     3.8G         0      3.8G   0% /sys/firmware
/ # echo "hallo" > /web/hallo.txt
/ # echo "hallo" > /k8s/hallo.txt
/ # echo "hallo" > /db/hallo.txt
/ # echo "hallo" > /minio/hallo.txt
/ # echo "hallo" > /spare/hallo.txt
/ # ls -lisa /web
total 5
      1      0 drwxr-xr-x    3 root     root            41 May 21 21:18 .
 771331      4 drwxr-xr-x    1 root     root          4096 May 21 21:17 ..
11333069524118470030      1 -rw-r--r--    1 root     root             6 May 21 21:18 hallo.txt
/ # exit
alfred@pc1:~/pc1_app$ k delete -f busybox_glusterfs.yaml
deployment.apps "busybox-glusterfs" deleted
service "busybox-glusterfs" deleted
alfred@pc1:~/pc1_app$

After stopping the busybox we deploy everything again and check what the files look like.

alfred@pc1:~/pc1_app$ k apply -f busybox_glusterfs.yaml
deployment.apps/busybox-glusterfs created
service/busybox-glusterfs created
alfred@pc1:~/pc1_app$ k get all
NAME                                     READY   STATUS              RESTARTS   AGE
pod/busybox-glusterfs-559d9bd968-zk5pd   0/1     ContainerCreating   0          5s

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes          ClusterIP   10.152.183.1    <none>        443/TCP          28h
service/busybox-glusterfs   ClusterIP   10.152.183.60   <none>        80/TCP,443/TCP   6s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox-glusterfs   0/1     1            0           6s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-glusterfs-559d9bd968   1         1         0       6s
alfred@pc1:~/pc1_app$ k exec --stdin --tty busybox-glusterfs-559d9bd968-zk5pd -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.3G      5.9G     49.9G  11% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.8G         0      3.8G   0% /sys/fs/cgroup
192.168.0.201:/web       57.8G      1.0G     56.7G   2% /web
192.168.0.201:/k8s       57.8G      1.0G     56.7G   2% /k8s
192.168.0.205:/db        57.8G      1.0G     56.7G   2% /db
192.168.0.205:/minio     57.8G      1.0G     56.7G   2% /minio
192.168.0.205:/spare     57.8G      1.0G     56.7G   2% /spare
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/hosts
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /dev/termination-log
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/hostname
/dev/mmcblk0p2           58.3G      5.9G     49.9G  11% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.8G     12.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     3.8G         0      3.8G   0% /proc/scsi
tmpfs                     3.8G         0      3.8G   0% /sys/firmware
/ # ls -lisa /k8s
total 5
      1      0 drwxr-xr-x    3 root     root            41 May 21 21:18 .
 771332      4 drwxr-xr-x    1 root     root          4096 May 21 21:21 ..
11876416863091892302      1 -rw-r--r--    1 root     root             6 May 21 21:18 hallo.txt
/ # cat /minio/hallo.txt
hallo
/ #

Our files are distributed across the nodes in such a way that each of the three bricks of a volume holds an entry.

ansible@monitoring:~$ ansible pc -m shell -a 'tree -lsh /data/ '
pc1 | CHANGED | rc=0 >>
/data/
└── [4.0K]  glusterfs
    └── [4.0K]  pcvol
        └── [  64]  brick1
            ├── [  41]  db
            │   └── [ 512]  hallo.txt
            ├── [  41]  k8s
            │   └── [ 512]  hallo.txt
            ├── [   6]  minio
            ├── [  41]  spare
            │   └── [ 512]  hallo.txt
            └── [  41]  web
                └── [ 512]  hallo.txt

8 directories, 4 files
pc2 | CHANGED | rc=0 >>
/data/
└── [4.0K]  glusterfs
    └── [4.0K]  pcvol
        └── [  64]  brick1
            ├── [   6]  db
            ├── [   6]  k8s
            ├── [  41]  minio
            │   └── [ 512]  hallo.txt
            ├── [  41]  spare
            │   └── [ 512]  hallo.txt
            └── [  41]  web
                └── [ 512]  hallo.txt

8 directories, 3 files
pc4 | CHANGED | rc=0 >>
/data/
└── [4.0K]  glusterfs
    └── [4.0K]  pcvol
        └── [  64]  brick1
            ├── [   6]  db
            ├── [  41]  k8s
            │   └── [ 512]  hallo.txt
            ├── [  41]  minio
            │   └── [ 512]  hallo.txt
            ├── [   6]  spare
            └── [   6]  web

8 directories, 2 files
pc5 | CHANGED | rc=0 >>
/data/
└── [4.0K]  glusterfs
    └── [4.0K]  pcvol
        └── [  64]  brick1
            ├── [  41]  db
            │   └── [ 512]  hallo.txt
            ├── [   6]  k8s
            ├── [  41]  minio
            │   └── [ 512]  hallo.txt
            ├── [  41]  spare
            │   └── [ 512]  hallo.txt
            └── [   6]  web

8 directories, 3 files
pc3 | CHANGED | rc=0 >>
/data/
└── [4.0K]  glusterfs
    └── [4.0K]  pcvol
        └── [  64]  brick1
            ├── [  41]  db
            │   └── [ 512]  hallo.txt
            ├── [  41]  k8s
            │   └── [ 512]  hallo.txt
            ├── [   6]  minio
            ├── [   6]  spare
            └── [  41]  web
                └── [ 512]  hallo.txt

8 directories, 3 files
ansible@monitoring:~$
 

We now have persistent cluster storage. To see how much of it is still free, the following command helps:

alfred@pc1:~/pc1_app$ sudo gluster volume status web detail
Status of volume: web
------------------------------------------------------------------------------
Brick                : Brick pc1:/data/glusterfs/pcvol/brick1/web
TCP Port             : 49155               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 105572              
File System          : xfs                 
Device               : /dev/sda1           
Mount Options        : rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512                 
Disk Space Free      : 28.7GB              
Total Disk Space     : 28.9GB              
Inode Count          : 15154624            
Free Inodes          : 15154544            
------------------------------------------------------------------------------
Brick                : Brick pc2:/data/glusterfs/pcvol/brick1/web
TCP Port             : 49155               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 79568               
File System          : xfs                 
Device               : /dev/sda1           
Mount Options        : rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512                 
Disk Space Free      : 28.7GB              
Total Disk Space     : 28.9GB              
Inode Count          : 15154624            
Free Inodes          : 15154562            
------------------------------------------------------------------------------
Brick                : Brick pc3:/data/glusterfs/pcvol/brick1/web
TCP Port             : 49155               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 88043               
File System          : xfs                 
Device               : /dev/sda1           
Mount Options        : rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512                 
Disk Space Free      : 28.7GB              
Total Disk Space     : 28.9GB              
Inode Count          : 15154624            
Free Inodes          : 15154562
         

If the mount of a new busybox hangs, re-applying the configuration helps (for whatever reason).

k apply -f glusterfs.yaml
k apply -f glusterfs_pv.yaml
k apply -f glusterfs_pvc.yaml
k delete -f busybox_glusterfs.yaml
k get all
k apply -f busybox_glusterfs.yaml
k get all
alfred@pc1:~/pc1_app$ k exec --stdin --tty busybox-glusterfs-559d9bd968-nhxd7 -- sh
/ # ls -lisah /spare
total 7K     
      1      0 drwxr-xr-x    3 root     root          57 May 22 05:17 .
 896981      4 drwxr-xr-x    1 root     root        4.0K May 22 07:56 ..
9245963184356017741      2 -rw-r--r--    1 root     root        1.3K May 22 05:20 date.txt
10899994604977992651      1 -rw-r--r--    1 root     root           6 May 21 21:19 hallo.txt
/ #

Everything is back.
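
If the mount hangs again, the pod events and the Gluster logs are a reasonable first place to look before re-applying everything; a sketch (the label selector and log path are assumptions, not part of the original workflow):

k describe pod -l app=busybox-glusterfs                                   # mount errors show up under Events
ansible pc -m shell -a 'sudo tail -n 20 /var/log/glusterfs/glusterd.log'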

Another test: the pod is running on node 2. We change data in /spare, then shut down node 2. The pod comes back up on node 4 on its own (a node that does not even hold a spare brick). All data is there.

Now a full cluster reboot. The pod runs on node 4 again. All data is there.
We now have redundant, HA-capable cluster storage. GlusterFS is not as comfortable as Longhorn, which picks its nodes itself and re-creates replicas on its own: with disperse 3 / redundancy 1, losing one of the three brick nodes already removes all redundancy, and losing a second one takes the volume offline until a brick comes back. Longhorn would try to rebuild a replica on some other node immediately after every failure.
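
Whether a volume is still healthy after such a failure can be checked on the Gluster side; a short sketch:

ansible pc1 -m shell -a 'sudo gluster volume status spare'      # shows which bricks are online
ansible pc1 -m shell -a 'sudo gluster volume heal spare info'   # lists entries still waiting to be healed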