"With the cloud, individuals and small businesses can snap their fingers and instantly set up enterprise-class services." (Roy Stephan, Founder and CEO of PierceMatrix)

Inspiration:


https://opensource.com/article/21/1/ceph-raspberry-pi
https://github.com/ceph/ceph
https://github.com/CyberHippo/Ceph-Pi
https://dev.to/ingoleajinkya/kubernetes-storage-using-ceph-4lbp
https://rpi4cluster.com/k3s/k3s-storage-setting/
https://github.com/longhorn/longhorn
https://longhorn.io/docs/0.8.1/advanced-resources/os-distro-specific/csi-on-k3s/
https://github.com/balchua/do-microk8s/blob/master/docs/longhorn.md


Wipe the disks, then format them and add them to /etc/fstab.

ansible@monitoring:~$ ansible pc -m shell -a 'sudo wipefs -a /dev/sda'
ansible@monitoring:~$ ansible pc -m shell -a 'sudo mkfs.ext4 /dev/sda'
ansible@monitoring:~$ ansible pc -m shell -a 'sudo mkdir /var/lib/longhorn' # This is Longhorn's default data path
ansible@monitoring:~$ ansible pc -m shell -a 'sudo mount /dev/sda /var/lib/longhorn'
ansible@monitoring:~$ ansible pc -m shell -a 'printf "%s /var/lib/longhorn ext4 defaults 0 2\n" "$(sudo blkid -o export /dev/sda | grep ^UUID=)" | sudo tee -a /etc/fstab'
ansible@monitoring:~$ ansible pc -m shell -a 'cat /etc/fstab'
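
A quick sanity check that the mount is actually in place on every node could look like this (a sketch using the same Ansible inventory as above):

ansible@monitoring:~$ ansible pc -m shell -a 'df -h /var/lib/longhorn'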

Installation of open-iscsi.

ansible pc -m shell -a 'sudo apt-get update'
ansible pc -m shell -a 'sudo apt-get -y install open-iscsi'
ansible pc -m shell -a 'sudo systemctl enable iscsid'
ansible pc -m shell -a 'sudo systemctl start iscsid'
ansible pc -m shell -a 'sudo systemctl status iscsid'

Installation of Longhorn.

alfred@pc1:~/longhorn$ microk8s helm3 repo add longhorn https://charts.longhorn.io
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/2215/credentials/client.config
"longhorn" has been added to your repositories
alfred@pc1:~/longhorn$ microk8s helm3 repo update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/2215/credentials/client.config
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "longhorn" chart repository
...Successfully got an update from the "openebs" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈
alfred@pc1:~/longhorn$
alfred@pc1:~/longhorn$ microk8s helm3 install longhorn longhorn/longhorn --namespace longhorn-system \
>   --set defaultSettings.defaultDataPath="/var/lib/longhorn" \
>   --set csi.kubeletRootDir="/var/snap/microk8s/common/var/lib/kubelet"
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/2215/credentials/client.config
W0519 14:28:00.635132  444704 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0519 14:28:01.115431  444704 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: longhorn
LAST DEPLOYED: Wed May 19 14:27:58 2021
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Longhorn is now installed on the cluster!

Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.

Visit our documentation at https://longhorn.io/docs/
alfred@pc1:~/longhorn$

On the system, this looks as follows:

alfred@pc1:~/longhorn$ k get all -n longhorn-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/longhorn-ui-5879656c55-k4z2g               1/1     Running   0          2m34s
pod/longhorn-manager-sp9ll                     1/1     Running   0          2m34s
pod/instance-manager-r-3d26c2a0                1/1     Running   0          2m12s
pod/instance-manager-e-380501dd                1/1     Running   0          2m13s
pod/longhorn-manager-d79xw                     1/1     Running   0          2m34s
pod/longhorn-manager-7bwgt                     1/1     Running   0          2m34s
pod/longhorn-driver-deployer-8d69c5cb9-qfjm2   1/1     Running   0          2m34s
pod/instance-manager-e-9d90f044                1/1     Running   0          2m11s
pod/longhorn-manager-dxgkm                     1/1     Running   1          2m34s
pod/engine-image-ei-611d1496-9lg6t             1/1     Running   0          2m10s
pod/engine-image-ei-611d1496-n8pzr             1/1     Running   0          2m1s
pod/longhorn-manager-s8hnl                     1/1     Running   1          2m34s
pod/instance-manager-r-f84f5253                1/1     Running   0          2m10s
pod/engine-image-ei-611d1496-gk2nz             1/1     Running   0          2m11s
pod/instance-manager-r-0129bb4a                1/1     Running   0          2m1s
pod/instance-manager-e-11ac556f                1/1     Running   0          2m2s
pod/instance-manager-e-9acaff8b                1/1     Running   0          111s
pod/instance-manager-r-eca7f8cb                1/1     Running   0          110s
pod/instance-manager-r-91388de3                1/1     Running   0          111s
pod/engine-image-ei-611d1496-pzw2n             1/1     Running   0          2m
pod/engine-image-ei-611d1496-cx8nc             1/1     Running   0          2m
pod/instance-manager-e-5331d714                1/1     Running   0          110s
pod/csi-attacher-5df5c79d4b-9j94z              1/1     Running   0          110s
pod/csi-attacher-5df5c79d4b-nhzbt              1/1     Running   0          110s
pod/csi-provisioner-547dfff5dd-jz6f9           1/1     Running   0          105s
pod/csi-resizer-5d6f844cd8-wpr5x               1/1     Running   0          100s
pod/csi-resizer-5d6f844cd8-r4bqk               1/1     Running   0          100s
pod/csi-provisioner-547dfff5dd-9rlfh           1/1     Running   0          105s
pod/csi-provisioner-547dfff5dd-kqj7j           1/1     Running   0          108s
pod/csi-snapshotter-76c6f569f9-xs2x8           1/1     Running   0          96s
pod/csi-attacher-5df5c79d4b-249pd              1/1     Running   0          110s
pod/csi-snapshotter-76c6f569f9-pfn2h           1/1     Running   0          99s
pod/longhorn-csi-plugin-fztlq                  2/2     Running   0          95s
pod/csi-resizer-5d6f844cd8-zvbdx               1/1     Running   0          100s
pod/longhorn-csi-plugin-bmsz2                  2/2     Running   0          95s
pod/longhorn-csi-plugin-cwxmn                  2/2     Running   0          96s
pod/longhorn-csi-plugin-frrf7                  2/2     Running   0          95s
pod/longhorn-csi-plugin-zrgdm                  2/2     Running   0          95s
pod/csi-snapshotter-76c6f569f9-4555p           1/1     Running   0          96s

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/longhorn-frontend   ClusterIP   10.152.183.227   <none>        80/TCP      2m35s
service/longhorn-backend    ClusterIP   10.152.183.111   <none>        9500/TCP    2m35s
service/csi-attacher        ClusterIP   10.152.183.79    <none>        12345/TCP   111s
service/csi-provisioner     ClusterIP   10.152.183.209   <none>        12345/TCP   110s
service/csi-resizer         ClusterIP   10.152.183.133   <none>        12345/TCP   108s
service/csi-snapshotter     ClusterIP   10.152.183.115   <none>        12345/TCP   100s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/longhorn-manager           5         5         5       5            5           <none>          2m35s
daemonset.apps/engine-image-ei-611d1496   5         5         5       5            5           <none>          2m13s
daemonset.apps/longhorn-csi-plugin        5         5         5       5            5           <none>          99s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/longhorn-ui                1/1     1            1           2m35s
deployment.apps/longhorn-driver-deployer   1/1     1            1           2m35s
deployment.apps/csi-attacher               3/3     3            3           111s
deployment.apps/csi-provisioner            3/3     3            3           110s
deployment.apps/csi-resizer                3/3     3            3           105s
deployment.apps/csi-snapshotter            3/3     3            3           100s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/longhorn-ui-5879656c55               1         1         1       2m34s
replicaset.apps/longhorn-driver-deployer-8d69c5cb9   1         1         1       2m34s
replicaset.apps/csi-attacher-5df5c79d4b              3         3         3       110s
replicaset.apps/csi-provisioner-547dfff5dd           3         3         3       108s
replicaset.apps/csi-resizer-5d6f844cd8               3         3         3       101s
replicaset.apps/csi-snapshotter-76c6f569f9           3         3         3       99s
alfred@pc1:~/longhorn$


Set the correct default among the storage classes:

alfred@pc1:~/longhorn$ k edit sc microk8s-hostpath
storageclass.storage.k8s.io/microk8s-hostpath edited
alfred@pc1:~/longhorn$ k get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  40h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  40h
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  40h
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  40h
longhorn (default)          driver.longhorn.io                                         Delete          Immediate              true                   9m18s
microk8s-hostpath           microk8s.io/hostpath                                       Delete          Immediate              false                  4d1h
alfred@pc1:~/longhorn$
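
The interactive edit above presumably just clears the default annotation on microk8s-hostpath, since the Longhorn chart already marks its own StorageClass as default. The same change as a non-interactive sketch:

alfred@pc1:~/longhorn$ k patch sc microk8s-hostpath -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'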

Define the frontend service as a LoadBalancer:

alfred@pc1:~/longhorn$ k edit service longhorn-frontend -n longhorn-system
service/longhorn-frontend edited
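
The edit only changes spec.type of the service; the same thing as a one-liner (a sketch):

alfred@pc1:~/longhorn$ k patch service longhorn-frontend -n longhorn-system -p '{"spec": {"type": "LoadBalancer"}}'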


Then we open the frontend:

 



We create a volume.
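
The volume here is created via the Longhorn frontend. As an alternative, a PVC against the default longhorn StorageClass would provision a volume dynamically; a minimal sketch (name and size are illustrative, not the volume used below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi

Applied with k apply -f, this yields a Longhorn volume with the default three replicas.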

On the nodes, it looks like this:




There are three replicas (on nodes pc2, pc4, and pc5). On the system, it looks as follows:

alfred@pc1:~/longhorn$ k get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
alfred   Bound    alfred   20Gi       RWX            longhorn-static   6s
alfred@pc1:~/longhorn$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
pvc-cc0c8a93-9264-475b-89b2-02f25e261f7a   10Gi       RWO            Delete           Bound    portainer/portainer                 microk8s-hostpath            3d15h
pvc-ef70ee35-f466-4ffb-8a1d-9c04181505ca   20Gi       RWX            Delete           Bound    container-registry/registry-claim   microk8s-hostpath            3d15h
alfred                                     20Gi       RWX            Retain           Bound    default/alfred                      longhorn-static              18s
alfred@pc1:~/longhorn$

Test with BusyBox:

alfred@pc1:~/longhorn$ cat busybox.yaml
#
# Test for all kinds of mounts :)
#
apiVersion: apps/v1
kind: Deployment
metadata:
 name: busybox
 labels:
   app: busybox
spec:
 replicas: 1
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: busybox
 template:
   metadata:
     labels:
       app: busybox
   spec:
     volumes:
     - name: alfred
       persistentVolumeClaim:
         claimName: pvc-alfred-platte
     containers:
     - name: busybox
       image: busybox
       command:
          - sh
          - -c
          - 'while true; do echo "`date` [`hostname`] Hello." >> /etc/alfred/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
       imagePullPolicy: IfNotPresent
       ports:
        - containerPort: 443
        - containerPort: 80
       volumeMounts:
        - mountPath: /etc/alfred
          name: alfred
---
apiVersion: v1
kind: Service
metadata:
 name: busybox-alfred
 labels:
   name: busybox-alfred
spec:
 ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
 selector:
      app: busybox

alfred@pc1:~/longhorn$
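
The manifest is then applied in the usual way (sketch):

alfred@pc1:~/longhorn$ k apply -f busybox.yaml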

alfred@pc1:~/longhorn$ k exec --stdin --tty busybox-7594569f65-t5mp9 -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.3G      9.3G     46.5G  17% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.8G         0      3.8G   0% /sys/fs/cgroup
10.152.183.140:/alfred-platte
                         19.6G     44.0M     19.5G   0% /etc/alfred
/dev/mmcblk0p2           58.3G      9.3G     46.5G  17% /etc/hosts
/dev/mmcblk0p2           58.3G      9.3G     46.5G  17% /dev/termination-log
/dev/mmcblk0p2           58.3G      9.3G     46.5G  17% /etc/hostname
/dev/mmcblk0p2           58.3G      9.3G     46.5G  17% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.8G     12.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     3.8G         0      3.8G   0% /proc/scsi
tmpfs                     3.8G         0      3.8G   0% /sys/firmware
/ #

Throughput is well described in the following links:
https://medium.com/volterra-io/kubernetes-storage-performance-comparison-v2-2020-updated-1c0b69f0dcf4
https://longhorn.io/blog/performance-scalability-report-aug-2020/
For my PicoCluster, this is good enough.
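
If you want a rough number for your own hardware, fio inside a pod that has a Longhorn volume mounted (assumed here at /data; fio must be available in the image, and all values are illustrative) gives a quick impression:

fio --name=seqwrite --filename=/data/fio.test --rw=write --bs=1M --size=1G --direct=1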


Failover test (pod on node pc5, disks on pc2, pc3, pc4)


alfred@pc1:~$ k get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
busybox-7594569f65-n67r9   1/1     Running   0          24m   10.1.80.110   pc5    <none>           <none>
alfred@pc1:~$

Shutdown of pc5.
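
Via the Ansible setup from above, this could be done, for example, like this (a sketch, assuming pc5 is addressable as a single host in the inventory):

ansible@monitoring:~$ ansible pc5 -m shell -a 'sudo shutdown -h now'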

alfred@pc1:~$ k get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
busybox-7594569f65-n67r9   1/1     Terminating   0          35m   10.1.80.110    pc5    <none>           <none>
busybox-7594569f65-qbd98   1/1     Running       0          21s   10.1.212.203   pc1    <none>           <none>
alfred@pc1:~$

alfred@pc1:~$ k exec --stdin --tty busybox-7594569f65-qbd98 -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.3G      9.6G     46.2G  17% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.8G         0      3.8G   0% /sys/fs/cgroup
10.152.183.140:/alfred-platte
                         19.6G     44.0M     19.5G   0% /etc/alfred
/dev/mmcblk0p2           58.3G      9.6G     46.2G  17% /etc/hosts
/dev/mmcblk0p2           58.3G      9.6G     46.2G  17% /dev/termination-log
/dev/mmcblk0p2           58.3G      9.6G     46.2G  17% /etc/hostname
/dev/mmcblk0p2           58.3G      9.6G     46.2G  17% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.8G     12.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     3.8G         0      3.8G   0% /proc/scsi
tmpfs                     3.8G         0      3.8G   0% /sys/firmware
/ # cat /etc/alfred/greet.txt
Wed May 19 12:58:23 UTC 2021 [busybox-7594569f65-t5mp9] Hello.
Wed May 19 13:02:18 UTC 2021 [busybox-7594569f65-jq462] Hello.
Wed May 19 13:05:19 UTC 2021 [busybox-7594569f65-v2b8n] Hello.
Wed May 19 13:05:20 UTC 2021 [busybox-7594569f65-d4n8b] Hello.
Wed May 19 13:05:21 UTC 2021 [busybox-7594569f65-sl86b] Hello.
Wed May 19 13:05:21 UTC 2021 [busybox-7594569f65-mck2s] Hello.
Wed May 19 13:05:23 UTC 2021 [busybox-7594569f65-2vdzd] Hello.
Wed May 19 13:05:33 UTC 2021 [busybox-7594569f65-h44xv] Hello.
Wed May 19 13:05:33 UTC 2021 [busybox-7594569f65-k8pg4] Hello.
Wed May 19 13:07:21 UTC 2021 [busybox-7594569f65-jq462] Hello.
Wed May 19 14:08:04 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:13:06 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:18:09 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:23:13 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:28:13 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:33:16 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:36:45 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:36:46 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:43:24 UTC 2021 [busybox-7594569f65-qbd98] Hello.
/ #

Failover works. The data is present and usable. I increase the replicas for BusyBox so that at least one BusyBox runs on a node that holds a data replica.


alfred@pc1:~$ k edit deployments.apps busybox -n default
deployment.apps/busybox edited
alfred@pc1:~$ k get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
busybox-7594569f65-n67r9   1/1     Terminating   0          39m   10.1.80.110    pc5    <none>           <none>
busybox-7594569f65-qbd98   1/1     Running       0          4m    10.1.212.203   pc1    <none>           <none>
busybox-7594569f65-hs7t9   1/1     Running       0          40s   10.1.100.58    pc3    <none>           <none>
busybox-7594569f65-szvmg   1/1     Running       0          41s   10.1.100.53    pc3    <none>           <none>
alfred@pc1:~$
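
The interactive edit above only bumps spec.replicas; non-interactively the same thing would be (a sketch, assuming three replicas):

alfred@pc1:~$ k scale deployment busybox --replicas=3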




So I shut down pc3.



After some time, this leads to the following system state.


alfred@pc1:~$ k get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
busybox-7594569f65-n67r9   1/1     Terminating   0          48m   10.1.80.110    pc5    <none>           <none>
busybox-7594569f65-qbd98   1/1     Running       0          13m   10.1.212.203   pc1    <none>           <none>
busybox-7594569f65-hs7t9   1/1     Terminating   0          10m   10.1.100.58    pc3    <none>           <none>
busybox-7594569f65-szvmg   1/1     Terminating   0          10m   10.1.100.53    pc3    <none>           <none>
busybox-7594569f65-zmqxq   1/1     Running       0          36s   10.1.212.202   pc1    <none>           <none>
busybox-7594569f65-m8p9g   1/1     Running       0          36s   10.1.169.25    pc4    <none>           <none>
alfred@pc1:~$
alfred@pc1:~$ k exec --stdin --tty busybox-7594569f65-m8p9g -- sh
/ # cat /etc/alfred/greet.txt
Wed May 19 12:58:23 UTC 2021 [busybox-7594569f65-t5mp9] Hello.
Wed May 19 13:02:18 UTC 2021 [busybox-7594569f65-jq462] Hello.
Wed May 19 13:05:19 UTC 2021 [busybox-7594569f65-v2b8n] Hello.
Wed May 19 13:05:20 UTC 2021 [busybox-7594569f65-d4n8b] Hello.
Wed May 19 13:05:21 UTC 2021 [busybox-7594569f65-sl86b] Hello.
Wed May 19 13:05:21 UTC 2021 [busybox-7594569f65-mck2s] Hello.
Wed May 19 13:05:23 UTC 2021 [busybox-7594569f65-2vdzd] Hello.
Wed May 19 13:05:33 UTC 2021 [busybox-7594569f65-h44xv] Hello.
Wed May 19 13:05:33 UTC 2021 [busybox-7594569f65-k8pg4] Hello.
Wed May 19 13:07:21 UTC 2021 [busybox-7594569f65-jq462] Hello.
Wed May 19 14:08:04 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:13:06 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:18:09 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:23:13 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:28:13 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:33:16 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:36:45 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:36:46 UTC 2021 [busybox-7594569f65-n67r9] Hello.
Wed May 19 14:43:24 UTC 2021 [busybox-7594569f65-qbd98] Hello.
Wed May 19 14:46:31 UTC 2021 [busybox-7594569f65-hs7t9] Hello.
Wed May 19 14:46:31 UTC 2021 [busybox-7594569f65-szvmg] Hello.
Wed May 19 14:48:27 UTC 2021 [busybox-7594569f65-qbd98] Hello.
Wed May 19 14:48:46 UTC 2021 [busybox-7594569f65-szvmg] Hello.
Wed May 19 14:48:46 UTC 2021 [busybox-7594569f65-hs7t9] Hello.
 [busybox-7594569f65-hs7t9] Hello.
Wed May 19 14:48:47 UTC 2021 [busybox-7594569f65-szvmg] Hello.
Wed May 19 14:53:29 UTC 2021 [busybox-7594569f65-qbd98] Hello.
Wed May 19 14:55:54 UTC 2021 [busybox-7594569f65-zmqxq] Hello.
Wed May 19 14:56:10 UTC 2021 [busybox-7594569f65-m8p9g] Hello.
/ #

The data is present and readable. The replicas have redistributed themselves.

The data is now on pc1, pc2, and pc4. Next, I will reboot the entire cluster.
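
A full cluster reboot can again be driven from the Ansible host, for example with the reboot module, which waits until the nodes are reachable again (a sketch):

ansible@monitoring:~$ ansible pc -b -m reboot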

alfred@pc1:~$ k get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
busybox-7594569f65-nhglm   1/1     Running   0          2m19s   10.1.80.118   pc5    <none>           <none>
busybox-7594569f65-kfs6g   1/1     Running   0          2m9s    10.1.100.63   pc3    <none>           <none>
busybox-7594569f65-9fwhl   1/1     Running   0          2m19s   10.1.100.61   pc3    <none>           <none>
alfred@pc1:~$



The replicas also reorganized themselves on their own.

And after a while, the space on pc1 is released again.

Conclusion: cluster storage is only really fun with physical disks. Longhorn is the alternative that actually works.

However, Longhorn is very resource-hungry. On a small Raspberry Pi cluster, Longhorn is mostly busy with itself: apparently something always times out, and Longhorn then starts moving a replica to another node. This leads to permanent churn, because in the meantime something else times out, Longhorn rebalances again, and so on.