
Caution: Heketi does not work on the Raspberry Pi. This post only documents an attempt.


Inspiration:

https://github.com/heketi/heketi
https://go-home.io/docs/k8s/gluster-fs/
https://github.com/heketi/heketi/releases/tag/v10.3.0
https://github.com/gluster/gluster-kubernetes
https://faun.pub/configuring-ha-kubernetes-cluster-on-bare-metal-servers-with-glusterfs-metallb-2-3-c9e0b705aa3d
https://github.com/kadalu/kadalu

We install Heketi as a standalone server and create a StorageClass that talks to the Heketi server. The Gluster volumes, devices and so on are then managed through the Heketi server, which exposes a REST API that the StorageClass in k8s uses to communicate with it.
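
Once the server is running, that REST API can be probed directly as a quick sanity check (a hedged sketch; host, port and credentials are taken from the configuration further down, and plain HTTP on port 8080 is assumed as in the stock heketi.json):

curl http://192.168.0.201:8080/hello
heketi-cli --server http://192.168.0.201:8080 --user admin --secret bvxLnKi6PhyIoHdaTCqR cluster list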

 

#!/bin/bash
############################################################################################
#    $Date: 2021-05-22 22:31:14 +0200 (Sa, 22. Mai 2021) $
#    $Revision: 421 $
#    $Author: alfred $
#    $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/microk8s/microk8s_Installation_teil4_glusterfs_heketi_standalone.sh $
#    $Id: microk8s_Installation_teil4_glusterfs_heketi_standalone.sh 421 2021-05-22 20:31:14Z alfred $
#
# Quick installation microk8s - heketi server standalone
#
############################################################################################
#shopt -o -s errexit    #—Terminates  the shell script  if a command returns an error code.
shopt -o -s xtrace #—Displays each command before it’s executed.
shopt -o -s nounset #-No Variables without definition
# Prerequisite: cleanly installed nodes, cluster already joined
#
ansible pc -m shell -a 'sudo apt-get update'
ansible pc -m shell -a 'sudo apt-get -y upgrade'
#
ansible pc -m shell -a 'sudo modprobe fuse'
ansible pc -m shell -a 'sudo modprobe dm_thin_pool'
ansible pc -m shell -a 'sudo modprobe dm_snapshot'
ansible pc -m shell -a 'sudo modprobe dm_mirror'
#
ansible pc -m shell -a 'sudo apt-get install -y xfsprogs glusterfs-server glusterfs-client lvm2 thin-provisioning-tools'
ansible pc -m shell -a 'sudo systemctl start glusterd'
ansible pc -m shell -a 'sudo systemctl enable glusterd'
ansible pc -m shell -a 'sudo systemctl status glusterd'
#
ansible pc1 -m shell -a 'sudo gluster peer probe pc2'
ansible pc1 -m shell -a 'sudo gluster peer probe pc3'
ansible pc1 -m shell -a 'sudo gluster peer probe pc4'
ansible pc1 -m shell -a 'sudo gluster peer probe pc5'
ansible pc1 -m shell -a 'sudo gluster peer status'
#
# Startup unit for modprobe (loads the required kernel modules at boot)
#
ansible pc -m shell -a 'cat <<EOF > ./loop_gluster.service
#
# Service definition for loading the GlusterFS kernel modules
#    $Date: 2021-05-22 22:31:14 +0200 (Sa, 22. Mai 2021) $
#    $Revision: 421 $
#    $Author: alfred $
#    $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/microk8s/microk8s_Installation_teil4_glusterfs_heketi_standalone.sh $
#    $Id: microk8s_Installation_teil4_glusterfs_heketi_standalone.sh 421 2021-05-22 20:31:14Z alfred $
#worker1-3# vi /etc/systemd/system/loop_gluster.service
#
[Unit]
Description=modprobe for GlusterFS, for heketi /dev/sda is used
DefaultDependencies=false
Before=local-fs.target
After=systemd-udev-settle.service
Requires=systemd-udev-settle.service

[Service]
Type=oneshot
ExecStart=/bin/bash -c "modprobe dm_thin_pool && modprobe dm_snapshot && modprobe dm_mirror && modprobe fuse "

[Install]
WantedBy=local-fs.target
EOF
'
ansible pc -m shell -a 'ls -lisa ./loop_gluster.service'
ansible pc -m shell -a 'sudo mv -f ./loop_gluster.service /etc/systemd/system/loop_gluster.service'
ansible pc -m shell -a 'sudo chown root:root /etc/systemd/system/loop_gluster.service'
ansible pc -m shell -a 'sudo chmod 755 /etc/systemd/system/loop_gluster.service'
ansible pc -m shell -a 'ls -lisa /etc/systemd/system/loop_gluster.service'
ansible pc -m shell -a 'sudo systemctl enable /etc/systemd/system/loop_gluster.service'
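#
# Hedged check (not in the original run): confirm the unit is registered and the modules are loaded
ansible pc -m shell -a 'systemctl is-enabled loop_gluster.service'
ansible pc -m shell -a 'lsmod | grep -E "dm_thin_pool|dm_snapshot|dm_mirror|fuse"'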
#
# stop and delete all gluster volumes left over from earlier attempts (a loop sketch follows below)
#sudo gluster volume stop db
#sudo gluster volume delete db
#db
#k8s
#minio
#spare
#web
#
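# Hedged sketch (not part of the original run): the same cleanup as a loop over the volume names above,
# using gluster's --mode=script to skip the interactive confirmation prompts
#for v in db k8s minio spare web; do
#  ansible pc1 -m shell -a "sudo gluster --mode=script volume stop $v"
#  ansible pc1 -m shell -a "sudo gluster --mode=script volume delete $v"
#done
#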
#ansible pc -m shell -a 'sudo ls /data/glusterfs/pcvol/brick1/*'
#ansible pc -m shell -a 'sudo rm -f -R /data/glusterfs/pcvol/brick1/*'
#ansible pc -m shell -a 'sudo rm -f -R /data/glusterfs/pcvol/brick1/*'
#ansible pc -m shell -a 'sudo umount /data/glusterfs/pcvol/brick1'
#
# remove the mount point from /etc/fstab
#
ansible pc -m shell -a 'sudo wipefs -a /dev/sda'
ansible pc1 -m shell -a 'wget https://github.com/heketi/heketi/releases/download/v10.3.0/heketi-v10.3.0.linux.arm64.tar.gz'
ansible pc1 -m shell -a 'sudo mkdir -p /etc/heketi'
ansible pc1 -m shell -a 'sudo tar xzvf heketi-v10.3.0.linux.arm64.tar.gz -C /etc/heketi'
ansible pc1 -m shell -a 'rm -f heketi-v10.3.0.linux.arm64.tar.gz'
ansible pc1 -m shell -a 'sudo ln /etc/heketi/heketi/heketi-cli /usr/bin/heketi-cli'
ansible pc1 -m shell -a 'sudo ln /etc/heketi/heketi/heketi /usr/bin/heketi'
#
# Generate and distribute SSH keys
#
ansible pc -m shell -a 'sudo adduser heketi --gecos "First Last,RoomNumber,WorkPhone,HomePhone" --disabled-password'
ansible pc -m shell -a 'echo "heketi:bvxLnKi6PhyIoHdaTCqR" | sudo chpasswd'
ansible pc -m shell -a 'sudo usermod -aG sudo heketi'
#
# Log in manually and distribute the key from pc1 to all other nodes (see the sketch after this block)
#ssh-keygen
#ssh-copy-id -i
#
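# Hedged sketch of that manual step (run interactively on pc1; hostnames pc1..pc5 assumed):
#sudo -u heketi ssh-keygen -t rsa -f /home/heketi/.ssh/id_rsa -N ''
#for n in pc1 pc2 pc3 pc4 pc5; do sudo -u heketi ssh-copy-id -i /home/heketi/.ssh/id_rsa.pub heketi@$n; done
#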
ansible pc1 -m shell -a 'cat <<EOF > ./heketi.service
#
# Service definition for heketi
#    $Date: 2021-05-22 22:31:14 +0200 (Sa, 22. Mai 2021) $
#    $Revision: 421 $
#    $Author: alfred $
#    $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/microk8s/microk8s_Installation_teil4_glusterfs_heketi_standalone.sh $
#    $Id: microk8s_Installation_teil4_glusterfs_heketi_standalone.sh 421 2021-05-22 20:31:14Z alfred $
# vi /etc/systemd/system/heketi.service
#
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=heketi
ExecStart=sudo /usr/bin/heketi --config=/etc/heketi/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
EOF
'
ansible pc1 -m shell -a 'ls -lisa ./heketi.service'
ansible pc1 -m shell -a 'sudo mv -f ./heketi.service /etc/systemd/system/heketi.service'
ansible pc1 -m shell -a 'sudo chown root:root /etc/systemd/system/heketi.service'
ansible pc1 -m shell -a 'sudo chmod 755 /etc/systemd/system/heketi.service'
ansible pc1 -m shell -a 'ls -lisa /etc/systemd/system/heketi.service'
#
# Adjust /etc/heketi/heketi/heketi.json (the config file extracted from the tarball); the relevant executor section:
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/home/heketi/.ssh/id_rsa",
      "user": "heketi",
      "sudo": true,
      "port": "22",
      "fstab": "/etc/fstab",
      "backup_lvm_metadata": false
    },
#
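# The --user/--secret used with heketi-cli below must also be configured in heketi.json; a minimal
# sketch of that section (key names as in the sample config shipped with Heketi):
    "use_auth": true,

    "jwt": {
      "admin": {
        "key": "bvxLnKi6PhyIoHdaTCqR"
      },
      "user": {
        "key": "bvxLnKi6PhyIoHdaTCqR"
      }
    },
#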

ansible pc1 -m shell -a 'sudo chown -R heketi:heketi /var/lib/heketi'
ansible pc1 -m shell -a 'sudo chown -R heketi:heketi /etc/heketi'
ansible pc1 -m shell -a 'sudo systemctl daemon-reload'
ansible pc1 -m shell -a 'sudo systemctl enable /etc/systemd/system/heketi.service'
ansible pc1 -m shell -a 'sudo systemctl start heketi'
ansible pc1 -m shell -a 'sudo systemctl status heketi'


# then create the topology file (heketi_topology.json, loaded below) and copy it to /etc/heketi/heketi
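# A minimal sketch of such a topology file (node IPs and /dev/sda taken from this cluster; device entries
# as plain strings in the older sample format, which heketi-cli also accepts):
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": { "manage": ["192.168.0.201"], "storage": ["192.168.0.201"] },
            "zone": 1
          },
          "devices": [ "/dev/sda" ]
        },
        {
          "node": {
            "hostnames": { "manage": ["192.168.0.202"], "storage": ["192.168.0.202"] },
            "zone": 1
          },
          "devices": [ "/dev/sda" ]
        }
      ]
    }
  ]
}
# (the remaining nodes 192.168.0.203 - 192.168.0.205 follow the same pattern)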

heketi-cli topology load --json=heketi_topology.json --user=admin --secret bvxLnKi6PhyIoHdaTCqR
Creating cluster ... ID: 48e64cfd65e93b4e55b117e19ceea171
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node 192.168.0.201 ... ID: 38b8822507b61118219bb24117df9468
        Adding device /dev/sda ... OK
    Creating node 192.168.0.202 ... ID: f0d16fcd7e98a982572d1aba73d332d0
        Adding device /dev/sda ... OK
    Creating node 192.168.0.203 ... ID: f144f48e5f0013573c8c606890b14d91
        Adding device /dev/sda ... OK
    Creating node 192.168.0.204 ... ID: 7f719c43250d2341890eae10046af1fb
        Adding device /dev/sda ... OK
    Creating node 192.168.0.205 ... ID: af1739b78d6244a4dcbc6fd2c6d99550
        Adding device /dev/sda ... OK

# Check (output of: heketi-cli cluster list --user=admin --secret bvxLnKi6PhyIoHdaTCqR)
Clusters:
Id:48e64cfd65e93b4e55b117e19ceea171 [file][block]
root@pc1:/etc/heketi/heketi# heketi-cli cluster info 48e64cfd65e93b4e55b117e19ceea171 --user=admin --secret bvxLnKi6PhyIoHdaTCqR
Cluster id: 48e64cfd65e93b4e55b117e19ceea171
Nodes:
38b8822507b61118219bb24117df9468
7f719c43250d2341890eae10046af1fb
af1739b78d6244a4dcbc6fd2c6d99550
f0d16fcd7e98a982572d1aba73d332d0
f144f48e5f0013573c8c606890b14d91
Volumes:

Block: true

File: true

root@pc1:~# heketi-cli node list --user=admin --secret bvxLnKi6PhyIoHdaTCqR
Id:38b8822507b61118219bb24117df9468    Cluster:48e64cfd65e93b4e55b117e19ceea171
Id:7f719c43250d2341890eae10046af1fb    Cluster:48e64cfd65e93b4e55b117e19ceea171
Id:af1739b78d6244a4dcbc6fd2c6d99550    Cluster:48e64cfd65e93b4e55b117e19ceea171
Id:f0d16fcd7e98a982572d1aba73d332d0    Cluster:48e64cfd65e93b4e55b117e19ceea171
Id:f144f48e5f0013573c8c606890b14d91    Cluster:48e64cfd65e93b4e55b117e19ceea171
root@pc1:~#


root@pc1:~# heketi-cli topology info --user=admin --secret bvxLnKi6PhyIoHdaTCqR

Cluster Id: 48e64cfd65e93b4e55b117e19ceea171

    File:  true
    Block: true

    Volumes:


    Nodes:

    Node Id: 38b8822507b61118219bb24117df9468
    State: online
    Cluster Id: 48e64cfd65e93b4e55b117e19ceea171
    Zone: 1
    Management Hostnames: 192.168.0.201
    Storage Hostnames: 192.168.0.201
    Devices:
        Id:d8c0208ee04854b612dec3a879ffe44c   State:online    Size (GiB):28      Used (GiB):0       Free (GiB):28      
            Known Paths: /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/disk/by-id/usb-ADATA_USB_Flash_Drive_2111223200130042-0:0 /dev/sda

            Bricks:

    Node Id: 7f719c43250d2341890eae10046af1fb
    State: online
    Cluster Id: 48e64cfd65e93b4e55b117e19ceea171
    Zone: 1
    Management Hostnames: 192.168.0.204
    Storage Hostnames: 192.168.0.204
    Devices:
        Id:a72d805e9402cb7b4b693ab806403aec   State:online    Size (GiB):28      Used (GiB):0       Free (GiB):28      
            Known Paths: /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/disk/by-id/usb-ADATA_USB_Flash_Drive_2111223200210036-0:0 /dev/sda

            Bricks:

    Node Id: af1739b78d6244a4dcbc6fd2c6d99550
    State: online
    Cluster Id: 48e64cfd65e93b4e55b117e19ceea171
    Zone: 1
    Management Hostnames: 192.168.0.205
    Storage Hostnames: 192.168.0.205
    Devices:
        Id:3b539133126cb5dd657bb0023a7268a6   State:online    Size (GiB):28      Used (GiB):0       Free (GiB):28      
            Known Paths: /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/disk/by-id/usb-ADATA_USB_Flash_Drive_2111223200020035-0:0 /dev/sda

            Bricks:

    Node Id: f0d16fcd7e98a982572d1aba73d332d0
    State: online
    Cluster Id: 48e64cfd65e93b4e55b117e19ceea171
    Zone: 1
    Management Hostnames: 192.168.0.202
    Storage Hostnames: 192.168.0.202
    Devices:
        Id:5e54d11c100800007682ce977a258e56   State:online    Size (GiB):28      Used (GiB):0       Free (GiB):28      
            Known Paths: /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/disk/by-id/usb-ADATA_USB_Flash_Drive_2111223200160036-0:0 /dev/sda

            Bricks:

    Node Id: f144f48e5f0013573c8c606890b14d91
    State: online
    Cluster Id: 48e64cfd65e93b4e55b117e19ceea171
    Zone: 1
    Management Hostnames: 192.168.0.203
    Storage Hostnames: 192.168.0.203
    Devices:
        Id:0641ea5ed126f7a634ee830ed06cce4b   State:online    Size (GiB):28      Used (GiB):0       Free (GiB):28      
            Known Paths: /dev/disk/by-id/usb-ADATA_USB_Flash_Drive_2111223200060026-0:0 /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/sda

            Bricks:

root@pc1:~#


# Create the secret for the storage class
ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
type: Opaque
stringData:
  config.yaml: |
    apiUrl: "https://192.168.0.201:8080"
    username: admin
    password: bvxLnKi6PhyIoHdaTCqR
EOF
'
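# Hedged check (not in the original run): confirm the secret exists before the StorageClass references it
ansible pc1 -m shell -a 'microk8s kubectl get secret heketi-secret -n default'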
ansible pc1 -m shell -a 'microk8s kubectl delete sc glusterfs'
ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
#
# Storage class for our filesystem
#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  resturl: "https://192.168.0.201:8080"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  volumetype: replicate:3
  volumenameprefix: "icp"
EOF
'
ansible pc1 -m shell -a 'microk8s kubectl patch storageclass microk8s-hostpath -p '\''{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'\'
ansible pc1 -m shell -a 'microk8s kubectl patch storageclass glusterfs -p '\''{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'\'
ansible pc1 -m shell -a 'microk8s kubectl get sc'
#


ansible pc1 -m shell -a 'cat <<EOF | microk8s kubectl apply -f -
#
# Create a PersistentVolumeClaim for GlusterFS
#
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-web
  namespace: default
spec:
  storageClassName: glusterfs
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 5Gi
EOF
'
ansible pc1 -m shell -a 'microk8s kubectl get pvc --all-namespaces '
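# Hedged cross-check (not in the original run): ask the Heketi server directly whether a volume was provisioned for the claim
ansible pc1 -m shell -a 'heketi-cli volume list --user=admin --secret bvxLnKi6PhyIoHdaTCqR'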
#
# Now the busybox test deployment can be applied
#
alfred@pc1:~/pc1_app$ cat busybox_glusterfs_heketi.yaml
#
# Test for all kinds of mounts :)
#
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: busybox-glusterfs-heketi
 namespace: default
 labels:
   app: busybox-glusterfs-heketi
spec:
 replicas: 1
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: busybox-glusterfs-heketi
 template:
   metadata:
     labels:
       app: busybox-glusterfs-heketi
   spec:
     volumes:
     - name: web
       persistentVolumeClaim:
         claimName: pvc-gluster-web
     containers:
     - name: busybox-glusterfs-heketi
       image: busybox
       command:
          - sh
          - -c
          - 'while true; do echo "`date` [`hostname`] Hello."; sleep 5; done'
       imagePullPolicy: IfNotPresent
       ports:
        - containerPort: 443
        - containerPort: 80
       volumeMounts:
        - mountPath: /web
          name: web
---
apiVersion: v1
kind: Service
metadata:
 name: busybox-glusterfs-heketi
 namespace: default
 labels:
   name: busybox-glusterfs-heketi
spec:
 ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
 selector:
     app: busybox-glusterfs-heketi

alfred@pc1:~/pc1_app$

alfred@pc1:~/pc1_app$ k apply -f busybox
busybox.yaml                   busybox_glusterfs.yaml         busybox_glusterfs_heketi.yaml  
alfred@pc1:~/pc1_app$ k apply -f busybox_glusterfs_heketi.yaml
deployment.apps/busybox-glusterfs-heketi configured
service/busybox-glusterfs-heketi unchanged

alfred@pc1:~/pc1_app$ k get pods
NAME                                        READY   STATUS    RESTARTS   AGE
busybox-glusterfs-heketi-795559485d-bznvx   0/1     Pending   0          11h
alfred@pc1:~/pc1_app$ k get pvc
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-gluster-web   Pending                                      glusterfs      36m
alfred@pc1:~/pc1_app$ k get pv
No resources found
alfred@pc1:~/pc1_app$
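
To see why the claim stays Pending, the provisioning events could be inspected (a hedged sketch, not part of the original transcript):

k describe pvc pvc-gluster-web
k get events -n default --sort-by=.metadata.creationTimestamp | tail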

alfred@pc1:~/pc1_app$ sudo systemctl status heketi
● heketi.service - Heketi Server
     Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-05-22 22:32:00 CEST; 11h ago
   Main PID: 1762 (sudo)
      Tasks: 11 (limit: 9257)
     Memory: 31.0M
     CGroup: /system.slice/heketi.service
             ├─1762 /usr/bin/sudo /usr/bin/heketi --config=/etc/heketi/heketi/heketi.json
             └─1787 /usr/bin/heketi --config=/etc/heketi/heketi/heketi.json

May 23 09:49:45 pc1 sudo[1787]: [heketi] INFO 2021/05/23 09:49:45 Periodic health check status: node af1739b78d6244a4dcbc6fd2c6d99550 up=true
May 23 09:49:45 pc1 sudo[1787]: [cmdexec] INFO 2021/05/23 09:49:45 Check Glusterd service status in node 192.168.0.202
May 23 09:49:45 pc1 sudo[1787]: [cmdexec] DEBUG 2021/05/23 09:49:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will>
May 23 09:49:47 pc1 sudo[1787]: [cmdexec] DEBUG 2021/05/23 09:49:47 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran>
May 23 09:49:47 pc1 sudo[1787]: [heketi] INFO 2021/05/23 09:49:47 Periodic health check status: node f0d16fcd7e98a982572d1aba73d332d0 up=true
May 23 09:49:47 pc1 sudo[1787]: [cmdexec] INFO 2021/05/23 09:49:47 Check Glusterd service status in node 192.168.0.203
May 23 09:49:47 pc1 sudo[1787]: [cmdexec] DEBUG 2021/05/23 09:49:47 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will>
May 23 09:49:50 pc1 sudo[1787]: [cmdexec] DEBUG 2021/05/23 09:49:50 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran>
May 23 09:49:50 pc1 sudo[1787]: [heketi] INFO 2021/05/23 09:49:50 Periodic health check status: node f144f48e5f0013573c8c606890b14d91 up=true
May 23 09:49:50 pc1 sudo[1787]: [heketi] INFO 2021/05/23 09:49:50 Cleaned 0 nodes from health cache



There is no communication from Kubernetes to the Heketi server, so both the pod and the PVC remain in Pending state. A pity.
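
A hedged check that would narrow this down (not performed here): verify from one of the nodes that the resturl configured in the StorageClass is reachable at all. A stock heketi.json serves plain HTTP on port 8080, so the https:// URL used above is an assumption that may have to become http://.

curl -v http://192.168.0.201:8080/hello
curl -v https://192.168.0.201:8080/hello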