= 2024.02.06 =

== FOG Project ==

A free open-source network computer cloning and management solution: https://fogproject.org/

FOG Wiki: https://wiki.fogproject.org/wiki

== MAAS DHCP Notes ==
How do I modify the pxe config in MAAS? - Ask Ubuntu https://askubuntu.com/questions/130772/how-do-i-modify-the-pxe-config-in-maas
iPXE can't boot UEFI mode via USB key - Printable Version https://forum.ipxe.org/printthread.php?tid=13245
== MAAS DHCP ==
networking - How do I use MAAS and Juju with an existing DHCP server? - Ask Ubuntu https://askubuntu.com/questions/427030/how-do-i-use-maas-and-juju-with-an-existing-dhcp-server
 subnet 192.168.2.0 netmask 255.255.255.0 {
   range 192.168.2.128 192.168.2.254;
   option routers 192.168.2.1;
   filename "/pxelinux.0";
 }

Use the next-server directive to point clients at the TFTP server (the MAAS server).
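A minimal sketch of the combined config, assuming the MAAS rack controller / TFTP host sits at 192.168.2.10 (hypothetical address):

 subnet 192.168.2.0 netmask 255.255.255.0 {
   range 192.168.2.128 192.168.2.254;
   option routers 192.168.2.1;
   next-server 192.168.2.10;   # TFTP server that hands out the bootloader (assumed MAAS IP)
   filename "pxelinux.0";      # legacy BIOS PXE bootloader
 }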
---
Additional Configuration — MAAS 1.6 documentation http://web.archive.org/web/20140705024900/http://maas.ubuntu.com/docs/configure.html
 subnet 192.168.122.0 netmask 255.255.255.0 {
   filename "pxelinux.0";
   option subnet-mask 255.255.255.0;
   option broadcast-address 192.168.122.255;
   option domain-name-servers 192.168.122.136;
   range dynamic-bootp 192.168.122.5 192.168.122.135;
 }
---
MaaS with an external DHCP server (EdgeMax) https://portegi.es/blog/maas-1
 bootfile-name bootx64.efi
 bootfile-server 192.168.4.118
 subnet-parameters "filename "bootx64.efi";"
 tftp-server-name 192.168.4.118
Based on /var/lib/maas/dhcpd.conf you get:
 #
 # Bootloaders
 #
 if option arch = 00:00 {
   # pxe
   filename "pxelinux.0";
 } elsif option arch = 00:07 {
   # uefi_amd64
   filename "bootx64.efi";
 } elsif option arch = 00:09 {
   # uefi_amd64
   filename "bootx64.efi";
 } elsif option arch = 00:0B {
   # uefi_arm64
   filename "grubaa64.efi";
 } elsif option arch = 00:0C {
   # open-firmware_ppc64el
   filename "bootppc64.bin";
 } elsif option arch = 00:0E {
   # powernv
   filename "pxelinux.0";
   option path-prefix "ppc64el/";
 } else {
   # pxe
   filename "pxelinux.0";
 }
Note that this page references 'grubx64.efi' instead: https://discourse.maas.io/t/arm-and-uefi-clients-dont-get-boot-from-maas/7831
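If writing this block by hand for an external DHCP server, option arch must be declared before it is used (MAAS's generated dhcpd.conf declares it the same way). A minimal sketch, using the grubx64.efi naming from the thread above:

 # RFC 4578 client system architecture option (option 93)
 option arch code 93 = unsigned integer 16;

 if option arch = 00:07 {
   filename "grubx64.efi";   # UEFI amd64; some builds ship grubx64.efi rather than bootx64.efi
 } else {
   filename "pxelinux.0";    # legacy BIOS PXE
 }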
== MAAS Preseed ==

 /etc/maas/preseeds/generic
 /etc/maas/preseeds/preseed-master
ref:
Additional Configuration — MAAS 1.6 documentation http://web.archive.org/web/20140705024900/http://maas.ubuntu.com/docs/configure.html
== MAAS Windows ==
Creating a MAAS Image Builder Server (Windows Server 2022 example) — Crying Cloud https://www.cryingcloud.com/blog/2022/10/19/create-azure-stack-hci-images-for-use-with-maas
MAAS | How to customise images https://maas.io/docs/customising-images-for-specific-needs
= 2024.01.20 =

== Redundant NFS ==

Ubuntu 22.04 LTS
Create a Highly Available NFS Service with Gluster and Oracle Linux https://docs.oracle.com/en/learn/ol-ha-nfs/index.html
Remove the kernel NFS server so it does not conflict with NFS-Ganesha, then install the required packages:

 sudo apt remove nfs-kernel-server
 sudo apt update
 sudo apt install -y corosync glusterfs-cli glusterfs-server nfs-ganesha-gluster pacemaker pcs pcp-zeroconf fence-agents
Create an XFS filesystem:
sudo mkfs.xfs -f -i size=512 -L gluster-000 /dev/sdb
Create Mountpoint:
 sudo mkdir -p /data/gfs
 echo 'LABEL=gluster-000 /data/gfs xfs defaults 0 0' | sudo tee -a /etc/fstab > /dev/null
 sudo mount /data/gfs
Start Gluster Service (on all nodes):
sudo systemctl enable --now glusterd
Add nodes to the cluster (run from any node already in the pool; on a fresh install, from the first node):
 sudo gluster peer probe [node2]
 sudo gluster peer probe [nodeX...]
If you need to remove a node:
 sudo gluster peer detach [node]
Get Gluster status (check from each node, and make sure all nodes are communicating):

 gluster pool list
 gluster peer status
Create a Gluster shared volume: (from any node)
sudo gluster volume create sharedvol replica 3 node{1,2,3}:/data/gfs/brick
Create a distributed volume with two-way replication:
sudo gluster volume create [MYGFS] replica 2 node{1,2,3,4,5,6}:/data/gfs/brick
sudo gluster volume create mygfs replica 2 lmt-store-0{2,3,4,5,6,7}.ess.lab:/data/gfs/brick
NOTE: Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See:
http://docs.gluster.org/en/latest/Administrator-Guide/Split-brain-and-ways-to-deal-with-it/
sudo gluster volume create mygfs replica 3 lmt-store-0{2,3,4,5,6,7}.ess.lab:/data/gfs/brick
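To check whether a replicated volume has landed in split-brain, gluster can list the affected files (volume name mygfs from above):

 # list files (if any) that are in split-brain on the volume
 sudo gluster volume heal mygfs info split-brain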
To reduce storage requirements you can use an arbiter brick as the third replica: [1]
 # Creating a replicated volume with an arbiter
 sudo gluster volume create myvolume replica 3 arbiter 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick

 # Creating a distributed replicated volume with one brick on six nodes with an arbiter
 sudo gluster volume create myvolume replica 3 arbiter 1 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
Different distribution and replication styles: [1]
Four Node Distributed Replicated Volume with a Two-way Replication: [2]
gluster volume create test-volume replica 2 transport tcp server1:/exp1/brick server2:/exp2/brick server3:/exp3/brick server4:/exp4/brick
Six Node Distributed Replicated Volume with a Two-way Replication: [3]
gluster volume create test-volume replica 2 transport tcp server1:/exp1/brick server2:/exp2/brick server3:/exp3/brick server4:/exp4/brick server5:/exp5/brick server6:/exp6/brick
Get volume info:
gluster volume info
If you wish to enable NFS (not recommended, use NFS-Ganesha instead): [2]
WARNING: Gluster NFS is being deprecated in favor of NFS-Ganesha
gluster volume set [VOLNAME] nfs.disable off
 # to set NFS ACLs:
 gluster volume set [VOLNAME] nfs.acl on
Enable volume (this will change "Status: Created" to "Status: Started"): (from any node)
gluster volume start [VOLUMENAME]
Get Volume Status:
gluster volume status
---
Setting Up Volumes - Gluster Docs - Volume Types (Distributed / Replicated / Dispersed) https://docs.gluster.org/en/main/Administrator-Guide/Setting-Up-Volumes/#creating-distributed-replicated-volumes
Volume Types
Volumes of the following types can be created in your storage environment:
- Distributed - Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.
- Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.
- Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.
- Dispersed - Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file to each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator on volume creation time.
- Distributed Dispersed - Distributed dispersed volumes distribute files across dispersed subvolumes. These have the same advantages as distributed replicated volumes, but use dispersal rather than replication to store the data in the bricks (see the sketch after this list).
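For reference, a dispersed volume is created with disperse/redundancy counts rather than a replica count. A minimal sketch with hypothetical node names, tolerating one failed brick out of three:

 # 3 bricks with 1 redundancy: usable capacity of 2 bricks, survives 1 brick failure
 sudo gluster volume create dispvol disperse 3 redundancy 1 node{1,2,3}:/data/gfs/brick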
---
NFS-Ganesha
/etc/ganesha/ganesha.conf:

 EXPORT{
     Export_Id = 1 ;             # Unique identifier for each EXPORT (share)
     Path = "/sharedvol";        # Export path of our NFS share
     FSAL {
         name = GLUSTER;         # Backing type is Gluster
         hostname = "localhost"; # Hostname of Gluster server
         volume = "sharedvol";   # The name of our Gluster volume
     }
     Access_type = RW;           # Export access permissions
     Squash = No_root_squash;    # Control NFS root squashing
     Disable_ACL = FALSE;        # Enable NFSv4 ACLs
     Pseudo = "/sharedvol";      # NFSv4 pseudo path for our NFS share
     Protocols = "3","4" ;       # NFS protocols supported
     Transports = "UDP","TCP" ;  # Transport protocols supported
     SecType = "sys";            # NFS Security flavors supported
 }
systemctl status nfs-ganesha
systemctl restart nfs-ganesha
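To verify the export from a client, an NFSv4 mount against the Pseudo path should work (substitute whichever node runs ganesha; assumes nfs-common is installed):

 sudo apt install -y nfs-common
 sudo mount -t nfs -o vers=4 [ganesha-node]:/sharedvol /mnt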
---
Mount directly:
 sudo apt-get install glusterfs-client
 sudo mount -t glusterfs nodes[x]:/mygfs /mnt
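For a persistent mount, an fstab entry can list fallback servers so the client survives the named node going down. A sketch assuming nodes node1..node3 (the backup-volfile-servers option name may vary by glusterfs version):

 node1:/mygfs /mnt glusterfs defaults,_netdev,backup-volfile-servers=node2:node3 0 0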
Allow access from all machines, or specify IPs comma-separated:

 sudo gluster volume set volume1 auth.allow '*'
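Or restrict the volume to specific clients (hypothetical addresses):

 sudo gluster volume set volume1 auth.allow 192.168.2.11,192.168.2.12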
---
Many more examples:
Oracle® Linux Gluster Storage for Oracle Linux User's Guide - Chapter 4 Creating and Managing Volumes https://docs.oracle.com/en/operating-systems/oracle-linux/gluster-storage/gluster-using.html#4.3.1-Using-the-Volume-Status-Command