MicroCeph
Ubuntu 22.04 LTS MicroCeph with Snap
These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.
- Uses MicroCeph
- Minimum 4-nodes, full-HA Ceph cluster
- Suitable for small-scale production environments
Install Ceph on Ubuntu | Ubuntu
https://ubuntu.com/ceph/install
Requirements:
- You will need 4 physical machines, each with a multi-core processor, at least 8 GiB of memory, and 100 GB of disk space. MicroCeph has been tested on x86-based physical machines running Ubuntu 22.04 LTS.
Note: If snap is not installed:
sudo apt install snapd
Install microceph:
sudo snap install microceph
Bootstrap cluster from first node:
sudo microceph cluster bootstrap
From first node add other nodes:
sudo microceph cluster add node[x]
Copy join output onto node[x]:
sudo microceph cluster join [pasted-output-from-node1]
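As a concrete sketch (the hostname node2 is a placeholder, not from the original instructions), adding a second node is a two-step exchange:
# on the first node: generate a join token for the new node
sudo microceph cluster add node2
# on node2: paste the token printed by the previous command
sudo microceph cluster join <token-from-first-node>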
Check status:
sudo microceph.ceph status
Add some disks to each node (as OSDs):
sudo microceph disk add /dev/sd[x] --wipe
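If a node has several spare disks, a small shell loop saves some typing (the device names below are placeholders; --wipe destroys any existing data on them):
for d in /dev/sdb /dev/sdc /dev/sdd; do
    sudo microceph disk add "$d" --wipe
done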
Config files are found at:
/var/snap/microceph/current/conf/
/var/snap/microceph/current/conf/ceph.conf
/var/snap/microceph/current/conf/metadata.yaml
/var/snap/microceph/current/conf/ceph.keyring
Create links:
mkdir -p /etc/ceph  # not needed if you 'apt install ceph-common'
ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring
Verify cluster:
sudo microceph.ceph status
sudo microceph.ceph osd status
microceph disk list
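A couple of additional checks can be useful at this point (the exact output of the microceph status subcommand may vary between MicroCeph releases):
sudo microceph status        # MicroCeph's own view of cluster members and services
sudo microceph.ceph health   # plain Ceph health summary; HEALTH_OK when the cluster is happy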
Get Auth
ceph auth get client.admin
Example:
[client.admin]
    key = AQBKdXXXXXXXXXXXXXXXXXXXXX==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
This is the content that goes into keyring files such as /etc/ceph/ceph.keyring.
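For example, a minimal way to hand the admin credentials to another machine that should talk to the cluster (the client hostname is a placeholder, and using client.admin on clients is just the simplest option, not necessarily the best one):
# on a cluster node: export the admin keyring to a file
sudo microceph.ceph auth get client.admin > ceph.client.admin.keyring
# copy it (together with ceph.conf) into /etc/ceph/ on the client machine, e.g.:
scp ceph.client.admin.keyring /var/snap/microceph/current/conf/ceph.conf root@client:/etc/ceph/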
Create File System
Create data and metadata pool:
ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata
# or with explicit PG counts; the PG autoscaler is usually enabled and may adjust these later
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
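If you prefer to let Ceph manage PG counts, the autoscaler can be checked and switched on per pool:
ceph osd pool autoscale-status
ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_metadata pg_autoscale_mode on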
View pool stats:
ceph osd pool ls
ceph osd lspools
ceph osd pool stats
Create filesystem:
# ceph fs new <fs_name> <metadata_pool> <data_pool>
ceph fs new cephfs cephfs_metadata cephfs_data
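Note that the mount and Ganesha examples later on this page use a filesystem named 'mycfs'; to keep the names consistent, create it with that name instead:
ceph fs new mycfs cephfs_metadata cephfs_data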
View filesystems:
ceph fs ls
ceph fs status
ceph fs status [cephfs]
Stat the MDS:
ceph mds stat
Reference:
Create a Ceph file system — Ceph Documentation
https://docs.ceph.com/en/quincy/cephfs/createfs/
Local Mount
(make sure a cephfs has been created first)
If you are on one of the cluster nodes and look at ceph.keyring, you will notice that the default user is 'admin' (client.admin).
Install mount.ceph wrapper:
apt install ceph-common
Assuming /etc/ceph/ceph.keyring exists:
mount -t ceph :/ /mnt -o name=admin
New style, with the mount.ceph helper, assuming the fs was created as 'mycfs':
mount.ceph admin@.mycfs=/ /mnt
# Or:
mount -t ceph admin@.mycfs=/ /mnt
/etc/fstab:
admin@.mycfs=/ /mycfs ceph defaults,noauto 0 0
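With that fstab entry in place (noauto means it will not be mounted at boot), a quick manual test looks like:
mkdir -p /mycfs
mount /mycfs
df -h /mycfs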
Specify it all:
mount -t ceph {device-string}={path-to-mounted} {mount-point} -o {key-value-args} {other-args}
mount -t ceph <name>@<fsid>.<fs_name>=/ /mnt
mount -t ceph cephuser@.cephfs=/ /mnt -o secretfile=/etc/ceph/cephuser.secret
mount -t ceph cephuser@.cephfs=/ /mnt -o secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
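The cluster fsid used in the <name>@<fsid>.<fs_name> form can be read on a cluster node, for example:
sudo microceph.ceph fsid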
Reference:
Mount CephFS using Kernel Driver — Ceph Documentation
https://docs.ceph.com/en/latest/cephfs/mount-using-kernel-driver/
NFS share with Ganesha
(make sure a cephfs has been created first)
Install ganesha:
apt install nfs-ganesha nfs-ganesha-ceph
Config (below)
Restart service:
systemctl restart nfs-ganesha
Config:
/etc/ganesha/ganesha.conf
NFS_CORE_PARAM
{
    Enable_NLM = false;
    Enable_RQUOTA = false;
    #Protocols = 4;
}

NFSv4
{
    RecoveryBackend = rados_ng;
    Minor_Versions = 1,2;
}

MDCACHE {
    Dir_Chunk = 0;
}

EXPORT
{
    Export_ID=100;
    Protocols = 3, 4;
    SecType = "sys";
    Transports = TCP;
    Path = /;
    Pseudo = /;
    Access_Type = RW;
    Attr_Expiration_Time = 0;
    Squash = No_root_squash;

    FSAL {
        Name = CEPH;
        Filesystem = "mycfs";
    }
}
Longer config version:
NFS_CORE_PARAM
{
    # Ganesha can lift the NFS grace period early if NLM is disabled.
    Enable_NLM = false;

    # rquotad doesn't add any value here. CephFS doesn't support per-uid
    # quotas anyway.
    Enable_RQUOTA = false;

    # In this configuration, we're just exporting NFSv4. In practice, it's
    # best to use NFSv4.1+ to get the benefit of sessions.
    #Protocols = 4;
}

NFSv4
{
    # Modern versions of libcephfs have delegation support, though they
    # are not currently recommended in clustered configurations. They are
    # disabled by default but can be re-enabled for singleton or
    # active/passive configurations.
    # Delegations = false;

    # One can use any recovery backend with this configuration, but being
    # able to store it in RADOS is a nice feature that makes it easy to
    # migrate the daemon to another host.
    #
    # For a single-node or active/passive configuration, rados_ng driver
    # is preferred. For active/active clustered configurations, the
    # rados_cluster backend can be used instead. See the
    # ganesha-rados-grace manpage for more information.
    RecoveryBackend = rados_ng;

    # NFSv4.0 clients do not send a RECLAIM_COMPLETE, so we end up having
    # to wait out the entire grace period if there are any. Avoid them.
    Minor_Versions = 1,2;
}

# The libcephfs client will aggressively cache information while it
# can, so there is little benefit to ganesha actively caching the same
# objects. Doing so can also hurt cache coherency. Here, we disable
# as much attribute and directory caching as we can.
MDCACHE {
    # Size the dirent cache down as small as possible.
    Dir_Chunk = 0;
}

EXPORT
{
    # Unique export ID number for this export
    Export_ID=100;

    # We're only interested in NFSv4 in this configuration
    Protocols = 3, 4;

    SecType = "sys";

    # NFSv4 does not allow UDP transport
    Transports = TCP;

    #
    # Path into the cephfs tree.
    #
    # Note that FSAL_CEPH does not support subtree checking, so there is
    # no way to validate that a filehandle presented by a client is
    # reachable via an exported subtree.
    #
    # For that reason, we just export "/" here.
    Path = /;

    #
    # The pseudoroot path. This is where the export will appear in the
    # NFS pseudoroot namespace.
    #
    Pseudo = /;

    # We want to be able to read and write
    Access_Type = RW;

    # Time out attribute cache entries immediately
    Attr_Expiration_Time = 0;

    # NFS servers usually decide to "squash" incoming requests from the
    # root user to a "nobody" user. It's possible to disable that, but for
    # now, we leave it enabled.
    # Squash = root;
    Squash = No_root_squash;

    FSAL {
        # FSAL_CEPH export
        Name = CEPH;

        #
        # Ceph filesystems have a name string associated with them, and
        # modern versions of libcephfs can mount them based on the
        # name. The default is to mount the default filesystem in the
        # cluster (usually the first one created).
        #
        # Filesystem = "cephfs_a";
        Filesystem = "mycfs";
    }
}
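Once the config is in place and nfs-ganesha has been restarted, a quick test from an NFS client might look like this (the server hostname is a placeholder, and the client needs the usual NFS utilities, e.g. nfs-common on Ubuntu):
# NFSv4.1 mount of the pseudoroot exported above
sudo mount -t nfs -o vers=4.1 ganesha-server:/ /mnt
ls /mnt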
Config references:
- NFS — Ceph Documentation - https://docs.ceph.com/en/latest/cephfs/nfs/
- nfs-ganesha/src/config_samples/ceph.conf at next · nfs-ganesha/nfs-ganesha · GitHub - https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
- Chapter 11. Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability) Red Hat Ceph Storage 5 | Red Hat Customer Portal - https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/operations_guide/management-of-nfs-ganesha-gateway-using-the-ceph-orchestrator
Documentation
Charmed Ceph Documentation | Ubuntu
https://ubuntu.com/ceph/docs
Remove MicroCeph
If you decide you want to remove everything cleanly:
sudo snap remove microceph
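snapd normally keeps a snapshot of the snap's data for a while after removal; to drop that immediately as well, --purge can be added:
sudo snap remove --purge microceph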