Ceph/MicroCeph

From Omnia

MicroCeph

Ubuntu 22.04 LTS MicroCeph with Snap

These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small-scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.
Install Ceph on Ubuntu | Ubuntu
https://ubuntu.com/ceph/install

Note: If snap is not installed:

sudo apt install snapd

Install microceph:

sudo snap install microceph

Bootstrap the cluster from the first node:

sudo microceph cluster bootstrap

From the first node, register each additional node (this prints a one-time join token):

sudo microceph cluster add node[x]

On node[x], join the cluster using the token printed above:

sudo microceph cluster join [pasted-output-from-node1]
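The add/join sequence above can be sketched as a loop. Nothing here talks to a cluster: the commands are only printed, with hypothetical hostnames node2 and node3, so the flow reads end to end (print_join_steps is a made-up helper name):

```shell
# Dry-run sketch of the add/join flow. 'cluster add' runs on the first
# node and prints a one-time token; 'cluster join' runs on the new node
# with that token. Only echoes - drop them to run the commands for real.
print_join_steps() {
    for node in "$@"; do
        # on the first node: prints a one-time join token
        echo "sudo microceph cluster add $node"
        # then on $node itself, pasting that token:
        echo "sudo microceph cluster join <token>"
    done
}
print_join_steps node2 node3
```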

Check status:

sudo microceph.ceph status

Add some disks to each node (as OSDs):

sudo microceph disk add /dev/sd[x] --wipe
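With several data disks per node, the disk add step can be looped. A dry-run sketch with hypothetical device names (the commands are printed, not executed; add_disks is a made-up helper name):

```shell
# Print the 'disk add' command for each device name given.
# Device names sdb/sdc/sdd are placeholders - check yours with lsblk first.
add_disks() {
    for dev in "$@"; do
        echo "sudo microceph disk add /dev/$dev --wipe"
    done
}
add_disks sdb sdc sdd
```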

Config files are found at:

/var/snap/microceph/current/conf/
  /var/snap/microceph/current/conf/ceph.conf
  /var/snap/microceph/current/conf/metadata.yaml
  /var/snap/microceph/current/conf/ceph.keyring

Create links:

sudo mkdir -p /etc/ceph  # not needed if you 'apt install ceph-common'
sudo ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
sudo ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring
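The link step above can be wrapped in a small helper. The source and destination directories are parameters so the same function can be exercised against scratch directories before touching /etc (link_ceph_conf is a made-up helper name):

```shell
# Symlink the snap-managed config and keyring into a ceph config dir.
# -f makes the links idempotent on re-runs.
link_ceph_conf() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    for f in ceph.conf ceph.keyring; do
        ln -sf "$src/$f" "$dst/$f"
    done
}
# Real usage (as root):
#   link_ceph_conf /var/snap/microceph/current/conf /etc/ceph
```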

Verify cluster:

sudo microceph.ceph status
sudo microceph.ceph osd status
sudo microceph disk list

Get Auth

ceph auth get client.admin

Example:

[client.admin]
        key = AQBKdXXXXXXXXXXXXXXXXXXXXX==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

This is the keyring content used in files like /etc/ceph/ceph.keyring.
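The bare base64 secret on the "key =" line is what the mount secret/secretfile options expect. Ceph can print it directly with `ceph auth get-key client.admin`; as a sketch, the same value can also be pulled out of a keyring file with awk (keyring_secret is a made-up helper name):

```shell
# Extract the bare secret from a keyring file, e.g. to build a
# secretfile for mounting. Matches the 'key = ...' line only.
keyring_secret() {
    awk -F' = ' '/^[[:space:]]*key/ {print $2}' "$1"
}
# Real usage (key shown in the example above):
#   keyring_secret /etc/ceph/ceph.keyring > /etc/ceph/admin.secret
```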

Create File System

Create data and metadata pool:

ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata
# or with explicit PG counts; PG counts are conventionally powers of two,
# and the pg_autoscaler (on by default in recent releases) will adjust them anyway
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
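If you do size PGs by hand, a common rule of thumb is roughly (100 × OSD count) / replica count per pool, rounded to a power of two; with the autoscaler on, this is only a starting hint. A minimal sketch of the rounding step (next_pow2 is a made-up helper):

```shell
# Round a target PG count up to the next power of two.
next_pow2() {
    n=$1
    p=1
    while [ "$p" -lt "$n" ]; do
        p=$((p * 2))
    done
    echo "$p"
}
next_pow2 100   # e.g. (3 OSDs * 100) / 3 replicas = 100 -> prints 128
```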

View pool stats:

ceph osd pool ls
ceph osd lspools
ceph osd pool stats

Create filesystem:

# ceph fs new  <fs_name> <metadata_pool>  <data_pool>
ceph   fs new  cephfs    cephfs_metadata  cephfs_data

View filesystems:

ceph fs ls
ceph fs status
ceph fs status [cephfs]

Stat the MDS daemons:

ceph mds stat

Reference:

Create a Ceph file system — Ceph Documentation
https://docs.ceph.com/en/quincy/cephfs/createfs/

Local Mount

(make sure a cephfs has been created first)

If you are on one of the cluster nodes and look at ceph.keyring, you will notice the default user is likely 'admin' (client.admin).

Install mount.ceph wrapper:

sudo apt install ceph-common

Assuming /etc/ceph/ceph.keyring exists:

mount -t ceph :/ /mnt -o name=admin

New style, with the mount.ceph helper, assuming the fs was created as 'mycfs':

mount.ceph admin@.mycfs=/ /mnt
# Or:
mount -t ceph admin@.mycfs=/ /mnt

/etc/fstab:

admin@.mycfs=/    /mycfs    ceph    defaults,noauto    0 0
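To script the fstab entry, an idempotent append avoids duplicate lines on re-runs. A sketch with the target file as a parameter, so it can be tried on a scratch copy first (add_fstab_entry is a made-up helper name):

```shell
# Append a line to a file only if that exact line is not already present.
# grep -x matches the whole line, -F takes it literally.
add_fstab_entry() {
    file=$1
    entry=$2
    grep -qxF "$entry" "$file" 2>/dev/null || echo "$entry" >> "$file"
}
# Real usage (as root):
#   add_fstab_entry /etc/fstab 'admin@.mycfs=/    /mycfs    ceph    defaults,noauto    0 0'
```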

Specify it all:

mount -t ceph {device-string}={path-to-be-mounted} {mount-point} -o {key-value-args} {other-args}
mount -t ceph <name>@<fsid>.<fs_name>=/ /mnt
mount -t ceph cephuser@.cephfs=/ /mnt -o secretfile=/etc/ceph/cephuser.secret
mount -t ceph cephuser@.cephfs=/ /mnt -o secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
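The device string decomposes as <name>@<fsid>.<fs_name>=<path>; leaving the fsid empty, as in the examples above, lets the mount helper resolve it from the local ceph.conf. A sketch that assembles the string from its parts (cephfs_device is a made-up helper name):

```shell
# Build a CephFS kernel-mount device string from its components.
cephfs_device() {
    name=$1
    fsid=$2
    fs=$3
    path=$4
    echo "${name}@${fsid}.${fs}=${path}"
}
cephfs_device admin "" mycfs /   # -> admin@.mycfs=/
```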

Reference:

Mount CephFS using Kernel Driver — Ceph Documentation
https://docs.ceph.com/en/latest/cephfs/mount-using-kernel-driver/

Documentation

Charmed Ceph Documentation | Ubuntu
https://ubuntu.com/ceph/docs

Remove MicroCeph

If you decide you want to remove everything cleanly:

sudo snap remove microceph
