MicroCeph

Ubuntu 22.04 LTS MicroCeph with Snap

These installation instructions use MicroCeph, Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small-scale and edge deployments, and it can be installed and maintained with minimal knowledge and effort.
Install Ceph on Ubuntu | Ubuntu
https://ubuntu.com/ceph/install

Note: If snap is not installed:

sudo apt install snapd

Install microceph:

sudo snap install microceph
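
Optionally, pin the snap to a specific release track. Channel names change over time, so check what snap reports first; the <track> below is a placeholder for whatever snap info lists, not a real channel name:

snap info microceph
sudo snap install microceph --channel=<track>/stable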

Bootstrap the cluster from the first node:

sudo microceph cluster bootstrap

From the first node, add the other nodes:

sudo microceph cluster add node[x]

Copy the join token output onto node[x] and run:

sudo microceph cluster join [pasted-output-from-node1]
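
As a concrete sketch, assuming a second node named node2 (the hostname is an assumption; the token is whatever the add command prints):

# On the first node: prints a join token for node2
sudo microceph cluster add node2

# On node2: paste the token printed above
sudo microceph cluster join <token-printed-by-add>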

Check status:

sudo microceph.ceph status

Add some disks to each node (as OSDs):

sudo microceph disk add /dev/sd[x] --wipe
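
For example, to identify spare disks and add two of them (the /dev/sdb and /dev/sdc names are assumptions for this sketch; --wipe destroys any existing data on the device):

lsblk
sudo microceph disk add /dev/sdb --wipe
sudo microceph disk add /dev/sdc --wipe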

Config files are found at:

/var/snap/microceph/current/conf/
  /var/snap/microceph/current/conf/ceph.conf
  /var/snap/microceph/current/conf/metadata.yaml
  /var/snap/microceph/current/conf/ceph.keyring

Create symlinks to the standard /etc/ceph locations:

mkdir -p /etc/ceph
ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring

Verify cluster:

sudo microceph.ceph status
sudo microceph.ceph osd status
sudo microceph disk list

Create File System

Create the data and metadata pools:

ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata

Create filesystem:

# ceph fs new <fs_name> <metadata> <data>
ceph fs new cephfs cephfs_metadata cephfs_data

View filesystems:

ceph fs ls

Stat the MDS:

ceph mds stat
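
The mount examples further down use a non-admin client named 'cephuser' with a secret file. One way to create such a user is to authorize it against the new filesystem and save its key; this is a sketch, and the 'cephuser' name and file paths are assumptions:

ceph fs authorize cephfs client.cephuser / rw | tee /etc/ceph/ceph.client.cephuser.keyring
ceph auth get-key client.cephuser > /etc/ceph/cephuser.secret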

Reference:

Create a Ceph file system — Ceph Documentation
https://docs.ceph.com/en/quincy/cephfs/createfs/

Local Mount

If you are on one of the cluster nodes and look at ceph.keyring, you will notice the default user is likely 'admin' (client.admin).

Assuming /etc/ceph/ceph.keyring exists:

mount -t ceph :/ /mnt -o name=admin

New style, with the mount.ceph helper, assuming the fs was created as 'mycfs':

mount.ceph admin@.mycfs=/ /mnt
# Or:
mount -t ceph admin@.mycfs=/ /mnt

Specify it all:

mount -t ceph {device-string}={path-to-be-mounted} {mount-point} -o {key-value-args} {other-args}
mount -t ceph <name>@<fsid>.<fs_name>=/ /mnt
mount -t ceph cephuser@.cephfs=/ /mnt -o secretfile=/etc/ceph/cephuser.secret
mount -t ceph cephuser@.cephfs=/ /mnt -o secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
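
For a persistent mount, the same device-string style works in /etc/fstab; a sketch assuming the 'cephuser' client and secret file from above:

cephuser@.cephfs=/  /mnt  ceph  secretfile=/etc/ceph/cephuser.secret,noatime,_netdev  0  0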

Reference:

Mount CephFS using Kernel Driver — Ceph Documentation
https://docs.ceph.com/en/latest/cephfs/mount-using-kernel-driver/

Documentation

Charmed Ceph Documentation | Ubuntu
https://ubuntu.com/ceph/docs

Remove MicroCeph

If you decide you want to remove everything cleanly:

sudo snap remove microceph
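
By default snap saves a snapshot of the removed snap's data for a while. To discard the data immediately as well, --purge can be added:

sudo snap remove microceph --purge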
