Ceph


OSD

List OSDs

ceph-volume lvm list

Note: only shows OSDs on the local host.

Example:

====== osd.0 =======

  [block]       /dev/ceph-64fda9eb-2342-43e3-bc3e-78e5c1bcda31/osd-block-ff991dbd-7698-44ab-ad90-102340ec05c7

      block device              /dev/ceph-64fda9eb-2342-43e3-bc3e-78e5c1bcda31/osd-block-ff991dbd-7698-44ab-ad90-102340ec05c7
      block uuid                uvsm7p-c9KU-iaVe-GJGv-NBRM-xGrr-XPf3eB
      cephx lockbox secret
      cluster fsid              ff74f760-84b2-4dc4-b518-8408e3f10779
      cluster name              ceph
      crush device class
      encrypted                 0
      osd fsid                  ff991dbd-7698-44ab-ad90-102340ec05c7
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/fioa

[1]
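
ceph-volume can also report on a single device, and supports JSON output, which is handy for scripting. A minimal sketch (the device path is hypothetical; flags may vary slightly by release):

 ceph-volume lvm list /dev/sdb
 ceph-volume lvm list --format json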

osd tree

ceph osd tree

Example:

ID  CLASS  WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-1         3.69246  root default
-3         1.09589      host vm-05
 0    ssd  1.09589          osd.0           up   1.00000  1.00000
-7         1.09589      host vm-06
 2    ssd  1.09589          osd.2         down         0  1.00000
-5         1.50069      host vm-07
 1    ssd  1.50069          osd.1           up   1.00000  1.00000
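
Recent releases also accept a status filter, which is useful on larger clusters. A sketch, assuming a release that supports filtering, to show only the down OSDs:

 ceph osd tree down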

osd stat

ceph osd stat
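
This prints a one-line summary of OSD counts and the osdmap epoch. Illustrative output for the three-OSD cluster in the tree example above (exact wording and epoch vary by release):

 3 osds: 2 up, 3 in; epoch: e127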

osd dump

ceph osd dump
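
The dump is verbose (cluster flags, pools, and one line per OSD). To pick out a single OSD, or to get machine-readable output, something like this works (osd.0 taken from the example above):

 ceph osd dump | grep '^osd.0 '
 ceph osd dump --format json-pretty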

Mark OSD In

 ceph osd in [OSD-NUM]
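
For example, to return osd.2 from the tree example above to data placement (the id is illustrative; the bare numeric form "2" is also accepted):

 ceph osd in osd.2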

Mark OSD Out

 ceph osd out [OSD-NUM]
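
Note that "out" only removes the OSD from data placement; the daemon can remain up. A usage example (illustrative id), plus the cluster-wide noout flag, which is often preferable for short maintenance windows because down OSDs are then not auto-marked out:

 ceph osd out osd.2
 ceph osd set noout      # before maintenance: do not auto-mark down OSDs out
 ceph osd unset noout    # after maintenance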

Delete OSD

First mark it out:

ceph osd out osd.{osd-num}

Mark it down:

ceph osd down osd.{osd-num}

Remove it:

ceph osd rm osd.{osd-num}

Check tree for removal:

ceph osd tree

---

If you get an error that the OSD is busy: [2]

Go to host that has the OSD and stop the service:

systemctl stop ceph-osd@{osd-num}

Remove it again:

ceph osd rm osd.{osd-num}

Check tree for removal:

ceph osd tree

If 'ceph osd tree' reports 'DNE' (does not exist), then do the following...

Remove from the CRUSH:

ceph osd crush rm osd.{osd-num}

Clear auth:

ceph auth del osd.{osd-num}

ref: [3]
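
Putting the steps together, removing osd.2 from the example cluster might look like the following (id and hostname are illustrative; on Luminous and newer, "ceph osd purge" combines the crush rm, auth del, and osd rm steps):

 ceph osd out osd.2
 ssh vm-06 systemctl stop ceph-osd@2    # the systemd unit takes the numeric id
 ceph osd crush rm osd.2
 ceph auth del osd.2
 ceph osd rm osd.2

 # or, on Luminous and newer, replace the last three commands with:
 ceph osd purge osd.2 --yes-i-really-mean-it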

Create OSD

Create OSD: [4]

pveceph osd create /dev/sd[X]

If the disk was in use before (for example, for ZFS or as an OSD), you first need to zap all traces of that usage:

ceph-volume lvm zap /dev/sd[X] --destroy
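
A minimal end-to-end sketch on a Proxmox node, assuming the target disk is /dev/sdb (hypothetical) and previously held data:

 ceph-volume lvm zap /dev/sdb --destroy
 pveceph osd create /dev/sdb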

Create OSD ID:

ceph osd create
 # will generate the next ID in sequence

Create the data directory and mount the disk:

mkdir -p /var/lib/ceph/osd/ceph-{osd-num}
mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-num}

Init data directory:

ceph-osd -i {osd-num} --mkfs --mkkey

Register:

ceph auth add osd.{osd-num} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring

Add to CRUSH map:

ceph osd crush add {id-or-name} {weight}  [{bucket-type}={bucket-name} ...]
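
A worked example of the manual sequence above, assuming the new id comes back as 3, the data disk is /dev/sdc1, the target host is vm-05, and the cluster is systemd-managed (all values are illustrative):

 ceph osd create                  # prints 3
 mkdir -p /var/lib/ceph/osd/ceph-3
 mount -o user_xattr /dev/sdc1 /var/lib/ceph/osd/ceph-3
 ceph-osd -i 3 --mkfs --mkkey
 ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring
 ceph osd crush add osd.3 1.0 host=vm-05
 systemctl start ceph-osd@3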

POOL

Pool Stats

ceph osd pool stats
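
The stats can also be limited to a single pool, and a couple of related commands give a broader view of pool usage (the pool name is illustrative):

 ceph osd pool stats rbd
 ceph osd pool ls detail
 ceph df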

References
