Ceph
Revision as of 06:20, 13 January 2024
OSD
List OSDs
volume lvm list
Note: this only lists OSDs on the local node.[1]
ceph-volume lvm list
Example:
====== osd.0 =======

  [block]       /dev/ceph-64fda9eb-2342-43e3-bc3e-78e5c1bcda31/osd-block-ff991dbd-7698-44ab-ad90-102340ec05c7

      block device              /dev/ceph-64fda9eb-2342-43e3-bc3e-78e5c1bcda31/osd-block-ff991dbd-7698-44ab-ad90-102340ec05c7
      block uuid                uvsm7p-c9KU-iaVe-GJGv-NBRM-xGrr-XPf3eB
      cephx lockbox secret
      cluster fsid              ff74f760-84b2-4dc4-b518-8408e3f10779
      cluster name              ceph
      crush device class
      encrypted                 0
      osd fsid                  ff991dbd-7698-44ab-ad90-102340ec05c7
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/fioa
osd tree
ceph osd tree
Example:
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         3.69246  root default
-3         1.09589      host vm-05
 0    ssd  1.09589          osd.0       up   1.00000  1.00000
-7         1.09589      host vm-06
 2    ssd  1.09589          osd.2     down         0  1.00000
-5         1.50069      host vm-07
 1    ssd  1.50069          osd.1       up   1.00000  1.00000
osd stat
ceph osd stat
osd dump
ceph osd dump
Mark OSD In (cluster places data on it)
ceph osd in [OSD-NUM]
Mark OSD Out (data is rebalanced off it)
ceph osd out [OSD-NUM]
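Note that `in`/`out` control whether CRUSH places data on the OSD, which is separate from the daemon being up or down. A common maintenance pattern is to mark the OSD out, let the cluster rebalance, do the work, then mark it in again. A minimal dry-run sketch (osd.2 is an example id; `echo` previews each command instead of touching a live cluster):

```shell
# Maintenance round-trip for one OSD. OSD_NUM is an example id;
# echo previews each command rather than running it.
OSD_NUM=2
echo "ceph osd out ${OSD_NUM}"   # stop placing new data on osd.2
# ... wait for rebalancing, perform the maintenance ...
echo "ceph osd in ${OSD_NUM}"    # let CRUSH place data on it again
```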
Delete OSD
ceph osd rm {osd-num}
You might also need to remove it from the CRUSH map:
ceph osd crush rm osd.{osd-num}
ref: [2]
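The referenced docs describe a full removal sequence: mark the OSD out, stop its daemon, remove it from CRUSH, delete its auth key, then delete the OSD itself. A dry-run sketch of that order (osd.2 is an example id; `echo` previews each step instead of running it):

```shell
# Full OSD removal in the order given by the add-or-rm-osds docs.
# OSD_NUM is an example id; echo previews the commands (dry run).
OSD_NUM=2
echo "ceph osd out ${OSD_NUM}"                # stop data placement
echo "systemctl stop ceph-osd@${OSD_NUM}"     # stop the daemon on its host
echo "ceph osd crush rm osd.${OSD_NUM}"       # remove from the CRUSH map
echo "ceph auth del osd.${OSD_NUM}"           # delete its cephx key
echo "ceph osd rm ${OSD_NUM}"                 # delete the OSD entry
```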
Create OSD
Create OSD:[3]
pveceph osd create /dev/sd[X]
If the disk was in use before (for example, for ZFS or as an OSD) you first need to zap all traces of that usage:
ceph-volume lvm zap /dev/sd[X] --destroy
Create OSD ID:
ceph osd create # will generate the next ID in sequence
Create directory:
mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
Init data directory:
ceph-osd -i {osd-num} --mkfs --mkkey
Register:
ceph auth add osd.{osd-num} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring
Add to CRUSH map:
ceph osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
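The manual-create steps above can be strung together as in this sketch. OSD_NUM, DEV, HOST, and WEIGHT are placeholder values (in practice OSD_NUM comes from `ceph osd create`), and `echo` previews each command instead of running it:

```shell
# Sketch of the manual OSD-create sequence with example values.
OSD_NUM=3          # normally the id returned by `ceph osd create`
DEV=/dev/sdb       # example data device
HOST=vm-05         # CRUSH host bucket the OSD belongs to
WEIGHT=1.0         # typically the device capacity in TiB
echo "mount -o user_xattr ${DEV} /var/lib/ceph/osd/ceph-${OSD_NUM}"
echo "ceph-osd -i ${OSD_NUM} --mkfs --mkkey"
echo "ceph auth add osd.${OSD_NUM} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${OSD_NUM}/keyring"
echo "ceph osd crush add osd.${OSD_NUM} ${WEIGHT} host=${HOST}"
```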
POOL
Pool Stats
ceph osd pool stats
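The command also accepts a single pool name to narrow the output. A small sketch (`rbd` is a hypothetical pool name; `echo` previews the command instead of querying a cluster):

```shell
# Stats for one pool only. POOL is a hypothetical pool name;
# echo previews the command (dry run).
POOL=rbd
echo "ceph osd pool stats ${POOL}"
```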
References
- ↑ ceph-volume lvm list — Ceph Documentation - https://docs.ceph.com/en/quincy/ceph-volume/lvm/list/
- ↑ Adding/Removing OSDs — Ceph Documentation - https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/
- ↑ Ceph OSD create — Proxmox VE Documentation - https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_osd_create