Proxmox/LVM


== Issues ==

=== When Problems Occur, Collect These ===

  pvs
  vgs
  lvs
  cat /etc/pve/storage.cfg
  pveversion -v


ref: https://forum.proxmox.com/threads/no-such-logical-volume-pve-data.117999/
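
If these need to go into a forum post or ticket, a minimal sketch to capture them all at once (the output path is arbitrary):

<pre>
# Capture all of the diagnostics above into a single file:
( pvs; vgs; lvs; cat /etc/pve/storage.cfg; pveversion -v ) > /tmp/pve-storage-report.txt 2>&1
</pre>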

=== mount error: no mds server is up or the cluster is laggy ===

dmesg:

<pre>
$ sudo dmesg
[124164.692856] libceph: mon2 (1)10.204.176.107:6789 session established
[124164.693222] libceph: client128014094 fsid ff74f760-84b2-4dc4-b518-8408e3f10779
[124164.693398] ceph: No mds server is up or the cluster is laggy
</pre>

journalctl -xe:

<pre>
$ sudo journalctl -xe
Sep 07 22:15:48 lmt-vm-07 systemd[1]: Mounting mnt-pve-cephfs.mount - /mnt/pve/cephfs...
░░ Subject: A start job for unit mnt-pve-cephfs.mount has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit mnt-pve-cephfs.mount has begun execution.
░░
░░ The job identifier is 76731.
Sep 07 22:15:48 lmt-vm-07 mount[519064]: mount error: no mds server is up or the cluster is laggy
Sep 07 22:15:48 lmt-vm-07 kernel: libceph: mon0 (1)10.204.176.105:6789 session established
Sep 07 22:15:48 lmt-vm-07 kernel: libceph: client128035060 fsid ff74f760-84b2-4dc4-b518-8408e3f10779
Sep 07 22:15:48 lmt-vm-07 kernel: ceph: No mds server is up or the cluster is laggy
Sep 07 22:15:48 lmt-vm-07 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a

Sep 07 22:58:08 lmt-vm-05 ceph-osd[1580]: 2024-09-07T22:58:08.130-0600 7fe1b4a1e6c0 -1 osd.0 30574 heartbeat_check: no reply from 10.x.x.x:6804 osd.2 ever on either front or back, first ping sent 2024-09-07T22:55:09.541700-0600 (oldest deadline 2024-09-07T22:55:29.541700-0600)
Sep 07 22:58:08 lmt-vm-05 ceph-osd[1580]: 2024-09-07T22:58:08.130-0600 7fe1b4a1e6c0 -1 osd.0 30574 get_health_metrics reporting 57 slow ops, oldest is osd_op(client.127471574.0:26450 5.a 5.a018ebca (undecoded) ondisk+write+known_if_redirected e30552)
Sep 07 22:58:08 lmt-vm-05 ceph-mon[1371]: 2024-09-07T22:58:08.322-0600 7f155561c6c0 -1 mon.lmt-vm-05@0(leader) e4 get_health_metrics reporting 9 slow ops, oldest is osd_failure(failed timeout osd.0 [v2:10.204.176.105:6802/1580,v1:10.x.x.x:6803/1580] for 23sec e30564 v30564)
</pre>
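
Before fighting the mount command, it is worth verifying the error's literal claim that no MDS is up. A sketch of the checks (the file system name cephfs is assumed from the mount unit above):

<pre>
ceph -s                  # overall cluster health summary
ceph mds stat            # MDS map; at least one daemon should be up:active
ceph fs status cephfs    # per-filesystem MDS/pool view ("cephfs" is assumed)
ceph osd tree            # OSD up/down state, given the heartbeat failures above
</pre>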

Trying something like this: [1]

  mount -t ceph 10.255.255.1,10.255.255.2,10.255.255.3:/ /storage/ -o 'mds_namespace=storage,rw,relatime,name=admin,secret=X,ms_mode=crc'
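
Note that on recent kernels mds_namespace= is a deprecated alias of fs=, so an equivalent form of the same attempt would be (a sketch; same placeholder secret and monitor list as above):

  mount -t ceph 10.255.255.1,10.255.255.2,10.255.255.3:/ /storage/ -o 'fs=storage,rw,relatime,name=admin,secret=X,ms_mode=crc'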

Grr:

<pre>
root@vm-07:~# mount -v -t ceph admin@.cephfs=/ /mnt/cephceph -o secretfile=/etc/pve/priv/ceph/cephfs.secret
parsing options: rw,secretfile=/etc/pve/priv/ceph/cephfs.secret
mount.ceph: resolved to: "10.x.x.x:3300,10.x.x.x:3300,10.x.x.x:3300,x.x.x:3300"
mount.ceph: trying mount with new device syntax: admin@ff74f760-84b2-4dc4-b518-8408e3f10779.cephfs=/
mount.ceph: options "name=admin,ms_mode=prefer-crc,key=admin,mon_addr=10.x.x.x:3300/10.x.x.x:3300/10.x.x.x:3300/10.x.x.x:3300" will pass to kernel
mount error: no mds server is up or the cluster is laggy
</pre>

== Mounting Ceph ==

Mount CephFS using Kernel Driver — Ceph Documentation: https://docs.ceph.com/en/quincy/cephfs/mount-using-kernel-driver/

  mount -t ceph {device-string}={path-to-mounted} {mount-point} -o {key-value-args} {other-args}
  mkdir /mnt/mycephfs
  mount -t ceph <name>@<fsid>.<fs_name>=/ /mnt/mycephfs

Here, name is the username of the CephX user being used to mount CephFS, fsid is the FSID of the Ceph cluster (which can be found with the ceph fsid command), and fs_name is the file system to mount.

Example (the mount point, missing from the upstream docs example, is added so the command actually runs):

  mount -t ceph cephuser@b3acfc0d-575f-41d3-9c91-0e7ed3dbb3fa.cephfs=/ /mnt/mycephfs -o mon_addr=192.168.0.1:6789,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
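
To fill in the fsid and fs_name parts on a live cluster, these read-only queries should suffice (a sketch):

<pre>
ceph fsid    # prints the cluster FSID for the <fsid> part
ceph fs ls   # lists file system names for the <fs_name> part
</pre>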

When using the mount helper, monitor hosts and the FSID are optional; the mount.ceph helper figures out these details automatically by finding and reading the Ceph conf file, e.g.:

  mount -t ceph cephuser@.cephfs=/ /mnt/mycephfs -o secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==

Note that the dot (.) still needs to be a part of the device string.

A potential problem with the above command is that the secret key is left in your shell's command history. To prevent that, copy the secret key into a file and pass the file using the secretfile option instead of secret:

  mount -t ceph cephuser@.cephfs=/ /mnt/mycephfs -o secretfile=/etc/ceph/cephuser.secret
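
A sketch of creating that secret file (assuming the CephX user is client.cephuser, matching the example above):

<pre>
# Write only the key (not the full keyring) to the file, then restrict access:
ceph auth get-key client.cephuser > /etc/ceph/cephuser.secret
chmod 600 /etc/ceph/cephuser.secret
</pre>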

== keywords ==