vmkfstools
Clone vmdk
vmkfstools -d thin -i [old.vmdk] [new.vmdk]
vmkfstools --diskformat thin --clonevirtualdisk [old.vmdk] [new.vmdk]
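A worked example, cloning a thick source disk to a thin copy on another datastore (paths are illustrative):
vmkfstools -d thin -i /vmfs/volumes/ds-old/vm1/vm1.vmdk /vmfs/volumes/ds-new/vm1/vm1.vmdk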
Rename vmdk
vmkfstools -E [old.vmdk] [new.vmdk]
vmkfstools --renamevirtualdisk [old.vmdk] [new.vmdk]
Create Filesystem
VMFS5:
vmkfstools -C vmfs5 [/dev/disks/t10.some.disk:1] -S [LABEL]
VMFS3:
vmkfstools -C vmfs3 -b 4M [/vmfs/devices/disks/mpx.vmhba2:C0:T0:L0:1] -S [LABEL]
vmkfstools -C vmfs3 -b 1M [/vmfs/devices/disks/mpx.vmhba2:C0:T0:L0:1] -S [LABEL]
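To verify the result afterwards, -P/--queryfs (with -h for human-readable sizes) reports the filesystem details; [LABEL] is the placeholder used above:
vmkfstools -P -h /vmfs/volumes/[LABEL]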
help
vmkfstools - Command line options:
# vmkfstools
No valid command specified

OPTIONS FOR FILE SYSTEMS:
vmkfstools -C --createfs vmfs3
               -b --blocksize #[mMkK]
               -S --setfsname fsName
           -Z --spanfs span-partition
           -G --growfs grown-partition
   deviceName
           -P --queryfs -h --humanreadable
           -T --upgradevmfs
               -x --upgradetype [zeroedthick|eagerzeroedthick|thin]
           -u --upgradefinish
   vmfsPath

OPTIONS FOR VIRTUAL DISKS:
vmkfstools -c --createvirtualdisk #[gGmMkK]
               -d --diskformat [zeroedthick|thin|eagerzeroedthick]
               -a --adaptertype [buslogic|lsilogic|ide]
           -w --writezeros
           -j --inflatedisk
           -k --eagerzero
           -U --deletevirtualdisk
           -E --renamevirtualdisk srcDisk
           -i --clonevirtualdisk srcDisk
               -d --diskformat [zeroedthick|thin|eagerzeroedthick|rdm:<device>|rdmp:<device>|2gbsparse]
           -X --extendvirtualdisk #[gGmMkK]
               [-d --diskformat eagerzeroedthick]
           -M --migratevirtualdisk
           -r --createrdm /vmfs/devices/disks/...
           -q --queryrdm
           -z --createrdmpassthru /vmfs/devices/disks/...
           -v --verbose #
           -g --geometry
   vmfsPath

OPTIONS FOR DEVICES:
           -L --lock [reserve|release|lunreset|targetreset|busreset] /vmfs/devices/disks/...
           -B --breaklock /vmfs/devices/disks/...

For more information, please run 'man vmkfstools' to refer to the online manual.
VMware VMDK
See: VMware VMDK
RDM
Two RDM types:
- rdm: Virtual compatibility mode raw disk mapping. An rdm virtual disk grants access to the entire raw disk and the virtual disk can participate in snapshots.
- "This is an additional available Raw Device Mapping format with virtual compatibility mode set. A subset of SCSI commands are passed-through to the guest operating system to/from a mapped physical raw LUN. An added benefit of this format is the support of virtual machine snapshots." [1]
- rdmp: Physical compatibility mode (pass-through) raw disk mapping. An rdmp virtual disk passes SCSI commands directly to the hardware, but the virtual disk cannot participate in snapshots.
- "This is the default Raw Device Mapping format with Physical compatibility mode. Most SCSI commands are passed-through to the guest operating system to/from a mapped physical raw LUN. This is required for cross-host virtual machine clustering; both virtual machines share the same mapping file. This format does not support virtual machine snapshots" [2]
RDM devices will appear as "Virtual disk" to VMs:
Device: Virtual disk, Bus: 2, Target: 1, Lun: 0, Type: Direct Access
RDM Passthru devices will appear as "IODRIVE" to VMs:
Device: IODRIVE, Bus: 2, Target: 2, Lun: 0, Type: Direct Access
vmkfstools --help:
-r --createrdm /vmfs/devices/disks/...          # createType="vmfsRawDeviceMap"
-z --createrdmpassthru /vmfs/devices/disks/...  # createType="vmfsPassthroughRawDeviceMap"
-a --adaptertype [buslogic|lsilogic|ide]        # ddb.adapterType = "lsilogic" (default is buslogic)
-q --queryrdm
Create RDM:
vmkfstools -r /vmfs/devices/disks/vml.48d30...1c4cd RDM.vmdk -a lsilogic
Create RDM Passthru:
vmkfstools -z /vmfs/devices/disks/eui.01000...00056 RDMp.vmdk -a lsilogic
Query RDM:
vmkfstools -q RDM.vmdk
   Disk RDM.vmdk is a Non-passthrough Raw Device Mapping
   Maps to: vml.010000000039303737494f44524956

vmkfstools -q RDMp.vmdk
   Disk RDMp.vmdk is a Passthrough Raw Device Mapping
   Maps to: vml.010000000031323432473030333949
References:
- VMware KB: Cloning and converting virtual machine disks with vmkfstools - http://kb.vmware.com/kb/1028042
-- VMX --
# RDM
scsi0:1.present = "TRUE"
scsi0:1.fileName = "/vmfs/volumes/mydatastore/RDM.vmdk"
scsi0:1.deviceType = "scsi-hardDisk"

# RDM Passthru
scsi0:2.present = "TRUE"
scsi0:2.fileName = "/vmfs/volumes/mydatastore/RDMp.vmdk"
scsi0:2.mode = "independent-persistent"
scsi0:2.deviceType = "scsi-hardDisk"
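After editing a VMX by hand, the VM has to pick up the change; one way, a sketch assuming the VM is powered off ([vmid] comes from the first command):
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload [vmid]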
-- Issues --
Warning: you must specify the device by its full /vmfs/devices/disks/... path, or you will get the following error:
Failed to create virtual disk: The specified device is not a valid physical disk device (20).
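For example (device name is a placeholder):
# fails - device given without the full path:
vmkfstools -r naa.XXXX RDM.vmdk
# works - full device path:
vmkfstools -r /vmfs/devices/disks/naa.XXXX RDM.vmdk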
VMware KB: Raw Device Mapping option is greyed out - http://kb.vmware.com/kb/1017704
VMware KB: Creating Raw Device Mapping (RDM) is not supported for local storage - http://kb.vmware.com/kb/1017530
- "This behaviour is by design. It is mandatory that RDM candidates or devices support SCSI Inquiry VPD page code 0x83 to be used for RDMs. In other words, the device must export a global serial number for ESX to uniquely identify the device for use as an RDM.
- This capability is generally not possible or included on local controllers and their attached storage, although some controllers may have an implementation for this. As this cannot be guaranteed across all supported platforms, local storage RDMs are not supported and by default filtered or disabled as RDM candidates on VMware ESX Server. The RDM Filter can be disabled under such conditions.
- The option RdmFilter.HbaShared is selected in ESX/ESXi 4.1 or 5.x. To deselect this option, click Configuration > Software > Advanced Settings > RDM Filter, deselect the option, then click OK. For more information, see Configuring advanced options for ESX/ESXi (1038578)."
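To locate the corresponding advanced option from the ESXi shell (the exact option path varies by release, so search for it rather than assuming):
esxcli system settings advanced list | grep -i rdmfilter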
UNMAP
ESXi 5.0 and ESXi 5.1
"vSphere 5.0 introduced the VAAI Thin Provisioning Block Space Reclamation (UNMAP) Primitive. This feature was designed to efficiently reclaim deleted space to meet continuing storage needs. ESXi 5.0 issues UNMAP commands for space reclamation during several operations. However, there was varying or slow response time from storage devices in response to these commands." [3]
"VMware introduced a new feature in vSphere 5.0 called Space Reclamation, as part of VAAI Block Thin Provisioning. Space reclamation is a garbage collection process that helps storage partners to efficiently reclaim deleted space in coordination with vSphere 5.0. ESXi 5.0 issues UNMAP commands for Space Reclamation in critical regions during several operations with the expectation that the operation would complete quickly. Due to varied response times from the storage devices, UNMAP command can result in poor performance of the system and should be disabled on the ESXi 5.0 host." [4]
To confirm if SCSI UNMAP is supported on a LUN:
esxcli storage core device vaai status get -d [naa.XXXX]
   Delete Status: unsupported   # UNMAP not supported on this LUN
   Delete Status: supported     # UNMAP supported
vmkfstools hidden options:
-y   reclaim blocks   # pass percentage (recommended 60%)
-F   unmap blocks     # pass virtual disk - not really used
Reclaim Usage:
cd /vmfs/volumes/volume_name
vmkfstools -y [percentage_of_deleted_blocks_to_reclaim]
"This command creates temporary files at the top level of the datastore. These files can be as large as the aggregate size of blocks being reclaimed."
"The time that the operation takes to complete varies by storage vendor. As the operation is time consuming, consider running it during a maintenance window because the high I/O generated by the SCSI UNMAP operation may impact storage performance on the array, thus impacting running virtual machines."
Success Example:
# cd /vmfs/volumes/ds-virt-06
# vmkfstools -y 60
Attempting to reclaim 60% of free capacity 359.8 GB (215.9 GB) on VMFS-5 file system 'ds-virt-06' with max file size 64 TB.
Create file .vmfsBalloona33ueq of size 215.9 GB to reclaim free blocks.
Done.
Unsupported path example:
# cd /
# vmkfstools -y 60
Space reclamation only works on VMFS3+ volumes.
Virtual disk example:
# cd /vmfs/volumes/mydisk/vm1
# vmkfstools -F vm1.vmdk
vmfsDisk: 1, rdmDisk: 0, blockSize: 512
Unmap: 100% done.
Determine if UNMAP is enabled: (option is hidden on ESXi 5.0)
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
   # /VMFS3/EnableBlockDelete - Description: Enable VMFS block delete
   # value of 0 is disabled (default)
   # value of 1 is enabled
You may need to enable UNMAP support using this command:
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
References:
- VMware KB: Using vmkfstools to reclaim VMFS deleted blocks on thin-provisioned LUNs - http://kb.vmware.com/kb/2014849
- VMware KB: Disabling VAAI Thin Provisioning Block Space Reclamation (UNMAP) in ESXi 5.0 - http://kb.vmware.com/kb/2007427
ESXi 5.5
To confirm if SCSI UNMAP is supported on a LUN:
esxcli storage core device vaai status get -d naa.XXXX
   Delete Status: unsupported   # UNMAP not supported on this LUN
   Delete Status: supported     # UNMAP supported
Execute UNMAP:
esxcli storage vmfs unmap -l [mydatastore]
esxcli storage vmfs unmap -l [mydatastore] -n 100   # -n sets how many VMFS blocks are unmapped per iteration (see Dhana's note below)
Usage Help:
esxcli storage vmfs unmap
   unmap                        # Reclaim the space by unmapping free blocks from VMFS Volume
   -n|--reclaim-unit=<long>     # Number of VMFS blocks that should be unmapped per iteration.
   -l|--volume-label=<str>      # The label of the VMFS volume to unmap the free blocks.
   -u|--volume-uuid=<str>       # The uuid of the VMFS volume to unmap the free blocks.
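To find the volume label or UUID to pass with -l/-u, list the mounted filesystems first:
esxcli storage filesystem list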
Determine if UNMAP is enabled: (option is hidden on ESXi 5.0)
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
   # /VMFS3/EnableBlockDelete - Description: Enable VMFS block delete
   # value of 0 is disabled (default)
   # value of 1 is enabled
You may need to enable UNMAP support using this command:
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
---
CHogan: [5]
"We should highlight the difference between reclaiming dead space at the volume level and reclaiming dead space within the Guest OS.

To answer the first question, which related to reclaiming dead space on a VMFS volume which is sitting on top of a thin provisioned LUN, VAAI UNMAP is still a manual process, but the command syntax has changed to make it a little clearer. Check it out via: # esxcli storage vmfs unmap

To answer the second poster's question about reclaiming dead space in the Guest OS, the ability to reclaim dead space from within the Guest OS is still limited to a single use case in vSphere, which is reclaiming space in VMware View desktops deployed on the new SE Sparse Disk format. I don't know if we allow trim/unmaps to pass directly through - I suspect we will not, as the granularity of a VMDK block does not match that of a VMFS block (4KB versus 1MB), so even if we could unclaim a single 4KB block within the Guest, we would have to find 256 contiguous blocks to unclaim a 1MB VMFS block. This is why we have the SE Sparse format, which allows blocks to be moved around within the VMDK to give us this contiguous range of free space. However, I'm open to being corrected by some of the folks who work in this area, but my gut feel is no, we do not pass these primitives through. HTH, Cormac"
Dhana: [6]
In ESX 5.5 we still do not track the blocks that were originally written to and then freed. So when you run the CLI command we actually issue unmaps on all the vmfs blocks that are marked as free in the vmfs metadata. The "-n" option specifies how much of the free space we reserve to issue the unmap in one shot. IOW, this amount of free space is not available for allocation while the unmaps are being issued by the host and processed by the array. regards, Dhana.
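As a concrete reading of the above: on a VMFS-5 volume (1MB file blocks), the command below temporarily reserves roughly 100MB of free space per iteration while the array processes the unmaps; a larger -n reclaims faster but holds more free space back (datastore name is a placeholder):
esxcli storage vmfs unmap -l [mydatastore] -n 100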
References:
- VMware Communities: Any changes to UNMAP support in ESXi5.5 - http://communities.vmware.com/message/2224488