<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=ESX%2FPSOD</id>
	<title>ESX/PSOD - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=ESX%2FPSOD"/>
	<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=ESX/PSOD&amp;action=history"/>
	<updated>2026-05-07T02:49:10Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://aznot.com/index.php?title=ESX/PSOD&amp;diff=2546&amp;oldid=prev</id>
		<title>Kenneth: /* ESXi 5.0 Dump Collector */</title>
		<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=ESX/PSOD&amp;diff=2546&amp;oldid=prev"/>
		<updated>2015-10-15T20:40:07Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;ESXi 5.0 Dump Collector&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== PSOD ==&lt;br /&gt;
&lt;br /&gt;
http://i.snag.gy/3nZHO.jpg&lt;br /&gt;
&lt;br /&gt;
A PSOD (Purple Screen of Death) is the VMware ESX equivalent of a Windows BSOD (Blue Screen of Death).  It occurs when the kernel panics and can no longer function.  The most common causes of a PSOD are:&lt;br /&gt;
&lt;br /&gt;
* Hardware failure&lt;br /&gt;
* Out of memory&lt;br /&gt;
* Hung CPU conditions&lt;br /&gt;
* Misbehaving drivers (null pointers, invalid memory access, etc.)&lt;br /&gt;
* NMIs (Non-Maskable Interrupts)&lt;br /&gt;
&lt;br /&gt;
When a PSOD occurs, one should collect the following:&lt;br /&gt;
&lt;br /&gt;
* Screenshot of PSOD kernel stack trace screen (if possible)&lt;br /&gt;
* Support logs from the vm-support command&lt;br /&gt;
* Kernel log (should be included in vm-support, but better safe than sorry)&lt;br /&gt;
* Kernel core dump (only needed if a developer asks for it)&lt;br /&gt;
&lt;br /&gt;
If the cause isn&amp;#039;t obvious from the kernel stack trace on the PSOD screen itself, the kernel log is the next best place to look for the cause of the panic.&lt;br /&gt;
&lt;br /&gt;
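A quick grep narrows an extracted kernel log down to the interesting lines; a minimal sketch (the sample file contents and search patterns are illustrative, not an exhaustive list of panic markers):&lt;br /&gt;

```shell
# Fabricate a tiny stand-in for an extracted log, then scan it the
# way you would scan a real vmkernel-log.1 produced by esxcfg-dumppart -L.
printf '%s\n' 'VMware ESX Server [Releasebuild-98103]' \
    'PCPU 1 locked up. Failed to ack TLB invalidate.' > vmkernel-log.sample
grep -n -i -E 'panic|exception|locked up|failed' vmkernel-log.sample
```
&lt;br /&gt;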
== Core Dump Extract ==&lt;br /&gt;
&lt;br /&gt;
To manually collect the kernel log:&lt;br /&gt;
&lt;br /&gt;
Quick Log and Dump collection:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# will output: vmkernel-log.1 and vmkernel-zdump.1&lt;br /&gt;
# esxi 5.x will put kernel dump here: /scratch/core/vmkernel-zdump.*&lt;br /&gt;
# esxcfg-dumppart -C -D /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
esxcfg-dumppart -L /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
esxcfg-dumppart -C -D /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; ) -z `pwd`/vmkernel-zdump&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To extract the Kernel Log (vmkernel-log.1) from an existing Kernel Dump (vm-support:/var/core/vmkernel-zdump.1):&lt;br /&gt;
 esxcfg-dumppart -L vmkernel-zdump.1&lt;br /&gt;
&lt;br /&gt;
The core dumps are also collected as part of the vm-support tool collection:&lt;br /&gt;
 vm-support&lt;br /&gt;
&lt;br /&gt;
=== vmkernel dump version mismatch ===&lt;br /&gt;
&lt;br /&gt;
If the esxcfg-dumppart version doesn&amp;#039;t match the vmkernel dump version, log extraction fails with:&lt;br /&gt;
 Error running command. Unable to extract log. Error: vmkernel dump version mismatch!&lt;br /&gt;
 Expected version: 196648, this dump file: 196647&lt;br /&gt;
&lt;br /&gt;
vmkernel dump versions:&lt;br /&gt;
 131106 - VMware ESXi 5.0.0 GA&lt;br /&gt;
 131106 - VMware ESXi 5.0.0 Update 1&lt;br /&gt;
 131106 - VMware ESXi 5.0.0 Update 2&lt;br /&gt;
 131106 - VMware ESXi 5.0.0 Update 3&lt;br /&gt;
&lt;br /&gt;
 196647 - VMware ESXi 5.1.0 GA&lt;br /&gt;
 196647 - VMware ESXi 5.1.0 Update 1&lt;br /&gt;
&lt;br /&gt;
 196648 - VMware ESXi 5.1.0 Update 2&lt;br /&gt;
&lt;br /&gt;
 262193 - VMware ESXi 5.5.0 GA&lt;br /&gt;
&lt;br /&gt;
 262194 - VMware ESXi 5.5.0 Update 1&lt;br /&gt;
 262194 - VMware ESXi 5.5.0 Update 2&lt;br /&gt;
&lt;br /&gt;
 52 - VMware ESXi 6.0.0 GA (release candidate value; may change in the final release)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
VMware-VMvisor-Installer-5.0.0-469512.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso&lt;br /&gt;
&lt;br /&gt;
VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso&lt;br /&gt;
&lt;br /&gt;
VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.5.0.update01-1623387.x86_64.iso&lt;br /&gt;
VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64.iso&lt;br /&gt;
&lt;br /&gt;
VMware-VMvisor-Installer-6.0.0-2159203.x86_64.iso&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Commands ==&lt;br /&gt;
&lt;br /&gt;
List core dump partitions:&lt;br /&gt;
 esxcfg-dumppart --list&lt;br /&gt;
 esxcfg-dumppart --get-config&lt;br /&gt;
&lt;br /&gt;
List active core dump partitions:&lt;br /&gt;
 esxcfg-dumppart --get-active&lt;br /&gt;
&lt;br /&gt;
Quick Log and Dump extract:&lt;br /&gt;
 # output: vmkernel-log.1 and vmkernel-zdump.1&lt;br /&gt;
 esxcfg-dumppart -L $( esxcfg-dumppart --get-active | awk &amp;#039;{print $2}&amp;#039; )&lt;br /&gt;
 esxcfg-dumppart -C -D /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
&lt;br /&gt;
 # ESXi sometimes dumps to /scratch/core/vmkernel-zdump.1&lt;br /&gt;
 mv /scratch/core/vmkernel-zdump.1 .&lt;br /&gt;
 # or&lt;br /&gt;
 esxcfg-dumppart -C -D /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; ) -z /scratch/dump_out&lt;br /&gt;
&lt;br /&gt;
=== Extract Log File from PSOD ===&lt;br /&gt;
&lt;br /&gt;
Get core dump partition:&lt;br /&gt;
 esxcfg-dumppart --list&lt;br /&gt;
 esxcfg-dumppart --get-active  # second column&lt;br /&gt;
&lt;br /&gt;
Extract kernel log:&lt;br /&gt;
 esxcfg-dumppart -L [CORE_DUMP_PARTITION]&lt;br /&gt;
 esxcfg-dumppart -L /dev/sda2    # esx&lt;br /&gt;
 esxcfg-dumppart -L /vmfs/devices/disks/naa.60024e8073ba3100138d088b03c89bbf:7    # esxi&lt;br /&gt;
&lt;br /&gt;
Tricky automatic kernel log extract:&lt;br /&gt;
 esxcfg-dumppart -L $( esxcfg-dumppart --get-active | awk &amp;#039;{print $2}&amp;#039; )&lt;br /&gt;
&lt;br /&gt;
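The substitution trick above leans on the two-column output of esxcfg-dumppart --get-active (device name, then console path); a stand-alone illustration with a fabricated sample line, assuming that layout (the naa ID is made up):&lt;br /&gt;

```shell
# Fabricated sample of `esxcfg-dumppart --get-active` output:
# column 1 = VMkernel device name (prefixed with /vmfs/devices/disks/ for -D),
# column 2 = console device path (passed straight to -L).
sample='naa.60024e8073ba3100138d088b03c89bbf:7 /vmfs/devices/disks/naa.60024e8073ba3100138d088b03c89bbf:7'
echo "$sample" | awk '{print $1}'
echo "$sample" | awk '{print $2}'
```
&lt;br /&gt;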
VMware KB: Extracting the log file after an ESX or ESXi host fails with a purple screen error - http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;amp;externalId=1006796&lt;br /&gt;
: This article provides steps to extract a log from a vmkernel-zdump file after a purple diagnostic screen error. This log contains similar information to that seen on the purple diagnostic screen and can be used in further troubleshooting.&lt;br /&gt;
&lt;br /&gt;
Extract the log file from a vmkernel-zdump file using a command line utility on the ESX or ESXi host. This utility differs for different versions of ESX or ESXi.&lt;br /&gt;
&lt;br /&gt;
For ESX 3.0 and 3.5, use the vmkdump utility:&lt;br /&gt;
&lt;br /&gt;
 # vmkdump -l &amp;lt;vmkernel-zdump-filename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For ESXi 3.5, ESX and ESXi 4.x, use the esxcfg-dumppart utility:&lt;br /&gt;
&lt;br /&gt;
 # esxcfg-dumppart -L &amp;lt;vmkernel-zdump-filename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To extract the log file from a vmkernel-zdump file:&lt;br /&gt;
&lt;br /&gt;
Find the vmkernel-zdump file in the /root/ or /var/core/ directory:&lt;br /&gt;
&lt;br /&gt;
 # ls /root/vmkernel* /var/core/vmkernel*&lt;br /&gt;
 /var/core/vmkernel-zdump-073108.09.16.1&lt;br /&gt;
&lt;br /&gt;
Use the vmkdump or esxcfg-dumppart utility to extract the log. For example:&lt;br /&gt;
&lt;br /&gt;
 # vmkdump -l /var/core/vmkernel-zdump-073108.09.16.1&lt;br /&gt;
 created file vmkernel-log.1&lt;br /&gt;
&lt;br /&gt;
 # esxcfg-dumppart -L /var/core/vmkernel-zdump-073108.09.16.1&lt;br /&gt;
 created file vmkernel-log.1&lt;br /&gt;
&lt;br /&gt;
The vmkernel-log.1 file is plain text, though it may start with null characters. Focus on the end of the log, which looks similar to:&lt;br /&gt;
&lt;br /&gt;
 VMware ESX Server [Releasebuild-98103]&lt;br /&gt;
 PCPU 1 locked up. Failed to ack TLB invalidate.&lt;br /&gt;
 frame=0x3a37d98 ip=0x625e94 cr2=0x0 cr3=0x40c66000 cr4=0x16c&lt;br /&gt;
 es=0xffffffff ds=0xffffffff fs=0xffffffff gs=0xffffffff&lt;br /&gt;
 eax=0xffffffff ebx=0xffffffff ecx=0xffffffff edx=0xffffffff&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Note: The file name created for the log in this example is vmkernel-log.1. If another file with the same name already exists, the new file is created with the number suffix incremented.&lt;br /&gt;
&lt;br /&gt;
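That suffix behavior is easy to mimic; a sketch (the helper function is ours, not part of the VMware tools):&lt;br /&gt;

```shell
# Find the next free vmkernel-log.N name, mimicking how the
# extraction tools avoid clobbering earlier extractions.
next_log_name() {
    i=1
    while [ -e "vmkernel-log.$i" ]; do i=$((i + 1)); done
    echo "vmkernel-log.$i"
}
touch vmkernel-log.1 vmkernel-log.2   # pretend two extractions already exist
next_log_name                         # prints vmkernel-log.3 in a directory with only these two
```
&lt;br /&gt;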
== Copy Core Dump ==&lt;br /&gt;
&lt;br /&gt;
Use &amp;#039;-C&amp;#039; or &amp;#039;--copy&amp;#039;:&lt;br /&gt;
 esxcfg-dumppart -C&lt;br /&gt;
&lt;br /&gt;
 esxcfg-dumppart --copy --devname /vmfs/devices/disks/naa.xxxxx:x&lt;br /&gt;
               --newonly --zdumpname esxdump # (to copy new zdump only)&lt;br /&gt;
&lt;br /&gt;
Tricky automatic core dump:&lt;br /&gt;
 esxcfg-dumppart -C -D /vmfs/devices/disks/$( esxcfg-dumppart --get-active | awk &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
&lt;br /&gt;
== Deactivate Core Dump ==&lt;br /&gt;
&lt;br /&gt;
Deactivate the active partition:&lt;br /&gt;
 esxcfg-dumppart --deactivate&lt;br /&gt;
&lt;br /&gt;
Reactivate the partition:&lt;br /&gt;
 esxcfg-dumppart --activate&lt;br /&gt;
&lt;br /&gt;
Wipe the dump partition (it must be deactivated first):&lt;br /&gt;
 esxcfg-dumppart --deactivate&lt;br /&gt;
 #  dd if=/dev/zero of=$( esxcfg-dumppart --get-config | awk &amp;#039;{print $2}&amp;#039; ) conv=notrunc  # zeroes the whole partition; needlessly slow&lt;br /&gt;
 dd if=/dev/zero of=$( esxcfg-dumppart --get-config | awk &amp;#039;{print $2}&amp;#039; ) count=512 conv=notrunc  # zeroing the first 512 sectors (256 KB) is enough&lt;br /&gt;
 esxcfg-dumppart --activate&lt;br /&gt;
&lt;br /&gt;
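The count=512 trick above zeroes only the first 512 sectors instead of the whole partition; the effect can be seen on an ordinary file standing in for the partition device:&lt;br /&gt;

```shell
# Stand-in for a dump partition: a 1 MiB file of 0xff bytes.
dd if=/dev/zero bs=1024 count=1024 2>/dev/null | tr '\0' '\377' > fake-partition
# Zero only the first 512 sectors (512 * 512 B = 256 KiB);
# conv=notrunc leaves the rest of the "partition" untouched.
dd if=/dev/zero of=fake-partition count=512 conv=notrunc 2>/dev/null
```
&lt;br /&gt;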
Set and activate a partition:&lt;br /&gt;
 # esxcfg-dumppart --set [DEVICE]:[PARTITION]&lt;br /&gt;
 # OR # esxcfg-dumppart --set /vmfs/devices/disks/[DEVICE]:[PARTITION]&lt;br /&gt;
 esxcfg-dumppart --set $( esxcfg-dumppart --get-config | awk &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
&lt;br /&gt;
== Dump Partition Example ==&lt;br /&gt;
&lt;br /&gt;
In this example, partition 7 is the dump partition (VMware ESXi 5.0.0 build-623860).&lt;br /&gt;
&lt;br /&gt;
For reference, the output columns are:&lt;br /&gt;
 # [&amp;quot;partNum startSector endSector type attr&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
Get partitions (interesting that the type column for partition 7 doesn&amp;#039;t indicate it is a dump partition here):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# partedUtil get $( esxcfg-dumppart --get-config | awk &amp;#039;{print $2}&amp;#039; | awk -F: &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
36468 255 63 585871964&lt;br /&gt;
1 64 8191 0 128&lt;br /&gt;
5 8224 520191 0 0&lt;br /&gt;
6 520224 1032191 0 0&lt;br /&gt;
7 1032224 1257471 0 0&lt;br /&gt;
8 1257504 1843199 0 0&lt;br /&gt;
2 1843200 10229759 0 0&lt;br /&gt;
3 10229760 585871930 0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Get partitions with named types and type GUIDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# partedUtil getptbl $( esxcfg-dumppart --get-config | awk &amp;#039;{print $2}&amp;#039; | awk -F: &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
gpt&lt;br /&gt;
36468 255 63 585871964&lt;br /&gt;
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128      # root (4MB)&lt;br /&gt;
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0        # /bootbank (260MB)&lt;br /&gt;
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0     # /altbootbank (260MB)&lt;br /&gt;
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0  # core dump partition (110MB)&lt;br /&gt;
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0    # /store (300MB)&lt;br /&gt;
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0   # /scratch (4.2GB)&lt;br /&gt;
3 10229760 585871930 AA31E02A400F11DB9590000C2911D1B8 vmfs 0        # datastore1 (remaining space)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
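The sizes noted in the comments follow from the 512-byte sector counts; for the vmkDiagnostic partition (partition 7):&lt;br /&gt;

```shell
# (endSector - startSector + 1) 512-byte sectors
echo $(( (1257471 - 1032224 + 1) * 512 ))   # 115326976 bytes, roughly 110 MB
```

The same arithmetic reproduces, to rounding, the other sizes noted above.&lt;br /&gt;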
Get usable first and last sectors:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# partedUtil getUsableSectors $( esxcfg-dumppart --get-config | awk &amp;#039;{print $2}&amp;#039; | awk -F: &amp;#039;{print $1}&amp;#039; )&lt;br /&gt;
34 585871930&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also interesting:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 # df&lt;br /&gt;
Filesystem         Bytes          Used    Available Use% Mounted on&lt;br /&gt;
VMFS-5      294473695232    1326448640 293147246592   0% /vmfs/volumes/datastore1&lt;br /&gt;
vfat          4293591040      98828288   4194762752   2% /vmfs/volumes/4fac472e-d1ddf2c4-a597-6431504f5534 (/scratch)&lt;br /&gt;
vfat           261853184     134205440    127647744  51% /vmfs/volumes/95ec6872-9e0d6b2c-3537-b1c307ab1cf4 (/bootbank)&lt;br /&gt;
vfat           261853184     147767296    114085888  56% /vmfs/volumes/b9d1ea75-466c705d-94db-eecb9f72749b (/altbootbank)&lt;br /&gt;
vfat           299712512     188481536    111230976  63% /vmfs/volumes/4fac4727-706fb5d8-d1cf-6431504f5534 (/store)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /vmfs/volumes/datastore1 matches partition 3 (off by ~250MB)&lt;br /&gt;
* /vmfs/volumes/4fac472e-d1ddf2c4-a597-6431504f5534 (/scratch) matches partition 2 (off by ~300KB)&lt;br /&gt;
* /vmfs/volumes/95ec6872-9e0d6b2c-3537-b1c307ab1cf4 (/bootbank) matches partition 5 or 6 (off by ~300KB)&lt;br /&gt;
* /vmfs/volumes/b9d1ea75-466c705d-94db-eecb9f72749b (/altbootbank) matches partition 5 or 6 (off by ~300KB)&lt;br /&gt;
* /vmfs/volumes/4fac4727-706fb5d8-d1cf-6431504f5534 (/store) matches partition 8 (off by ~160KB)&lt;br /&gt;
&lt;br /&gt;
== ESXi 5.0 Dump Collector ==&lt;br /&gt;
&lt;br /&gt;
 UDP port 6500&lt;br /&gt;
&lt;br /&gt;
ESXi Network Dump Collector in VMware vSphere 5.0 - http://kb.vmware.com/kb/1032051&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The netdump protocol is used for sending coredumps from a failed ESXi host to the Dump Collector service. This service only supports IPv4. By default, this service listens on UDP port 6500. The network traffic is not encrypted, and there is no authentication or authorization mechanism to ensure the integrity or validity of any data received by the Dump Collector service. It is recommended that the VMkernel network used for network coredump collection be physically or logically segmented (such as a separate LAN/VLAN) to ensure that the traffic is not intercepted.&amp;quot; [http://kb.vmware.com/kb/1032051]&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
Enable ESXi 5.x Dump Collector: (example for esxlogger)&lt;br /&gt;
 # NOTE: Have to specify IP address!&lt;br /&gt;
 # esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.50.47.100 --server-port 6500&lt;br /&gt;
 esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.50.47.97 --server-port 6500&lt;br /&gt;
 esxcli system coredump network set --enable true&lt;br /&gt;
 auto-backup.sh&lt;br /&gt;
 &lt;br /&gt;
 # (Optional) Check that ESXi Dump Collector is configured correctly:&lt;br /&gt;
 esxcli system coredump network get&lt;br /&gt;
&lt;br /&gt;
Test by triggering a core dump:&lt;br /&gt;
 vsish -e set /reliability/crashMe/Panic&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* [http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc_50%2FGUID-85D78165-E590-42CF-80AC-E78CBA307232.html Configure ESXi Dump Collector with ESXCLI]&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
Disable coredump:&lt;br /&gt;
 esxcli system coredump network set --enable false&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
Unconfigured:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# esxcli system coredump network get&lt;br /&gt;
   Enabled: false&lt;br /&gt;
   Host VNic:&lt;br /&gt;
   Network Server IP:&lt;br /&gt;
   Network Server Port: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configured:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# esxcli system coredump network get&lt;br /&gt;
   Enabled: true&lt;br /&gt;
   Host VNic: vmk0&lt;br /&gt;
   Network Server IP: 10.50.47.97&lt;br /&gt;
   Network Server Port: 6500&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
Check Network Dump Server Service:&lt;br /&gt;
&lt;br /&gt;
Check services under VCSA management web interface:&lt;br /&gt;
 https://VCSA:5480/&lt;br /&gt;
&lt;br /&gt;
To check if the NetDumper service is running in VCSA: [http://kb.vmware.com/kb/1039058]&lt;br /&gt;
 /etc/init.d/vmware-netdumper status&lt;br /&gt;
&lt;br /&gt;
Check the Network Dump Client and its ability to reach the server (ESXi 5.1+):&lt;br /&gt;
 esxcli system coredump network check&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
VMware KB: Troubleshooting the ESXi Dump Collector service in vSphere 5.0 - http://kb.vmware.com/kb/2003042&lt;br /&gt;
&lt;br /&gt;
Send test traffic &amp;#039;&amp;#039;&amp;#039;from the ESXi host&amp;#039;&amp;#039;&amp;#039; to the Dump Collector service at its configured IP address and port using the nc command:&lt;br /&gt;
&lt;br /&gt;
Example from KB:&lt;br /&gt;
 # from an ESXi server...&lt;br /&gt;
 nc -z -w 1 -s VMkernelIPAddress -u DumpCollectorIPAddress DumpCollectorPortNumber&lt;br /&gt;
 # example:&lt;br /&gt;
 nc -z -s 10.55.66.77 -u 10.11.12.13 6500&lt;br /&gt;
&lt;br /&gt;
FIO Example:&lt;br /&gt;
 virt-01# nc -z -s 10.50.48.38 -u 10.50.47.97 6500&lt;br /&gt;
 Connection to 10.50.47.97 6500 port [udp/*] succeeded!&lt;br /&gt;
&lt;br /&gt;
 virt-01:/ # nc -z -u 10.50.47.97 6500&lt;br /&gt;
 Connection to 10.50.47.97 6500 port [udp/*] succeeded!&lt;br /&gt;
&lt;br /&gt;
Note: The nc command reports a successful connection regardless of whether the remote Netdump Server receives the traffic.&lt;br /&gt;
&lt;br /&gt;
Review the logs from the receiving Dump Collector service for messages indicating that the connection was established.&lt;br /&gt;
&lt;br /&gt;
For example, the vCenter 5.0 Dump Collector logs report the unknown client connection with a message similar to:&lt;br /&gt;
 yyyy-mm-ddTHH:MM:SS.nnnZ| netdumper| Bad magic:0xa656761. Expected:0xadeca1bf&lt;br /&gt;
 yyyy-mm-ddTHH:MM:SS.nnnZ| netdumper| Skipping bad packet.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
Troubleshooting &amp;quot;Couldn&amp;#039;t attach to dump server&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
 Starting network coredump from HostIP to DumpCollectorIP.&lt;br /&gt;
 Netdump: FAILED: Couldn&amp;#039;t attach to dump server at IP DumpCollectorIP.&lt;br /&gt;
 Stopping Netdump.&lt;br /&gt;
 &lt;br /&gt;
 Dump: nnn: ARP timed out for IP DumpCollectorIP.&lt;br /&gt;
&lt;br /&gt;
VMware KB: Troubleshooting the ESXi Dump Collector service in VMware vSphere 5.x - http://kb.vmware.com/kb/2003042&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* VMware: VMware ESXi Chronicles: Setting up the ESXi 5.0 Dump Collector - http://blogs.vmware.com/esxi/2011/07/setting-up-the-esxi-50-dump-collector.html&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you move the netdump storage location:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /var/log/vmware/netdumper&lt;br /&gt;
chown netdumper:netdumper /var/log/vmware/netdumper&lt;br /&gt;
&lt;br /&gt;
cat /etc/fstab&lt;br /&gt;
&lt;br /&gt;
vi /etc/sysconfig/netdumper&lt;br /&gt;
   NETDUMPER_DIR=&amp;quot;/storage/nfs.core.QzJe1rjk/vc/core/netdumps&amp;quot;&lt;br /&gt;
&lt;br /&gt;
chown netdumper:netdumper /storage/nfs.core.QzJe1rjk/vc/core/netdumps&lt;br /&gt;
&lt;br /&gt;
/etc/init.d/vmware-netdumper status&lt;br /&gt;
/etc/init.d/vmware-netdumper start&lt;br /&gt;
&lt;br /&gt;
cat /var/log/vmware/netdumper/netdumper.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Trigger PSOD ==&lt;br /&gt;
&lt;br /&gt;
Crash system:&lt;br /&gt;
 vsish -e set /reliability/crashMe/Panic&lt;br /&gt;
&lt;br /&gt;
Interactively:&lt;br /&gt;
 vsish&lt;br /&gt;
   cd /reliability/crashMe/&lt;br /&gt;
   set /reliability/crashMe/Panic&lt;br /&gt;
&lt;br /&gt;
Set the PSOD auto-reboot timeout (seconds):&lt;br /&gt;
  /sbin/vsish -e set /config/Misc/intOpts/BlueScreenTimeout 10&lt;br /&gt;
&lt;br /&gt;
How IOVP calls it:&lt;br /&gt;
  sleep 5;/sbin/vsish -e set /reliability/crashMe/Panic 1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- http://i.snag.gy/3nZHO.jpg --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- http://i.imgur.com/TxivVDp.png?1 --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== KB Articles ==&lt;br /&gt;
&lt;br /&gt;
VMware KB: Manually regenerating core dump files in VMware ESX and ESXi - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=1002769&lt;br /&gt;
: This article provides instructions to extract a core dump file from the VMKCore partition following a purple screen error.&lt;br /&gt;
&lt;br /&gt;
VMware KB: Collecting diagnostic information from an ESX or ESXi host that experiences a purple diagnostic screen - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=1004128&lt;br /&gt;
: This article provides instruction for collecting support diagnostic information when troubleshooting a purple screen fault in VMware ESX or ESXi.&lt;br /&gt;
&lt;br /&gt;
VMware KB: Configuring an ESX/ESXi host to capture a VMkernel coredump from a purple diagnostic screen - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=1000328&lt;br /&gt;
: This article provides an overview of configuring VMware ESX/ESXi with a location for storing diagnostic information during a purple diagnostic screen and host failure.&lt;br /&gt;
&lt;br /&gt;
VMware KB: Configuring an ESXi 5.0 host to capture a VMkernel coredump from a purple diagnostic screen to a diagnostic partition - http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;amp;docType=kc&amp;amp;docTypeID=DT_KB_1_1&amp;amp;externalId=2004299&lt;br /&gt;
: This article provides steps for adding a VMKcore diagnostic partition on a local or shared disk post-installation using the esxcli command line utility. A diagnostic partition can also be created using the vSphere Client.&lt;br /&gt;
* several commands for ESXi 5&lt;br /&gt;
&lt;br /&gt;
VMware KB: Configuring an ESX/ESXi 3.0-4.1 host to capture a VMkernel coredump from a purple diagnostic screen to a diagnostic partition - http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;amp;docType=kc&amp;amp;docTypeID=DT_KB_1_1&amp;amp;externalId=2004297&lt;br /&gt;
:  This article provides steps for adding a VMKcore diagnostic partition on a local or shared disk post-installation.&lt;br /&gt;
 esxcfg-dumppart --list&lt;br /&gt;
 esxcfg-dumppart --set &amp;quot;&amp;lt;VM Kernel Name&amp;gt;&amp;quot;&lt;br /&gt;
 esxcfg-dumppart --set &amp;quot;mpx.vmhba2:C0:T0:L0:7&amp;quot;&lt;br /&gt;
 esxcfg-dumppart --smart-activate&lt;br /&gt;
 esxcfg-dumppart --get-active&lt;br /&gt;
&lt;br /&gt;
VMware KB: Interpreting an ESX host purple diagnostic screen - http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;amp;externalId=1004250&lt;br /&gt;
: This article provides information to decode ESX host purple screen errors.&lt;br /&gt;
&lt;br /&gt;
VMware KB: Extracting the log file after an ESX or ESXi host fails with a purple screen error - http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;amp;externalId=1006796&lt;br /&gt;
: This article provides steps to extract a log from a vmkernel-zdump file after a purple diagnostic screen error. This log contains similar information to that seen on the purple diagnostic screen and can be used in further troubleshooting.&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* &amp;quot;purple diagnostic (PSOD) screen&amp;quot; [http://kb.vmware.com/kb/1026321]&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
&lt;br /&gt;
=== Out of space ===&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
 DiskDump: Partial Dump: Out of space o=0x63ff200 I&lt;br /&gt;
&lt;br /&gt;
Cause:&lt;br /&gt;
* &amp;quot;This issue occurs because the default slot size for the core dump partition cannot accommodate a complete core dump of a host that is using large amounts of memory.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Solution:&lt;br /&gt;
* Select another partition for core dumps&lt;br /&gt;
* Use ESXi 5.0 Dump Collector (generally preferred)&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* VMware KB: ESXi hosts with more than 128 GB of physical memory fail to generate valid core dumps - http://kb.vmware.com/kb/2012362&lt;br /&gt;
&lt;br /&gt;
== keywords ==&lt;br /&gt;
&lt;br /&gt;
vmware psod core dump kernel dump coredump capture-vmkernel-coredump-psod&lt;br /&gt;
&lt;br /&gt;
[[Category:VMware]]&lt;/div&gt;</summary>
		<author><name>Kenneth</name></author>
	</entry>
</feed>