

After changing back to "Failover" mode on my physical NICs fixed my issue even for traffic coming through the router. However, any network activity not going through the router (activity just between computers connected only to the switch or between guests within the ESXi host), those were fine. That fixed my network issue and lag:Īny traffic going through my router, such as my ssh wifi traffic when connecting to ESXi host from laptop, was apparently confused by having essentially two devices with same IP. I changed this to use the 2nd physical nic as a failover NIC intstead of teaming. Yet I had two of the NICS in my ESXi host connected to that switch and those NICs were attempting to act as teamed. I have an unmanaged TP-Link 8 Port Gigabit switch (tl-sg108).

For context, here is the original problem I was trying to diagnose. The ESXi UI was completely normal, with no perceived lag even when pulling up logs, and the vCenter Client and ESXi GUI were running fine; the slowness appeared only at the ESXi command line. I could ssh into any of my guest VMs on the ESXi host with no problem, but ssh into the ESXi command line was crazy slow: if I went to the command line of the ESXi host and simply tried to cd into a folder, I often had to wait about 10 seconds before it even registered my keystrokes. I'm on the LAN locally where the host is, so there was no complex network setup in between, and CPU and memory looked okay for the VMs currently powered on. My question at the time was: does the ESXi command line get very slow and laggy when a boot disk is about to die?

The SMART numbers for the boot SSD looked strange to me because they mostly just say "100", and the media wearout indicator had me worried too. The guest VMs themselves seemed to be performing fine for the most part, apart from some high disk kernel command latency and queue command latency from time to time during heavy I/O (they sit on HDDs, which I will be replacing with DCT 883 SSDs in the future). I plan to replace the Intel boot SSD with a Samsung DCT 883 240GB eventually as well, but I may do that sooner rather than later. I was also considering installing the latest ESXi 6.7 onto a USB thumb drive to see if that improved the ESXi command line response, as a short-term solution until I could replace the boot disk with a new SSD. Otherwise I was inclined to simply ignore the command line lag (I don't often go into the ESXi CL) since I'm doing development on the VMs, and if the boot disk died I would deal with it then, replacing it with a USB stick or a new SSD. Please let me know what you think of the diagnosis, or if you have other ideas for troubleshooting.
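If anyone wants to look at the same things, this is roughly how a drive's SMART data and the disk latency can be read from the ESXi shell. It's a minimal sketch: the DEV value below is just a placeholder and needs to be replaced with the identifier of your own boot SSD.

    # List storage devices and note the boot SSD's identifier (naa.* / t10.* name)
    esxcli storage core device list

    # Read the SMART attributes ESXi exposes for that device
    # (Media Wearout Indicator, Reallocated Sector Count, etc.)
    DEV=t10.ATA_____EXAMPLE_PLACEHOLDER_ID    # placeholder, not a real identifier
    esxcli storage core device smart get -d "$DEV"

    # Live latency view: inside esxtop, press 'u' for devices (or 'd' for adapters)
    # and watch the DAVG, KAVG and QAVG columns for device/kernel/queue latency
    esxtop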

A few more details on the setup. The boot disk is an old, circa-2013 Intel enterprise 80GB SSD that I purchased used. Just typing at the ESXi command line would hit 10 to 15 second delays roughly every 10 seconds or so. I didn't think my issue was network related, though I allowed that I could be wrong: I currently have just the one virtual switch, used by the vmkernel and all my VMs, connected to two physical network adapters in the host, which in turn connect to a simple unmanaged switch. I also had an internal vswitch for fast network connections between my guest VMs, but I removed it in case it was causing the problem. I ran a df -h and I think the 4GB partition is my boot partition; it looks like it has plenty of space there with only 1% used, so I don't believe I was low on space on the boot partition.
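For reference, this is roughly how the space question can be checked from the ESXi shell; nothing here is specific to my host, so it should apply as-is.

    # Datastores plus the small VFAT partitions ESXi mounts (boot banks, scratch)
    df -h

    # ESXi-specific view of ramdisks and tardisks; a ramdisk such as /tmp or
    # /var/log filling up can cause its own odd behaviour
    vdf -h

    # Filesystems/partitions the host knows about, with size and free space
    esxcli storage filesystem list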
I'm new to VMware ESXi, having just built this system in January, so I was still learning how to diagnose an issue like this.

My original worry was simply that the boot disk of my ESXi 6.7 host in my homelab was about to die; as noted above, the lag turned out to be the NIC teaming rather than the disk.
