esxi host vsphere update
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html
IMPORTANT : VMware removed ESXi 7.0 Update 3, 7.0 Update 3a and 7.0 Update 3b from all sites on November 19, 2021 due to an upgrade-impacting issue. Build 19193900 for ESXi 7.0 Update 3c ISO replaces build 18644231, 18825058, and 18905247 for ESXi 7.0 Update 3, 7.0 Update 3a, and 7.0 Update 3b respectively. To make sure you run a smooth upgrade to vSphere 7.0 Update 3c, see VMware knowledge base articles 86447 and 87327 .
For new features in the rolled back releases, see this list:
vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs : vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine if your application workload is regressed due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs. For more information, see vSphere Memory Monitoring and Remediation .
Extended support for disk drive types:
Starting with vSphere 7.0 Update 3, vSphere Lifecycle Manager validates the following types of disk drives and storage device configurations:
• HDD (SAS/SATA)
• SSD (SAS/SATA)
• SAS/SATA disk drives behind single-disk RAID-0 logical volumes
For more information, see Cluster-Level Hardware Compatibility Checks.
Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host : Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host. For more information, see Using vSphere Lifecycle Manager Images to Remediate vSAN Stretched Clusters .
vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs.
Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases and any ESXi host from version 7.0 and 7.0 updates. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7 and 7.0, all 7.0 update releases, including later than Update 3, and a mixture of hosts between major and update versions.
New VMNIC tag for NVMe-over-RDMA storage traffic: ESXi 7.0 Update 3 adds a new VMNIC tag for NVMe-over-RDMA storage traffic. This VMkernel port setting enables NVMe-over-RDMA traffic to be routed over the tagged interface. You can also use the ESXCLI command esxcli network ip interface tag add -i <interface name> -t NVMeRDMA to enable the NVMeRDMA VMNIC tag.
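For example, assuming a VMkernel interface named vmk2 is dedicated to NVMe-over-RDMA traffic (the interface name is illustrative, and the tag get query assumes the standard tag sub-commands are available), the tag could be added and then listed to verify it:
esxcli network ip interface tag add -i vmk2 -t NVMeRDMA
esxcli network ip interface tag get -i vmk2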
NVMe over TCP support: vSphere 7.0 Update 3 extends the NVMe-oF suite with the NVMe over TCP storage protocol to enable high performance and parallelism of NVMe devices over a wide deployment of TCP/IP networks.
New features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:
For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes .
This release of ESXi 7.0 Update 3c delivers the following patches:
Build Details
Download Filename: VMware-ESXi-7.0U3c-19193900-depot
Build: 19193900
Download Size: 395.8 MB
md5sum: e39a951f4e96e92eae41c94947e046ec
sha256checksum: 20cdcd6fd8f22f5f8a848b45db67316a3ee630b31a152312f4beab737f2b3cdc
Host Reboot Required:
Virtual Machine Migration or Shutdown Required:
For a table of build numbers and versions of VMware ESXi, see VMware knowledge base article 2143832.
IMPORTANT: ESXi and esx-update bulletins depend on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
Rollup Bulletin
This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.
Bulletin ID: ESXi70U3c-19193900
Category: Bugfix
Severity: Critical

Image Profiles
VMware patch and update releases contain general and critical image profiles. Application of the general release image profile applies to new bug fixes.
Image Profile Name: ESXi70U3c-19193900-standard, ESXi70U3c-19193900-no-tools

ESXi Image
Name and Version / Release Date / Category / Detail
For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file after you log in to VMware Customer Connect. From the Select a Product drop-down menu, select ESXi (Embedded and Installable) and from the Select a Version drop-down menu, select 7.0. For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
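As an illustration only, after the offline bundle is uploaded to a datastore, the image profile listed above could be applied from the ESXi Shell with a command along these lines (the datastore name and the .zip extension are assumptions; the depot and profile names are taken from the Build Details and Image Profiles sections of this document):
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3c-19193900-depot.zip -p ESXi70U3c-19193900-standard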
The resolved issues are grouped as follows.
Due to a rare issue with handling Asynchronous Input/Output (AIO) calls, hostd and vpxa services on an ESXi host might fail and trigger alarms in the vSphere Client. In the backtrace, you see errors such as:
#0 0x0000000bd09dcbe5 in __GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x0000000bd09de05b in __GI_abort () at abort.c:90
#2 0x0000000bc7d00b65 in Vmacore::System::SignalTerminateHandler (info=, ctx=) at bora/vim/lib/vmacore/posix/defSigHandlers.cpp:62
#3 <signal handler called>
#4 NfcAioProcessCloseSessionMsg (closeMsg=0xbd9280420, session=0xbde2c4510) at bora/lib/nfclib/nfcAioServer.c:935
#5 NfcAioProcessMsg (session=session@entry=0xbde2c4510, aioMsg=aioMsg@entry=0xbd92804b0) at bora/lib/nfclib/nfcAioServer.c:4206
#6 0x0000000bd002cc8b in NfcAioGetAndProcessMsg (session=session@entry=0xbde2c4510) at bora/lib/nfclib/nfcAioServer.c:4324
#7 0x0000000bd002d5bd in NfcAioServerProcessMain (session=session@entry=0xbde2c4510, netCallback=netCallback@entry=0 '\000') at bora/lib/nfclib/nfcAioServer.c:4805
#8 0x0000000bd002ea38 in NfcAioServerProcessClientMsg (session=session@entry=0xbde2c4510, done=done@entry=0xbd92806af "") at bora/lib/nfclib/nfcAioServer.c:5166
This issue is resolved in this release. The fix makes sure the AioSession object works as expected.
In very rare conditions, ESXi hosts with NVIDIA vGPU-powered virtual machines might intermittently fail with a purple diagnostic screen with a kernel panic error. The issue might affect multiple ESXi hosts, but not at the same time. In the backtrace, you see kernel reports about heartbeat timeouts against a CPU for x seconds, and the stack refers to a P2M cache.
This issue is resolved in this release.
When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, the ESXi host and the hostd service might become unresponsive.
This issue is resolved in this release. The fix makes sure lwsmd does not set CPU affinity explicitly.
The VNVME retry logic in ESXi 7.0 Update 3 has an issue that might cause silent data corruption. Retries occur rarely and can, but do not always, cause data errors. The issue affects only ESXi 7.0 Update 3.
This issue is resolved in this release.
In rare cases, when you delete a large component in an ESXi host, followed by a reboot, the reboot might start before all metadata of the component gets deleted. The stale metadata might cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release. The fix makes sure no pending metadata remains before a reboot of ESXi hosts.
Event delivery to applications might delay indefinitely due to a race condition in the VMKAPI driver. As a result, the virtual desktop infrastructure in some environments, such as systems using NVIDIA graphic cards, might become unresponsive or lose connection to the VDI client.
This issue is resolved in this release.
Several issues in the implementation of ACPICA semaphores in ESXi 7.0 Update 3 and earlier can result in VMKernel panics, typically during boot. An issue in the semaphore implementation can cause starvation, and on several call paths the VMKernel might improperly try to acquire an ACPICA semaphore or to sleep within ACPICA while holding a spinlock. Whether these issues cause problems on a specific machine depends on details of the ACPI firmware of the machine.
These issues are resolved in this release. The fix involves a rewrite of the ACPICA semaphores in ESXi, and correction of the code paths that try to enter ACPICA while holding a spinlock.
I/O operations on a software iSCSI adapter might cause a rare race condition inside the iscsi_vmk driver. As a result, ESXi hosts might intermittently fail with a purple diagnostic screen.
This issue is resolved in this release.
If you use a VDS of version earlier than 6.6 on a vSphere 7.0 Update 1 or later system, and you change the LAG hash algorithm, for example from L3 to L2 hashes, ESXi hosts might fail with a purple diagnostic screen.
This issue is resolved in this release.
In vCenter Server advanced performance charts, you see an increasing packet drop count for all virtual machines that have NetX redirection enabled. However, if you disable NetX redirection, the count becomes 0.
This issue is resolved in this release.
In rare cases, incorrect mapping of completion queues (CQs), when the total number of I/O channels of an Emulex FC HBA is not an exact multiple of the number of event queues (EQs), might cause booting of an ESXi host to fail with a purple diagnostic screen. In the backtrace, you can see an error in the lpfc_cq_create() method.
This issue is resolved in this release. The fix ensures correct mapping of CQs to EQs.
During internal communication between UNIX domain sockets, a heap allocation might occur instead of cleaning ancillary data such as file descriptors. As a result, in some cases, the ESXi host might report an out of memory condition and fail with a purple diagnostic screen with #PF Exception 14 and errors similar to UserDuct_ReadAndRecvMsg().
This issue is resolved in this release. The fix cleans ancillary data to avoid buffer memory allocations.
When you set up optional configurations for NTP by using ESXCLI commands, the settings might not persist after the ESXi host reboots.
This issue is resolved in this release. The fix makes sure that optional configurations are restored into the local cache from ConfigStore during ESXi host bootup.
In systems with vSphere Distributed Switch of version 6.5.0 and ESXi hosts of version 7.0 or later, when you change the LACP hashing algorithm, this might cause an unsupported LACP event error due to a temporary string array used to save the event type name. As a result, multiple ESXi hosts might fail with a purple diagnostic screen.
This issue is resolved in this release. To avoid facing the issue, in vCenter Server systems of version 7.0 and later make sure you use a vSphere Distributed Switch version later than 6.5.0.
Remediation of clusters that you manage with vSphere Lifecycle Manager baselines might take a long time after updates from ESXi 7.0 Update 2d and earlier to a version later than ESXi 7.0 Update 2d.
This issue is resolved in this release.
In certain cases, for example virtual machines with RDM devices running on servers with SNMP, a race condition between device open requests might lead to failing vSphere vMotion operations.
This issue is resolved in this release. The fix makes sure that device open requests are sequenced to avoid race conditions. For more information, see VMware knowledge base article 86158 .
In some environments, after upgrading to ESXi 7.0 Update 2d and later, in the vSphere Client you might see the error Host has lost time synchronization. However, the alarm might not indicate an actual issue.
This issue is resolved in this release. The fix replaces the error message with a log function for backtracing but prevents false alarms.
The OpenSSL package is updated to version openssl-1.0.2zb.
The Python package is updated to address CVE-2021-29921.
With the OpenSSL command openssl s_client -cipher <CIPHER> -connect localhost:9080 you can connect to port 9080 by using restricted DES/3DES ciphers.
This issue is resolved in this release. You cannot connect to port 9080 by using the following ciphers: DES-CBC3-SHA, EDH-RSA-DES-CBC3-SHA, ECDHE-RSA-DES-CBC3-SHA, and AECDH-DES-CBC3-SHA.
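For example, the check described above can be repeated with one of the now-restricted ciphers taken from this list; on a host with this fix the connection attempt is expected to fail:
openssl s_client -cipher DES-CBC3-SHA -connect localhost:9080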
The following VMware Tools ISO images are bundled with ESXi 7.0 Update 3c:
windows.iso: VMware Tools 11.3.5 supports Windows 7 SP1 or Windows Server 2008 R2 SP1 and later.
linux.iso: VMware Tools 10.3.23 ISO image for Linux OS with glibc 2.11 or later.
The following VMware Tools ISO images are available for download:
windows.iso: for Windows Vista (SP2) and Windows Server 2008 Service Pack 2 (SP2).
winPreVista.iso: for Windows 2000, Windows XP, and Windows 2003.
linuxPreGLibc25.iso: supports Linux guest operating systems earlier than Red Hat Enterprise Linux (RHEL) 5, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 7.04, and other distributions with glibc version earlier than 2.5.
solaris.iso: VMware Tools image 10.3.10 for Solaris.
darwin.iso: supports Mac OS X versions 10.11 and later.
Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:
In rare cases, hardware issues might cause a SQLite DB corruption that makes multiple VMs inaccessible and leads to some downtime for applications.
This issue is resolved in this release.
A new datastore normally has a high number of large file block (LFB) resources and a lesser number of small file block (SFB) resources. For workflows that consume SFBs, such as virtual machine operations, LFBs convert to SFBs. However, due to a delay in updating the conversion status, newly converted SFBs might not be recognized as available for allocation. As a result, you see an error such as Insufficient disk space on datastore when you try to power on, clone, or migrate a virtual machine.
This issue is resolved in this release.
Due to an issue that allows the duplication of the unique ID of vSphere Virtual Volumes, virtual machine snapshot operations might fail, or the source volume might get deleted. The issue is specific to Pure storage and affects Purity release lines 5.3.13 and earlier, 6.0.5 and earlier, and 6.1.1 and earlier.
This issue is resolved in this release.
In the vSphere Client, you might see vSAN health errors such as vSAN cluster partition or vSAN object health when data-in-transit encryption is enabled. The issue occurs because when a rekey operation starts in a vSAN cluster, a temporary resource issue might cause key exchange between peers to fail.
This issue is resolved in this release.
In environments with VMs of 575 GB or more reserved memory that do not use Encrypted vSphere vMotion, a live migration operation might race with another live migration and cause the ESXi host to fail with a purple diagnostic screen.
This issue is resolved in this release. However, in very rare cases, the migration operation might still fail, even though the root cause of the purple diagnostic screen condition is fixed. In such cases, retry the migration when no other live migration is in progress on the source host, or enable Encrypted vSphere vMotion on the virtual machines.
Networking Issues
RDMA traffic by using the iWARP protocol on Intel x722 cards might time out and not complete.
This issue is resolved in this release.
Installation, Upgrade and Migration Issues
Due to the I/O sensitivity of USB and SD devices, the VMFS-L locker partition on such devices that stores VMware Tools and core dump files might get corrupted.
This issue is resolved in this release. By default, ESXi loads the locker packages to the RAM disk during boot.
After an upgrade of the brcmfcoe driver on Hitachi storage arrays, ESXi hosts might fail to boot and lose connectivity.
This issue is resolved in this release.
ESXi 7.0 Update 2 introduced a system statistics provider interface that requires reading the datastore stats for every ESXi host every 5 minutes. If a datastore is shared by multiple ESXi hosts, such frequent reads might cause read latency on the storage array and lead to excessive storage read I/O load.
This issue is resolved in this release.
Virtual Machine Management Issues
Performance and functionality of features that require VMCI might be affected on virtual machines with enabled AMD SEV-ES, because such virtual machines cannot create VMCI sockets.
This issue is resolved in this release.
In rare cases, when a guest OS reboot is initiated outside the guest, for example from the vSphere Client, virtual machines might fail, generating a VMX dump. The issue might occur when the guest OS is heavily loaded. As a result, responses from the guest to VMX requests are delayed prior to the reboot. In such cases, the vmware.log file of the virtual machines includes messages such as:
I125: Tools: Unable to send state change 3: TCLO error. E105: PANIC: NOT_REACHED bora/vmx/tools/toolsRunningStatus.c:953.
This issue is resolved in this release.
Miscellaneous Issues
In rare cases, in an asynchronous read I/O containing a SCATTER_GATHER_ELEMENT array of more than 16 members, at least 1 member might fall in the last partial block of a file. This might corrupt the VMFS memory heap, which in turn causes ESXi hosts to fail with a purple diagnostic screen.
This issue is resolved in this release.
ESXi 7.0 Update 3 introduced a uniform UNMAP granularity for VMFS and SEsparse snapshots, and set the maximum UNMAP granularity reported by VMFS to 2 GB. However, in certain environments, when the guest OS makes a trim or unmap request of 2 GB, such a request might require the VMFS metadata transaction to acquire locks on more than 50 resource clusters. VMFS might not handle such requests correctly. As a result, an ESXi host might fail with a purple diagnostic screen. VMFS metadata transactions requiring lock actions on more than 50 resource clusters are rare and can only happen on aged datastores. The issue impacts only thin-provisioned VMDKs. Thick and eager zero thick VMDKs are not impacted.
Along with the purple diagnostic screen, in the /var/run/log/vmkernel file you see errors such as:
2021-10-20T03:11:41.679Z cpu0:2352732)@BlueScreen: NMI IPI: Panic requested by another PCPU. RIPOFF(base):RBP:CS [0x1404f8(0x420004800000):0x12b8:0xf48] (Src 0x1, CPU0)
2021-10-20T03:11:41.689Z cpu0:2352732)Code start: 0x420004800000 VMK uptime: 11:07:27:23.196
2021-10-20T03:11:41.697Z cpu0:2352732)Saved backtrace from: pcpu 0 Heartbeat NMI
2021-10-20T03:11:41.715Z cpu0:2352732)0x45394629b8b8:[0x4200049404f7]HeapVSIAddChunkInfo@vmkernel#nover+0x1b0 stack: 0x420005bd611e
This issue is resolved in this release.
An issue in the time service event monitoring service, which is enabled by default, might cause the hostd service to fail. In the vobd.log file, you see errors such as:
2021-10-21T18:04:28.251Z: [UserWorldCorrelator] 304957116us: [esx.problem.hostd.core.dumped] /bin/hostd crashed (1 time(s) so far) and a core file may have been created at /var/core/hostd-zdump.000. This may have caused connections to the host to be dropped.
.
2021-10-21T18:04:28.251Z: An event (esx.problem.hostd.core.dumped) could not be sent immediately to hostd; queueing for retry. 2021-10-21T18:04:32.298Z: [UserWorldCorrelator] 309002531us: [vob.uw.core.dumped] /bin/hostd(2103800) /var/core/hostd-zdump.001
2021-10-21T18:04:36.351Z: [UserWorldCorrelator] 313055552us: [vob.uw.core.dumped] /bin/hostd(2103967) /var/core/hostd-zdump.002
This issue is resolved in this release.
The known issues are grouped as follows.
If you had NSX for vSphere with VXLAN enabled on a vSphere Distributed Switch (VDS) of version 7.0 and migrated to NSX-T Data Center by using NSX V2T migration, stale NSX for vSphere properties in the VDS or some hosts might prevent ESXi 7.x hosts updates. Host update fails with a platform configuration error.
Workaround: Upload the CleanNSXV.py script to the /tmp dir in vCenter Server. Log in to the appliance shell as a user with super administrative privileges (for example, root) and follow these steps:
1. Run the CleanNSXV.py script by using the command PYTHONPATH=$VMWARE_PYTHON_PATH python /tmp/CleanNSXV.py --user <vc_admin_user> --password <passwd>. The <vc_admin_user> parameter is a vCenter Server user with super administrative privileges and the <passwd> parameter is the user password. For example: PYTHONPATH=$VMWARE_PYTHON_PATH python /tmp/CleanNSXV.py --user 'administrator@vsphere.local' --password 'Admin123'
2. Verify that the NSX for vSphere properties com.vmware.netoverlay.layer0 and com.vmware.net.vxlan.udpport are removed from the ESXi hosts: net-dvs -l | grep "com.vmware.netoverlay.layer0\|com.vmware.net.vxlan.udpport"
To download the CleanNSXV.py script and for more details, see VMware knowledge base article 87423.
The cURL version in ESXi 7.0 Update 3c is 7.77.0, while ESXi650-202110001 and ESXi670-202111001 have the newer fixed version 7.78.0. As a result, if you upgrade from ESXi650-202110001 or ESXi670-202111001 to ESXi 7.0 Update 3c, cURL 7.77.0 might expose your system to the following vulnerabilities:
CVE-2021-22926: CVSS 7.5
CVE-2021-22925: CVSS 5.3
CVE-2021-22924: CVSS 3.7
CVE-2021-22923: CVSS 5.3
CVE-2021-22922: CVSS 6.5
Workaround: None. cURL version 7.78.0 comes with a future ESXi 7.x release.
To view a list of previous known issues, click here .
The earlier known issues are grouped as follows.
Some services in ESXi that run on top of the host operating system, including slpd, the CIM object broker, sfcbd, and the related openwsmand service, have proven security vulnerabilities. VMware has addressed all known vulnerabilities in VMSA-2019-0022 and VMSA-2020-0023 , and the fixes are part of the vSphere 7.0 Update 2 release. While sfcbd and openwsmand are disabled by default in ESXi, slpd is enabled by default and you must turn it off, if not necessary, to prevent exposure to a future vulnerability after an upgrade.
Workaround: To turn off the slpd service, run the following PowerCLI commands:
$ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Set-VMHostService -policy "off"
$ Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "slpd"} | Stop-VMHostService -Confirm:$false
Alternatively, you can use the command chkconfig slpd off && /etc/init.d/slpd stop.
The openwsmand service is not on the ESXi services list and you can check the service state by using the following PowerCLI commands:
$esx=(Get-EsxCli -vmhost xx.xx.xx.xx -v2)
$esx.system.process.list.invoke() | where CommandLine -like '*openwsman*' | select commandline
In the ESXi services list, the sfcbd service appears as sfcbd-watchdog.
For more information, see VMware knowledge base articles 76372 and 1025757.
In VMware® vSphere Trust Authority™, if you have enabled HA on the Trusted Cluster and one or more hosts in the cluster fails attestation, an encrypted virtual machine cannot power on.
Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.
In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted Cluster and one or more hosts in the cluster fails attestation, DRS might try to power on an encrypted virtual machine on an unattested host in the cluster. This operation puts the virtual machine in a locked state.
Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.
If you try to migrate or clone an encrypted virtual machine across vCenter Server instances using the vSphere Client, the operation fails with the following error message: "The operation is not allowed in the current state."
Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual machines across vCenter Server instances.
If you change the preferred site in a stretched cluster, some objects might incorrectly appear as compliant, because their policy settings might not automatically change. For example, if you configure a virtual machine to keep data at the preferred site, when you change the preferred site, data might remain on the nonpreferred site.
Workaround: Before you change a preferred site, in Advanced Settings, lower the ClomMaxCrawlCycleMinutes setting to 15 min to make sure object policies are updated. After the change, revert the ClomMaxCrawlCycleMinutes option to the earlier value.
The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Since the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.
Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3.
The vCenter Server Upgrade/Migration pre-checks fail when the Security Token Service (STS) certificate does not contain a Subject Alternative Name (SAN) field. This situation occurs when you have replaced the vCenter 5.5 Single Sign-On certificate with a custom certificate that has no SAN field, and you attempt to upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid and the pre-checks prevent the upgrade process from continuing.
Workaround: Replace the STS certificate with a valid certificate that contains a SAN field then proceed with the vCenter Server 7.0 Upgrade/Migration.
After upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers may lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, VAIODK (IO filters), and see errors related to uwglibc dependency. The syslog reports the module as missing and "32 bit shared libraries not loaded."
Workaround: There is no workaround. The fix is to download new 64-bit CIM providers from your vendor.
You cannot install drivers applicable to ESXi 7.0 Update 1 on hosts that run ESXi 7.0 or 7.0b.
The operation fails with an error, such as:
VMW_bootbank_qedrntv_3.40.4.0-12vmw.701.0.0.xxxxxxx requires vmkapi_2_7_0_0, but the requirement cannot be satisfied within the ImageProfile.
Please refer to the log file for more details.
Workaround: Update the ESXi host to 7.0 Update 1. Retry the driver installation.
If you attempt to update your environment to 7.0 Update 2 from an earlier version of ESXi 7.0 by using vSphere Lifecycle Manager patch baselines, UEFI booting of ESXi hosts might stop with an error such as:
Loading /boot.cfg
Failed to load crypto64.efi
Fatal error: 15 (Not found)
Workaround: For more information, see VMware knowledge base articles 83063 and 83107 .
With vCenter Server 7.0 Update 2, you can create a new cluster by importing the desired software specification from a single reference host. However, if legacy VIBs are in use on an ESXi host, vSphere Lifecycle Manager cannot extract a reference software specification from such a host in the vCenter Server instance where you create the cluster. In the /var/log/lifecycle.log, you see messages such as:
2020-11-11T06:54:03Z lifecycle: 1000082644: HostSeeding:499 ERROR Extract depot failed: Checksum doesn't match. Calculated 5b404e28e83b1387841bb417da93c8c796ef2497c8af0f79583fd54e789d8826, expected: 0947542e30b794c721e21fb595f1851b247711d0619c55489a6a8cae6675e796 2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:366 ERROR Extract depot failed. 2020-11-11T06:54:04Z lifecycle: 1000082644: imagemanagerctl:145 ERROR [VibChecksumError]
Workaround: Follow the steps described in VMware knowledge base article 83042 .
After updating to ESXi 7.0 Update 2, you might see a short burst of log messages after every ESXi boot.
Such logs do not indicate any issue with ESXi and you can ignore these messages. For example:
2021-01-19T22:44:22Z watchdog-vaai-nasd: '/usr/lib/vmware/nfs/bin/vaai-nasd -f' exited after 0 seconds (quick failure 127) 1
2021-01-19T22:44:22Z watchdog-vaai-nasd: Executing '/usr/lib/vmware/nfs/bin/vaai-nasd -f'
2021-01-19T22:44:22.990Z aainasd[1000051135]: Log for VAAI-NAS Daemon for NFS version=1.0 build=build-00000 option=DEBUG
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoadFile: No entries loaded by dictionary.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/config": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: DictionaryLoad: Cannot open file "//.vmware/preferences": No such file or directory.
2021-01-19T22:44:22.990Z vaainasd[1000051135]: Switching to VMware syslog extensions
2021-01-19T22:44:22.992Z vaainasd[1000051135]: Loading VAAI-NAS plugin(s).
2021-01-19T22:44:22.992Z vaainasd[1000051135]: DISKLIB-PLUGIN : Not loading plugin /usr/lib/vmware/nas_plugins/lib64: Not a shared library.
Workaround: None
After you upgrade to ESXi 7.0 Update 2, if you check vSphere Quick Boot compatibility of your environment by using the /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py command, you might see some warning messages for missing VIBs in the shell. For example:
Cannot find VIB(s) ... in the given VIB collection.
Ignoring missing reserved VIB(s) ..., they are removed from reserved VIB IDs.
Such warnings do not indicate a compatibility issue.
Workaround: The missing VIB messages can be safely ignored and do not affect the reporting of vSphere Quick Boot compatibility. The final output line of the loadESXCheckCompat command unambiguously indicates if the host is compatible.
If you attempt auto bootstrapping a cluster that you manage with a vSphere Lifecycle Manager image to perform a stateful install and overwrite the VMFS partitions, the operation fails with an error. In the support bundle, you see messages such as:
2021-02-11T19:37:43Z Host Profiles[265671 opID=MainThread]: ERROR: EngineModule::ApplyHostConfig. Exception: [Errno 30] Read-only file system
Workaround: Follow vendor guidance to clean the VMFS partition in the target host and retry the operation. Alternatively, use an empty disk. For more information on the disk-partitioning utility on ESXi, see VMware knowledge base article 1036609 .
Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using the esxcli software profile update or esxcli software profile install ESXCLI commands might fail, because the ESXi bootbank might be less than the size of the image profile. In the ESXi Shell or the PowerCLI shell, you see an error such as:
[InstallationError]
The pending transaction requires 244 MB free space, however the maximum supported size is 239 MB.
Please refer to the log file for more details.
The issue also occurs when you attempt an ESXi host upgrade by using the ESXCLI commands esxcli software vib update or esxcli software vib install.
Workaround: You can perform the upgrade in two steps, by using the esxcli software profile update command to update ESXi hosts to ESXi 6.7 Update 1 or later, and then update to 7.0 Update 1c. Alternatively, you can run an upgrade by using an ISO image and the vSphere Lifecycle Manager.
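A minimal sketch of the two-step path, with placeholder depot file names and image profile names (the exact identifiers depend on the offline bundles you download):
esxcli software profile update -p <ESXi-6.7U1-or-later-profile> -d /vmfs/volumes/datastore1/<ESXi-6.7-offline-bundle>.zip
esxcli software profile update -p <ESXi-7.0-target-profile> -d /vmfs/volumes/datastore1/<ESXi-7.0-offline-bundle>.zip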
If you migrate a linked clone across vCenter Servers, operations such as power on and delete might fail for the source virtual machine with an Invalid virtual machine state error.
Workaround: Keep linked clones on the same vCenter Server as the source VM. Alternatively, promote the linked clone to full clone before migration.
Migration across vCenter Servers of virtual machines with more than 180 virtual disks and 32 snapshot levels to a datastore on NVMe over TCP storage might fail. The ESXi host preemptively fails with an error such as The migration has exceeded the maximum switchover time of 100 second(s).
Workaround: None
If you try to migrate a virtual machine with enabled VPMC by using vSphere vMotion, the operation might fail if the target host is using some of the counters to compute memory or performance statistics. The operation fails with an error such as A performance counter used by the guest is not available on the host CPU.
Workaround: Power off the virtual machine and use cold migration. For more information, see VMware knowledge base article 81191 .
When a VIB install, upgrade, or remove operation immediately precedes an interactive or scripted upgrade to ESXi 7.0 Update 3 by using the installer ISO, the ConfigStore might not keep some configurations of the upgrade. As a result, ESXi hosts become inaccessible after the upgrade operation, although the upgrade seems successful. To prevent this issue, the ESXi 7.0 Update 3 installer adds a temporary check to block such scenarios. In the ESXi installer console, you see the following error message: Live VIB installation, upgrade or removal may cause subsequent ESXi upgrade to fail when using the ISO installer.
Workaround: Use an alternative upgrade method to avoid the issue, such as using ESXCLI or the vSphere Lifecycle Manager.
If you have configured vCenter Server for either Smart Card or RSA SecurID authentication, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade process. If you do not perform the workaround as described in the KB, you might see the following error messages and Smart Card or RSA SecurID authentication does not work.
"Smart card authentication may stop working. Smart card settings may not be preserved, and smart card authentication may stop working."
or
"RSA SecurID authentication may stop working. RSA SecurID settings may not be preserved, and RSA SecurID authentication may stop working."
Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 .
When you upgrade a vCenter Server deployment using an external Platform Services Controller, you converge the Platform Services Controller into a vCenter Server appliance. If the upgrade fails with the error install.vmafd.vmdir_vdcpromo_error_21, the VMAFD firstboot process has failed. The VMAFD firstboot process copies the VMware Directory Service Database (data.mdb) from the source Platform Services Controller and replication partner vCenter Server appliance.
Workaround: Disable TCP Segmentation Offload (TSO) and Generic Segmentation Offload (GSO) on the Ethernet adapter of the source Platform Services Controller or replication partner vCenter Server appliance before upgrading a vCenter Server with an external Platform Services Controller. See Knowledge Base article: https://kb.vmware.com/s/article/74678
Authentication using RSA SecurID will not work after upgrading to vCenter Server 7.0. An error message will alert you to this issue when attempting to login using your RSA SecurID login.
Workaround: Reconfigure the smart card or RSA SecurID.
Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with the error message IP already exists in the network. This prevents the migration process from configuring the network parameters on the new vCenter Server appliance. For more information, examine the log file: /var/log/vmware/upgrade/UpgradeRunner.log
Workaround:
In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV device by using the Virtual Infrastructure Management (VIM) API, for example, through the vSphere Client. The task does not require reboot of the ESXi host. After you use the VIM API configuration, if you try to configure the number of SR-IOV virtual functions by using the max_vfs module parameter, the changes might not take effect because they are overridden by the VIM API configuration.
Workaround: None. To configure the number of virtual functions for an SR-IOV device, use the same method every time. Use the VIM API or use the max_vfs module parameter and reboot the ESXi host.
During a major upgrade, if the source instance of the vCenter Server appliance is configured with multiple secondary networks other than the VCHA NIC, the target vCenter Server instance will not retain secondary networks other than the VCHA NIC. If the source instance is configured with multiple NICs that are part of DVS port groups, the NIC configuration will not be preserved during the upgrade. Configurations for vCenter Server appliance instances that are part of the standard port group will be preserved.
Workaround: None. Manually configure the secondary network in the target vCenter Server appliance instance.
After upgrading or migrating a vCenter Server with an external Platform Services Controller, if the newly upgraded vCenter Server is not joined to an Active Directory domain, users authenticating using Active Directory will lose access to the vCenter Server instance.
Workaround: Verify that the new vCenter Server instance has been joined to an Active Directory domain. See Knowledge Base article: https://kb.vmware.com/s/article/2118543
If there are non-ASCII strings in the Oracle events and tasks table, the migration can fail when exporting events and tasks data. The following error message is provided: UnicodeDecodeError
Workaround: None.
The non-compliant status indicates an inconsistency between the profile and the host.
This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use the Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains any duplicate claim rules of system default rules, you might experience this problem.
Workaround:
After installing or upgrading to vCenter Server 7.0, when you navigate to the Update panel within the vCenter Server Management Interface, the error message "Check the URL and try again" displays. The error message does not prevent you from using the functions within the Update panel, and you can view, stage, and install any available updates.
Workaround: None.
When SATA disks on HPE Gen10 servers with SmartPQI controllers without expanders are hot removed and hot inserted back to a different disk bay of the same machine, or when multiple disks are hot removed and hot inserted back in a different order, sometimes a new local name is assigned to the disk. The VMFS datastore on that disk appears as a snapshot and will not be mounted back automatically because the device name has changed.
Workaround: None. SmartPQI controller does not support unordered hot remove and hot insert operations.
Occasionally, all active paths to an NVMeOF device register I/O errors due to link issues or controller state. If the status of one of the paths changes to Dead, the High Performance Plug-in (HPP) might not select another path if it shows a high volume of errors. As a result, the I/O fails.
Workaround: Disable the configuration option /Misc/HppManageDegradedPaths to unblock the I/O.
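A possible way to do this from the ESXi Shell, assuming the option named above is an integer toggle where 0 disables it and 1 restores the default behavior, is to set and then verify the advanced setting:
esxcli system settings advanced set -o /Misc/HppManageDegradedPaths -i 0
esxcli system settings advanced list -o /Misc/HppManageDegradedPaths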
VOMA check is not supported for NVMe based VMFS datastores and will fail with the error:
ERROR: Failed to reserve device. Function not implemented
Example:
# voma -m vmfs -f check -d /vmfs/devices/disks/: <partition#>
Running VMFS Checker version 2.1 in check mode
Initializing LVM metadata, Basic Checks will be done
Checking for filesystem activity
Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
ERROR: Failed to reserve device. Function not implemented
Aborting VMFS volume check
VOMA failed to check device : General Error
Workaround: None. If you need to analyse VMFS metadata, collect it using the -l option, and pass it to VMware customer support. The command for collecting the dump is:
voma -l -f dump -d /vmfs/devices/disks/:<partition#>
If an FCD and a VM are encrypted with different crypto keys, your attempts to attach the encrypted FCD to the encrypted VM using the VM reconfigure API might fail with the error message: Cannot decrypt disk because key or password is incorrect.
Workaround: Use the attachDisk API rather than the VM reconfigure API to attach an encrypted FCD to an encrypted VM.
This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In this case, the entire datastore becomes inaccessible and no longer allows I/Os.
In contrast, when only a non-head extent fails, but the head extent remains accessible, the datastore heartbeat appears to be normal. And the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve, and cause the host to enter the non responding state.
Workaround: Fix the PDL condition of the non-head extent to resolve this issue.
The Virtual NVMe Controller is the default disk controller for the following guest operating systems when using Hardware Version 15 or later:
Windows 10
Windows Server 2016
Windows Server 2019
Some features might not be available when using a Virtual NVMe Controller. For more information, see https://kb.vmware.com/s/article/2147714
Note : Some clients use the previous default of LSI Logic SAS. This includes ESXi host client and PowerCLI.
Workaround: If you need features not available on Virtual NVMe, switch to VMware Paravirtual SCSI (PVSCSI) or LSI Logic SAS. For information on using VMware Paravirtual SCSI (PVSCSI), see https://kb.vmware.com/s/article/1010398
Claim rules determine which multipathing plugin, such as NMP, HPP, and so on, owns paths to a particular storage device. ESXi 7.0 does not support duplicate claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate rules to the existing claim rules inherited through an upgrade from a legacy release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause unexpected outcome.
Workaround: Do not use duplicate core claim rules. Before adding a new claim rule, delete any existing matching claim rule.
The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.
Workaround: Avoid using bulk queries. When you need to get compliance status, query one volume at a time or limit the number of volumes in the query API to 20 or fewer. While using the query, avoid running other CNS operations to get the best performance.
If a VMFS datastore on an ESXi host is backed by an NVMe over Fabrics namespace or device, in case of an all paths down (APD) or permanent device loss (PDL) failure, the datastore might be inaccessible even after recovery. You cannot access the datastore from either the ESXi host or the vCenter Server system.
Workaround: To recover from this state, perform a rescan on a host or cluster level. For more information, see Perform Storage Rescan .
After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following: The object or item referred to could not be found.
Workaround: The next full synchronization will resolve the inconsistency and correctly update the CNS UI.
When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while other volume mounts fail.
Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.
This might occur when, for example, you use an incompliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.
Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.
This issue might occur when the CNS Delete API attempts to delete a persistent volume that is still attached to a pod. For example, when you delete the Kubernetes namespace where the pod runs. As a result, the volume gets cleared from CNS and the CNS query operation does not return the volume. However, the volume continues to reside on the datastore and cannot be deleted through the repeated CNS Delete API operations.
Workaround: None.
The new queue-pair feature added to ixgben driver to improve networking performance on Intel 82599EB/X540/X550 series NICs might reduce throughput under some workloads in vSphere 7.0 as compared to vSphere 6.7.
Workaround: To achieve the same networking performance as vSphere 6.7, you can disable the queue-pair with a module parameter. To disable the queue-pair, run the command:
# esxcli system module parameters set -p "QPair=0,0,0,0..." -m ixgben
After running the command, reboot.
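To confirm the parameter value before rebooting, the module parameters can be listed; this standard ESXCLI query is shown here only as a verification sketch:
esxcli system module parameters list -m ixgben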
If the I/O devices on your ESXi host provide more than a total of 512 distinct interrupt sources, some sources are erroneously assigned an interrupt-remapping table entry (IRTE) index in the AMD IOMMU that is greater than the maximum value. Interrupts from such a source are lost, so the corresponding I/O device behaves as if interrupts are disabled.
Workaround: Use the ESXCLI command esxcli system settings kernel set -s iovDisableIR -v true to disable the AMD IOMMU interrupt remapper. Reboot the ESXi host so that the command takes effect.
In some environments, if you set link speed to auto-negotiation for network adapters by using the command esxcli network nic set -a -n vmnicX, the devices might fail and a reboot does not recover connectivity. The issue is specific to a combination of some Intel X710/X722 network adapters, an SFP+ module, and a physical switch, where the auto-negotiate speed/duplex scenario is not supported.
Workaround: Make sure you use an Intel-branded SFP+ module. Alternatively, use a Direct Attach Copper (DAC) cable.
vSphere 7.0 Update 2 supports Solarflare x2542 and x2541 network adapters configured in 1x100G port mode. However, you might see a hardware limitation in the devices that caps the actual throughput at around 70 Gbps in a vSphere environment.
Workaround: None
A NIC with PCI device ID 8086:1537 might stop sending and receiving VLAN tagged packets after a reset, for example, with a command vsish -e set /net/pNics/vmnic0/reset 1.
Workaround: Avoid resetting the NIC. If you already face the issue, use the following commands to restore the VLAN capability, for example at vmnic0:
# esxcli network nic software set --tagging=1 -n vmnic0
# esxcli network nic software set --tagging=0 -n vmnic0
Any change in the NetQueue balancer settings by using the command esxcli/localcli network nic queue loadbalancer set -n <nicname> --<lb_setting> causes NetQueue, which is enabled by default, to be disabled after an ESXi host reboot.
Workaround: After a change in the NetQueue balancer settings and host reboot, use the command configstorecli config current get -c esx -g network -k nics to retrieve ConfigStore data and verify whether /esx/network/nics/net_queue/load_balancer/enable is working as expected.
After you run the command, you see output similar to:
{
"mac": "02:00:0e:6d:14:3e",
"name": "vmnic1",
"net_queue": {
"load_balancer": {
"dynamic_pool": true,
"enable": true
}
},
"virtual_mac": "00:50:56:5a:21:11"
}
If the output is not as expected, for example "load_balancer": "enable": false, run the following command:
esxcli/localcli network nic queue loadbalancer state set -n <nicname> -e true
If you configure an NSX distributed virtual port for use in PVRDMA traffic, the RDMA protocol traffic over the PVRDMA network adapters does not comply with the NSX network policies.
Workaround: Do not configure NSX distributed virtual ports for use in PVRDMA traffic.
Rolling back from a converged VDS that supports both vSphere 7 traffic and NSX-T 3 traffic on the same VDS to one N-VDS for NSX-T traffic is not supported in vSphere 7.0 Update 3.
Workaround: None
If you do not set the supported_num_ports module parameter for the nmlx5_core driver on an ESXi host with multiple network adapters of versions Mellanox ConnectX-4, Mellanox ConnectX-5 and Mellanox ConnectX-6, the driver might not allocate sufficient memory for operating all the NIC ports for the host. As a result, you might experience network loss or ESXi host failure with purple diagnostic screen, or both.
Workaround: Set the supported_num_ports module parameter value in the nmlx5_core network driver equal to the total number of Mellanox ConnectX-4, Mellanox ConnectX-5 and Mellanox ConnectX-6 network adapter ports on the ESXi host.
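As an illustrative sketch, on a host with, say, two dual-port ConnectX-5 adapters (4 ports total; the value is an assumption for this example), the parameter could be set with the generic ESXCLI module-parameter command, followed by a host reboot. Note that this sets the module's full parameter string, so include any other nmlx5_core options you already use in the same -p value:
esxcli system module parameters set -m nmlx5_core -p "supported_num_ports=4"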
Virtual machines requiring high network throughput can experience throughput degradation when upgrading from vSphere 6.7 to vSphere 7.0 with NetIOC enabled.
Workaround: Adjust the ethernetx.ctxPerDev setting to enable multiple worlds.
When you migrate VMkernel ports from one port group to another, IPv6 traffic does not pass through VMkernel ports using IPsec.
Workaround: Remove the IPsec security association (SA) from the affected server, and then reapply the SA. To learn how to set and remove an IPsec SA, see the vSphere Security documentation.
ESX network performance may increase at the cost of a portion of CPU usage.
Workaround: Remove and add the network interface with only 1 rx dispatch queue. For example:
esxcli network ip interface remove --interface-name=vmk1
esxcli network ip interface add --interface-name=vmk1 --num-rxqueue=1
A VM might stop receiving Ethernet traffic after a hot-add, hot-remove or storage vMotion. This issue affects VMs where the uplink of the VNIC has SR-IOV enabled. PVRDMA virtual NIC exhibits this issue when the uplink of the virtual network is a Mellanox RDMA capable NIC and RDMA namespaces are configured.
Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM to restore traffic. On Linux guest operating systems, restarting the network might also resolve the issue. If these workarounds have no effect, you can reboot the VM to restore network connectivity.
With the introduction of the DDNS, the DNS record update only works for VCSA deployed with DHCP configured networking. While changing the IP address of the vCenter server via VAMI, the following error is displayed:
The specified IP address does not resolve to the specified hostname.
Workaround: There are two possible workarounds.
1. Run the script /opt/vmware/share/vami/vami_config_net and use option 6 to change the IP address of eth0. Once changed, execute the following script: /opt/likewise/bin/lw-update-dns
2. Restart all the services on the VCSA to update the IP information on the DNS server.
As the number of logical switches increases, it may take more time for the NSX DVPG in vCenter Server to be removed after deleting the corresponding logical switch in NSX Manager. In an environment with 12000 logical switches, it takes approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.
Workaround: None.
In vSphere 7.0, NSX Distributed Virtual port groups consume significantly larger amounts of memory than opaque networks. For this reason, NSX Distributed Virtual port groups can not support the same scale as an opaque network given the same amount of memory.
Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd using the following command:
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ sched group setmemconfig --group-path host/vim/vmvisor/hostd --units mb --min 2048 --max 2048
Note that this will cause hostd to use memory normally reserved for your environment's VMs. This may have the effect of reducing the number of VMs your ESXi host can support.
If the network reservation is configured on a VM, it is expected that DRS only migrates the VM to a host that meets the specified requirements. In a cluster with NSX transport nodes, if some of the transport nodes join the transport zone by NSX-T Virtual Distributed Switch (N-VDS), and others by vSphere Distributed Switch (VDS) 7.0, DRS may incorrectly launch vMotion. You might encounter this issue when:
Workaround: Make all transport nodes join the transport zone by N-VDS or the same VDS 7.0 instance.
Workaround: Use a Distributed Port Group on the same DVS.
If you navigate to the Edit Settings dialog for physical network adapters and attempt to enable SR-IOV, the operation might fail when using QLogic 4x10GE QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the ESXi host.
Workaround: Use the following command on the ESXi host to enable SRIOV:
esxcfg-module
In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.
Workaround: Have hosts in a DRS cluster join the NSX transport zone using the same VDS or NVDS.
When you change the vCenter IP address (PNID change), the registered vendor providers go offline.
Workaround: Re-register the vendor providers.
When you use cross vCenter vMotion to move a VM's storage and host to a different vCenter Server instance, you might receive the error The operation is not allowed in the current state. This error appears in the UI wizard after the Host Selection step and before the Datastore Selection step, in cases where the VM has an assigned storage policy containing host-based rules such as encryption or any other IO filter rule.
Workaround: Assign the VM and its disks to a storage policy without host-based rules. You might need to decrypt the VM if the source VM is encrypted. Then retry the cross vCenter vMotion action.
When you navigate to Host > Monitor > Hardware Health > Storage Sensors on vCenter UI, the storage information displays either incorrect or unknown values. The same issue is observed on the host UI and the MOB path “runtime.hardwareStatusInfo.storageStatusInfo” as well.
Workaround: None.
vSphere UI host advanced settings shows the current product locker location as empty with an empty default. This is inconsistent as the actual product location symlink is created and valid. This causes confusion to the user. The default cannot be corrected from the UI.
Workaround: Use the esxcli command on the host to correct the current product locker location default as below.
1. Remove the existing Product Locker Location setting with:
esxcli system settings advanced remove -o ProductLockerLocation
2. Re-add the Product Locker Location setting with the appropriate default:
2.a. If the ESXi is a full installation, the default value is "/locker/packages/vmtoolsRepo":
export PRODUCT_LOCKER_DEFAULT="/locker/packages/vmtoolsRepo"
2.b. If the ESXi is a PXEboot configuration such as autodeploy, the default value is "/vmtoolsRepo":
export PRODUCT_LOCKER_DEFAULT="/vmtoolsRepo"
Run the following command to automatically figure out the location:
export PRODUCT_LOCKER_DEFAULT=`readlink /productLocker`
Add the setting:
esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT
You can combine all the above steps in step 2 by issuing the single command:
esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s `readlink /productLocker`
When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server, and linked to an SDDC, the SDDC vCenter Server will appear in the on-premises vSphere Client. This is unexpected behavior and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.
Workaround: None.
UEFI HTTP booting of virtual machines is supported only on hosts of version ESXi 7.0 Update 2 and later and VMs with HW version 19 or later.
Workaround: Use UEFI HTTP booting only in virtual machines with HW version 19 or later. Using HW version 19 ensures the virtual machines are placed only on hosts with ESXi version 7.0 Update 2 or later.
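If you need to raise the hardware version of existing VMs so that UEFI HTTP booting is available, the following is a minimal, hedged pyVmomi sketch of one way to do it. The vCenter Server host name, credentials, and VM name are placeholders, and the VM must be powered off before the upgrade; this is not the only supported way to change the hardware version.
# Minimal pyVmomi sketch: upgrade a powered-off VM to virtual hardware version 19.
# Host name, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "example-vm")  # placeholder VM name
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
        raise RuntimeError("Power off the VM before upgrading its hardware version")
    WaitForTask(vm.UpgradeVM_Task(version="vmx-19"))  # hardware version 19
finally:
    Disconnect(si)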
Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.10 with an error such as:
An error occurred while saving the snapshot: The VVol target encountered a vendor specific error
The issue is specific to Purity version 5.3.10.
Workaround: Upgrade to Purity version 6.1.7 or follow vendor recommendations.
When you run the guest customization script for a Linux guest operating system, the precustomization section of the customization script that is defined in the customization specification runs before the guest customization, and the postcustomization section runs after it. If you enable Cloud-Init in the guest operating system of a virtual machine, the postcustomization section runs before the customization due to a known issue in Cloud-Init.
Workaround: Disable Cloud-Init and use the standard guest customization.
When you perform group migration operations on VMs with multiple disks and multi-level snapshots, the operations might fail with the error
com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.
Workaround: Retry the migration operation on the failed VMs one at a time.
URLs that contain an HTTP query parameter are not supported, for example http://webaddress.com?file=abc.ovf or Amazon pre-signed S3 URLs.
Workaround: Download the files and deploy them from your local file system.
From the VMs and Templates inventory tree, you cannot see the objects in the third nested folder.
Workaround: To see the objects in the third nested folder, navigate to the second nested folder and select the VMs tab.
vSphere HA and Fault Tolerance Issues
Some VMs might be in an orphaned state after a cluster-wide APD recovers, even if HA and VMCP are enabled on the cluster.
This issue might be encountered when several conditions occur simultaneously.
Workaround: You must unregister and reregister the orphaned VMs manually within the cluster after the APD recovers.
If you do not manually reregister the orphaned VMs, HA attempts failover of the orphaned VMs, but it might take between 5 and 10 hours, depending on when the APD recovers.
The overall functionality of the cluster is not affected in these cases, and HA continues to protect the VMs. This is an anomaly in what vCenter Server displays for the duration of the problem.
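If many VMs are orphaned, the unregister and reregister steps can be scripted. The following is a minimal, hedged pyVmomi sketch of that idea; the vCenter Server host name, credentials, datacenter, and cluster selection are placeholders, and it assumes the orphaned VM still reports its .vmx datastore path. Verify the objects in your own inventory before running anything like it.
# Minimal pyVmomi sketch: unregister orphaned VMs and register them again from their .vmx paths.
# All names, paths, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]        # first datacenter, for brevity
    cluster = dc.hostFolder.childEntity[0]        # first cluster, for brevity
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.connectionState == "orphaned":
            vmx_path = vm.summary.config.vmPathName   # e.g. "[datastore] dir/vm.vmx"
            name = vm.name
            vm.UnregisterVM()                         # remove the orphaned entry
            WaitForTask(dc.vmFolder.RegisterVM_Task(
                path=vmx_path, name=name, asTemplate=False,
                pool=cluster.resourcePool))           # optionally pass host=...
finally:
    Disconnect(si)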
vSphere Lifecycle Manager Issues
NSX-T is not compatible with the vSphere Lifecycle Manager functionality for image management. When you enable a cluster for image setup and updates on all hosts in the cluster collectively, you cannot enable NSX-T on that cluster. However, you can deploy NSX Edges to this cluster.
Workaround: Move the hosts to a new cluster that you can manage with baselines and enable NSX-T on that new cluster.
If vSphere Lifecycle Manager is enabled on a cluster, vSAN File Services cannot be enabled on the same cluster, and vice versa. To enable vSphere Lifecycle Manager on a cluster that already has vSAN File Services enabled, first disable vSAN File Services and retry the operation. Note that if you transition to a cluster that is managed by a single image, vSphere Lifecycle Manager cannot be disabled on that cluster.
Workaround: None.
If the hardware support manager is unavailable for a cluster that you manage with a single image, where a firmware and drivers addon is selected and vSphere HA is enabled, the vSphere HA functionality is impacted. You might experience the following errors:
Applying HA VIBs on the cluster encountered a failure.
A general system error occurred: Failed to get Effective Component map.
A general system error occurred: Cannot find hardware support package from depot or hardware support manager.
Workaround:
Removing an I/O filter from a cluster by remediating the cluster in vSphere Lifecycle Manager fails with the following error message:
iofilter XXX already exists
The iofilter remains listed as installed.
Workaround:
1. Invoke UninstallIoFilter_Task from the vCenter Server managed object (IoFilterManager).
2. Invoke ResolveInstallationErrorsOnCluster_Task from the vCenter Server managed object (IoFilterManager) to update the database.
Adding one or more ESXi hosts during a remediation process of a vSphere HA enabled cluster results in the following error message:
Applying HA VIBs on the cluster encountered a failure.
Workaround: After the cluster remediation operation has finished, perform one of the following tasks.
Disabling and re-enabling vSphere HA during the remediation process of a cluster might cause the remediation process to fail, because vSphere HA health checks report that hosts do not have vSphere HA VIBs installed. You might see the following error message:
Setting desired image spec for cluster failed
Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.
In large clusters with more than 16 hosts, the recommendation generation task could take more than an hour to finish or may appear to hang. The completion time for the recommendation task depends on the number of devices configured on each host and the number of image candidates from the depot that vSphere Lifecycle Manager needs to process before obtaining a valid image to recommend.
Workaround: None.
In large clusters with more than 16 hosts, the validation report generation task could take up to 30 minutes to finish or may appear to hang. The completion time depends on the number of devices configured on each host and the number of hosts configured in the cluster.
Workaround: None.
You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you can observe the following error message.
The error message in English language:
Virtual machine 'VMC on DELL EMC -FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents entering maintenance mode: Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx
The error message in French language:
La VM « VMC on DELL EMC -FileServer », située sur le cluster « {Cluster-1} », a signalé un problème empêchant le passage en mode de maintenance : Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx
Workaround: None.
Only the ESXi base image is replaced with the one from the imported image.
Workaround: After the import process finishes, edit the image, and if needed, remove the vendor addon, components, and firmware and drivers addon.
Converting a vSphere HA enabled cluster that uses baselines to a cluster that uses a single image might result in a warning message that the vmware-fdm component will be removed.
Workaround: You can ignore this message. The conversion process installs the vmware-fdm component.
In earlier releases of vCenter Server you could configure independent proxy settings for vCenter Server and vSphere Update Manager. After an upgrade to vSphere 7.0, vSphere Update Manager service becomes part of the vSphere Lifecycle Manager service. For the vSphere Lifecycle Manager service, the proxy settings are configured from the vCenter Server appliance settings. If you had configured Update Manager to download patch updates from the Internet through a proxy server but the vCenter Server appliance had no proxy setting configuration, after a vCenter Server upgrade to version 7.0, the vSphere Lifecycle Manager fails to connect to the VMware depot and is unable to download patches or updates.
Workaround: Log in to the vCenter Server Appliance Management Interface, https://<vcenter-server-appliance-FQDN-or-IP-address>:5480, to configure proxy settings for the vCenter Server appliance and enable vSphere Lifecycle Manager to use the proxy.
On rare occasions, the VMkernel might consider a virtual machine unresponsive because it fails to send PCPU heartbeats properly, and shut the VM down. In the vmkernel.log file, you see messages such as:
2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.
2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue.
Workaround: Disable the PCPU heartbeat check by using the command vsish -e set /reliability/heartbeat/status 0.
Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.
Workaround: There are two possible workarounds.
Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS and GEN_RSS features are turned on, which is unlikely to impact normal workloads.
Workaround: You can disable the DYN_RSS and GEN_RSS features with the following commands:
# esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=0 GEN_RSS=0"
# reboot
In a vSphere 7.0 implementation of a PVRDMA environment, VMs pass traffic through the HCA for local communication if an HCA is present. However, loopback of RDMA traffic does not work with the qedrntv driver. For instance, RDMA queue pairs running on VMs that are configured under the same uplink port cannot communicate with each other.
In vSphere 6.7 and earlier, the HCA was used for local RDMA traffic if SRQ was enabled. vSphere 7.0 uses HCA loopback for VMs that use versions of PVRDMA with SRQ enabled, with a minimum of hardware version 14, using RoCE v2.
The current version of Marvell FastLinQ adapter firmware does not support loopback traffic between QPs of the same PF or port.
Workaround: Required support is being added in the out-of-box driver certified for vSphere 7.0. If you are using the inbox qedrntv driver, you must use a 3-host configuration and migrate VMs to the third host.
There are limitations with the Marvell FastLinQ qedrntv RoCE driver and Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail with qedrntv driver. Additionally, UD QPs can only work with DMA Memory Regions (MR). Physical MRs or FRMR are not supported. Applications attempting to use physical MR or FRMR along with UD QP fail to pass traffic when used with qedrntv driver. Known examples of such test applications are ibv_ud_pingpong and ib_send_bw.
Standard RoCE and RoCEv2 use cases in a VMware ESXi environment such as iSER, NVMe-oF (RoCE) and PVRDMA are not impacted by this issue. Use cases for UD traffic are limited and this issue impacts a small set of applications requiring bulk UD traffic.
Marvell FastLinQ hardware does not support RDMA UD traffic offload. In order to meet the VMware PVRDMA requirement to support GSI QP, a restricted software-only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control path GSI communication; it is not a complete implementation of UD QP supporting bulk traffic and advanced features.
Since UD support is implemented in software, the implementation might not keep up with heavy traffic and packets might be dropped. This can result in failures with bulk UD traffic.
Workaround: Bulk UD QP traffic is not supported with the qedrntv driver and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe over RDMA, and PVRDMA are unaffected by this issue.
If you trigger QLogic 578xx NIC iSCSI connection or disconnection frequently in a short time, the server might fail due to an issue with the qfle3 driver. This is caused by a known defect in the device's firmware.
Workaround: None.
In Broadcom NVMe over FC environment, ESXi might fail during driver unload or controller disconnect operation and display an error message such as:
@BlueScreen: #PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19
Workaround: None.
The inbox ixgben driver only recognizes firmware data version or signature for i350/X550 NICs. On some Dell servers the OEM firmware version number is programmed into the OEM package version region, and the inbox ixgben driver does not read this information. Only the 8-digit firmware signature is displayed.
Workaround: To display the OEM firmware version number, install async ixgben driver version 1.7.15 or later.
When you initiate certain destructive operations to X710 or XL710 NICs, such as resetting the NIC or manipulating VMKernel's internal device tree, the NIC hardware might read data from non-packet memory.
Workaround: Do not reset the NIC or manipulate the VMkernel internal device state.
NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage installation that uses vmhba30+ and also has NVMe over RDMA configuration, the VMHBA name might change after a system reboot. This is because the VMHBA name assignment for NVMe over RDMA is different from PCIe devices. ESXi does not guarantee persistence.
Workaround: None.
If the vCenter database size is 300 GB or greater, the file-based backup will fail with a timeout. The following error message is displayed:
Timeout! Failed to complete in 72000 seconds
Workaround: None.
When you restore a vCenter Server 7.0 instance that was upgraded from a 6.x deployment with an external Platform Services Controller, the restore might fail and display the following error:
Failed to retrieve appliance storage list
Workaround: During the first stage of the restore process, increase the storage level of the vCenter Server 7.0 instance. For example, if the storage type of the vCenter Server 6.7 setup with an external Platform Services Controller is small, select the large storage type for the restore process.
The Enabled SSL protocols configuration parameter is not configured during a host profile remediation, and only the system default protocol tlsv1.2 is enabled. This behavior is observed for a host profile with version 7.0 and earlier in a vCenter Server 7.0 environment.
Workaround: To enable TLSV 1.0 or TLSV 1.1 SSL protocols for SFCB, log in to an ESXi host by using SSH, and run the following ESXCLI command:
esxcli system wbem -P <protocol_name>
Unable to configure Lockdown Mode settings by using Host Profiles
Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.
Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage the Lockdown Mode exception user list by using a security host profile.
When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from the ESXi hosts
Some settings in the VMware config file /etc/vmware/config are not managed by Host Profiles and are blocked when the config file is modified. As a result, when the host profile is applied to a cluster, the EVC settings are lost, which causes loss of EVC functionality. For example, unmasked CPUs can be exposed to workloads.
Workaround: Reconfigure the relevant EVC baseline on the cluster to recover the EVC settings.
Using a host profile that defines a core dump partition in vCenter Server 7.0 results in an error
In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition, results in the following error:
No valid coredump partition found.
Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.
If you run the ESXCLI command to unload the firewall module, the hostd service fails and ESXi hosts lose connectivity
If you automate the firewall configuration in an environment that includes multiple ESXi hosts, and run the ESXCLI command esxcli network firewall unload that destroys filters and unloads the firewall module, the hostd service fails and ESXi hosts lose connectivity.
Workaround: Unloading the firewall module is not recommended at any time. If you must unload the firewall module, use the following steps:
1. Stop the hostd service by using the command: /etc/init.d/hostd stop
2. Unload the firewall module by using the command: esxcli network firewall unload
3. Perform the required operations.
4. Load the firewall module by using the command: esxcli network firewall load
5. Start the hostd service by using the command: /etc/init.d/hostd start
vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager
Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.
Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining 2 disks.
Changes in the properties and attributes of the devices and storage on an ESXi host might not persist after a reboot
If the device discovery routine during a reboot of an ESXi host times out, the jumpstart plug-in might not receive all configuration changes of the devices and storage from all the registered devices on the host. As a result, the process might restore the properties of some devices or storage to the default values after the reboot.
Workaround: Manually restore the changes in the properties of the affected device or storage.
If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations
If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations such as unloading a driver or switching between ENS mode and native driver mode. For example, if you try to change the ENS mode, in the backtrace you see an error message similar to:
case ENS::INTERRUPT::NoVM_DeviceStateWithGracefulRemove hit BlueScreen: ASSERT bora/vmkernel/main/dlmalloc.c:2733
This issue is specific for beta builds and does not affect release builds such as ESXi 7.0.
Workaround: Update to ESXi 7.0 GA.
You cannot create snapshots of virtual machines due to an error that a digest operation has failed
A rare race condition when an All-Paths-Down (APD) state occurs during the update of the Content Based Read Cache (CBRC) digest file might cause inconsistencies in the digest file. As a result, you cannot create virtual machine snapshots. In the backtrace, you see an error such as:
An error occurred while saving the snapshot: A digest operation has failed
Workaround: Power cycle the virtual machines to trigger a recompute of the CBRC hashes and clear the inconsistencies in the digest file.
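If many virtual machines are affected, the power cycle can be scripted. The following is a minimal, hedged pyVmomi sketch that power-cycles one VM by name; the vCenter Server host name, credentials, and VM name are placeholders, and you would normally coordinate such an operation with the workload owners.
# Minimal pyVmomi sketch: power-cycle a VM (off, then on) so the CBRC digest is recomputed.
# Host name, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "example-vm")  # placeholder VM name
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(vm.PowerOffVM_Task())   # a guest OS shutdown is gentler if VMware Tools runs
    WaitForTask(vm.PowerOnVM_Task())
finally:
    Disconnect(si)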
If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, Trusted Platform Module (TPM) attestation of the ESXi hosts fails
If you upgrade your ESXi hosts to version 7.0 Update 3, but your vCenter Server is of an earlier version, and you enable TPM, ESXi hosts fail to pass attestation. In the vSphere Client, you see the warning Host TPM attestation alarm. The Elliptic Curve Digital Signature Algorithm (ECDSA) introduced with ESXi 7.0 Update 3 causes the issue when vCenter Server is not of version 7.0 Update 3.
Workaround: Upgrade your vCenter Server to 7.0 Update 3 or acknowledge the alarm.
You see warnings in the boot loader screen about TPM asset tags
If a TPM-enabled ESXi host has no asset tag set, you might see idle warning messages in the boot loader screen such as:
Failed to determine TPM asset tag size: Buffer too small
Failed to measure asset tag into TPM: Buffer too small
Workaround: Ignore the warnings or set an asset tag by using the command
$ esxcli hardware tpm tag set -d
The sensord daemon fails to report ESXi host hardware status
A logic error in the IPMI SDR validation might cause sensord to fail to identify a source for power supply information. As a result, when you run the command vsish -e get /power/hostStats, you might not see any output.
Workaround: None.
If an ESXi host fails with a purple diagnostic screen, the netdump service might stop working
In rare cases, if an ESXi host fails with a purple diagnostic screen, the netdump service might fail with an error such as:
NetDump FAILED: Couldn't attach to dump server at IP x.x.x.x
Workaround: Configure the VMkernel core dump to use local storage.
You see frequent VMware Fault Domain Manager (FDM) core dumps on multiple ESXi hosts
In some environments, the number of datastores might exceed the FDM file descriptor limit. As a result, you see frequent core dumps on multiple ESXi hosts indicating FDM failure.
Workaround: Increase the FDM file descriptor limit to 2048. You can use the das.config.fdm.maxFds setting from the vSphere HA advanced options in the vSphere Client. For more information, see Set Advanced Options.
Virtual machines on a vSAN cluster with NSX-T enabled and a converged vSphere Distributed Switch (CVDS) in a VLAN transport zone cannot power on after a power off
If a secondary site is 95% disk full and VMs are powered off before simulating a secondary site failure, during recovery some of the virtual machines fail to power on. As a result, virtual machines become unresponsive. The issue occurs regardless of whether site recovery includes adding disks, ESXi hosts, or CPU capacity.
Workaround: Select the virtual machines that do not power on and change the network to VM Network from Edit Settings on the VM context menu.
If you modify the netq_rss_ens parameter of the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen
If you try to enable the netq_rss_ens parameter when you configure an enhanced data path on the nmlx5_core driver, ESXi hosts might fail with a purple diagnostic screen. The netq_rss_ens parameter, which enables NetQ RSS, is disabled by default with a value of 0.
Workaround: Keep the default value for the netq_rss_ens module parameter in the nmlx5_core driver.
Upgrade to ESXi 7.0 Update 3 might fail due to changed name of the inbox i40enu network driver
Starting with vSphere 7.0 Update 3, the inbox i40enu network driver for ESXi changes name back to i40en. The i40en driver was renamed to i40enu in vSphere 7.0 Update 2, but the name change impacted some upgrade paths. For example, rollup upgrade of ESXi hosts that you manage with baselines and baseline groups from 7.0 Update 2 or 7.0 Update 2a to 7.0 Update 3 fails. In most cases, the i40enu driver upgrades to ESXi 7.0 Update 3 without any additional steps. However, if the driver upgrade fails, you cannot update ESXi hosts that you manage with baselines and baseline groups. You also cannot use host seeding or a vSphere Lifecycle Manager single image to manage the ESXi hosts. If you have already made changes related to the i40enu driver and devices in your system, before upgrading to ESXi 7.0 Update 3, you must uninstall the i40enu VIB or Component on ESXi, or first upgrade ESXi to ESXi 7.0 Update 2c.
Workaround: For more information, see VMware knowledge base article 85982.
The Windows guest OS of a virtual machine configured with a virtual NVDIMM of size less than 16MB might fail while initializing a new disk
If you configure a Windows virtual machine with a NVDIMM of size less than 16MB, when you try to initialize a new disk, you might see either the guest OS failing with a blue diagnostic screen or an error message in a pop-up window in the Disk Management screen. The blue diagnostic screen issue occurs in Windows 10, Windows Server 2022, and Windows 11 v21H2 guest operating systems.
Workaround: Increase the size of the virtual NVDIMM to 16MB or larger.
If you use a vSphere Distributed Switch (VDS) of version earlier than 6.6 and change the LAG hash algorithm, ESXi hosts might fail with a purple diagnostic screen
If you use a VDS of version earlier than 6.6 on a vSphere 7.0 Update 1 or later system, and you change the LAG hash algorithm, for example from L3 to L2 hashes, ESXi hosts might fail with a purple diagnostic screen.
Workaround: Upgrade VDS to version 6.6 or later.
HTTP requests from certain libraries to vSphere might be rejected
The HTTP reverse proxy in vSphere 7.0 enforces stricter standard compliance than in previous releases. This might expose pre-existing problems in some third-party libraries used by applications for SOAP calls to vSphere.
If you develop vSphere applications that use such libraries or include applications that rely on such libraries in your vSphere stack, you might experience connection issues when these libraries send HTTP requests to VMOMI. For example, HTTP requests issued from vijava libraries can take the following form:
POST /sdk HTTP/1.1
SOAPAction
Content-Type: text/xml; charset=utf-8
User-Agent: Java/1.8.0_221
The syntax in this example violates an HTTP protocol header field requirement that mandates a colon after SOAPAction. Hence, the request is rejected in flight.
Workaround: Developers leveraging noncompliant libraries in their applications can consider using a library that follows HTTP standards instead. For example, developers who use the vijava library can consider using the latest version of the yavijava library instead.
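To illustrate the requirement, the following is a minimal, hedged Python sketch that issues the same kind of SOAP POST with a well-formed header block; note that every header field, including SOAPAction, has a name, a colon, and a value. The host, the SOAPAction value, and the SOAP body are placeholders for illustration only, not a complete VMOMI call sequence.
# Minimal sketch: send a SOAP request to the vSphere /sdk endpoint with compliant headers.
# The host and the SOAP body below are placeholders.
import http.client
import ssl

body = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- VMOMI request payload goes here -->
  </soapenv:Body>
</soapenv:Envelope>"""

conn = http.client.HTTPSConnection("vcenter.example.com",
                                   context=ssl._create_unverified_context())
conn.request(
    "POST", "/sdk", body=body,
    headers={
        "SOAPAction": "urn:vim25/7.0",              # compliant: field name, colon, value
        "Content-Type": "text/xml; charset=utf-8",
        "User-Agent": "example-client/1.0",
    })
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()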
You might see a dump file when using the Broadcom lsi_msgpt3, lsi_msgpt35, and lsi_mr3 drivers
When you use the lsi_msgpt3, lsi_msgpt35, or lsi_mr3 controllers, you might see the dump file lsuv2-lsi-drivers-plugin-util-zdump. The issue occurs when exiting the storelib used in this plugin utility. There is no impact on ESXi operations, and you can ignore the dump file.
Workaround: You can safely ignore this message. You can remove the lsuv2-lsi-drivers-plugin with the following command:
esxcli software vib remove -n lsuv2-lsiv2-drivers-plugin
You might see that a reboot is not required after configuring SR-IOV of a PCI device in vCenter Server, but device configurations made by third-party extensions might be lost and require a reboot to be re-applied.
In ESXi 7.0, SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations that need to run after the device driver is loaded during boot. A reboot is required for those third-party extensions to re-apply the device configuration.
Workaround: You must reboot after configuring SR-IOV to apply third-party device configurations.