These are known issues when using Intel® Ethernet Server Adapters with Intel® Ethernet FCoE. This list is current as of Intel® Network Connections software version 17.4. Refer to the User Guide for more information on installing and configuring Intel Ethernet Server Adapters.
Intel® Ethernet FCoE Windows* issues
- Intel® Ethernet Virtual Storage Miniport Driver for FCoE may disappear from Device Manager
The Intel® Ethernet Virtual Storage Miniport Driver for FCoE may disappear from the Device Manager after either:
- A virtual network is removed.
- The underlying Intel NIC adapter settings are modified.
This can occur when the corresponding adapter is virtualized to create a new virtual network, or when an existing virtual network is deleted or modified. It can also happen when the underlying Intel NIC adapter settings are changed, including disabling or re-enabling the adapter.
As a workaround, remove all resource dependencies on the Intel® Ethernet Virtual Storage Miniport Driver for FCoE that are currently in use by the system before making any changes to the Intel adapter for virtualization. For example, a user may have assigned FCoE disk(s) from the FCoE storage driver to one of their Virtual Machines and, at the same time, want to alter the configuration of the same Intel adapter for virtualization. In this scenario, the user must remove the FCoE disk(s) from the Virtual Machine before altering the Intel adapter configuration.
- Virtual Port may disappear from Virtual Machine
When the Virtual Machine starts, it asks the Intel® Ethernet Virtual Storage Miniport Driver for FCoE (the driver) to create a Virtual Port. If the driver is subsequently disabled, the Virtual Port may disappear. The only way to get the Virtual Port back is to enable the driver and reboot the Virtual Machine.
- Windows Server 2008* with Hyper-V - storage miniport driver does not load when an adapter is added or removed as a VNIC
In Windows Server 2008 with Hyper-V, the storage miniport driver may not automatically load after adding or removing a DCB/FCoE adapter as a shared external virtual device. To load the storage miniport driver, reset the adapter.
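As a sketch, the adapter reset can also be scripted with Microsoft's devcon utility (assuming devcon is available on the system; `<device-instance-id>` is a placeholder for the DCB/FCoE adapter's actual instance ID):

```shell
:: List network-class devices to find the DCB/FCoE adapter's instance ID
devcon find =net

:: Restart (disable, then re-enable) the adapter so the storage miniport
:: driver loads; replace <device-instance-id> with the ID reported above
devcon restart "@<device-instance-id>"
```

The same reset can be performed interactively by disabling and re-enabling the adapter in Device Manager.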
- When installing Intel® Ethernet FCoE after installing ANS and creating an AFT team, Storports are not installed
If the user installs ANS, creates an AFT team, and then installs FCoE/DCB, DCB is off by default. If the user then enables DCB on one port, the OS detects Storports, and the user must manually complete the New Hardware Wizard prompts for each of them to install. Otherwise, DCB status is non-operational and the reason given is "no peer".
- Link Aggregation teams are not supported with existing Intel® Ethernet FCoE Switches
- Intel® PROSet for Windows* Device Manager (DMiX) is not synchronized with the Intel® Ethernet FCoE CTRL-D Utility
When the user disables FCoE via the Control-D menu, the Intel PROSet for Windows Device Manager user interface states that the flash contains an FCoE image but needs to be updated. Updating the flash with the FCoE image again re-enables FCoE and returns the user to the state where all the FCoE settings are available.
If the user uses the Control-D menu to disable FCoE, then they should use the Control-D menu to enable it, because Intel PROSet for Windows Device Manager does not support enabling or disabling FCoE.
- 82599 and X540-based adapters don't display as SPC-3 compliant in Windows* MPIO configuration
Because the FCoE initiator is a virtualized device, it does not have its own unique hardware ID and is therefore not displayed as an SPC-3 compliant device in the Windows MPIO configuration.
- When removing ALB teaming, all Intel® Ethernet FCoE functions fail, all DMIX tabs are grayed out, and both adapter ports fail
For ANS teaming to work with Microsoft Network Load Balancer (NLB) in unicast mode, the team's LAA must be set to the cluster node IP. For ALB mode, Receive Load Balancing must be disabled. For further configuration details, refer to "Using teaming adapters with network load balancing may cause network problems". ANS teaming also works when NLB is in multicast mode. For proper configuration of the adapter in this mode, refer to "Event ID 53 - Network Adapter Functionality".
- FCoE and TCP/IP traffic on the same VLAN may not work on some switches
This is a known switch design and configuration issue.
Intel® Ethernet FCoE Boot issues
Option ROM Known Issues
Discovery problems with multiple FCoE VLANs
The FCoE Option ROM may not discover the desired VLAN when performing VLAN discovery from the Discover Targets function. If the Discover VLAN box is populated with the wrong VLAN, then enter the desired VLAN before executing Discover Targets.
Windows Known Issues
Brocade switch support in Release 16.4
Intel® Ethernet FCoE Boot does not support Brocade switches in Release 16.4. If necessary, please use Release 16.2.
Windows uses a paging file on the local disk
After imaging, if the local disk is not removed before booting from the FCoE disk, Windows may use the paging file from the local disk.
Crash dump to FCoE disks is only supported to the FCoE Boot LUN
The following scenarios are not supported:
- Crash dump to an FCoE disk if the Windows directory is not on the FCoE Boot LUN.
- Use of the DedicatedDumpFile registry value to direct crash dump to another FCoE LUN.
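For context, DedicatedDumpFile is a standard REG_SZ value under the CrashControl registry key; the sketch below shows where it is normally set (the drive letter and file name are illustrative only, and per the limitation above the path must not point at an FCoE LUN other than the Boot LUN):

```shell
:: Illustrative only: DedicatedDumpFile redirects the crash dump to a
:: pre-allocated dump file. Directing it at a non-boot FCoE LUN is unsupported.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" ^
    /v DedicatedDumpFile /t REG_SZ /d "D:\DedicatedDumpFile.sys" /f
```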
Stopping the IntelDCB service may cause the OS to hang or crash
FCoE uninstall from a local disk may be blocked because installer inaccurately reports system is booted from FCoE
When the FCoE Option ROM connects to an FCoE disk during boot, the Windows installer may be unable to determine whether the system was booted from FCoE and will block the FCoE uninstall. To uninstall, configure the Option ROM so that it does not connect to an FCoE disk.
Unable to create VLAN interfaces with Intel® Ethernet FCoE Boot enabled
When booted with FCoE, a user cannot create VLANs and/or Teams for other traffic types. This prevents converged functionality for non-FCoE traffic.
Server adapter configured for FCoE Boot available as External-Shared vNIC via Hyper-V
If a port is set as a boot port and the user then installs the Hyper-V role and opens the Hyper-V Virtual Network Manager to select a port to externally virtualize, the boot port is displayed, which it should not be.
When setting the port to a boot port in Intel PROSet for Windows Device Manager (DMIX), a message states that the user should restart the system for the changes to take effect, but it does not force a restart. As a result, the user-level applications are in boot mode (i.e., the Data Center tab is grayed out), but the kernel-level drivers have not been restarted to indicate to the OS that the port is a boot port. When the user then adds the Hyper-V role, the OS takes a snapshot of the available ports, and this is the snapshot it uses after the Hyper-V role is added, the system is restarted, and the user opens the Hyper-V Virtual Network Manager to virtualize the ports. As a result, the boot port also shows up.
To work around this, do one of the following:
- Restart the system after setting a port to a boot port and before adding the Hyper-V role. The port will not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
- Disable and re-enable the port in Device Manager after setting it to boot and before adding the Hyper-V role. The port will not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
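The disable/enable workaround can also be scripted; a minimal sketch using Microsoft's devcon utility (`<device-instance-id>` is a placeholder for the boot port's actual instance ID, obtainable via `devcon find =net`):

```shell
:: Disable and re-enable the boot port so the kernel-level drivers report it
:: to the OS as a boot port, then add the Hyper-V role
devcon disable "@<device-instance-id>"
devcon enable "@<device-instance-id>"
```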
FCoE Linkdown Timeout fails prematurely when Remote Booted
If an FCoE-booted port loses link for longer than the time specified in the Linkdown Timeout advanced setting in the Intel® Ethernet Virtual Storage Miniport Driver for FCoE, the system will crash. Linkdown Timeout values greater than 30 seconds may not provide extra time before a system crash.
- FAQ: Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE)
- Windows* Server Hotfixes Required for MPIO & DSM