1. vSwitch similarities to a physical L2 switch:
- A vSwitch functions at Layer 2,
- maintains MAC address tables,
- forwards frames to other switch ports based on MAC address,
- supports VLAN configuration,
- is capable of trunking using IEEE 802.1Q VLAN tags, and
- is capable of establishing port channels.
2. vSwitches are configured with a specific number of ports: 8, 24, 56, 120, 248, 504, or 1016. The VMKernel reserves 8 additional ports for its own use, which is why each value is a power of two minus 8.
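The available sizes look arbitrary, but each one is a power of two with the 8 reserved VMKernel ports already subtracted. A quick illustrative check (plain Python, not a VMware API):

```python
# The configurable vSwitch sizes shown in the vSphere client are powers of
# two minus the 8 ports the VMKernel reserves for its own use.
RESERVED_PORTS = 8

visible_sizes = [8, 24, 56, 120, 248, 504, 1016]
computed = [2**n - RESERVED_PORTS for n in range(4, 11)]  # 16..1024 total ports

print(computed)                   # [8, 24, 56, 120, 248, 504, 1016]
print(computed == visible_sizes)  # True
```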
3. Changing the number of ports on a vSwitch requires a reboot of the ESX/ESXi host.
4. vSwitch dissimilarities to physical L2 switch:
- Does not support dynamic negotiation protocols for establishing 802.1q trunks or port channels like DTP (Dynamic Trunking Protocol) or PAgP (Port Aggregation Protocol).
- A vSwitch cannot be connected to another vSwitch thereby eliminating a potential looping configuration. Because there is no possibility of looping, the vSwitches do not run Spanning Tree Protocol (STP).
- A vSwitch authoritatively knows the MAC addresses of the virtual machines connected to that vSwitch so there is no need to learn MAC addresses from the network.
- Traffic received by a vSwitch on one uplink is never forwarded out another uplink, so a vSwitch cannot be used as a transit path between two physical switches.
5. The following three types of ports and port groups can be configured on a vSwitch:
- Service console port
- VMKernel port
- Virtual Machine port group
All of the above types can be represented by one table/type:
std_vswitch_portgroup - models VMKernel ports, Service Console ports and VM port groups
- network_label
- vlan_id
- type - one of vmportgroup (0), vmkernel (1), serviceconsole (2)
- vmkernel_port_operations (bitmask; only applicable to VMKernel ports):
  - 000 - use this port group for iSCSI/NAS traffic (no flags set)
  - 001 - use this port group for VMotion
  - 010 - use this port group for fault tolerance logging
  - 100 - use this port group for management traffic (only applicable to ESXi)
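The bitmask above can be sketched as flag constants, where a mask of 0 (no flags set) corresponds to a plain iSCSI/NAS port group. The names below are illustrative, not part of any VMware API:

```python
# Illustrative decoding of the vmkernel_port_operations bitmask described
# above. Flag values follow the table sketch; this is not a VMware API.
VMOTION = 0b001
FT_LOGGING = 0b010
MANAGEMENT = 0b100  # only applicable to ESXi

def decode_operations(mask: int) -> list[str]:
    """Return the roles enabled for a VMKernel port group."""
    if mask == 0:
        return ["iscsi/nas"]  # no flags set: plain IP-storage traffic
    roles = []
    if mask & VMOTION:
        roles.append("vmotion")
    if mask & FT_LOGGING:
        roles.append("ft-logging")
    if mask & MANAGEMENT:
        roles.append("management")
    return roles

print(decode_operations(0))                     # ['iscsi/nas']
print(decode_operations(VMOTION | MANAGEMENT))  # ['vmotion', 'management']
```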
6. The vSphere client combines the creation of a vSwitch with the creation of new ports or port groups. It does not ask about creating a new vSwitch, but rather what type of port or port group to create (the connection-type options are Virtual Machine, VMKernel, or Service Console).
7. Unlike ports or port groups, uplinks are not necessarily required in order for a vSwitch to function. VMs connected to a vSwitch without any uplinks can communicate with each other but cannot communicate with VMs on other vSwitches or physical systems. Such a configuration is known as an “Internal-only vSwitch”. Communication between VMs connected to an internal-only vSwitch takes place entirely in software and happens at whatever speed the VMKernel can perform the task.
8. VMs connected to an internal-only vSwitch are not VMotion capable. But if the VM is disconnected from the internal-only vSwitch, VMotion will succeed if all other requirements have been met.
9. A vSwitch can also be bound to multiple physical NICs – this configuration is called a NIC team. It provides load distribution and redundancy.
10. A vSwitch associated with a physical NIC provides VMs with the amount of bandwidth the physical NIC is configured to support.
11. A single physical NIC cannot be associated with multiple vSwitches.
12. The maximum number of physical NICs on an ESX/ESXi host is 32, of which only 4 can be 10 Gbps adapters.
13. Service Console ports:
- ESX supports up to 16 service console ports.
- At least one service console port must exist on some vSwitch on an ESX host.
- We will discover but not perform any create/update/delete operations for this port type.
14. VMKernel ports:
- Provide network access for the VMKernel’s TCP/IP stack (which is separate and independent from the Service Console TCP/IP stack).
- VMKernel ports are used for VMotion process, iSCSI/NAS access and VMware FT.
- With ESXi hosts, VMKernel ports are also used for management.
- A VMKernel port comprises two components:
- A VMKernel port on a vSwitch
- A VMKernel NIC – vmknic
- The vmknic is configured with the interface IP address in the process of creating the VMKernel connection type in the vSphere client. The IP address should be a valid IP for the network to which the physical NIC is connected. One can optionally provide a default gateway if the VMKernel NIC is required to reach remote subnets.
15. VLANs:
- a. IEEE 802.1Q Tagging – marking traffic as belonging to a particular VLAN. A VLAN tag (aka VLAN ID) is a value between 1 and 4094 that uniquely identifies the VLAN across the network.
- b. VLANs are handled by configuring different port groups within a vSwitch. A port group can be associated to only 1 VLAN at a time, but multiple port groups can be associated with a single VLAN.
- Figure 16 - Configuring VLAN on Port groups of a vSwitch
- c. To make VLANs work properly with a port group, the uplinks for the vSwitch must be connected to a physical switch port configured as a trunk port. A trunk port understands how to pass traffic from multiple VLANs simultaneously while also preserving the VLAN IDs on the traffic. So the physical switch passes the VLAN tags up to the ESX server, where the vSwitch tries to direct the traffic to a port group with that VLAN ID configured.
- d. The default native VLAN is VLAN ID 1. The native VLAN is untagged, meaning the switch port strips the native VLAN ID from frames as they pass. If you want traffic on VLAN 1 to reach the ESX server with its tag intact, you need to configure a different VLAN ID as the native (default) VLAN on the physical switch port. An untagged (native) VLAN can therefore be any ID between 1 and 4094.
- e. A good convention to follow when naming a port group network label is VLANXXX-Network Description. For example, VLAN11-IPStorage.
- f. Although VLANs reduce the cost of constructing multiple logical subnets by separating network segments logically, all traffic still runs on the same physical network underneath. For bandwidth-intensive network operations, this disadvantage of a shared physical network might outweigh the scalability and cost savings of VLANs.
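The constraints above – one VLAN per port group, but possibly many port groups per VLAN – together with the naming convention can be sketched as follows (the helper and names are illustrative, not a VMware API):

```python
# Illustrative sketch: a port group maps to exactly one VLAN ID, while a
# single VLAN may back several port groups. Labels follow the
# VLANXXX-Description convention suggested above.
def make_label(vlan_id: int, description: str) -> str:
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return f"VLAN{vlan_id}-{description}"

# dict keys are unique, so each port group gets exactly one VLAN...
portgroup_vlans = {
    make_label(11, "IPStorage"): 11,
    make_label(11, "IPStorage2"): 11,  # ...but one VLAN can back many groups
    make_label(20, "Production"): 20,
}

print(make_label(11, "IPStorage"))  # VLAN11-IPStorage
```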
16. NIC Teaming
- a. Uplink is a physical adapter bound to the vSwitch and connected to physical switch.
- b. NIC teaming involves connecting multiple physical adapters to a single vSwitch. It provides redundancy and load balancing of network communications for the Service Console, VMKernel, and virtual machines.
- Figure 17 - NIC Teaming
- c. As seen in figure above, both of the vSwitches have 2 uplinks and each uplink connects to a different physical switch.
- d. ESX/ESXi can have a max of 32 uplinks and these uplinks can be spread across multiple vSwitches or all tossed into a NIC team on one vSwitch.
- e. Building a functional NIC team requires that all uplinks be connected to physical switches in the same broadcast domain. If VLANs are used then all the switches should be configured for VLAN trunking and the appropriate subset of VLANs must be allowed across the VLAN trunk.
- f. The load balancing feature of NIC teaming does not function like the load balancing of an advanced routing protocol: it does not measure the amount of traffic transmitted through each network adapter and shift traffic to equalize data flow through all available adapters. Rather, the load balancing algorithm for NIC teams in a vSwitch balances the number of connections, not the amount of traffic. NIC teams on a vSwitch can be configured with one of the following 3 policies:
- i. vSwitch port based load balancing (default)
- This policy setting ensures that the virtual network adapter connected to a given vSwitch port consistently uses the same physical network adapter. In the event one of the uplinks fails, the traffic from the failed uplink fails over to another physical adapter.
- This setting is best used when the number of virtual network adapters is greater than the number of physical network adapters. Link aggregation using 802.3ad teaming is not supported with this policy or with the MAC-based load balancing policy.
- ii. Source MAC based load balancing
- It has the same limitations as the previous method: it is also a static mapping, of vNIC MAC address to pNIC. Traffic originating from a given vNIC will always go through the same physical NIC in this approach.
- iii. IP hash based load balancing (out-IP policy)
- It uses the source and destination IP addresses to determine the physical network adapter for communication. This allows traffic originating from a single vNIC to go over multiple physical NICs when communicating with different destinations.
- This policy setting requires that all physical NICs be connected to the same physical switch. The switch must also be configured for link aggregation, which can increase throughput by combining the bandwidth of multiple physical NICs for use by a single vNIC of a VM. ESX/ESXi supports standard 802.3ad teaming in static (manual) mode only and does not support LACP or PAgP.
- g. The load balancing feature on vSwitch applies only to the outbound traffic.
- h. Failover detection with NIC teaming can be configured to use either a link status method or a beacon probing method.
- i. Link status method – failure of an uplink is determined by the link status provided by the physical network adapter. This can only identify the link status between the pNIC and the edge switch, not the link status between the edge switch and an upstream switch.
- ii. Beacon probing failover detection – includes link status and also sends Ethernet broadcast frames across all physical network adapters in the NIC team. These beacons help detect upstream network connection failures as well, forcing a failover when STP blocks ports, when ports are configured with the wrong VLAN, or when a switch-to-switch connection has failed. When a beacon is not returned on a pNIC, the vSwitch triggers a failover.
- i. The Failback option controls how ESX handles a failed network adapter when it recovers from failure. The default setting, Yes, means the adapter is returned to active duty immediately upon recovery, replacing the standby adapter that may have taken its place during the failure. Setting it to No means the recovered adapter remains inactive until another adapter fails.
- j. One can also use the "Explicit failover order" setting, in which case traffic moves to the next available uplink in the list of active adapters. If no active adapters are available, traffic moves down the list to the standby adapters.
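The difference between the three load-balancing policies above can be pictured with a toy sketch: the first two produce a static source-to-uplink mapping, while IP hash varies with the destination. The hash function here is a deterministic stand-in, not ESX's actual algorithm:

```python
# Illustrative sketch of how the three NIC-teaming policies on a vSwitch
# choose an uplink. Only the *shape* of each policy is real: the first two
# are static per source, the third varies with the destination.

def _h(s: str) -> int:
    # Toy deterministic hash for the sketch.
    return sum(s.encode())

def by_port(vswitch_port_id: int, uplinks: list[str]) -> str:
    # i. Port-based (default): same vSwitch port -> same pNIC until failover.
    return uplinks[vswitch_port_id % len(uplinks)]

def by_src_mac(mac: str, uplinks: list[str]) -> str:
    # ii. Source-MAC based: same vNIC MAC -> same pNIC, also a static mapping.
    return uplinks[_h(mac) % len(uplinks)]

def by_ip_hash(src_ip: str, dst_ip: str, uplinks: list[str]) -> str:
    # iii. IP hash: the src/dst pair picks the pNIC, so one vNIC can spread
    # across uplinks when talking to different destinations (requires static
    # 802.3ad on the physical switch).
    return uplinks[(_h(src_ip) ^ _h(dst_ip)) % len(uplinks)]

team = ["vmnic0", "vmnic1"]
print(by_port(7, team))  # vmnic1
print(by_src_mac("00:50:56:aa:bb:cc", team))
print({by_ip_hash("10.0.0.5", d, team) for d in ("10.0.1.1", "10.0.1.2")})
```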
17. vNetwork Distributed Virtual Switches:
- a. A dvSwitch spans multiple servers instead of each server having its own set of vSwitches.
- b. First you create a dvSwitch and you add hosts to it during or after creation.
- c. When an additional ESX host is added to a dvSwitch, all of the dvPortgroups are automatically propagated to the new host with the correct configuration. This is the distributed nature of the dvSwitch: as configuration changes are made via the vSphere client, vCenter Server pushes those changes out to all hosts participating in the dvSwitch.
- d. A host cannot be removed from a dvSwitch if it still has VMs connected to a dvPortgroup on that dvSwitch.
- e. Adding dvPortgroup:
- i. Name of dvPortgroup – unique across member hosts of dvSwitch.
- ii. Number of ports: 128 by default; the maximum configurable value is 8192.
- iii. VLAN Type:
- 1. None – dvPortgroup will receive only untagged traffic
- 2. VLAN – dvPortgroup will receive tagged traffic and uplinks must connect to switch ports configured as VLAN trunks
- 3. VLAN Trunking – dvPortgroup will pass VLAN tags up to guest OS on any connected VMs
- 4. Private VLAN
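The dvPortgroup options above can be modeled as a small validating structure; the class and field names are made up for illustration and are not the vSphere API:

```python
# Illustrative model of the dvPortgroup options listed above: unique name,
# default of 128 ports (max 8192), and one of four VLAN types.
from dataclasses import dataclass

VLAN_TYPES = ("none", "vlan", "vlan-trunking", "private-vlan")

@dataclass
class DvPortgroup:
    name: str                 # must be unique across member hosts
    num_ports: int = 128      # default 128; max configurable value is 8192
    vlan_type: str = "none"   # "none" = untagged traffic only

    def __post_init__(self):
        if not 0 < self.num_ports <= 8192:
            raise ValueError("number of ports must be between 1 and 8192")
        if self.vlan_type not in VLAN_TYPES:
            raise ValueError(f"vlan_type must be one of {VLAN_TYPES}")

pg = DvPortgroup("dvPG-Production", vlan_type="vlan")
print(pg.num_ports)  # 128
```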
- a. Can we have vNIC created without portgroup association? NO - API requires you to specify an existing portgroup while creating vNIC.
- b. You can assign any name to a vSwitch, e.g. 'ViMaster Switch'.
- c. Standard vSwitch Port group name is unique within a host
- d. Hybrid deployments (vDS + vSS) are a supported deployment scenario. If vCenter fails you will not be able to manage your vDS, so it is recommended to use a vSS for at least the VMKernel and Service Console connections. One can still have the VM port groups on the vDS.
- e. We can assign a virtual hard disk carved out of a VMFS datastore on an iSCSI LUN to a VM that has no network cards (vNICs).
- f. For the software iSCSI initiator, we can tell which vmknic interface is being used by identifying which iSCSI target is in the same subnet as the vmknic interface.
- g. For a hardware iSCSI initiator, no VMKernel port needs to be configured; it shows up as a normal FC HBA (under Storage Adapters) in the vSphere client. The IP address is configured on the HBA directly.