What is an Ethernet Fabric?
Fabrics offer one alternative for meeting the demands of the modern data center. An Ethernet fabric is a type of network topology that collapses traditional three-tier data center switching architectures into one or two tiers, ensures network traffic is never more than two hops from its destination, and operationally behaves like one big switch. The goal of an Ethernet fabric is to increase the scalability, performance and resilience of highly virtualized and cloud-ready data centers.
Vendors typically sell commercial Ethernet fabrics as a product suite comprising specialized hardware and software. Fabrics can be implemented using various architectures, the most common among them leaf-spine and mesh. Several terms -- including data center fabric, switch fabric, network fabric and Ethernet fabric, or some amalgamation of all four -- are often used interchangeably to refer to this topology.
Ethernet fabric refers to flatter, highly scalable topologies that enable low latency via one- or two-hop connectivity in data center switching.
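To make the two-hop claim concrete, here is a minimal sketch in plain Python (hypothetical switch names, no vendor APIs) of a leaf-spine fabric in which every leaf uplinks to every spine, so any leaf-to-leaf path crosses exactly two links:

```python
from collections import deque
from itertools import combinations

SPINES = ["spine1", "spine2"]
LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]

# Every leaf uplinks to every spine; no leaf-to-leaf or spine-to-spine links.
adj = {sw: [] for sw in SPINES + LEAVES}
for leaf in LEAVES:
    for spine in SPINES:
        adj[leaf].append(spine)
        adj[spine].append(leaf)

def hops(src, dst):
    """Shortest path length between two switches (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))

# Every leaf pair is exactly two hops apart: leaf -> any spine -> leaf.
print(max(hops(a, b) for a, b in combinations(LEAVES, 2)))  # -> 2
```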
Why would one need an Ethernet Fabric?
Applications -- and by extension, the networks that delivered them -- used to be a lot simpler. Application traffic primarily flowed north and south between three tiers of switching that connected servers to clients.
The widespread adoption of server virtualization, however, changed this. Once-static workloads became virtual machines capable of exploiting a data center's entire capacity by migrating among multiple physical servers. Applications also grew more complex, with various functions broken out into separate systems and components that must communicate across servers.
This influx of east-west traffic -- that is, traffic between and among servers -- has strained traditional three-tier switching architectures and limited scalability. Data now has to pass through more hops to reach its destination, adding latency and consequently degrading performance.
Meanwhile, the performance and resilience of data center networks were further hamstrung by the pervasive use of Spanning Tree Protocol (STP), an algorithm that prevents bridge loops by shutting down redundant paths in favor of a single active link that transmits data. While STP is sufficient for conventional traffic flows and application architectures, it is an imperfect and fragile approach that uses bandwidth inefficiently.
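As a rough illustration of that inefficiency, the sketch below builds a hypothetical four-switch LAN and uses networkx's generic spanning-tree routine as a stand-in for the actual STP root-bridge election: loop prevention leaves redundant links idle until a failure.

```python
import networkx as nx

# Hypothetical four-switch LAN with one redundant loop (sw1-sw2-sw3).
g = nx.Graph([("sw1", "sw2"), ("sw2", "sw3"), ("sw3", "sw1"), ("sw3", "sw4")])

# STP itself elects a root bridge and blocks ports; a generic spanning tree
# shows the same net effect: loops removed, a single active path left.
tree = nx.minimum_spanning_tree(g)
norm = lambda edges: {tuple(sorted(e)) for e in edges}
blocked = norm(g.edges()) - norm(tree.edges())

print("active links: ", sorted(norm(tree.edges())))
print("blocked links:", sorted(blocked))  # idle bandwidth until a failure
```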
Ethernet fabrics -- along with complementary technologies such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB) -- offer an alternative to the complexity and inefficiencies of three-tier networks and Spanning Tree. An interconnected fabric combines the visibility of Layer 2 with the operational benefits of Layer 3.
Limitations of Ethernet Fabrics
- Vendor lock-in: While a fabric offers many benefits, one major challenge can be a deal-breaker for some network engineers: It almost always requires a single-vendor network. With few exceptions, vendors have created proprietary enhancements to standard protocols such as TRILL and SPB, rendering most vendors' fabrics incompatible with their competitors' infrastructure.
- Scaling limits: Fabrics are not infinite in capacity. Once a fabric reaches the multi-thousand-port range, the management headaches grow to the point where it makes sense to consider segmenting the network into multiple fabrics.
Brocade Offerings
http://www.brocade.com/products/all/switches/index.page?network=ETHERNET_FABRIC
Brocade's TRILL-based Virtual Cluster Switching (VCS) data center fabric is designed for building large Layer 2 network domains. It debuted in 2010 with Brocade's series of fixed-form-factor VDX 6700 switches, which are top-of-rack devices that cluster together in self-forming fabrics.
With the Brocade VDX 8770 series, server-facing port capacity has increased to 8,000 ports. The switches initially shipped with Gigabit Ethernet, 10 GbE and 40 GbE line cards, but the 4 Tbps per-slot backplane capacity is aimed at eventually supporting high-density 100 GbE ports. The VDX 8770 features 3.5-microsecond port-to-port latency and a highly scalable MAC address table, capable of supporting up to 384,000 virtual machines on a single switch.
The VDX 8770 also has a new feature that lets customers establish multiple load-balanced Layer 3 gateways within a Layer 2 VCS fabric, increasing the bandwidth available between Layer 2 domains.
To support high-performance networks, Brocade has upgraded its ADX application delivery controllers (ADCs, i.e., load balancers) with a multi-tenancy feature that lets enterprises and cloud service providers slice up an appliance's resources and assign virtual instances of an ADC to specific applications and services.
The ADX multi-tenancy doesn't slice resources by individual CPU core. It assigns tenants to processor subsystems, giving enterprises the ability to "mix and match capacity without having to determine where those processor hosts are in the system."
Multi-tenancy in Ethernet Fabrics
Brocade engineered native multi-tenancy through support of an extension to TRILL, the Internet Engineering Task Force (IETF) standard that enables multipath forwarding in a Layer 2 network and eliminates the need for spanning tree. The extension, known as Fine-Grained Labeling (FGL), replaces the 12-bit virtual local area network (VLAN) label in a TRILL-based Ethernet frame with a 24-bit FGL label. FGL expands the number of network segments a network engineer can create in a Layer 2 network from roughly 4,000 to 16 million.
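The arithmetic behind those figures is simply the width of the label field; a quick back-of-the-envelope check:

```python
# Segment counts implied by the label widths quoted above:
# a 12-bit VLAN ID versus TRILL Fine-Grained Labeling's 24-bit label.
vlan_segments = 2 ** 12   # 4,096  ("roughly 4,000")
fgl_segments = 2 ** 24    # 16,777,216  ("16 million")

print(f"VLAN (12-bit): {vlan_segments:,} segments")
print(f"FGL  (24-bit): {fgl_segments:,} segments")
print(f"factor: {fgl_segments // vlan_segments:,}x")   # 4,096x more
```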
"It's better than VLANs. And with VXLAN [Virtual Extensible LAN], NVGRE [Network Virtualization using Generic Routing Encapsulation] and STT, you're creating overlay mechanisms that step you out of Layer 2. [FGL] keeps all of that partitioning still running at Layer 2. In theory, you have better low-level control and performance."
The number of network segments in a Layer 2 network is a basic building block of a multi-tenant data center or cloud. Some vendors have tackled segmentation through overlay technologies like VMware NSX and tunneling protocols such as VXLAN and NVGRE. Brocade has embraced overlays with its support of VMware's NSX VXLAN termination endpoints on its VDX switches. However, not every IT organization is ready to embrace overlay products, which can blur the organizational lines between the network and server teams. "The networking guys want to manage and control the multi-tenancy solution." FGL uses the same constructs as VLANs, which promises a gentle learning curve for network pros. Also, since the technology is network-based, FGL is hypervisor-agnostic, unlike most overlay products.
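For contrast with FGL's in-fabric labeling, the sketch below packs the 8-byte VXLAN header defined in RFC 7348 (the encapsulation NSX terminates), showing how an overlay wraps the tenant's Layer 2 frame in UDP/IP and carries its own 24-bit VNI; the inner frame here is just a placeholder:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header per RFC 7348: flags byte (I bit set, 0x08),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    assert 0 <= vni < 2 ** 24
    return struct.pack(">II", 0x08 << 24, vni << 8)

inner_frame = b"..."                    # the tenant's original Ethernet frame
vxlan_packet = vxlan_header(5000) + inner_frame
print(vxlan_packet[:8].hex())           # -> 0800000000138800
```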
Brocade enhanced the AutoQoS feature of VCS to apply automatic Quality of Service policies to storage traffic that traverses the fabric.
Brocade launched the VDX 6740 series of 10/40 Gigabit Ethernet (GbE) top-of-rack switches for the VCS fabric, selling both fiber optic and copper versions at the outset. The new 1RU switch has 1.28 Tbps of bandwidth, with 64 10 GbE ports and four 40 GbE ports. Brocade also announced a 100 GbE line card for the VDX 8770 chassis, which will start shipping in the first half of 2014.
Although Brocade's VCS multi-tenancy is based on the IETF's FGL specification, Brocade's overall implementation of TRILL remains proprietary, much like Cisco FabricPath, the only other major TRILL-based Ethernet fabric on the market. This prevents interoperability: while the data forwarding plane of VCS complies with the IETF's TRILL specification, the control plane is proprietary.
Software defined networking (SDN) and Ethernet Fabrics: Do the two technologies intersect?
When we think about SDN, we think about the ability to influence network behavior from outside the network using protocols such as OpenFlow. OpenFlow is largely supplementary to existing forwarding and routing techniques in a network, and it's supplementary to VCS. What many are going to do with OpenFlow is say, "I've got some unique network behavior that I would like to instantiate using OpenFlow, because my networking vendor doesn't provide it natively within the switch." But they're not looking for OpenFlow to displace all of their routing and forwarding techniques. Outsourcing the entire control plane would be a pretty big bite to take. Brocade's intent is to implement OpenFlow, both within the VCS fabric and other data center platforms, in that same supplementary fashion.
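A schematic, library-free sketch of that supplementary model follows; the match fields and port names are illustrative, not Brocade's API. One narrow OpenFlow-style rule adds behavior the switch lacks natively, while a lowest-priority catch-all hands everything else back to the switch's own pipeline via OpenFlow's reserved NORMAL port:

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    """Simplified OpenFlow-style entry: match fields plus an action list."""
    priority: int
    match: dict
    actions: list = field(default_factory=list)

flow_table = [
    # One narrow rule for behavior the switch doesn't offer natively
    # (mirror a tenant's web traffic to a monitor port; values illustrative).
    FlowRule(priority=100,
             match={"eth_type": 0x0800, "ipv4_dst": "10.1.1.0/24", "tcp_dst": 80},
             actions=["output:monitor-port", "output:NORMAL"]),
    # Lowest-priority catch-all: the reserved NORMAL port returns the packet
    # to the switch's native L2/L3 forwarding -- supplementing, not replacing.
    FlowRule(priority=0, match={}, actions=["output:NORMAL"]),
]

# As in an OpenFlow table, the highest-priority matching rule wins.
```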
Another variant of SDN is network virtualization, the ability to apply logical networks or overlay networks on top of an existing physical network infrastructure. One of the benefits of that is to give the customer more freedom in terms of scalability and going beyond VLAN ID and MAC address table size limitations. In introducing logical networks, you're actually increasing the overall administrative overhead that the customer has to deal with. The beauty of a VCS fabric is that through automation and simplicity, you can reduce your administrative burden and your operational overhead in the physical network infrastructure, and invest more time in that logical overlay.
Converged Storage Networking (FCoE): What is the future?
There is some [modest] adoption of convergence and FCoE from the server to the first hop in the network. That is gaining momentum because it's an opportunity for customers to reduce the number of adapters in the server and simplify cabling. VCS is capable of supporting that convergence to the switch, at which point we're capable of breaking out native Fibre Channel traffic on the SAN [storage area network] and non-storage traffic on the Ethernet LAN.
More important from a convergence point of view is the growth of IP storage and our ability to leverage VCS to make NAS [network attached storage] or iSCSI work better. VCS provides a lower-latency and higher-throughput environment than a conventional LAN, so it's a better IP storage transport. Also, VCS has a scaling property that aligns with the way customers think about scaling up their NAS environments. They want to be able to add pods of storage and have that federate into the existing IP storage architecture. That's the way you scale a VCS fabric.
References
- http://searchnetworking.techtarget.com/news/2240173618/FCoE-for-converged-networking-Not-quite-Brocade-veep-says
- http://searchnetworking.techtarget.com/news/2240173619/Network-Innovation-Award-Brocade-VCS-Fabric
- http://searchnetworking.techtarget.com/news/2240206057/Brocade-adds-multi-tenancy-to-VCS-Ethernet-fabric-new-ToR-switches
- http://searchnetworking.techtarget.com/news/2240163205/New-Brocade-VDX-chassis-adds-scale-to-data-center-fabric?src=itke%20disc