Network Performance Considerations
The SD-WAN must compensate for the range of issues impacting Internet performance. These issues differ between the last-mile connections to the SD-WAN nodes and the middle mile between them.
Last Mile Optimization
The last mile from the ISP to customer premises is marked by bandwidth limitations and packet loss from several sources, including physical infrastructure issues and contention for uplink capacity. As such, SD-WANs should implement several last-mile optimizations:
Load Balancing and Path Aggregation
Path aggregation increases last-mile bandwidth by combining multiple Internet links. Traffic should be load-balanced across the links, running them in an active/active configuration to maximize capacity. Should one link experience a blackout or, worse, a brownout, the SD-WAN should switch traffic between links fast enough to preserve the session. At the same time, the SD-WAN should allow the load-balancing configuration to be overridden and applications pinned to specific transports, so it can conform to business policies, such as keeping regulated traffic on a private network.
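As a sketch of the idea, per-flow link selection with policy pinning might look like the following. The link names, load metrics, and the PINNED policy table are purely illustrative, not any vendor's API:

```python
# Business policy: pinned applications override load balancing (hypothetical).
PINNED = {"regulatory-app": "mpls"}  # keep regulated traffic on the private network

def select_link(app, links):
    """Pick a transport for a new flow.

    links: dict of link name -> {'up': bool, 'load': float 0..1}.
    Pinned apps use their assigned transport while it is up; all other
    traffic goes to the least-loaded healthy link (active/active).
    """
    pin = PINNED.get(app)
    if pin and links.get(pin, {}).get("up"):
        return pin
    healthy = {name: l for name, l in links.items() if l["up"]}
    if not healthy:
        raise RuntimeError("all links down")
    return min(healthy, key=lambda name: healthy[name]["load"])

links = {
    "mpls": {"up": True, "load": 0.2},
    "dsl":  {"up": True, "load": 0.6},
    "lte":  {"up": True, "load": 0.1},
}
print(select_link("regulatory-app", links))  # mpls (pinned by policy)
print(select_link("web", links))             # lte (least loaded)
```

If the pinned transport goes down, the pinned application falls back to the load-balanced pool rather than losing connectivity.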
Load balancing between multiple connectivity options is particularly important for availability. Although MPLS uptime is understood to be higher than that of Internet services, MPLS is exposed to the same wiring cuts and disruptions that can interfere with an Internet service. By combining multiple Internet circuits, SD-WANs can meet and even exceed the availability of last-mile MPLS services. The following equation shows how availability of aggregated Internet lines can exceed that of MPLS lines, assuming the last mile circuits have the same availability:
Site Availability = 1 − ([1 − Service A Availability] × [1 − Service B Availability] × … × [1 − Service N Availability])

A = 1 − (1 − Ax)^n

where “n” is the number of circuits in parallel, and “Ax” is the availability of each circuit. For example, if two circuits with 99 percent uptime individually are combined, they yield an aggregate uptime of 1 − (1 − 0.99)^2 = 0.9999, or 99.99 percent.
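The formula is easy to verify in code; a minimal sketch:

```python
def aggregate_availability(circuit_availabilities):
    """Site availability of diversely routed parallel circuits:
    1 minus the product of each circuit's downtime probability."""
    downtime = 1.0
    for a in circuit_availabilities:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# Two 99%-uptime circuits combine to four nines:
print(round(aggregate_availability([0.99, 0.99]), 6))  # 0.9999
```

Adding a third 99 percent circuit pushes the result to six nines, which is why diverse routing (no shared physical plant) matters so much: the math assumes the failures are independent.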
This performance is possible only if the SD-WAN runs multiple active connections that are diversely routed, that is, sharing no physical infrastructure. Ensuring the latter becomes challenging when both last-mile connections use the same technology, as wiring, ducting, and other physical plant infrastructure are shared among ISPs. Mixing last-mile technologies such as xDSL and 4G, as well as ISPs, helps guarantee last-mile diversity.
Bandwidth Throttling
Bandwidth throttling enables the administrator to control each application’s bandwidth usage. Custom rules define the maximum percentage of bandwidth available to a given application, such as limiting YouTube traffic to 10 percent of a site’s Internet link.
Bandwidth Reservation
Bandwidth reservation allows the administrator to reserve bandwidth for business-critical applications like VoIP and video conferencing. Bandwidth reservation guarantees specific applications the bandwidth they need. For example, 10 Mbps out of the 100 Mbps link can be reserved for an important voice meeting.
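The two controls above can be sketched as a simple two-pass allocation. The POLICY rule format and application names are hypothetical, chosen only to mirror the examples in the text:

```python
LINK_MBPS = 100

POLICY = {
    "youtube": {"max_pct": 10},        # throttle: at most 10% of the link
    "voip":    {"reserved_mbps": 10},  # reservation: guaranteed floor
}

def allocate(demands):
    """Split the link among competing apps (demands in Mbps):
    reserved floors are granted first, then leftover capacity is
    handed out up to each app's throttle cap."""
    remaining = LINK_MBPS
    alloc = {}
    # Pass 1: guarantee reserved floors.
    for app, demand in demands.items():
        floor = POLICY.get(app, {}).get("reserved_mbps", 0)
        grant = min(demand, floor, remaining)
        alloc[app] = grant
        remaining -= grant
    # Pass 2: satisfy remaining demand, honoring throttle caps.
    for app, demand in demands.items():
        cap = LINK_MBPS * POLICY.get(app, {}).get("max_pct", 100) / 100
        extra = min(demand - alloc[app], cap - alloc[app], remaining)
        if extra > 0:
            alloc[app] += extra
            remaining -= extra
    return alloc

# A congested 100 Mbps link: VoIP keeps its 8 Mbps, YouTube is capped at 10,
# and the remaining capacity goes to the bulk web traffic.
print(allocate({"voip": 8, "youtube": 25, "web": 90}))
```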
Forward Error Correction (FEC)
With Forward Error Correction, the SD-WAN corrects packet loss by inserting an additional correction packet into the data stream. The receiving SD-WAN node can then regenerate lost packets automatically, avoiding the latency of a retransmission. FEC should be a dynamic algorithm, increasing or decreasing the number of correction packets based on packet loss rates and link capacity. It should also be configurable to specific protocols, ports, and applications. And the correction packet should be sent on a secondary link, minimizing the possibility of losing the correction packet and the data it’s protecting.
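A minimal illustration of the idea, using a single XOR parity packet per group of equal-length packets. Real FEC schemes are more sophisticated and, as noted above, dynamic; this only shows why the receiver can regenerate a lost packet without a retransmission:

```python
def parity_packet(packets):
    """XOR a group of equal-length data packets into one correction packet."""
    p = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            p[i] ^= b
    return bytes(p)

def recover(received, parity):
    """Regenerate the single missing packet from the survivors plus parity:
    XORing everything that did arrive cancels out to the lost packet."""
    return parity_packet(received + [parity])

data = [b"pkt1", b"pkt2", b"pkt3"]
fec = parity_packet(data)      # sent alongside (ideally on a secondary link)
# Suppose pkt2 is lost in transit:
rebuilt = recover([data[0], data[2]], fec)
print(rebuilt)  # b'pkt2'
```

Note the scheme recovers at most one loss per group, which is why the correction rate must scale up as the measured loss rate rises.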
Where connections are too unstable for FEC, SD-WANs should be able to duplicate packets across active connections. Receiving SD-WAN nodes accept the first duplicate packet, ignoring the subsequent one. While duplication significantly increases the likelihood that data will be received by the destination, duplication also consumes bandwidth across the secondary connection. As such, packet duplication should be used as a last resort.
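Receiver-side deduplication is straightforward to sketch, assuming the duplicated packets carry sequence numbers (the packet format here is illustrative):

```python
def dedupe(stream):
    """Accept the first copy of each sequence number; drop later duplicates."""
    seen = set()
    for seq, payload in stream:
        if seq not in seen:
            seen.add(seq)
            yield seq, payload

# Packets duplicated across two links arrive interleaved:
arrivals = [(1, b"a"), (1, b"a"), (2, b"b"), (3, b"c"), (2, b"b")]
print(list(dedupe(arrivals)))  # [(1, b'a'), (2, b'b'), (3, b'c')]
```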
Middle Mile Optimization
The middle mile is the segment between the last miles of the source and destination ISPs. It is inherently longer than the last mile, so latency is a much larger factor. The ISP’s routing policy also impacts latency, as it is based on economics, not application performance. Latency and loss are further exacerbated by Internet peering policies between carriers. As such, SD-WANs should implement FEC in the middle mile.
Several other optimizations are necessary to address issues specific to the middle mile:
SLA-Backed, Alternative Backbone
An SLA-backed backbone overcomes the latency and loss caused by Internet peering. This solution is particularly important in global networks where a limited number of paths prevents ISPs from finding alternative routes to a destination. The backbone should be global, with PoPs connected by multiple tier-1 carriers. SLAs should ensure predictable and consistent latency and loss.
To minimize costs, ensure the backbone is based on Internet transit services. Some providers offer Internet entry and exit but carry traffic across MPLS backbones, making these services more expensive than SLA-backed WANs.
To maximize performance, uptime, and reach, the backbone should be constructed from multiple tier-1 carrier networks. PoPs should form an encrypted, software-defined overlay across all backbones and gather latency, loss, and jitter statistics for each network, selecting the optimum path from the provider networks.
Application-aware routing algorithms define “optimum” depending on an application’s performance characteristics and business importance. Voice and other real-time applications are sent across routes with the least loss, and bulk transfer across paths with maximum throughput.
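Assuming per-path loss and throughput statistics like those the PoPs gather, the selection rule can be sketched as follows (path names and numbers are illustrative):

```python
# Hypothetical per-path measurements gathered by the PoPs.
paths = [
    {"name": "carrier-1", "loss_pct": 0.1, "throughput_mbps": 400},
    {"name": "carrier-2", "loss_pct": 0.8, "throughput_mbps": 900},
]

def best_path(app_class):
    """'Optimum' depends on the application: real-time traffic wants the
    least loss, bulk transfer wants the most throughput."""
    if app_class == "realtime":
        return min(paths, key=lambda p: p["loss_pct"])["name"]
    return max(paths, key=lambda p: p["throughput_mbps"])["name"]

print(best_path("realtime"))  # carrier-1 (lowest loss for voice)
print(best_path("bulk"))      # carrier-2 (highest throughput for transfers)
```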
PoPs should be connected by a full mesh. For each packet, the SD-WAN should calculate multiple routes directly to the targeted PoP or via other PoPs. Depending on networking characteristics, the fastest end-to-end path is often not the most direct. By calculating multiple routes at each PoP in the packet’s journey, the SD-WAN identifies the best path available at any time.
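One way to sketch the multi-route calculation is a shortest-path search over measured PoP-to-PoP latencies. The PoP names and latency figures below are illustrative; note that the relayed route beats the direct one:

```python
import heapq

def fastest_path(graph, src, dst):
    """Dijkstra over measured PoP-to-PoP latencies in a full mesh:
    the lowest-latency route may relay through intermediate PoPs."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, lat in graph[node].items():
            nd = d + lat
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Illustrative latencies (ms): the direct NY-HK link is slower than relaying.
mesh = {
    "NY": {"HK": 260, "LA": 70},
    "LA": {"NY": 70, "HK": 150},
    "HK": {"NY": 260, "LA": 150},
}
print(fastest_path(mesh, "NY", "HK"))  # (['NY', 'LA', 'HK'], 220)
```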
To avoid sending more data than the network can transmit, TCP Congestion Control limits the amount of data transferred within a set window of time. The protocol will gradually increase the window until reaching its maximum. As packets are dropped, TCP resets the process. To send and receive more data and optimize bandwidth, the backbone should implement several TCP improvements, including TCP window sizing.
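The window behavior described above can be modeled with a toy simulation (a simplified, Tahoe-style reset on loss; real TCP stacks are considerably more nuanced):

```python
def aimd(loss_events, start=1, ssthresh=16, max_window=64):
    """Toy model of TCP congestion control: slow start doubles the window
    each round up to ssthresh, then the window grows by 1 per round
    (congestion avoidance); a loss halves ssthresh and restarts the window."""
    window, history = start, []
    for lost in loss_events:          # one entry per round trip
        history.append(window)
        if lost:
            ssthresh = max(window // 2, 1)
            window = start            # reset and slow-start again
        elif window < ssthresh:
            window = min(window * 2, ssthresh)
        else:
            window = min(window + 1, max_window)
    return history

# Six clean rounds, one loss, three clean rounds:
print(aimd([False] * 6 + [True] + [False] * 3))
# [1, 2, 4, 8, 16, 17, 18, 1, 2, 4]
```

The sawtooth shape is exactly why a lossy middle mile caps throughput: every reset wastes round trips ramping the window back up, which window-sizing and related TCP improvements on the backbone aim to mitigate.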
With more enterprise traffic going to the cloud, IaaS and SaaS resources should be connected to the SD-WAN. Cloud integration challenges SD-WAN appliances, as they must sit on both sides of the connection. Organizations must therefore find a way to deploy a virtual or physical appliance either in or near the datacenter hosting the IaaS or SaaS resource. That’s often impossible, and it’s always more complicated than a site-to-site rollout. Look for SD-WAN solutions that offer the following:
Shared Internet Exchange Points (IXPs)
By colocating PoPs in the same IXPs as all leading IaaS providers (Amazon AWS, Microsoft Azure, Google Cloud Platform), SD-WAN providers can interconnect directly with the IaaS provider’s network at the exchange point rather than through one or more third-party networks. Traffic from customer sites and devices is optimized and routed via the shortest and fastest path to the customer’s cloud infrastructure.
By building PoPs within AWS and other IaaS providers, SD-WAN providers can guarantee that traffic between the customer’s IaaS networks route on the IaaS provider’s high-performance backbone.
When accessing SaaS applications, the SD-WAN should optimize and reduce latency, such as by allocating a unique IP address and attaching it to the PoP physically closest to the SaaS application. SaaS traffic will then route across the SD-WAN and be dropped at the PoP closest to the SaaS datacenter.
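The PoP-selection step can be sketched as picking the egress PoP with the lowest measured latency to the SaaS datacenter (PoP names and figures are illustrative):

```python
# Hypothetical latency measurements from each PoP to a SaaS datacenter (ms).
pop_latency = {"frankfurt": 4, "london": 11, "ashburn": 92}

def egress_pop(latencies):
    """Attach the SaaS-bound IP address to the PoP physically closest to
    the SaaS datacenter, so traffic rides the backbone and exits there."""
    return min(latencies, key=latencies.get)

print(egress_pop(pop_latency))  # frankfurt
```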
The expansion of the mobile workforce and the use of personal devices to access business data (BYOD) have further challenged legacy network security, since mobile users can access the Internet directly, bypassing network-enforced security policies. Forcing these users to establish a VPN connection typically creates performance and user experience problems, especially for global travelers, leading to security policy violations.
These problems sometimes come from the added latency introduced by having to establish a VPN back to a firewall before connecting to the Internet. Even when users connect directly to the Internet through a mobile service, performance suffers from the aforementioned middle-mile and last-mile issues. Allowing mobile users to connect to an optimized backbone addresses these performance concerns.
Top SD-WAN Vendors
Network and Security Challenges in a Distributed Environment
Today, global organizations conduct on-demand business across many locations. They rely on remote employees working from business and personal devices, accessing applications both inside local datacenters and in the Cloud. This creates four main networking and security challenges for companies that depend on distributed appliances or traffic backhauling:
Appliance sprawl is costly and complex
Traffic backhauling over the internet can impact user experiences
Traffic backhauling over MPLS is expensive and wasteful
Mobile and cloud access is a “bolt on”
These problems have plagued IT for a long time. Recent technology advances are helping to solve these problems and have given birth to a new category:
Firewall as a Service