Network Functions Virtualization and Software Defined Networking
To remain competitive, today's network operators must respond to evolving markets and traffic types in a timeframe of hours or days rather than the months or years typical of traditional carrier grade networks.
Software defined networking (SDN) and network functions virtualization (NFV) are two approaches that decouple network functions from hardware through abstraction, offering unprecedented flexibility and control over customer offerings. SDN and NFV reduce operating and capital expenses (OPEX and CAPEX) through application and hardware consolidation, space and power reduction, and improved operational and support efficiencies.
- Lower hardware costs: Take advantage of the economies of scale of the IT industry by transitioning to high-volume, industry-standard servers from purpose-built equipment that employs expensive specialty hardware components such as custom ASICs.
- Consolidate network equipment: Combine multiple network functions, which today require separate boxes, onto a single server (see Figure 1), thereby reducing system count, floor space, and power cable routing requirements.
- Implement multi-tenancy: Support multiple users on the same hardware platform, cutting down on the amount of equipment network operators need to purchase.
Figure 1: From purpose-built devices to virtualized network functions running on industry-standard servers
- Shorten development and test cycles: Use virtualization to create production, test, and development sandboxes on the same infrastructure, saving time and effort.
- Improve operational efficiency: Simplify operations with standard servers supported by a homogeneous set of tools, versus application-specific hardware with more complex, unique support requirements.
- Reduce energy consumption: Implement power management features available on standard servers, as well as dynamic workload rebalancing, to lower power consumption during off-peak periods.
Service Revenue Opportunities
- Boost innovation: Bring new capabilities to services development while decreasing risk for network operators by enlisting an ecosystem of independent software vendors (ISVs), open source developers, and academia on the leading edge of virtual appliances.
- Deploy services faster: Save weeks or months when adding new services to network nodes by copying the associated software into a virtual machine (VM) instead of procuring and installing a new network appliance.
- Target service by geography: Increase flexibility for service rollouts to a particular geography or customer by downloading the necessary software only to applicable servers.
Communications service providers have stringent timing constraints for their mission-critical applications and services such as voice, video, and charging. In many cases, open source software components must be enhanced in order to satisfy the associated real-time requirements. Consequently, Intel® and Wind River® have been working to improve the performance of network functions running in virtualized SDN and NFV environments.
Wind River Open Virtualization, based on Wind River Linux, provides performance enhancements, management extensions, and application services through open components. Adopting the Yocto Project as its core foundation, Wind River Linux is a carrier grade, turnkey operating system that delivers all of the technologies essential to building a powerful, flexible, responsive, stable, and secure platform.
Figure 2 shows Open Virtualization running with the guest and Wind River Linux host installations. Since performance is a critical requirement, Open Virtualization delivers the following:
- Real-time performance in the kernel
- Near-native application performance
- Ultra-low latency virtualization
Wind River Open Virtualization integrates a range of technologies and techniques to deliver adaptive performance, interrupt delivery streamlining and management, system partitioning and tuning, and security management.
- Near-native performance: Open Virtualization achieves near-native performance in SDN and NFV environments by minimizing interrupt latency and virtualization overhead. Traditionally, a major source of performance loss is VM enters and exits, which typically occur when the virtual machine monitor (VMM) must service an interrupt or handle a special event. Wind River has reduced the typical interrupt latency from between 300 and 700 μs to under 20 μs, achieving near-native (i.e., similar to non-virtualized) performance in a virtualized environment.
- Guest isolation: Open Virtualization provides a high-priority guest with isolation so it can run uninterrupted and have preferential access to the hardware platform (CPU, memory, I/O devices, etc.).
- Virtual interrupt delivery: Open Virtualization enables the VMM to inject a virtual interrupt into a guest in place of an external interrupt, greatly reducing the VM enter/exit and VM-to-VM communication overhead.
- Core pinning: Core pinning guarantees that particular transactions are always sent to the same guest for processing. This eliminates the need for sharing connection and forwarding information among guests, because each guest only needs to know about its own connections.
- NUMA awareness: Open Virtualization uses standard Linux mechanisms to control and present the non-uniform memory access (NUMA) topology visible to guests. Among various usages, this information can help an orchestrator maximize performance by ensuring processes (e.g., QEMU) impacting a VM are not scheduled across CPUs, and the VM's memory space fits within a single NUMA node and does not cross expensive memory boundaries.
- Hot plugging CPUs: Open Virtualization implements dynamic resource pools that control how VMs are pinned to processor cores, reallocating CPUs quickly and deterministically. For instance, if a VM assigned four virtual CPUs running on four physical cores becomes underutilized, Open Virtualization can consolidate all four vCPU threads onto two of the physical cores, freeing the other two cores for other VMs.
- Live migration: Open Virtualization can move guests between nodes in a shelf with as little as 500 ms of network downtime. This functionality can be coupled with an equipment manufacturer's other high-availability mechanisms that build on live migration. In addition, the capability includes various management features, such as blacklisting and reporting.
- Power management: Open Virtualization monitors resource utilization to determine when to put a node in a sleep state in order to save energy during low-use times. There are specific power governors that control power while ensuring determinism and latency specifications are met.
The performance improvement delivered by Wind River Open Virtualization is demonstrated by a series of benchmark tests performed by Wind River. The message signaled interrupt (MSI) latency of an out-of-the-box version of KVM and Linux was measured to be as high as 600 μs, with an average of around 25 μs. When the same test was run on a system with Open Virtualization, the maximum interrupt latency was less than 14 μs and the average was about 8 μs. This represents a more than 40-fold improvement in worst-case latency over the non-optimized configuration.