Successful protocol analysis in modern network structures
Hunting the Invisible
Invisible Data in Virtual Infrastructures
In the case of virtualization, both TAPs and SPANs fail. Server virtualization entails seamless migration of virtual machines from one physical host to another. The fault tolerance integrated into VMware's vSphere vMotion technology minimizes downtime, but this agility also increases complexity. After all, to monitor virtualized system environments, you need a seamless view of all system changes. The monitoring and analysis solutions used for this work must be able to access virtualized resources and data streams.
The easiest way of connecting virtual machines (VMs) to network resources is to use a virtual Ethernet bridge (VEB; also known as a vSwitch). A vSwitch is a software application that enables communication between virtual machines. By default, any VM can use the virtual switch to communicate directly with any other VM on the same physical machine. This means that the data streams between VMs on the same physical computer are no longer transmitted across the connected physical network, which in turn means that these data streams are not accessible to network monitoring.
To tap into the data streams between virtual machines, the inter-VM traffic flows need to be forwarded to the monitoring and analysis application. For this purpose, virtualization vendors let a guest system access a virtual network adapter in promiscuous mode. All data packets that appear at the switch are then forwarded to that guest, which also includes traffic belonging to other users. This, in turn, can mean that the sniffed traffic streams reach the wrong addressees.
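The mirroring behavior described above, and the privacy concern that comes with it, can be illustrated with a toy model. The following sketch is purely illustrative (the class and port names are invented, not a real hypervisor API): a minimal vSwitch forwards frames between VM ports and copies every frame to a designated monitor port, just as a promiscuous-mode analysis guest would receive them.

```python
# Illustrative toy model of a vSwitch with a mirror/monitor port.
# Not a real hypervisor API; all names here are invented for the sketch.

class ToyVSwitch:
    def __init__(self):
        self.ports = {}        # port name -> list of (src, frame) delivered
        self.monitor = None    # name of the mirror/monitor port, if any

    def add_port(self, name, monitor=False):
        self.ports[name] = []
        if monitor:
            self.monitor = name

    def send(self, src, dst, frame):
        """Deliver a frame from src to dst and copy it to the monitor port."""
        self.ports[dst].append((src, frame))
        if self.monitor and self.monitor not in (src, dst):
            # The monitor port sees traffic belonging to all other ports --
            # exactly the "wrong addressees" concern raised in the text.
            self.ports[self.monitor].append((src, frame))

sw = ToyVSwitch()
sw.add_port("vm1")
sw.add_port("vm2")
sw.add_port("sniffer", monitor=True)
sw.send("vm1", "vm2", b"inter-VM payload")
# The sniffer port now holds a copy of the vm1 -> vm2 frame, even though
# that traffic never left the physical host.
```

The point of the sketch is that the analysis port receives copies of frames exchanged between arbitrary VM pairs on the same host, which is what makes selective filtering (as described below) necessary in practice.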
Inter-VM traffic flows can also be routed out via a native VMware vSphere 5 virtual machine. Because this approach requires neither additional agents nor changes to the hypervisor, system administrators can see the data traffic between virtualized applications at the packet level, in a way not normally possible in a virtualized environment. The traffic streams between virtual machines on the same ESXi host can be selectively filtered and then forwarded to designated users (analysis and monitoring tools) on the physical network.
Analysis and Monitoring in Distributed Environments
In cases where distributed server resources are virtualized, checking the transmitted data is typically impossible because of the enormous complexity involved. With a GigaVUE-VM fabric node integrated directly into the VMware vCenter infrastructure, however, the data required for analysis and monitoring can be tapped even while the agility functions (VMware High Availability and the Distributed Resource Scheduler) are in use.
Cisco also offers the option of forwarding virtual traffic flows to an external switch. In this case, the packets in question are marked with a special VN tag. The most important components of the tag are the interface ID of the virtual data source and the interface ID of the target interface (VIF) for unique identification of multiple virtual interfaces on a single physical port.
The challenge is to transmit the additionally tagged packets without any hardware or software changes and without additional performance overhead. Adaptive packet filtering is required here to filter and forward the incoming traffic streams on the basis of the VN source tag, the destination VIF IDs, the payload encapsulated in the packet, or a combination of these criteria. Because the analysis and monitoring tools do not understand the additional VN tag headers, the packet filter needs to strip the VN tag header before forwarding the packets to the analysis system in question.
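A filter of this kind can be sketched in a few lines. The sketch below assumes the common 6-byte VN tag format (a 2-byte EtherType of 0x8926 followed by 4 bytes of tag fields, inserted after the MAC addresses); the exact bit positions of the VIF fields are simplified for illustration and do not reproduce every flag bit of the real tag.

```python
import struct

VNTAG_ETHERTYPE = 0x8926  # EtherType used for VN tags

def vn_tag_fields(frame: bytes):
    """Extract destination and source VIF IDs from the 4 tag bytes.
    Simplified field layout, for illustration only."""
    tag = struct.unpack_from("!I", frame, 14)[0]
    dvif = (tag >> 16) & 0x3FFF        # 14-bit destination VIF
    svif = tag & 0x0FFF                # 12-bit source VIF
    return dvif, svif

def strip_vn_tag(frame: bytes) -> bytes:
    """Remove the 6-byte VN tag (if present) so legacy analysis tools
    see a plain Ethernet frame."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != VNTAG_ETHERTYPE:
        return frame                   # untagged: forward unchanged
    # Keep dst MAC (6) + src MAC (6); skip tag EtherType (2) + fields (4).
    return frame[:12] + frame[18:]

# Build a tagged sample frame: zeroed MACs, VN tag with dvif=5 and svif=9,
# then the inner EtherType (IPv4, 0x0800) and a dummy payload.
macs = bytes(12)
tag = struct.pack("!HI", VNTAG_ETHERTYPE, (5 << 16) | 9)
tagged = macs + tag + struct.pack("!H", 0x0800) + b"payload"
```

In a real deployment the filtering decision (based on `svif`/`dvif`) would happen in hardware at line rate; the sketch only shows the header arithmetic involved.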
Network Analysis in the Cloud
The limits of today's physical networks cause problems in realizing distributed, virtual computer landscapes (clouds). Generating a virtual machine takes just a couple of minutes, whereas configuring the required network and security services can take several days. Highly available, virtualized server environments are typically implemented as flat Layer 2 networks.
Based on Virtual Extensible LAN (VXLAN) technology, overlay networks are established on top of existing Layer 3 infrastructures. VXLAN makes it possible to assign addresses within a larger network array and lets virtual machines keep their IP addresses when they change location. From a technical point of view, a VXLAN creates logical Layer 2 networks, which are then encapsulated in standard Layer 3 packets.
This approach helps extend logical Layer 2 networks beyond their physical borders (Figure 2). To distinguish between the individual networks, a segment ID is added to each packet. In cloud environments, tunnels of this kind can also be originated and terminated within the hypervisors that host the virtual machines. It makes no difference whether the physical host systems reside in a single data center or are distributed across multiple locations all over the world. The logical overlay network is thus independent of the underlying physical network infrastructure.
With the help of this method, a large number of isolated Layer 2 VXLAN networks can be mapped onto a single Layer 3 infrastructure. Additionally, VXLANs make it possible to place virtual machines on the same virtual Layer 2 network even though they reside on different Layer 3 networks. To this end, VXLANs add a 24-bit segment ID. This means that around 16 million isolated Layer 2 VXLAN networks can be implemented on one legacy Layer 3 infrastructure, and the virtual machines installed on the same logical network can communicate with one another directly across Layer 3 structures.
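The role of the 24-bit segment ID can be made concrete with the VXLAN header format from RFC 7348: an 8-byte header (a flags byte with the I bit 0x08 set, followed by reserved bits and the 24-bit VXLAN Network Identifier, VNI) is prepended to the inner Ethernet frame before UDP transport. The following sketch shows only this header arithmetic, not the full UDP/IP encapsulation.

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header carrying the 24-bit segment ID."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags (8 bits) + 24 reserved bits.
    # Word 2: VNI (24 bits) + 8 reserved bits.
    header = struct.pack("!II", VXLAN_FLAG_VNI << 24, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Return (vni, inner_frame) from a VXLAN-encapsulated payload."""
    flags_word, vni_word = struct.unpack_from("!II", packet, 0)
    if not (flags_word >> 24) & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return vni_word >> 8, packet[8:]
```

Because 2**24 is 16,777,216, the VNI space is what allows the "around 16 million" isolated segments mentioned above, compared with the 4,096 IDs of a classic 802.1Q VLAN tag.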
Because the entire Layer 2 data traffic is hidden in tunnels with this solution, monitoring and data analysis become more difficult. A visibility structure integrated into the network can help make dynamic data connections visible in distributed virtual machine structures and VXLAN overlay networks. The data streams to be monitored are output to the analysis and monitoring components in a targeted way.