What is Netflow?
For two decades, Netflow technology has provided IT teams with deeper insight into traffic volume, network performance, and application flows. As cloud migration and a remote workforce redefine network computing, enriched flow records help visibility keep pace with this rapid evolution.
Netflow is a protocol developed by Cisco for monitoring network IP traffic activity. The collected data is analyzed to establish meaningful metrics and trends related to application flows, usage, and bandwidth allocation.
- Netflow-enabled devices send traffic and related statistics to a collector for pre-processing. An analyzer converts the protocol into useful charts and graphs that can provide visibility into network and device health. This monitoring method does not require probes to be deployed within the infrastructure.
- IT teams commonly use Netflow as a traffic analyzer to develop valuable point of origin, path, and destination metrics. Network flow data empowers these teams to monitor and troubleshoot QoS, access, and security issues more effectively.
- Additional Netflow “flavors” including JFlow, sFlow, and AWS VPC Flow logs have been developed over the years, with each employing a slightly different method of IP-based data collection and analysis.
- The standard for internet protocol flow information export (IPFIX) was published by the IETF in 2008. This standard defines common practices for collecting and transferring IP flow data.
What Does VIAVI Offer for Netflow and Beyond?
VIAVI recognizes the value of Netflow data for monitoring, troubleshooting, and security applications. In complex hybrid IT environments, IP attributes continue to provide meaningful network statistics, even as IoT, cloud migration, and diverse user locations make network visibility more challenging.
The VIAVI Observer platform enhances traditional Netflow capabilities with a multi-layer approach to real-time performance monitoring, high-fidelity forensics, and user-centric root cause analysis.
- Observer GigaFlow combines flow, user, network, machine, and application data within a single enriched flow record. Maintaining this information over time for each individual host/user helps NetOps and SecOps teams to protect network integrity and investigate anomalous activities.
- Observer GigaStor is an industry-leading, high-capacity appliance for comprehensive packet capture, analysis, and storage. In-depth network conversation details are preserved for troubleshooting and optimization activities. Rewind and navigation functions enable detailed reviews when service anomalies or intrusions occur.
- Observer GigaTest, an optional add-on to Observer Apex, provides active testing capabilities that give cloud and remote user environments visibility into network and IT service health.
- Observer Apex is an optimized network performance monitoring (NPM) software platform. Valuable inputs from GigaFlow, GigaTest, and GigaStor are distilled into customized dashboards, application dependency maps, and intuitive reporting features. End-user experience scores are generated for every network conversation, providing critical individual and enterprise-wide visibility into IT service health.
How Does Netflow Work?
The Netflow protocol provides a roadmap for collecting, sorting, and analyzing IP traffic entering a router interface. Each arriving packet is assessed to determine whether it is part of an existing flow or the first packet of a new flow.
- IP packets are examined for core attributes including the source IP address and port number, destination IP address and port number, protocol ID, differentiated service value, and ingress interface.
- The Netflow attributes are used to determine whether a given packet is indicative of a new flow sequence. A detected change results in a new flow record being created in the flow cache.
- Completed sequences are exported to the collector. Since these flow records are relatively compact, many can be bundled into a single export packet, and very little bandwidth is required to report vital traffic statistics.
- The Netflow collector continually processes and stores flow records so that information on usage trends, bandwidth sinks, and potential threats is readily available.
- The Netflow analyzer software application converts data outputs into intuitive graphs, charts, and reports that enhance network monitoring, troubleshooting, and capacity planning activities.
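The collection steps above can be sketched as a minimal flow cache in Python. This is an illustrative sketch, not any vendor's implementation: the packet fields, class name, and expiry trigger are all assumptions chosen to mirror the key-matching, accumulation, and export stages described in the list.

```python
# Illustrative sketch of a Netflow-style flow cache (not a real implementation).
# Packets are dicts with hypothetical field names for the seven key attributes.

def flow_key(pkt):
    """Reduce a packet to the seven Netflow key attributes that define a flow."""
    return (pkt["src_ip"], pkt["src_port"],
            pkt["dst_ip"], pkt["dst_port"],
            pkt["protocol"], pkt["tos"], pkt["ingress_if"])

class FlowCache:
    def __init__(self):
        self.active = {}    # key -> accumulated per-flow statistics
        self.exported = []  # completed flow records, ready for the collector

    def observe(self, pkt):
        """Match a packet against active flows, creating a new record if needed."""
        key = flow_key(pkt)
        flow = self.active.setdefault(key, {"packets": 0, "bytes": 0})
        flow["packets"] += 1
        flow["bytes"] += pkt["length"]

    def expire(self, key):
        """On timeout or connection teardown, move the record to the export queue."""
        record = self.active.pop(key)
        self.exported.append((key, record))
```

Two packets sharing all seven key attributes land in the same record; a packet differing in any one of them (here, the destination port) starts a new flow, matching the change-detection behavior described above.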
Netflow logs give IT teams visibility into network health and performance without capturing all traffic at the packet level. Compact flow records allow months or years of network data to be stored in a modest-sized database. Since external probes are not required, deployment is extremely cost-effective. Netflow contributes to reduced downtime, optimized network speed, and improved scalability by streamlining root cause analysis for QoS and security issues.
The relatively minor storage and processing requirements of a Netflow traffic analyzer stem from the protocol's intended use as a metadata monitoring system. High-level network performance trends are captured, but specific details such as application request codes and database errors are not. This lack of in-depth information can leave forensic gaps that make troubleshooting and resolving issues beyond basic network congestion challenging.
Network flow data has become an important pillar of visibility, providing important real-time usage and performance metrics that support monitoring, alerts, issue isolation, and root cause analysis. As networks and users become more decentralized, infrastructure data, machine data, and packet capture are among the additional data sources IT teams have sought out to minimize blind spots. An emphasis on end-user experience is the key to bringing these disparate sources together and maximizing their utility.
- Traditional Netflow data collection and storage involves the aggregating, pruning, or de-duplicating of information. This can compromise data fidelity and make it more difficult to recreate forensic events or troubleshoot complex problems.
- GigaFlow reimagines flow data by intelligently stitching together traditional flow records (such as JFlow, NetFlow, and others), SNMP, user ID, device, and cloud information sources like VPC flow logs. The result is enriched flow records that chronicle all communication traversing the network.
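One way to picture this stitching, purely as a sketch (the field names and lookup tables here are hypothetical, not VIAVI's actual schema), is a join of a raw flow record with user-identity and SNMP device context keyed on the source IP address:

```python
# Hypothetical sketch of flow-record enrichment (not VIAVI's schema or code).
def enrich(flow, user_sessions, snmp_inventory):
    """Stitch a raw flow record together with user identity (e.g. from
    session syslog) and device context (e.g. from SNMP polling)."""
    enriched = dict(flow)  # start from the unaltered flow record
    src = flow["src_ip"]
    enriched["user"] = user_sessions.get(src, "unknown")
    enriched["device"] = snmp_inventory.get(src, {})
    return enriched
```

The design point is that the raw flow record is copied, never pruned or aggregated, and context is layered on top, so the original data fidelity is preserved for later forensics.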
What is the Difference Between Netflow and VIAVI GigaFlow?
GigaFlow builds upon the success of Netflow with the concept of enriched and enhanced flow with at least five important distinctions:
- Interaction Analysis: Unlike a traditional Netflow analyzer, GigaFlow compiles Layer 2 to Layer 3 insights within a single network flow record. This enables the interactions between user, IP, MAC, and application to be analyzed and graphically presented.
- Third-Party Awareness: Extending the breadth of flow records to third-party application and device data enhances visibility in hybrid IT environments.
- Forensic Value: GigaFlow records are created, analyzed, and stored for extended periods of time which makes it easier to navigate to past anomalies or security breaches.
- User-Centric Data: Using GigaFlow, a username is often enough to uncover important application usage and interface information. Knowing precisely who and what is connected throughout the network at any given time sets GigaFlow apart from a conventional Netflow collector for both QoS and security.
- Network Capacity Planning: Utilization, volume, and traffic type are among the metrics identifying present or future chokepoints. GigaFlow includes interactive capacity planning features to assess WAN utilization and prevent capacity issues from degrading the end-user experience.
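The arithmetic underlying this kind of utilization assessment can be sketched in a few lines; the function name and the 100 Mb/s example link are assumptions for illustration, not product behavior:

```python
# Illustrative sketch: average link utilization from flow-reported byte volume.
def link_utilization(byte_count, interval_seconds, link_speed_bps):
    """Return average utilization over an interval, as a percentage."""
    bits_transferred = byte_count * 8
    capacity_bits = interval_seconds * link_speed_bps
    return 100.0 * bits_transferred / capacity_bits

# Example: 375 MB observed in 60 s on a 100 Mb/s WAN link -> 50% utilized
print(link_utilization(375_000_000, 60, 100_000_000))
```

Tracking this percentage per interface over time is what turns raw flow volumes into the chokepoint and trending metrics capacity planning depends on.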
As part of the overall Observer 3D visibility narrative, GigaFlow captures the trail that network and infrastructure activity leaves behind, revealing who is connected and how they are communicating. GigaFlow raises the bar for network monitoring by encompassing all device, application, and network environments, allowing IT teams to see more clearly than ever before. This important detail strengthens the end-user experience (EUE) scoring derived from packet and active test analysis.
- Easy access to user-centered intelligence: The biggest challenge in troubleshooting performance or security issues can be knowing where to start. The user-centric approach employed by GigaFlow removes this obstacle by drawing on in-depth client device, access behavior, and application flow data. Performance or security issues are quickly isolated within the application, server, network, or client domain.
- Automated integration of user, machine, application, and network perspectives: GigaFlow collects unaltered flow data and stitches together multiple data sources (flow, SNMP, user identity, and session syslog) into an enriched flow record. In-depth details on connectivity, traffic control, and usage patterns down to the individual user and session level are readily available. The manual configuration and deciphering of Netflow logs is no longer required.
- The most complete, granular data set available for accurate investigations: Many elements of network infrastructure are capable of generating copious flow data while others are limited to counts and amounts defined by early versions of NetFlow. GigaFlow seamlessly completes the arduous task of organizing flow log data into enriched flow records. The network flow cadence is intelligently converted into valuable response time metrics.
A History of Netflow
Netflow was originally developed by Cisco in 1996 as a new packet switching technology for its routers. While Cisco Express Forwarding was eventually adopted for packet switching, Netflow became a widely deployed IP-based flow monitoring protocol. This new method provided superior traffic insight compared to the Simple Network Management Protocol (SNMP), which was already in use to track and organize information on managed network devices.
- While Netflow version 1 is now obsolete and versions 2-4 were never released, version 5 is still in use today. This early release is limited to IPv4 traffic capture.
- Version 9 of the protocol is compatible with IPv6 and employs a template-based approach that adapts to different data types and attributes. Flexible Netflow is an extension of version 9 that allows users to monitor a wider range of packet information.
- In 2003, version 9 was selected as the basis for the IPFIX standard, although IPFIX provides for variable-length fields and Netflow did not. The IPFIX vs. Netflow discussion has continued to evolve as IPFIX has become a more open standard.
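Version 5's fixed, template-free layout is what makes it so simple to consume: every export packet starts with a 24-byte header in network byte order. A short sketch using Python's standard struct module illustrates parsing that header (field layout per Cisco's published v5 format; the function name is ours):

```python
import struct

# NetFlow v5 export packet header: 24 bytes, network byte order.
# Fields per Cisco's published v5 format: version, record count, sysUptime,
# UNIX secs/nsecs, flow sequence, engine type/id, sampling interval.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(data):
    """Parse the fixed 24-byte NetFlow v5 header from an export packet."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id,
     sampling) = V5_HEADER.unpack(data[:V5_HEADER.size])
    if version != 5:
        raise ValueError(f"not a NetFlow v5 packet (version={version})")
    return {"version": version, "count": count, "sys_uptime_ms": sys_uptime,
            "unix_secs": unix_secs, "flow_sequence": flow_sequence}
```

The `count` field tells the collector how many fixed-size flow records follow the header, which is why v5 needs no templates, and also why it could never carry IPv6 addresses or new field types the way version 9 and IPFIX can.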
Continue your Enriched-Flow education with VIAVI!