Measures reported by NetFlowTest
Cisco IOS NetFlow is a flexible and extensible method to record network performance data. It efficiently provides a key set of services for IP applications, including network traffic accounting, usage-based network billing, network planning, security, Denial of Service monitoring capabilities, and network monitoring. NetFlow provides valuable information about network users and applications, peak usage times, and traffic routing.
By polling the NetFlow MIB of a NetFlow-enabled Cisco router at configured intervals, this test collects a wide variety of per-flow statistics on the traffic passing through that router. With the help of these metrics, you can quickly identify the flows on which large amounts of data were transacted, who the talkers were, and the type of communication they engaged in, and instantly drill down to the interfaces impacted by this communication.
When users complain of a network slowdown, knowing which two hosts are engaged in bandwidth-intensive communication brings you closer to determining what activity those hosts were performing, and whether it can be terminated to conserve bandwidth.
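The kind of analysis described above can be sketched in a few lines: given per-flow records collected from the router, aggregate the byte counts per host pair to find the top talkers. The record layout and sample values below are hypothetical, for illustration only, and are not the actual MIB output format.

```python
from collections import defaultdict

# Hypothetical per-flow records as polled from the router; the field
# layout is illustrative, not the actual CISCO-NETFLOW-MIB structure.
flows = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 120_000},
    {"src": "10.0.0.7", "dst": "10.0.0.2", "bytes": 3_500_000},
    {"src": "10.0.0.9", "dst": "10.0.0.5", "bytes": 80_000},
    {"src": "10.0.0.2", "dst": "10.0.0.7", "bytes": 2_900_000},
]

def top_talkers(flows):
    """Aggregate traffic per unordered host pair and rank by total volume."""
    totals = defaultdict(int)
    for f in flows:
        pair = tuple(sorted((f["src"], f["dst"])))  # direction-agnostic key
        totals[pair] += f["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranked = top_talkers(flows)
print(ranked[0])  # the busiest host pair and its total byte count
```

Grouping both directions of a conversation under one key is what lets you name the *pair* of hosts responsible for the traffic, rather than two seemingly unrelated unidirectional flows.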
Note:
This test will work only if NetFlow is enabled on the router. To achieve this, follow the steps below:
- Enter global configuration mode on the router or MSFC, and issue the following commands for each interface on which you want to enable NetFlow:
interface {interface} {interface_number}
ip route-cache flow
bandwidth <kbps>
exit
This enables NetFlow on the specified interface alone. Remember that on a Cisco IOS device, NetFlow is enabled on a per-interface basis. The bandwidth command is optional, and is used to set the speed of the interface in kilobits per second.
- Then, issue the following command to break up long-lived flows into 1-minute fragments. You can choose any number of minutes between 1 and 60. If you leave it at the default of 30 minutes, your traffic reports will show spikes. It is important to set this value to 1 minute in order to generate alerts and view troubleshooting data.
ip flow-cache timeout active 1
- Next, issue the following command to ensure that flows that have finished are periodically exported. The default value is 15 seconds. You can choose any number of seconds between 10 and 600. However, if you choose a value greater than 250 seconds, NetFlow Analyzer may report traffic levels that are too low.
ip flow-cache timeout inactive 15
- Finally, enable ifIndex persistence (interface names) globally. This ensures that the ifIndex values persist across device reboots.
snmp-server ifindex persist
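Putting the steps above together, a complete configuration session might look like the following. The interface name `FastEthernet0/1` and the bandwidth value `100000` are placeholders for this sketch; substitute your own interface and speed, and repeat the interface sub-commands for each interface you want to monitor.

```
Router# configure terminal
Router(config)# interface FastEthernet0/1
Router(config-if)# ip route-cache flow
Router(config-if)# bandwidth 100000
Router(config-if)# exit
Router(config)# ip flow-cache timeout active 1
Router(config)# ip flow-cache timeout inactive 15
Router(config)# snmp-server ifindex persist
Router(config)# end
```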
The measures reported by this test are as follows:

| Measurement | Description | Measurement Unit | Interpretation |
| --- | --- | --- | --- |
| Total_Flow_Data | Indicates the amount of data transmitted and received in this net flow. | KB | Compare the value of this measure across flows to identify which flow is experiencing high levels of network traffic. This way, you can also identify the two hosts that are interacting over the network, generating heavy traffic in the process. Use the detailed diagnosis of this measure to determine the input and output interfaces that have been impacted by the traffic, and their current speeds. |
| Total_Flow_Packet | Indicates the total number of packets received and transmitted in this net flow. | Pkts | |
| Pct_inFlow_Data | Indicates the percentage of total traffic for this flow that is flowing through the input interface. | Percent | Compare the value of this measure across flows to know which flow is receiving large volumes of data via the input interface. |
| Pct_outFlow_Data | Indicates the percentage of total traffic for this flow that is flowing through the output interface. | Percent | Compare the value of this measure across flows to know which flow is transmitting large volumes of data via the output interface. |
| Protocol_Number | Indicates the protocol used in this net flow. | | The table below lists the protocols that can be reported by this measure, and their numeric equivalents. Note: By default, this measure reports one of the protocols listed in that table to indicate the protocol for the net flow. The graph of this measure, however, represents the protocol using its numeric equivalent only. |
| Bandwidth_FlowData | Indicates the bandwidth that is currently utilized by this net flow. | Mbps | This measure is applicable only if the Report Interface Bandwidth flag is set to Yes. Compare the value of this measure across flows to know which flow consumes more bandwidth for transmitting and receiving data. |
| Input_Bandwidth | Indicates the percentage of bandwidth for this flow that is flowing through the input interface. | Percent | This measure is applicable only if the Report Interface Bandwidth flag is set to Yes. Compare the value of this measure across flows to know which flow uses more bandwidth for receiving large volumes of data via the input interface. |
| Output_Bandwidth | Indicates the percentage of bandwidth for this flow that is flowing through the output interface. | Percent | This measure is applicable only if the Report Interface Bandwidth flag is set to Yes. Compare the value of this measure across flows to know which flow consumes more bandwidth for transmitting large volumes of data via the output interface. |

The protocols that the Protocol_Number measure can report, and their numeric equivalents, are:

| Protocol | Numeric value |
| --- | --- |
| ICMP | 1 |
| IGMP | 2 |
| GGP | 3 |
| IPv4 | 4 |
| ST | 5 |
| TCP | 6 |
| CBT | 7 |
| EGP | 8 |
| IGP | 9 |
| BBN-RCC-MON | 10 |
| NVP-II | 11 |
| PUP | 12 |
| ARGUS | 13 |
| EMCON | 14 |
| XNET | 15 |
| CHAOS | 16 |
| UDP | 17 |
| MUX | 18 |
| RDP | 27 |
| IPv6 | 41 |
| IPv6-Route | 43 |
| IPv6-Frag | 44 |
| IDRP | 45 |
| RSVP | 46 |
| SWIPE | 53 |
| MOBILE | 55 |
| IPv6-ICMP | 58 |
| IPv6-NoNxt | 59 |
| IPv6-Opts | 60 |
| VISA | 70 |
| PVP | 75 |
| DGP | 86 |
| IPIP | 94 |
| PNNI | 102 |
| UDPLite | 136 |
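As a rough illustration of how the percentage and bandwidth measures above relate to the raw per-flow counters, the sketch below derives them from hypothetical byte counts over a polling interval. The formulas are straightforward arithmetic; the variable names and sample values are invented for this example and are not taken from the MIB.

```python
# Hypothetical raw counters for one flow over a 60-second polling interval;
# the variable names are invented for illustration, not taken from the MIB.
in_bytes = 6_000_000      # bytes seen on the input interface
out_bytes = 2_000_000     # bytes seen on the output interface
interval_secs = 60

total_kb = (in_bytes + out_bytes) / 1024             # Total_Flow_Data (KB)
pct_in = 100 * in_bytes / (in_bytes + out_bytes)     # Pct_inFlow_Data
pct_out = 100 * out_bytes / (in_bytes + out_bytes)   # Pct_outFlow_Data

# Bandwidth_FlowData (Mbps): bytes -> bits, averaged over the interval.
flow_mbps = (in_bytes + out_bytes) * 8 / interval_secs / 1_000_000

# A small subset of the Protocol_Number mapping from the table above.
PROTOCOL_NAMES = {1: "ICMP", 6: "TCP", 17: "UDP", 58: "IPv6-ICMP"}

print(f"{total_kb:.1f} KB, in {pct_in:.0f}% / out {pct_out:.0f}%, "
      f"{flow_mbps:.2f} Mbps, protocol {PROTOCOL_NAMES[6]}")
```

Comparing `pct_in` and `pct_out` across flows tells you whether a flow is dominated by receive or transmit traffic, while `flow_mbps` set against the interface speed (the `bandwidth` value configured earlier) gives the utilization figures behind the Input_Bandwidth and Output_Bandwidth measures.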