OpenNetMon: Network Monitoring in OpenFlow
Software-Defined Networks
Niels L. M. van Adrichem, Christian Doerr and Fernando A. Kuipers
Network Architectures and Services, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands
{N.L.M.vanAdrichem, C.Doerr, F.A.Kuipers}@tudelft.nl
Abstract—We present OpenNetMon, an approach and open-source software implementation to monitor per-flow metrics,
especially throughput, delay and packet loss, in OpenFlow networks. Currently, ISPs over-provision capacity in order to meet
QoS demands from customers. Software-Defined Networking
and OpenFlow allow for better network control and flexibility
in the pursuit of operating networks as efficiently as possible.
Where OpenFlow provides interfaces to implement fine-grained
Traffic Engineering (TE), OpenNetMon provides the monitoring
necessary to determine whether end-to-end QoS parameters
are actually met and delivers the input for TE approaches to
compute appropriate paths. OpenNetMon polls edge switches, i.e.
switches with flow end-points attached, at an adaptive rate that
increases when flow rates differ between samples and decreases
when flows stabilize to minimize the number of queries. The
adaptive rate reduces network and switch CPU overhead while
optimizing measurement accuracy. We show that not only local
links serving variable bit-rate video streams, but also aggregated
WAN links benefit from an adaptive polling rate to obtain
accurate measurements. Furthermore, we verify throughput,
delay and packet loss measurements for bursty scenarios in our
experiment testbed.
I. I NTRODUCTION
Recently, Software-Defined Networking (SDN) has attracted
the interest of both research and industry. As SDN offers
interfaces to implement fine-grained network management,
monitoring and control, it is considered a key element to
implement QoS and network optimization algorithms. As such,
SDN has received a lot of attention from an academic perspective, as it enables researchers to perform experiments that were previously difficult or too expensive to carry out. Additionally, industry is already adopting vendor-independent network management protocols such as OpenFlow to configure and monitor its networks.
Accurate traffic monitoring is a key requirement for network management, both to meet QoS agreements and to perform traffic engineering. In the past decade, network monitoring has been an active field of research, particularly because it is difficult to retrieve online and accurate measurements in IP networks, due to the large number and volume of traffic flows and the complexity of deploying a measurement infrastructure [1]. Many flow-based measurement techniques consume too many resources (bandwidth, CPU) due to the fine-grained monitoring demands, while other monitoring solutions require large investments in hardware deployment and configuration
management. Instead, Internet Service Providers (ISPs) over-provision their network capacity to meet QoS constraints [2]. Nonetheless, over-provisioning conflicts with operating a network as efficiently as possible and does not facilitate fine-grained Traffic Engineering (TE). TE, in turn, needs granular real-time monitoring information to compute the most efficient routing decisions.
Where recent SDN proposals, specifically OpenFlow [3], introduce programming interfaces that enable controllers to execute fine-grained TE, no complete OpenFlow-based monitoring proposal has yet been implemented. We claim that the absence of an online and accurate monitoring system prevents the development of envisioned TE-capable OpenFlow controllers. Since OpenFlow presents interfaces that enable controllers to query for statistics and inject packets into the network, we have designed and implemented such a granular monitoring system, capable of providing TE controllers with the online monitoring measurements they need.
In this paper we present OpenNetMon, a POX OpenFlow controller module enabling accurate monitoring of per-flow throughput, packet loss and delay. OpenNetMon¹ monitors these metrics online, in order to aid TE.
The remainder of this paper is structured as follows: In
section II, we first discuss existing measuring methods and
monitoring techniques used by ISPs. Section III summarizes
OpenFlow and its specific options that our implementation
uses, as well as previous work in the field of measuring
traffic in OpenFlow networks. Our proposal OpenNetMon
is presented in section IV and experimentally evaluated in
section V. Section VI discusses implementation specific details
regarding the design of our network controller components.
Finally, section VII concludes this paper.
II. MONITORING
Traditionally, many different monitoring techniques are used in computer networks. The main types of measurement methods these techniques rely on, and the trade-offs they bring, are discussed in the following two subsections. Each measurement technique traditionally requires a separate hardware installation or software configuration, making it a tedious and expensive task to implement. OpenFlow, however, provides the interfaces necessary to implement most of the discussed methods without the need for customization. Subsection II-C summarizes several techniques ISPs use to monitor their networks.

¹ OpenNetMon is published as open-source software at our GitHub repository [4].
A. Active vs. passive methods
Network measurement methods are roughly divided into
two groups, passive and active methods. Passive measurement
methods measure network traffic by observation, without injecting additional traffic in the form of probe packets. The
advantage of passive measurements is that they do not generate
additional network overhead, and thus do not influence network performance. Unfortunately, passive measurements rely
on installing in-network traffic monitors, which is not feasible
for all networks and requires large investments.
Active measurements on the other hand inject additional
packets into the network, monitoring their behavior. For example, the popular application ping uses ICMP packets to reliably
determine end-to-end connection status and compute a path’s
round-trip time.
Both active and passive measurement schemes are useful to
monitor network traffic and to collect statistics. However, one
needs to carefully decide which type of measurement to use.
On the one hand, active measurements introduce additional network load, which affects the network and therefore influences the accuracy of the measurements themselves. On the other
hand, passive measurements require synchronization between
observation beacons placed within the network, complicating
the monitoring process. Subsection II-C discusses both passive
and active measurement techniques that are often used by ISPs.
B. Application-layer and network-layer measurements
Network measurements are often performed at different OSI layers. While measurements at the application layer are preferred to accurately measure application performance, ISPs often do not have access to end-user devices and therefore use network layer measurements. Network layer measurements use infrastructure components (such as routers and switches) to obtain statistics. This approach is not considered sufficient, as the measurement granularity is often limited to port-based counters and lacks the ability to differentiate between applications and traffic flows. In our proposal in section IV, we use the fact that OpenFlow-enabled switches and routers keep per-flow statistics to determine end-to-end network performance.
C. Current measurement deployments
The Simple Network Management Protocol (SNMP) [5] is
one of the most used protocols to monitor network status.
Among others, SNMP can be used to request per-interface
port-counters and overall node statistics from a switch. Developed in 1988, it is implemented in most network devices. Monitoring using SNMP is achieved by regularly polling the switch, though switch efficiency may degrade with frequent polling due to CPU overhead. Although vendors are free to implement their own SNMP counters, most switches are limited to counters that aggregate traffic for the whole switch and each of its interfaces, preventing insight into the flow-level statistics necessary for fine-grained Traffic Engineering. Therefore, we do not consider SNMP suitable for flow-based monitoring.
NetFlow [6] presents an example of scalable passive flow-based monitoring. It collects samples of traffic and estimates overall flow statistics based on these samples, which is considered sufficiently accurate for long-term statistics. NetFlow uses
a 1-out-of-n random sampling, meaning it stores every n-th
packet, and assumes the collected packets to be representative
for all traffic passing through the collector. Every configurable
time interval, the router sends the collected flow statistics to
a centralized unit for further aggregation. One of the major
problems of packet-sampling is the fact that small flows
are underrepresented, if noticed at all. Additionally, multiple
monitoring nodes along a path may sample exactly the same
packet and therewith over-represent a certain traffic group,
decreasing accuracy. cSamp [7] solves these problems by using flow sampling instead of packet sampling and deploys hash-based coordination to prevent duplicate sampling of packets.
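As an illustration of the 1-out-of-n packet sampling described above, a minimal sketch might look as follows; this is not NetFlow's actual implementation, merely the idea of sampling and scaling up the counts:

    # Illustrative 1-out-of-n packet sampling (not NetFlow's actual code):
    # each packet is kept with probability 1/n, and the totals are estimated
    # by scaling the sampled count back up by n.
    import random

    def sample_packets(packets, n):
        sampled = [p for p in packets if random.random() < 1.0 / n]
        estimated_total = len(sampled) * n   # scale up to estimate the full count
        return sampled, estimated_total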
Skitter [8], a CAIDA project that analyzed the Internet
topology and performance using active probing, used geographically distributed beacons to perform traceroutes at a
large scale. Its probe packets contain timestamps to compute RTT and estimate delays between measurement beacons.
While Skitter is suitable for generating a rough estimate of
overall network delay, it does not calculate per-flow delays, as
not all paths are traversed unless a very high density of beacons
is installed. Furthermore, this method introduces additional
inaccuracy due to the addition and subtraction of previously
existing uncertainty margins.
Measuring packet delay using passive measurements is more complex. IPMON [9] presents a solution that
captures the header of each TCP/IP packet, timestamps it
and sends it to a central server for further analysis. Multiple
monitoring units need to be installed to retrieve network-wide
statistics. While the technique is very accurate (in the order of microseconds), additional network overhead is generated due to the necessary communication with the central server. Furthermore, its accuracy depends on accurate synchronization of the monitoring units' clocks.
III. BACKGROUND AND RELATED WORK
Although SDN is not restricted to OpenFlow, and other control plane decoupling mechanisms existed before it, OpenFlow is often considered the standard communication protocol to configure and monitor switches in SDNs. OpenFlow-capable switches connect to a central controller, such as POX [10] or Floodlight [11]. The controller can both preconfigure the switch with forwarding rules and reactively respond to requests from switches, which are sent when a packet matching none of the existing rules enters the network. Besides
managing the forwarding plane, the OpenFlow protocol is also
capable of requesting per-flow counter statistics and injecting
packets into the network, a feature which we use in our
proposal presented in section IV.
Fig. 1: The three-step installation procedure of a new flow: (a) the first packet of a new connection arrives, (b) the installation of forwarding rules, and (c) retransmitting the captured packet.
More specifically, OpenFlow-capable switches send a PacketIn message to the controller when a new, currently unmatched connection or packet arrives. The controller responds by installing a path using one or more Flow Table Modification (FlowMod) messages and instructs the switch to resend the packet using a PacketOut message. The FlowMod message indicates idle and hard timeout durations and whether the controller should be notified of the flow's removal with a FlowRemoved message. Figure 1 gives a schematic overview of the message exchange during flow setup. Using the PacketIn and FlowRemoved messages, a controller can determine which active flows exist. Furthermore, the FlowRemoved message contains the duration, packet and byte count of the recently removed flow, enabling the controller to keep statistics on past flows. Our proposal in section IV uses this information in combination with periodically queried Flow Statistics Request (StatsRequest) messages, as shown in figure 2, to obtain information about running flows, and regularly injects packets into the network to monitor end-to-end path delay.
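As an illustration of this exchange, a minimal POX sketch might look as follows; the output port and timeout values are placeholders, not values prescribed by OpenFlow or by OpenNetMon:

    # Minimal POX sketch of the flow-setup exchange of Fig. 1 (illustrative only).
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    def _handle_PacketIn(event):
        packet = event.parsed

        # (b) Install a forwarding rule matching the unmatched packet.
        fm = of.ofp_flow_mod()
        fm.match = of.ofp_match.from_packet(packet, event.port)
        fm.idle_timeout = 10                         # placeholder timeouts
        fm.hard_timeout = 30
        fm.flags |= of.OFPFF_SEND_FLOW_REM           # request a FlowRemoved notification
        fm.actions.append(of.ofp_action_output(port=2))  # placeholder output port
        event.connection.send(fm)

        # (c) Resend the captured packet along the newly installed rule.
        po = of.ofp_packet_out(data=event.ofp)
        po.actions.append(of.ofp_action_output(port=2))
        event.connection.send(po)

    def launch():
        core.openflow.addListenerByName("PacketIn", _handle_PacketIn)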
OpenFlow’s openness to switch and per-flow statistics has
already been picked up by recent research proposals. OpenTM
[12], for example, estimates a Traffic Matrix (TM) by keeping
track of statistics for each flow. The application queries
switches at regular intervals and stores statistics in order to derive the TM. The paper presents experiments on several polling algorithms and compares them for accuracy. While polling solely each path's last switch gives the most accurate results, other polling schemes, such as selecting a switch round-robin, by least load, or by (non-)uniform random selection, give only slightly less accurate results, with at most 2.3 % deviation from the most accurate last-switch selection scheme. Of the alternative polling schemes, non-uniform random selection with a preference for switches near the end of the path comes closest to last-switch polling, followed by uniform random selection and round-robin selection, while least-loaded switch selection ranks last, still with an accuracy of approximately +0.4 Mbps on 86 Mbps. However, since OpenTM is limited to generating TMs for offline use and does not capture packet loss and delay, we consider it incomplete for online monitoring of flows.
Fig. 2: While a flow is active, the controller and switch can exchange messages concerning the state of the flow: (a) the controller can, e.g. using a timer or other event, query the switch to retrieve flow-specific statistics; (b) the end of a flow is announced by sending a FlowRemoved packet to the controller.

OpenSAFE [13] focuses on distributing traffic to monitoring applications. It uses the fact that every new flow request passes
through the network’s OpenFlow controller. The controller
forwards the creation of new flows to multiple traffic monitoring systems, which record the traffic and analyze it with
an Intrusion Detection System (IDS). OpenSAFE, however,
requires hardware investments to perform the actual monitoring, while we introduce a mechanism that reuses existing
OpenFlow commands to retrieve the aforementioned metrics.
Others suggest designing a new protocol, parallel to OpenFlow, in order to achieve monitoring in SDNs. OpenSketch [14], for example, proposes such an SDN-based monitoring architecture. A new SDN protocol, however, requires an upgrade or replacement of all network nodes, a large investment ISPs will be reluctant to make. Furthermore, standardization of a new protocol has proven to be a long and tedious task.
Since OpenFlow is already gaining popularity in datacenter
environments and is increasingly being implemented in commodity switches, a solution using OpenFlow requires less
investment from ISPs to implement and does not require
standardization by a larger community. Therefore, we consider
an OpenFlow compatible monitoring solution, such as our
solution OpenNetMon, more likely to succeed.
IV. OPENNETMON
In this section, we present our monitoring solution OpenNetMon, written as a module for the OpenFlow controller POX [10]. OpenNetMon continuously monitors all flows between predefined source-destination pairs on throughput, packet loss and delay. We believe such a granular and real-time monitoring system to be essential for Traffic Engineering (TE) purposes. In the following subsections, we first discuss how our implementation monitors throughput and how we determine the right polling algorithm and frequency, followed by our implementation to measure packet loss and delay. While one might argue that measuring throughput in OpenFlow SDNs is not new, we implement it specifically for monitoring instead of Traffic Matrix generation, and we are the first to combine it with active per-flow measurements of packet loss and delay.
Fig. 3: Available bandwidth on a 100 Mbps WAN link [15].

Fig. 4: Available and advertised bandwidth of a HD video flow.
A. Polling for throughput
To determine throughput for each flow, OpenNetMon regularly queries switches to retrieve Flow Statistics, using the messages described in section III. With each query, our module receives the number of bytes sent and the duration of each flow, enabling it to calculate the effective throughput per flow. Since each flow between any node pair may get assigned a different path by the controller, OpenNetMon polls at regular intervals for every distinct assigned path between every node pair that is designated to be monitored.

Even though polling each path's switch randomly or in round robin is considered most efficient and still sufficiently accurate [12], we poll each path's last switch. First, round-robin switch selection becomes more complex in larger networks with multiple flows: when more flows exist, non-edge switches will be polled more frequently, degrading efficiency. Furthermore, non-edge switches typically have a higher number of flows to maintain, making the query for flow statistics more expensive. Second, to compute the packet loss in subsection IV-B, we periodically query and compare the packet counters from the first and last switch of each path. As this query also returns the byte and duration counters necessary for throughput computation, we decided to combine these queries and solely sample each path's last switch for throughput computation.

While in most route discovery mechanisms link-state information is exchanged both when the topology changes and at regular time intervals to guarantee synchronization, the arrival rate of flows can vary greatly. As we will briefly show below, it is hence necessary to monitor flow behavior adaptively, by increasing the polling rate when flows arrive or change their usage characteristics and decreasing it when flow statistics converge to stable behavior.

The bursty consumption of information, as well as the coding and compression of content during transfer, results in highly fluctuating traffic demands of flows, where, for instance, the required momentary bandwidth of a HD video stream can vary between 1 and 9 Mbps². While dampened through multiplexing, this behavior is even visible on aggregated links, as can be seen in the available bandwidth measurement of a 15-minute packet-level trace of a 100 Mbps Japanese WAN link shown in figure 3. In order to facilitate efficient traffic engineering and run networks at high utilization to save costs, as discussed in section I, accurate information about the current throughput per link and flow is needed.

² An example is shown in figure 4. The drastic difference springs from the interleaving of fast- and slow-moving scenes and the resulting varying compression efficiency of media codecs.

While a number of different link-state update policies have been proposed in the past decades [16], our experimental measurements indicate that policies based on absolute or relative change, as well as class-based or timer policies, do not capture the dynamics of today's network traffic at a sufficiently detailed level to serve as input for flow scheduling. Figure 4 contrasts the actual bandwidth on a 10 Mbps access network link with the bandwidth as estimated by a relative change policy: as the stream rapidly changes demands, the flow's throughput is either grossly under- or overestimated by the network, thereby either oversubscribing and wasting resources, or potentially harming flows. This behavior is the result of current link-state update policies disseminating information based on recent but still historical measurements, in an attempt to balance excessively high update rates against tolerating outdated information. While this particular trace may in principle be better approximated by tuning the update rate or choosing a different link-state update policy, the fundamental issue exists across all existing approaches: figure 5 shows the average relative estimation error as a function of update policy and update frequency.

Fig. 5: Average relative bandwidth estimation error as a function of the average time between updates, for the periodic, absolute, relative, equal-class and exp-class update policies.

While reducing staleness, more frequent periodic updates do not necessarily provide better flow information, as the dynamics of a complex flow characteristic as shown in figure 3 or 4 cannot easily be approximated by a static reporting interval without using infinitesimal time intervals and their prohibitive overhead costs. To avoid this issue, the proposed OpenNetMon uses an adaptive flow characterization that increases its sampling rate when new flows are admitted or flow statistics change, for higher resolution, and backs off in static environments when little new information is obtained.
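As an illustration of this adaptive behavior, a minimal sketch might look as follows; the thresholds and back-off factors are illustrative assumptions, not OpenNetMon's actual parameters:

    # Illustrative adaptive polling interval: poll more often while the measured
    # rate changes between samples, back off while it is stable. The constants
    # are assumptions, not OpenNetMon's actual configuration.
    MIN_INTERVAL, MAX_INTERVAL = 0.25, 5.0   # seconds
    CHANGE_THRESHOLD = 0.10                  # 10 % relative change

    def next_interval(interval, prev_rate, new_rate):
        change = abs(new_rate - prev_rate) / max(prev_rate, 1e-9)
        if change > CHANGE_THRESHOLD:
            interval /= 2.0      # flow is changing: increase the sampling rate
        else:
            interval *= 1.5      # flow is stable: back off
        return min(max(interval, MIN_INTERVAL), MAX_INTERVAL)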
The adaptive nature of OpenNetMon might also be beneficial in avoiding excessive route flapping when flows are
reallocated based on a new fine-grained view of the state of
the network. For a discussion of path stability in a dynamic
network we refer to [17].
B. Packet loss and delay
Per-flow packet loss can be estimated by polling each switch's port statistics, assuming a linear relation between link packet loss and the throughput rate of each flow. However, this linear relation to flow throughput does not hold when traffic is queued based on QoS parameters or prioritization. Instead, we calculate per-flow packet loss by polling flow statistics from the first and last switch of each path. By subtracting the increase of the destination switch's packet counter from the increase of the source switch's packet counter, we obtain an accurate measurement³ of the packet loss over the past sample.
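Written out, with Δpkt_first and Δpkt_last denoting the increase of the packet counters at the path's first and last switch over one sample interval, the packet loss for that sample amounts to Δpkt_first − Δpkt_last packets, or (Δpkt_first − Δpkt_last) / Δpkt_first as a fraction of the transmitted packets.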
Path delay, however, is more difficult to measure. Measuring delay in a non-invasive, passive manner, meaning that no additional packets are sent through the network, is infeasible in OpenFlow: it is impossible to have switches tag samples of packets with timestamps, nor is it possible to let switches duplicate and send predictable samples of packets to the controller to have their inter-arrival times compared. Therefore, we use OpenFlow's capability to inject packets into the network. On every monitored path, we regularly inject packets at the first switch, such that the probe packet travels exactly the same path, and have the last switch send it back to the controller. The controller estimates the complete path delay by calculating the difference between the packet's departure and arrival times, subtracting the estimated switch-to-controller delays. The switch-to-controller delay is estimated by determining its RTT, by injecting packets that are immediately returned to the controller, and dividing that RTT by two to account for the bidirectionality of the answer, giving

t_delay = t_arrival − t_sent − ½ (RTT_s1 + RTT_s2).

³ Given that no fragmentation occurs within the scope of the OpenFlow network.
The experiments on delay in section V show that using the control plane to inject and retrieve probe packets, using OpenFlow PacketIn and PacketOut messages, yields inaccurate results due to software scheduling in the switches' control planes. To ensure measurement accuracy, we connect a separate VLAN, exclusively used to transport probe packets, from the controller to all switch data planes directly. This method bypasses the switches' control-plane software, which results in higher accuracy.
To have the measurement accuracy and the packet overhead match the size of each flow, we inject packets on each path at a rate relative to the sum of the underlying flows' throughput. That is, the higher the number of packets per second of all flows from node A to B over a certain path C, the more packets we send to accurately determine packet loss. On average, we send one monitoring packet every measuring round. Although this may seem like overhead at first sight, the monitoring packet is an arbitrarily small Ethernet frame of 72 bytes (minimum frame size including preamble) that is forwarded along the path based on a MAC address pair identifying its path, and carries a packet identifier as payload. Compared to a default MTU of 1500 bytes (which is even larger with jumbo frames), resulting in frames of 1526 bytes without 802.1Q VLAN tagging, we believe such a small overhead is a reasonable penalty for the gained knowledge.
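A minimal sketch of this delay estimation, assuming hypothetical helper functions for injecting the probe and identifying it on return (not OpenNetMon's exact code):

    # Sketch of the path-delay estimation described above. The probe injection
    # and identification mechanisms are assumptions; only the timing arithmetic
    # follows the formula t_delay = t_arrival - t_sent - 0.5*(RTT_s1 + RTT_s2).
    import time

    sent_at = {}   # probe_id -> departure timestamp at the controller

    def send_probe(probe_id, inject_at_first_switch):
        """Inject a small probe frame at the first switch of the monitored path."""
        sent_at[probe_id] = time.time()
        inject_at_first_switch(probe_id)   # e.g. a PacketOut carrying probe_id

    def on_probe_returned(probe_id, rtt_first_switch, rtt_last_switch):
        """Called when the last switch returns the probe to the controller."""
        t_arrival = time.time()
        t_sent = sent_at.pop(probe_id)
        # Subtract half of both controller-to-switch RTTs to isolate the path delay.
        return (t_arrival - t_sent) - 0.5 * (rtt_first_switch + rtt_last_switch)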
V. EXPERIMENTAL EVALUATION
In this section we evaluate our implementation of OpenNetMon through experiments on a physical testbed. Our testbed consists of two Intel Xeon Quad Core servers running stock Ubuntu Server 12.04.2 LTS with 1 Gbps NICs, connected to four Intel Xeon Quad Core servers running stock Ubuntu Server 13.04 that function as OpenFlow-compatible switches using Open vSwitch. The network is controlled by an identical server running the POX OpenFlow controller, as shown in figure 6. All hosts are connected to their switch using 1 Gbps Ethernet connections, so we assume plenty of bandwidth locally. Inter-switch connections, however, are limited to 100 Mbps. The delay between switches 1-2 and 3-4 equals 1 ms, while the delay between switches 2-3 equals 5 ms to emulate a WAN connection. Furthermore, the packet loss between all switches equals 1 %, resulting in an average end-to-end packet loss of a little less than 3 %. Delay and packet loss are introduced using NetEm [18]. With this topology we intend to imitate a small private WAN, controlled by a single OpenFlow controller.

Fig. 6: Experiment testbed topology. The measured traffic flows from Server to Client.

Fig. 7: Bandwidth measurements of the flow between the client and server hosts, performed by both the OpenNetMon monitoring module and Tcpstat on the receiving node.

Fig. 8: Packet loss measurements of the flow between the client and server hosts performed by the OpenNetMon monitoring module, compared to the configured values using NetEm.
Throughout, we use a video stream to model traffic. Due to its bursty nature, we have chosen a H.264-encoded movie that is streamed from server to client. Figure 7 shows the throughput between our server and client measured by our implementation of OpenNetMon, compared to Tcpstat. Furthermore, figure 8 shows the measured packet loss compared to the configured packet loss. Finally, figure 9 presents the delay measured in our network.
The measurements shown in figure 7 represent the throughput measured by Tcpstat and OpenNetMon; on average they differ by only 16 KB/s (1.2 %), which shows that most of the transmitted traffic is accounted for by the measurements. The standard deviation, however, is 17.8 %, which appears to be a significant inaccuracy at first sight. This inaccuracy is mainly introduced by a lack of synchronization between the two measurement setups. Because we were unable to synchronize the start of the minimal 1-second buckets, traffic that is categorized in one bucket in one measurement is categorized in two adjacent buckets in the other. In combination with the highly bursty nature of our traffic, this leads to the elevated deviation. However, the high accuracy of the average shows an appropriate level of precision in OpenNetMon's measurements. In fact, we selected highly deviating traffic to prove our implementation in a worst-case measurement scenario; therefore, we claim our results are more reliable than a scenario with less bursty traffic.
The throughput measurements in figure 7 furthermore show incidental spikes, followed or preceded by sudden drops. These spikes are introduced because the switches' flow counter update frequency and OpenNetMon's polling frequency match too closely, causing binning problems. In short, our system sometimes requests the counter statistics shortly before the counter has been updated in one round, while it has already been updated in the adjacent round. Although the difference evens out in the long run, both bins have values that deviate equally but oppositely from the expected value, contributing to the standard deviation.
The described binning problem cannot be solved by either decreasing or increasing the polling frequency; in the best case the error margin becomes smaller but it remains present. Instead, both ends need to implement update and polling frequencies based on the system clock, as opposed to using the popular sleep function, which introduces a slight drift due to delay from the operating system scheduler and the time the polling and updating processes take to execute. Using the system clock to time updates and polling ensures synchronization between the two systems' sampling bins. Furthermore, the switch needs to implement mutually exclusive access (generally known as a "mutex lock") to the counter, guaranteeing that a flow counter cannot be read until all its properties are updated and vice versa. Another, ideal, solution is to extend OpenFlow to allow flow counter updates to be sent to the controller at a configurable interval by subscription. However, since this requires updating both the OpenFlow specification and switch firmware, we do not consider it feasible within a short time frame.
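A minimal sketch of the clock-anchored scheduling suggested above (illustrative only, not OpenNetMon's or Open vSwitch's actual timing code):

    # Anchoring each poll to absolute multiples of the interval avoids the
    # cumulative drift that a naive "do work, then sleep(interval)" loop shows.
    import time

    def poll_on_clock(interval, poll_once):
        next_deadline = time.time()
        while True:
            poll_once()
            next_deadline += interval                        # fixed grid of sample times
            time.sleep(max(0.0, next_deadline - time.time()))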
As packet loss within one time sample may not represent overall packet loss behavior, figure 8 shows the running average of packet loss, calculated by computing the difference between the packet counters of the first and last switch on a path. Although the running packet loss is not very accurate, the measurements give a good enough estimate to detect service degradation. For more accurate flow packet loss estimates, one can resort to interpolation from port counter statistics.
Figure 9 shows delay measurements taken by (1) OpenNetMon using the control plane to send and retrieve probe packets, (2) OpenNetMon using a separate VLAN connection to the data plane to send and retrieve probe packets, and (3) delay measurements as experienced by the end-user application, computed from the stream's RTT to verify the measurement results.

Fig. 9: Delay measurements on the path from server to client, as measured by (a) the user application, (b) OpenNetMon using the OpenFlow control plane and (c) OpenNetMon connected to the data plane using a separate VLAN.
The figure shows that using the OpenFlow control plane to send and retrieve timing-related probe packets introduces a large deviation in the measurements. Furthermore, the measured average is far below the expected value of 7 ms given by the sum of the link delays presented in figure 6. The measurements using exclusively data plane operations, however, resemble the delay experienced by the end-user application so closely that a difference between the two is hardly identifiable.
These observations are confirmed by the category plot in figure 10, showing an average of 4.91 ms with a 95 % confidence interval of 11.0 ms for the control plane based measurements. Where the average value already differs more than 30 % from the expected value, a confidence interval 1.5 times larger than the expected value is infeasible for practical use. The data plane based measurements, however, do show an accurate estimate of 7.16 ± 0.104 ms, which closely matches the slightly larger end-user application experience of 7.31 ± 0.059 ms. The application delay is slightly larger due to the link delays from switch to end-hosts, which cannot be monitored by OpenNetMon.

Fig. 10: Category plot showing the averages and 95 % confidence intervals of the measurements from figure 9.

These results show that the control plane is unsuitable as a medium for time-accurate delay measurements, as response times introduced by the software fluctuate too much. However, we were able to obtain accurate results by connecting the controller to the data plane, using a VLAN configured exclusively to forward probe packets from the controller to the network.
VI. IMPLEMENTATION DETAILS
The implementation of OpenNetMon is published as open source and can be found on our GitHub web page [4]. Our main intention in sharing it as open source is to enable other researchers and industry to perform experiments with it, use it as an approach to obtain input parameters for fine-grained Traffic Engineering, and, when applicable, extend it to their use. While considered a single module, OpenNetMon technically consists of two components implemented in the POX OpenFlow controller. The forwarding component is responsible for the reservation and installation of paths, while the monitoring component is responsible for the actual monitoring. Both components rely on the POX Discovery module to learn the network topology and its updates.
Like some of the components shipped with POX, the forwarding component learns the location of nodes within the network and configures paths between those nodes by installing per-flow forwarding rules on the switches. However, we have implemented some of the specific details differently from the other POX forwarding modules, on which we elaborate below. One could refer to this as a small guide to building one's own forwarding module.
1) OpenNetMon does not precalculate paths; it computes them online when they are needed. In a multipath environment (e.g., see [19]), not all flows from node A to B necessarily follow the same path: by means of load-balancing or Traffic Engineering it might be preferred to use multiple distinct paths between any two nodes. In order to support monitoring multipath networks, we decided to implement a forwarding module which may compute and choose from multiple paths from any node A to B. Especially to support online fine-grained Traffic Engineering, which may compute paths based on multiple metrics using the SAMCRA [20] routing algorithm, we decided to implement this using online path calculations.
2) We install per-flow forwarding rules on all necessary switches immediately. We found that the modules shipped with many controllers configure paths switch-by-switch: once an unmatched packet is received, the controller configures specific forwarding rules on that switch, resends the packet, and then receives an identical packet from the next switch on the path. This process iterates until all switches are configured. Our forwarding module, however, installs the appropriate forwarding rules on all switches along the path from node A to B at once, and then resends the original packet from the last switch on the path to the destination instead (see the sketch after this list).
3) We flood broadcast messages and unicast messages with an unknown destination on all edge ports of all switches immediately. We found that packets classified to be flooded, either due to their broadcast or multicast nature or because their destination MAC address location was still unknown, were flooded switch-by-switch, similar to the approach mentioned in the previous item. In the end, each switch in the spanning tree contacts the controller with an identical packet while the action for that packet remains the same. Furthermore, if in the meantime the destination of a previously unknown unicast message was learned, this resulted in the forwarding module installing an invalid path from that specific switch to the destination switch. To reduce communication overhead when a packet arrives that needs to be flooded, our implementation contacts all switches and floods on all edge ports.
4) We only "learn" MAC addresses on edge ports, to prevent learning invalid switch-port locations for hosts.
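A minimal POX-style sketch of the all-switches path installation from item 2; the path representation (a list of (connection, out_port) pairs) and the timeouts are assumptions, not OpenNetMon's exact code:

    # Install the flow on every switch along a precomputed path, then resend
    # the captured packet from the last switch (illustrative sketch only).
    import pox.openflow.libopenflow_01 as of

    def install_path(path, packet, packet_in):
        match = of.ofp_match.from_packet(packet)

        # Install rules starting at the last switch, so every downstream rule
        # exists before traffic is re-injected.
        for connection, out_port in reversed(path):
            fm = of.ofp_flow_mod()
            fm.match = match
            fm.idle_timeout = 10
            fm.hard_timeout = 30
            fm.flags |= of.OFPFF_SEND_FLOW_REM
            fm.actions.append(of.ofp_action_output(port=out_port))
            connection.send(fm)

        # Resend the original packet from the last switch towards the destination.
        last_connection, last_port = path[-1]
        po = of.ofp_packet_out(data=packet_in)
        po.actions.append(of.ofp_action_output(port=last_port))
        last_connection.send(po)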
The forwarding component sends an event to the monitoring component when a new flow, with possibly a new distinct path, has been installed. Upon this event, the monitoring component adds the edge switches to the list iterated over by the adaptive timer. At each timer interval, the monitoring component requests flow counters from all flow destination and source switches. The flow counters contain the packet counter, byte counter and duration of each flow. By storing statistics from the previous round, the delta of those counters is determined to calculate per-flow throughput and packet loss.
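A minimal POX sketch of this statistics round; the counter fields are standard OpenFlow 1.0 flow statistics, while the bookkeeping structure is an illustrative assumption:

    # Request flow statistics and compare each reply with the previous sample
    # to derive per-flow throughput (sketch, not OpenNetMon's actual module).
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    previous = {}   # (dpid, match) -> (byte_count, duration in seconds)

    def request_flow_stats(connection):
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

    def _handle_FlowStatsReceived(event):
        for entry in event.stats:
            key = (event.connection.dpid, str(entry.match))
            duration = entry.duration_sec + entry.duration_nsec * 1e-9
            if key in previous:
                prev_bytes, prev_duration = previous[key]
                elapsed = duration - prev_duration
                if elapsed > 0:
                    throughput = (entry.byte_count - prev_bytes) / elapsed  # bytes/s
            previous[key] = (entry.byte_count, duration)

    def launch():
        core.openflow.addListenerByName("FlowStatsReceived", _handle_FlowStatsReceived)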
VII. CONCLUSION
In this paper, we have presented OpenNetMon, a POX
OpenFlow controller module monitoring per-flow QoS metrics
to enable fine-grained Traffic Engineering. By polling flow
source and destination switches at an adaptive rate, we obtain
accurate results while minimizing the network and switch CPU
overhead. The per-flow throughput and packet loss are derived from the queried flow counters. Delay, in contrast, is measured by injecting probe packets directly into the switch data planes, traveling the same path, meaning the same nodes, links and buffers, and thus determining a realistic end-to-end delay for each flow. We have published the implemented Python code of our proposal as open source to enable further research and collaboration in the area of QoS in Software-Defined Networks.
We have performed experiments on a hardware testbed
simulating a small inter-office network, while loading it with
traffic of highly bursty nature. The experimental measurements
verify the accuracy of the measured throughput and delay for
monitoring, while the packet loss gives a good estimate of
possible service degradation.
Based on the work in [21], we further suggest removing the overhead introduced by microflows by categorizing them into one larger stream until they are recognized as an elephant flow. This prevents potential overloading of the controller by insignificant but possibly numerous flows. In future work, we intend to use OpenNetMon as an input generator for a responsive real-time QoS controller that recomputes and redistributes paths.
ACKNOWLEDGMENT
We thank SURFnet and Vassil Gourov for insightful discussions on network monitoring with OpenFlow.
REFERENCES
[1] Q. Zhao, Z. Ge, J. Wang, and J. Xu, “Robust traffic matrix estimation
with imperfect information: making use of multiple data sources,” in
ACM SIGMETRICS Performance Evaluation Review, vol. 34, no. 1.
ACM, 2006, pp. 133–144.
[2] C. Doerr, R. Gavrila, F. A. Kuipers, and P. Trimintzios, “Good practices
in resilient internet interconnection,” ENISA Report, Jun. 2012.
[3] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner, “Openflow: enabling innovation in
campus networks,” ACM SIGCOMM Computer Communication Review,
vol. 38, no. 2, pp. 69–74, 2008.
[4] N. L. M. van Adrichem, C. Doerr, and F. A. Kuipers.
(2013, Sep.) Tudelftnas/sdn-opennetmon. [Online]. Available:
https://github.com/TUDelftNAS/SDN-OpenNetMon
[5] J. Case, M. Fedor, M. Schoffstall, and J. Davin, “Simple Network Management Protocol (SNMP),” RFC 1157 (Historic), Internet Engineering
Task Force, May 1990.
[6] B. Claise, “Cisco Systems NetFlow Services Export Version 9,” RFC
3954 (Informational), Internet Engineering Task Force, Oct. 2004.
[7] V. Sekar, M. K. Reiter, W. Willinger, H. Zhang, R. R. Kompella, and
D. G. Andersen, “csamp: A system for network-wide flow monitoring.”
in NSDI, 2008, pp. 233–246.
[8] B. Huffaker, D. Plummer, D. Moore, and K. Claffy, “Topology discovery
by active probing,” in Applications and the Internet (SAINT) Workshops,
2002. Proceedings. 2002 Symposium on. IEEE, 2002, pp. 90–96.
[9] C. Fraleigh, S. Moon, B. Lyles, C. Cotton, M. Khan, D. Moll, R. Rockell,
T. Seely, and C. Diot, “Packet-level traffic measurements from the sprint
ip backbone,” Network, IEEE, vol. 17, no. 6, pp. 6–16, 2003.
[10] M. McCauley. (2013, Aug.) About pox. [Online]. Available:
http://www.noxrepo.org/pox/about-pox/
[11] Big Switch Networks. (2013, Aug.) Floodlight OpenFlow controller. [Online].
Available: http://www.projectfloodlight.org/floodlight/
[12] A. Tootoonchian, M. Ghobadi, and Y. Ganjali, “Opentm: traffic matrix
estimator for openflow networks,” in Passive and Active Measurement.
Springer, 2010, pp. 201–210.
[13] J. R. Ballard, I. Rae, and A. Akella, “Extensible and scalable network
monitoring using opensafe,” Proc. INM/WREN, 2010.
[14] M. Yu, L. Jose, and R. Miao, “Software defined traffic measurement with
opensketch,” in Proceedings 10th USENIX Symposium on Networked
Systems Design and Implementation, NSDI, vol. 13, 2013.
[15] MAWI Working Group. (2013, Sep.) MAWI Working Group traffic archive.
[Online]. Available: http://mawi.wide.ad.jp/mawi/
[16] B. Fu, F. A. Kuipers, and P. Van Mieghem, “To update network
state or not?” in Telecommunication Networking Workshop on QoS
in Multiservice IP Networks, 2008. IT-NEWS 2008. 4th International.
IEEE, 2008, pp. 229–234.
[17] F. A. Kuipers, H. Wang, and P. Van Mieghem, “The stability of paths
in a dynamic network,” in Proceedings of the 2005 ACM conference
on Emerging network experiment and technology. ACM, 2005, pp.
105–114.
[18] S. Hemminger, “Network emulation with netem,” in Linux Conf Au.
Citeseer, 2005, pp. 18–23.
[19] R. van der Pol, M. Bredel, A. Barczyk, B. Overeinder, N. L. M.
van Adrichem, and F. A. Kuipers, “Experiences with MPTCP in an
intercontinental multipathed OpenFlow network,” in Proceedings of the
29th Trans European Research and Education Networking Conference,
D. Foster, Ed. TERENA, August 2013.
[20] P. Van Mieghem and F. A. Kuipers, “Concepts of exact QoS routing
algorithms,” Networking, IEEE/ACM Transactions on, vol. 12, no. 5,
pp. 851–864, 2004.
[21] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and
S. Banerjee, “Devoflow: Scaling flow management for high-performance
networks,” in ACM SIGCOMM Computer Communication Review,
vol. 41, no. 4. ACM, 2011, pp. 254–265.
PayLess: A Low Cost Network Monitoring
Framework for Software Defined Networks
Shihabur Rahman Chowdhury, Md. Faizul Bari, Reaz Ahmed, and Raouf Boutaba
David R. Cheriton School of Computer Science, University of Waterloo
{sr2chowdhury | mfbari | r5ahmed | rboutaba}@uwaterloo.ca
Abstract—Software Defined Networking promises to simplify
network management tasks by separating the control plane (a
central controller) from the data plane (switches). OpenFlow has
emerged as the de facto standard for communication between
the controller and switches. Apart from providing flow control
and communication interfaces, OpenFlow provides a flow level
statistics collection mechanism from the data plane. It exposes
a high level interface for per flow and aggregate statistics
collection. Network applications can use this high level interface
to monitor network status without being concerned about the
low level details. In order to keep the switch design simple,
this statistics collection mechanism is implemented as a pull-based service, i.e. network applications, and in turn the controller, have to periodically query the switches about flow statistics. The
frequency of polling the switches determines monitoring accuracy
and network overhead. In this paper, we focus on this trade-off
between monitoring accuracy, timeliness and network overhead.
We propose PayLess – a monitoring framework for SDN. PayLess
provides a flexible RESTful API for flow statistics collection
at different aggregation levels. It uses an adaptive statistics
collection algorithm that delivers highly accurate information
in real-time without incurring significant network overhead. We
utilize the Floodlight controller’s API to implement the proposed
monitoring framework. The effectiveness of our solution is
demonstrated through emulations in Mininet.
I. INTRODUCTION
Monitoring is crucial to network management. Management
applications require accurate and timely statistics on network
resources at different aggregation levels. Yet, the network
overhead for statistics collection should be minimal. Accurate
and timely statistics are essential for many network management tasks, like load balancing, traffic engineering, enforcing Service Level Agreements (SLAs), accounting and intrusion
detection. Management applications may need to monitor
network resources at different aggregation levels. For example,
an ISP’s billing system would require monthly upstream and
downstream usage data for each user, an SLA enforcement
application may require per queue packet drop rate at ingress
and egress switches to ensure bounds on packet drops, a load
balancing application may require a switch’s per port traffic
per unit time.
A well designed network monitoring framework should
provide the management applications with a wide selection of
network metrics to monitor at different levels of aggregation,
accuracy and timeliness. Ideally, it is the responsibility of the
monitoring framework to select and poll the network resources
unless otherwise specified by the management applications.
The monitoring framework should accumulate, process and
deliver the monitored data at requested aggregation level and
frequency, without introducing too much monitoring overhead
into the system.
Although accurate and timely monitoring is essential for
seamless network management, contemporary solutions for
monitoring IP networks are ad-hoc in nature and hard to implement. Monitoring methods in IP networks can be classified as
direct and sampling based [1], [6], [19]. Direct measurement
methods incur significant network overhead, while sampling
based methods overcome this problem by sacrificing accuracy.
Moreover, different network equipment vendors have proprietary technologies to collect statistics about the traffic [1],
[19], [20]. The lack of openness and interoperability between
these methods and technologies has made traffic statistics collection a complex task in traditional IP networks.
More recently, Software Defined Networking (SDN) has
emerged with the promise to facilitate network programmability and ease management tasks. SDN proposes to decouple the control plane from the data plane. The data plane functionality of packet forwarding is built into the switching fabric, whereas the control plane functionality of controlling network devices is placed in a logically centralized software component called the controller. The control plane provides a programmatic interface
for developing management programs, as opposed to providing
a configuration interface for tuning network properties. From
a management point of view, this added programmability
opens the opportunity to reduce the complexity of distributed
configuration and ease the network management tasks [15].
The OpenFlow [17] protocol has been accepted as the de facto interface between the control and data planes. OpenFlow provides per-flow¹ statistics collection primitives at the controller. The controller can poll a switch to collect statistics on the active flows. Alternatively, it can request a switch to push flow statistics (upon flow timeout) at a specific frequency. The controller has a global view of the network. Sophisticated and effective monitoring solutions can be developed using these capabilities of an OpenFlow controller. However, in the current scenario, a network management application for SDN would be part of the control plane, rather than being independent of it. This is due to the heterogeneity in controller technologies and the absence of a uniform abstract view of the network resources.

¹ A flow is identified by an ordered set of Layer 2-4 header fields.
In this paper, we propose PayLess, a network monitoring framework for SDN. PayLess offers a number of advantages for developing network management applications on top of the SDN controller platform. First, PayLess provides an abstract view of the network and a uniform way to request statistics about its resources. Second, PayLess itself is developed as a collection of pluggable components. Interaction between these components is abstracted by well-defined interfaces. Hence, one can develop custom components and plug them into the PayLess framework. Highly variable tasks, like data aggregation level and sampling method, can be easily customized in PayLess. We also study the resource-accuracy trade-off in network monitoring and propose a variable frequency adaptive statistics collection scheduling algorithm.
The rest of this paper is organized as follows. We begin with
a discussion of some existing IP network monitoring tools,
OpenFlow inspired monitoring tools, and variable rate adaptive
data collection methods used in sensor and IP networks
(Section II). Then we present the architecture of PayLess
(Section III) followed by a presentation of our proposed flow
statistics collection scheduling algorithm (Section IV). The
next section describes the implementation of a link utilization
monitoring application using the proposed algorithm (Section V). We evaluate and compare the performance of our link
utilization monitoring application with that of FlowSense [23]
through simulations using Mininet (Section VI). Finally, we
conclude this paper and point out some future directions of
our work (Section VII).
II. RELATED WORKS
There exist a number of flow-based network monitoring tools for traditional IP networks. NetFlow [1] from Cisco is the
most prevalent one. NetFlow probes are attached to a switch
as special modules. These probes collect either complete or
sampled traffic statistics, and send them to a central collector [20]. NetFlow version 9 has been adopted as a common and universal standard by the IP Flow Information Export (IPFIX) IETF working group, so that non-Cisco devices can send data
to NetFlow collectors. NetFlow provides information such as
source and destination IP address, port number, byte count,
etc. It supports different technologies like multi-cast, IPSec,
and MPLS. Another flow sampling method is sFlow [6], which
was introduced and maintained by InMon as an open standard.
It uses time-based sampling for capturing traffic information.
Another proprietary flow sampling method is JFlow [19],
developed by the Juniper Networks. JFlow is quite similar to
NetFlow. JFlow provides detailed information about each flow
by applying statistical sampling just like NetFlow and sFlow.
Unlike sFlow, NetFlow and JFlow are proprietary solutions and incur a large up-front licensing and setup cost
to be deployed in a network. sFlow is less expensive to deploy,
but it is not widely adopted by the vendors.
Recently a good number of network monitoring tools based
on OpenFlow have been proposed. OpenTM [21] is one such
approach. It proposes several heuristics to choose an optimal
set of switches to be monitored for each flow. After a switch
has been selected it is continuously polled for collecting
flow level statistics. Instead of continuously polling a switch,
PayLess offers an adaptive scheduling algorithm for polling
that achieves the same level of accuracy as continuous polling
with much less communication overhead. In [13] the authors
have motivated the importance of identifying large traffic
aggregates in a network and proposed a monitoring framework
utilizing secondary controllers to identify and monitor such
aggregates using a small set of rules that changes dynamically
with traffic load. This work differs significantly from PayLess.
Whereas PayLess’s target is to monitor all flows in a network,
this work monitors only large aggregate flows. FlowSense [23]
proposes a passive push based monitoring method where
FlowRemoved messages are used to estimate per flow link
utilization. While communication overhead for FlowSense is
quite low, its estimation is quite far from the actual value and
it works well only when there is a large number of small
duration flows. FlowSense cannot capture traffic bursts if they
do not coincide with another flow’s expiry.
There has been an everlasting trade-off between statistics
collection accuracy and resource usage for monitoring in IP
networks. Monitoring in SDN also needs to make a trade-off between resource overhead and measurement accuracy, as discussed by the authors in [18]. Variable rate adaptive sampling techniques have been proposed in different contexts to improve resource consumption while providing satisfactory levels of accuracy of the collected data. Variable rate sampling techniques that save resources while achieving a higher accuracy rate have been extensively discussed in the literature in the context of sensor networks [12], [9], [16], [14], [7], [22]. The main focus of these sampling techniques has been to effectively collect data using the sensors while trying to minimize the sensors' energy consumption, which is often a scarce resource. Adaptive sampling techniques have also been studied in the context of traditional IP networks [11], [8]. However, to the best of our knowledge, adaptive sampling for monitoring SDN has not been explored yet.
III. SYSTEM DESCRIPTION
A. PayLess Architecture
Fig. 1 shows the software stack for a typical SDN setup
along with our monitoring framework. OpenFlow controllers
(e.g., NOX [10], POX [5], Floodlight [2], etc.) provide a
platform to write custom network applications that are oblivious to the complexity and heterogeneity of the underlying
network. An OpenFlow controller provides a programming
interface, usually referred to as the Northbound API, to
the network applications. Network applications can obtain
an abstract view of the network through this API. It also
provides interfaces for controlling traffic flows and collecting
statistics at different aggregation levels (e.g., flow, packet,
port, etc.). The required statistics collection granularity varies
from application to application. Some applications require per
flow statistics, while for others, aggregate statistics is required.
For example, an ISP’s user billing application would expect
to get usage data for all traffic passing through the user's
home router. Unfortunately, neither the OpenFlow API nor
the available controller implementations (e.g., NOX, POX
and Floodlight) support these aggregation levels. Moreover,
augmenting a controller’s implementation with monitoring
functionality will greatly increase design complexity. Hence,
a separate layer for abstracting monitoring complexity from
the network applications and the controller implementation is
required.
To this end, we propose PayLess: a low-cost efficient
network statistics collection framework. PayLess is built on
top of an OpenFlow controller’s northbound API and provides
a high-level RESTful API. The monitoring framework takes
care of the translation of high level monitoring requirements
expressed by the applications. It also hides the details of
statistics collection and storage management. The network
monitoring applications, built on top of this framework, will
use the RESTful API provided by PayLess and will remain
shielded from the underlying low-level details.
Fig. 1. SDN software stack. Network applications (L2/L3/L4 forwarding, firewall, and monitoring apps such as an intrusion detection system, link usage monitor, user billing, and differentiated QoS management) are built on an app development framework; the PayLess monitoring framework sits between these applications (via the PayLess RESTful API) and the control plane (Floodlight / NOX / POX, etc.), which in turn speaks the OpenFlow protocol to the OpenFlow-enabled switch network.

We elaborate the monitoring framework (PayLess) portion of Fig. 1 and show its components in Fig. 2. These components are explained in detail below:
• Request Interpreter: This component is responsible for translating the high-level primitives expressed by the applications into flow-level primitives. For example, a user billing application may request the usage of a user by specifying the user's identity (e.g., email address or registration number). This component interacts with the other modules to translate such high-level identifiers into network-level primitives.
• Scheduler: The scheduler component schedules the polling of switches in the network for gathering statistics. OpenFlow-enabled switches can provide per-flow statistics, per-queue statistics, as well as per-port aggregate statistics. The scheduler determines which type of statistics to poll, based on the nature of the request it received from an application. The time stamps of polling are determined by a scheduling algorithm. In the next section, we describe a statistics collection scheduling algorithm for our framework. However, the scheduler is well isolated from the other components in our framework: one can develop a customized scheduling algorithm for statistics collection and seamlessly integrate it within the PayLess framework.
• Switch Selector: When a statistics collection event is scheduled, we have to identify and select one (or more) switches to collect the statistics from. This component determines the set of switches to poll for obtaining the required statistics at the scheduled time stamps. For example, to collect statistics about a flow, it is sufficient to query the ingress switch only; the statistics at the intermediate switches can then be determined by simple calculations. The authors in [21] have discussed a number of heuristics for switch selection in the context of traffic matrix calculation in SDN.
• Aggregator & Data Store: This module is responsible for collecting raw data from the selected switches and storing these raw data in the data store. It also aggregates the collected raw data to compute monitoring information at the requested aggregation levels. The data store is an abstraction of a persistent storage system; it can range from regular files to relational databases to key-value stores.

Fig. 2. PayLess Network Monitoring Framework: the Request Interpreter, Scheduler, Switch Selector, Aggregator and Data Store components, exposed to network monitoring applications (written in any programming language) through the PayLess API and built on top of the controller's Northbound API.
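To make this division of responsibilities more concrete, the sketch below shows one possible way the four components could be wired together. It is purely illustrative and is not the actual PayLess implementation; all class names, method signatures and the FlowLevelRequest structure are our own assumptions.

# Illustrative sketch only: one possible decomposition of the PayLess components
# described above. Names and signatures are assumptions, not the real code.
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple


@dataclass
class FlowLevelRequest:
    """A high-level MonitoringRequest translated into flow-level terms (hypothetical)."""
    metrics: list            # e.g. [{"performance": ["throughput", "packet-drop"]}]
    entity: list             # e.g. ["user", ""]
    aggregation_level: str   # e.g. "user", "flow", "port"


class RequestInterpreter:
    def translate(self, request: dict) -> FlowLevelRequest:
        # Map high-level identifiers (e.g. a user's email) onto network-level primitives.
        body = request["MonitoringRequest"]
        return FlowLevelRequest(metrics=body.get("Metrics", []),
                                entity=body.get("Entity", [""]),
                                aggregation_level=body.get("AggregationLevel", ["flow"])[0])


class Scheduler:
    def next_poll(self, req: FlowLevelRequest) -> Tuple[float, str]:
        # Decide when to poll next and which kind of statistics (flow/queue/port) to request.
        return 0.5, "flow"                      # e.g. poll per-flow statistics in 0.5 s


class SwitchSelector:
    def select(self, flow_paths: Dict[str, List[str]]) -> Set[str]:
        # Querying only the ingress switch of each flow is sufficient for per-flow counters.
        return {path[0] for path in flow_paths.values() if path}


class AggregatorDataStore:
    def __init__(self):
        self.records: List[dict] = []           # stand-in for files / RDBMS / key-value store

    def store(self, raw: dict) -> None:
        self.records.append(raw)

    def aggregate(self, level: str) -> dict:
        # Roll raw byte counts up to the requested aggregation level.
        return {level: sum(r.get("byte_count", 0) for r in self.records)}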
B. Monitoring API
PayLess provides a RESTful API for rapid development of network monitoring applications. The API can be accessed from any programming language. A network application
can express high level primitives in its own context to
be monitored and get the collected data from the PayLess
data store at different aggregation levels. Here, we provide
a few examples to illustrate how network applications can
access this API. Every network application needs to create a
MonitoringRequest (Fig. 3) object and register it with
PayLess. The MonitoringRequest object contains the
following information about a monitoring task:
{"MonitoringRequest": {
"Type": "["performance" | "security" | "failure" | ... ]",
"Metrics": [
{"performance": ["latency", "jitter", "throughput", "packet-drop", ...]},
{"security": ["IDS-alerts", "ACL-violations", "Firewall-alerts", ...]},
{"failure": ["MTBF", "MTTR"]}
],
"Entity": [""],
"AggregationLevel": ["flow" | "table" | "port" | "switch" | "user" | "custom": "uri_to_script"],
"Priority": ["real-time", "medium", "low", custom: "monitoring-frequency"],
"Monitor" : ["direct", "adaptive", "random-sampling", "optimized", "custom": "uri_to_script"],
"Logging": ["default", "custom": ""]
}}
Fig. 3. MonitoringRequest object
• Type: the network application needs to specify what type of metrics it wants to be monitored, e.g., performance, security, fault-tolerance, etc.
• Metrics: for each selected monitoring type, the network application needs to provide the metrics that should be monitored and logged. Performance metrics may include delay, latency, jitter, throughput, etc. For security monitoring, metrics may include IDS alerts, firewall alerts, ACL violations, etc. for a specific switch, port, or user. Failure metrics can be mean-time-between-failures (MTBF) or mean-time-to-repair (MTTR) for a switch, link, or flow table.
• Entity: this optional parameter specifies the network entities that need to be monitored. In PayLess, network users, switches, switch ports, flow tables, traffic flows, etc. can be uniquely identified and monitored. The network application needs to specify which entities it wants to monitor.
• Aggregation Level: network applications must specify the aggregation level (e.g., flow, port, user, switch, etc.) for statistics collection. PayLess provides a set of predefined aggregation levels (Fig. 3), as well as the option to provide a script that specifies custom aggregation levels.
• Priority: PayLess provides the option to set priority levels for each metric to be monitored. We have three pre-defined priority levels: real-time, medium, and low. Alternatively, an application can specify a custom polling frequency. The PayLess framework is responsible for selecting the appropriate polling frequencies for the pre-defined priorities.
• Monitor: this parameter specifies the monitoring method, for example, direct, adaptive, random-sampling, or optimized. The default monitoring method is optimized, in which case the PayLess framework selects the appropriate monitoring method to balance accuracy, timeliness, and network overhead. Apart from the predefined sampling methods, an application may provide a link to a customized monitoring method.
• Logging: a network application can optionally provide a LogFormat object to the framework to customize the output format. If no such object is provided, PayLess writes the logs in its default format.
The MonitoringRequest object is specified using
JSON. Attributes of this object along with some possible
values are shown in Fig. 3. A network application registers
a MonitoringRequest object through PayLess’s RESTful
API. After the registration is successful, PayLess provisions
monitoring resources for capturing the requested statistics and
places them in the data store. In response to a monitoring request, PayLess returns a data access-id to the network
application. The network application uses this access-id to
retrieve collected data from the data store.
For example, an ISP's network application for user billing may specify the MonitoringRequest object as shown in Fig. 4. Here, the application wants to monitor the performance metrics throughput and packet-drop for particular users with medium priority, using the direct monitoring technique, and to log the collected data in PayLess's default format.
{"MonitoringRequest": {
"Type": "["performance"]",
"Metrics": [
{"performance": [
"throughput",
"packet-drop",
]},
],
"Entity": ["user": ""],
"AggregationLevel": ["user"],
"Priority": ["medium"],
"Monitor" : ["direct"],
"Logging": ["default"]
}}
Fig. 4. MonitoringRequest for user billing application
Another example is a real-time media streaming service that needs to provide differentiated QoS to its users. This application needs flow-level, real-time monitoring data to make optimal routing decisions. A possible MonitoringRequest object for this case is shown in Fig. 5.
PayLess also provides API functions for listing, updating, and deleting MonitoringRequest objects. Table I lists a few example API URIs and their parameters for illustration purposes. The first four URIs provide the basic CRUD functionality for the MonitoringRequest object; the fifth URI is used for accessing collected data from the data store.
TABLE I
PAYLESS RESTFUL API

RESTful API URI                              Parameter(s)
/payless/object/monitor_request/register    data=
/payless/object/monitor_request/update      id=&data=
/payless/object/monitor_request/list        id=
/payless/object/monitor_request/delete      id=
/payless/log/retrieve                       access-id=
{"MonitoringRequest": {
"Type": "["performance"]",
"Metrics": [
{"performance": [
"throughput",
"latency",
"jitter",
"packet-drop",
]},
],
"Entity": ["flow": ""],
"AggregationLevel": ["flow"],
"Priority": ["real-time"],
"Monitor" : ["adaptive"],
"Logging": ["default"]
}}
Fig. 5. MonitoringRequest for differentiated QoS
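As an illustration of how the API of Table I and the request of Fig. 5 might be used together, the following sketch registers the differentiated-QoS request and later retrieves the collected data. Only the URI paths come from Table I; the host, port, payload encoding and the access-id response field are assumptions made for this sketch.

import json
import requests  # third-party HTTP client

BASE = "http://127.0.0.1:8080"  # hypothetical address of the PayLess REST endpoint

qos_request = {
    "MonitoringRequest": {
        "Type": ["performance"],
        "Metrics": [{"performance": ["throughput", "latency", "jitter", "packet-drop"]}],
        "Entity": [{"flow": ""}],          # Fig. 5 notation rendered as a JSON object here
        "AggregationLevel": ["flow"],
        "Priority": ["real-time"],
        "Monitor": ["adaptive"],
        "Logging": ["default"],
    }
}

# Register the monitoring request (first URI in Table I), passing it as the data= parameter.
resp = requests.post(f"{BASE}/payless/object/monitor_request/register",
                     params={"data": json.dumps(qos_request)})
access_id = resp.json().get("access-id")   # response field name is an assumption

# Later: retrieve the collected statistics with the returned access-id (fifth URI in Table I).
log = requests.get(f"{BASE}/payless/log/retrieve", params={"access-id": access_id})
print(log.json())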
IV. AN ADAPTIVE MONITORING METHOD
In this section, we present an adaptive monitoring algorithm
that can be used to monitor network resources. Our goal is to
achieve accurate and timely statistics, while incurring little
network overhead. We assume that the underlying switch to
controller communication is performed using the OpenFlow
protocol. Therefore, before diving into the details of the
algorithm, we present a brief overview of the OpenFlow
messages that are used in our framework.
OpenFlow identifies a flow using fields obtained from the layer 2, layer 3 and layer 4 headers of a packet. When a switch receives a packet that does not match any rule in its forwarding table, it sends a PacketIn message to the controller. The controller installs the necessary forwarding rules in
the switches by sending a FlowMod message. The controller
can specify an idle timeout for a forwarding rule. This refers
to the inactivity period, after which a forwarding rule (and
eventually the associated flow) is evicted from the switch.
When a flow is evicted the switch sends a FlowRemoved
message to the controller. This message contains the duration
of the flow as well as the number of bytes matching this
flow entry in the switch. Flowsense [23] proposes to monitor link utilization at zero cost by tracking only the PacketIn and FlowRemoved messages. However, this method has a large average delay between consecutive statistics retrievals, and it does not perform well in capturing traffic spikes.
In addition to these messages, the controller can send a
FlowStatisticsRequest message to the switch to query
about a specific flow. The switch sends the duration and byte
count for that flow in a FlowStatisticsReply message
to the controller.
An obvious approach to collecting flow statistics is to poll the switches periodically, at a constant interval, by sending FlowStatisticsRequest messages. A high polling frequency (i.e., a small polling interval) will generate highly accurate statistics. However, this will induce significant
monitoring overhead in the network. To strike a balance
between statistics collection accuracy and incurred network
overhead, we propose a variable frequency flow statistics
collection algorithm.
We propose that when the controller receives a PacketIn
message, it will add a new flow entry to an active
flow table along with an initial statistics collection timeout, τ milliseconds. If the flow expires within τ milliseconds, the controller will receive its statistics in a
FlowRemoved message. Otherwise, in response to the timeout event (i.e., after τ milliseconds), the controller will send a
FlowStatisticsRequest message to the corresponding
switch to collect statistics about that flow. If the collected
data for that flow does not significantly change within this
time period, i.e., the difference between the previous and
current byte count against that flow is not above a threshold,
say ∆1 , the timeout for that flow is multiplied by a small
constant, say α. For a flow with low packet rate, this process
may be repeated until a maximum timeout value of Tmax
is reached. On the other hand, if the difference in the old
and new data becomes larger than another threshold ∆2 , the
scheduling timeout of that flow is divided by another constant
β. For a heavy flow, this process may be repeated until a
minimum timeout value of Tmin is reached. The rationale
behind this timeout adjustment is that we maintain a higher
polling frequency for flows that significantly contribute to
link utilization, and we maintain a lower polling frequency
for flows that do not significantly contribute towards link
utilization at that moment. If their contribution increases,
the scheduling timeout will adjust according to the proposed
algorithm to adapt the polling frequency with the increase in
traffic.
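As a concrete illustration (using the parameter values adopted later in Section VI-A: Tmin = 500 ms, Tmax = 5 s, α = 2 and β = 6), a flow whose byte count changes by less than ∆1 between successive polls would be polled after 500 ms, 1 s, 2 s, 4 s and then every 5 s (capped at Tmax), whereas a flow whose byte count jumps by more than ∆2 while being polled every 5 s would next be polled after max(5 s / 6, 500 ms) ≈ 833 ms.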
We optimize this algorithm further by batching FlowStatisticsRequest messages together for flows with the same timeout. This reduces the spread of monitoring traffic in the network without affecting the effectiveness of polling with a variable frequency. The pseudocode of this algorithm is shown in Algorithm 1.
Algorithm 1 FlowStatisticsCollectionScheduling(Event e)
globals: active_flows        // currently active flows
         schedule_table      // associative table of active flows, indexed by polling timeout
         U                   // utilization statistics; output of this algorithm
if e is an Initialization event then
    active_flows ← ∅, schedule_table ← ∅, U ← ∅
end if
if e is a PacketIn event then
    f ← ⟨e.switch, e.port, Tmin, 0⟩
    schedule_table[Tmin] ← schedule_table[Tmin] ∪ {f}
else if e is a timeout τ in schedule_table then
    for all flows f ∈ schedule_table[τ] do
        send a FlowStatisticsRequest to f.switch
    end for
else if e is a FlowStatisticsReply event for flow f then
    diff_byte_count ← e.byte_count − f.byte_count
    diff_duration ← e.duration − f.duration
    checkpoint ← current time stamp
    U[f.port][f.switch][checkpoint] ← ⟨diff_byte_count, diff_duration⟩
    if diff_byte_count < ∆1 then
        f.τ ← min(f.τ · α, Tmax)
        move f to schedule_table[f.τ]
    else if diff_byte_count > ∆2 then
        f.τ ← max(f.τ / β, Tmin)
        move f to schedule_table[f.τ]
    end if
end if
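For readers who prefer executable code, the following self-contained Python sketch mirrors Algorithm 1. It is not the Floodlight module described in Section V: switches and OpenFlow messages are modeled abstractly, sending a FlowStatisticsRequest is represented by a callback, and the event loop that fires the timeouts is left out.

from collections import defaultdict
from dataclasses import dataclass

# Parameters of the adaptive scheduler (values from Section VI-A).
T_MIN, T_MAX = 500, 5000              # minimum / maximum polling timeout, in ms
ALPHA, BETA = 2, 6                    # timeout growth / shrink factors
DELTA_1 = DELTA_2 = 100 * 2**20       # byte-count change thresholds (100 MB in the evaluation)


@dataclass(eq=False)                  # identity-based hashing so flows can live in sets
class Flow:
    switch: str
    port: int
    timeout: int = T_MIN              # current polling timeout tau (ms)
    byte_count: int = 0               # byte count at the previous checkpoint
    duration: int = 0                 # flow duration at the previous checkpoint (ms)


class AdaptiveScheduler:
    def __init__(self, send_stats_request):
        self.send_stats_request = send_stats_request   # callback emulating FlowStatisticsRequest
        self.schedule_table = defaultdict(set)          # timeout (ms) -> set of active flows
        self.utilization = defaultdict(dict)            # (switch, port) -> {checkpoint: (bytes, ms)}

    def on_packet_in(self, switch, port):
        # A new flow enters the network: start polling it at the highest frequency.
        self.schedule_table[T_MIN].add(Flow(switch, port))

    def on_timeout(self, tau):
        # Poll every flow whose bucket timed out; requests with the same timeout are batched.
        for flow in self.schedule_table[tau]:
            self.send_stats_request(flow.switch, flow)

    def on_stats_reply(self, flow, byte_count, duration, now):
        diff_bytes = byte_count - flow.byte_count
        diff_duration = duration - flow.duration
        self.utilization[(flow.switch, flow.port)][now] = (diff_bytes, diff_duration)
        flow.byte_count, flow.duration = byte_count, duration   # remember counters for next diff

        old_tau = flow.timeout
        if diff_bytes < DELTA_1:                       # quiet flow: poll less often
            flow.timeout = min(flow.timeout * ALPHA, T_MAX)
        elif diff_bytes > DELTA_2:                     # heavy flow: poll more often
            flow.timeout = max(flow.timeout // BETA, T_MIN)
        if flow.timeout != old_tau:                    # move the flow to its new bucket
            self.schedule_table[old_tau].discard(flow)
            self.schedule_table[flow.timeout].add(flow)

A worker per bucket would invoke on_timeout(tau) every tau milliseconds, mirroring the thread-per-bucket design described in Section V.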
V. IMPLEMENTATION: LINK UTILIZATION MONITORING
As a concrete use case of our proposed framework and
the monitoring algorithm, we have implemented a prototype
link utilization monitoring application on the Floodlight controller
platform. We have chosen Floodlight as the controller platform
for its highly modular design and the rich set of APIs to
perform operations on the underlying OpenFlow network. The
source code of the implementation is available on GitHub [4].
It is worth mentioning that our prototype implementation is
intended to perform experiments and to show the effectiveness
of our algorithm. Hence, we have made the following simplifying assumption about flow identification and matching without
any loss of generality. Since we are monitoring link utilization,
it is sufficient for us to identify the flows by their source and
destination IP addresses. We performed the experiments using
iperf [3] in UDP mode. The underlying network also had some
DHCP traffic, which also uses UDP. We filtered out the DHCP
traffic while adding the flows to active flow table by looking
at the destination UDP port numbers.2 It is worth noting that not all the components of our proposed monitoring framework are in place yet. Therefore, we resorted to implementing the link utilization monitoring application as a Floodlight module.
2 DHCP uses destination ports 67 and 68 for DHCP requests and replies, respectively.
We intercepted the PacketIn and FlowRemoved messages to keep track of flow installations and removals from
the switches, respectively. We also maintained a hash table
indexed by the schedule timeout value. Each bucket with timeout τ contains a list of active flows that need to be polled every τ milliseconds. Each bucket in the hash table is also assigned a worker thread that wakes up every τ milliseconds and sends a FlowStatisticsRequest message to the switches corresponding to the flows in its bucket. The FlowStatisticsReply messages are received asynchronously by the monitoring module, which creates a measurement checkpoint for each reply message. The contribution of a flow is calculated by dividing its differential byte count from the previous checkpoint by the differential time duration from the previous checkpoint. The monitoring module examines the measurement checkpoints of the corresponding link and updates the utilization at previous checkpoints if necessary. The active flow entries are moved between hash table buckets with lower or higher timeout values depending on the change in byte count since the previous measurement checkpoint.
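The utilization of a link at a checkpoint is then simply the sum of the contributions of the flows crossing it. A minimal helper illustrating that calculation (our own sketch, assuming checkpoints are stored as (differential byte count, differential duration) pairs) is:

def link_utilization_mbps(checkpoints):
    """checkpoints: list of (diff_byte_count, diff_duration_ms) pairs for flows on one link."""
    total = 0.0
    for diff_bytes, diff_ms in checkpoints:
        if diff_ms > 0:
            total += (diff_bytes * 8) / (diff_ms / 1000.0) / 1e6   # bytes over ms -> Mbps
    return total

# Example: two flows contributing 1.25 MB and 2.5 MB over the last 1000 ms -> 10 + 20 = 30 Mbps.
print(link_utilization_mbps([(1_250_000, 1000), (2_500_000, 1000)]))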
Currently, we have a basic REST API, which provides an
interface to get the link statistics (in JSON format) of all
the links in the network. However, our future objective is
to provide a REST API for allowing external applications
to register a particular flow for monitoring and obtaining the
statistics.
Although the current implementation makes some assumptions about flow identification and matching, this does not reduce the generality of our proposed algorithm. Our long-term goal is to have a fully functional implementation of the PayLess framework for efficient flow statistics collection. Developing network monitoring applications will be greatly simplified by the statistics exposed by our framework. It is worth mentioning that the proposed scheduling algorithm lies at the core of the scheduler component of this framework, and no assumptions about the algorithm's implementation were made in this prototype; the only assumptions made here correspond to the implementation of the link utilization monitoring application that uses our framework.
VI. EVALUATION
In this section, we present the performance of a demo
application for monitoring link utilization. This application is
developed using the PayLess framework. We have also implemented Flowsense and compared it to PayLess, since both
target the same use case. We have also implemented a baseline
scenario, where the controller periodically polls the switches
at a constant interval to gather link utilization information.
We have used Mininet to emulate a network consisting of hosts and OpenFlow switches. Details of the experimental setup are provided in Section VI-A. Section VI-B explains the evaluation metrics. Finally, the results are presented in
Section VI-C.
Fig. 6. Timing diagram of experiment traffic: UDP flows (h1,h8) at 10 Mbps, (h2,h7) at 20 Mbps and 50 Mbps, and (h3,h6) at 20 Mbps, starting and stopping at different times between T = 0 s and T = 60 s.
A. Experiment Setup
We have used a 3-level tree topology as shown in Fig. 7
for this evaluation. UDP flows for a total duration of 100s
between hosts were generated using iperf. Fig. 6 is the timing
diagram showing the start time, throughput and the end time
for each flow. We have set the idle timeout of the active flows in a switch to 5 s. We have also deliberately introduced pauses of different durations between the flows in the traffic to experiment with different scenarios. Pauses shorter than the idle (soft) timeout were placed between the 28th and 30th seconds, and also between the 33rd and 35th seconds, to observe how the proposed scheduling algorithm and Flowsense react to sudden traffic spikes. The minimum and maximum polling intervals for our scheduling algorithm were set to 500 ms and 5 s, respectively.
For the constant polling case, a polling interval of 1s was used.
The parameters ∆1 and ∆2 described in Section IV were set to
100MB. Finally, we have set α and β described in Section IV
to 2 and 6, respectively. β was set to a higher value to quickly
react and adapt to any change in traffic.
Fig. 7. Topology for experiment: a 3-level tree of OpenFlow switches Sw-0 through Sw-6 connecting hosts h1 through h8.

B. Evaluation Metrics
Utilization: Link utilization is measured as the instantaneous throughput obtained on a link, in units of Mbps. We report the utilization of the link between switches Sw-0 and Sw-1 (Fig. 7). According to the traffic mix, this link is part of all the flows and is the most heavily used; it also exhibits a good amount of variation in utilization. We also experiment with different values of the minimum polling interval (Tmin) and show its effect on the trade-off between accuracy and monitoring overhead.
Overhead: We compute overhead in terms of the number of FlowStatisticsRequest messages sent from the controller. We compute the overhead at timeout-expiration events, when a number of flows with the same timeout are queried for statistics.

C. Results

Fig. 8. Utilization measurement: utilization (Mbps) of the Sw-0 to Sw-1 link over time, as measured by Flowsense, PayLess and periodic polling.
1) Utilization: Fig. 8 shows the utilization of the Sw-0 to Sw-1 link over the simulation time, measured using the three different techniques. The baseline scenario, i.e., periodic polling, resembles the actual traffic of Fig. 6 most closely. Flowsense fails to capture the traffic spikes because of the large granularity of its measurements, and the traffic pauses shorter than the soft timeout value cause Flowsense to report less than the actual utilization. In contrast, our proposed algorithm follows the utilization pattern obtained from periodic polling very closely. Although it did not fully capture the first spike in the traffic, it quickly adjusted itself and successfully captured the next traffic spike.
2) Overhead: Fig. 9 shows the messaging overhead of the baseline scenario and of our proposed algorithm. Since Flowsense does not send FlowStatisticsRequest messages, it has zero messaging overhead and is therefore not shown in the figure. The fixed polling method polls all the active flows after the fixed timeout expires. This causes a large number of messages to be injected into the network at the query time. In contrast, our proposed algorithm reduces this spike of messages by assigning different timeouts to flows and spreading the messages over time. It is also evident from Fig. 9 that our algorithm has more query points across the timeline, but at each query point it sends out fewer messages to obtain statistics about flows. In some cases, our algorithm sends out 50% fewer messages than the periodic polling method.
Although Flowsense has zero measurement overhead, it is much less accurate than our adaptive scheduling algorithm. In addition, the monitoring traffic incurred by PayLess is very low: only 6.6 messages per second on average, compared to 13.5 messages per second on average for periodic polling. In summary, the proposed flow statistics scheduling algorithm achieves an accuracy close to that of constant periodic polling, while incurring a reduced messaging overhead.

Fig. 9. Messaging overhead: monitoring overhead (OpenFlow messages) over time for PayLess and periodic polling.

Fig. 10. Effect of Tmin on measured utilization: link utilization (Mbps) over time for Tmin = 250 ms, 500 ms and 1000 ms, compared with the actual utilization.

Fig. 11. Overhead and measurement error: number of messages per minute and root-mean-square (RMS) measurement error for Tmin = 250, 500, 1000 and 2000 ms.

3) Effect of Minimum Polling Interval, Tmin: As explained in Algorithm 1, our scheduling algorithm adapts to the traffic pattern. For a rapidly changing traffic spike, the polling interval sharply decreases and reaches Tmin. In Fig. 10, we present the impact of Tmin on monitoring accuracy. Evidently, the monitoring data is very accurate for Tmin = 250 ms and gradually degrades for higher values of Tmin. However, monitoring accuracy comes at the cost of network overhead, as presented in Fig. 11. This figure shows the root-mean-square (RMS) error in monitoring accuracy alongside the messaging overhead for different values of Tmin. This parameter can be adjusted to trade off accuracy against messaging overhead, depending on the application requirements.
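For reference, the RMS error plotted in Fig. 11 can be written as below; the paper does not spell out the formula, so the notation (N checkpoints, measured and actual utilization U) is our assumption:

\[ \text{RMS error} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(U_{\text{measured}}(t_i) - U_{\text{actual}}(t_i)\right)^{2}} \]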
VII. CONCLUSION AND FUTURE WORK
In this paper, we have introduced PayLess, a flexible and extensible monitoring framework for SDN. To the best of our knowledge, PayLess is the only monitoring framework of this kind for SDN. Almost every aspect of monitoring can be specified using PayLess's generic RESTful API. Moreover, the core
components in PayLess framework can be replaced by custom
implementations without affecting the other components. To
demonstrate the effectiveness of PayLess framework, we have
presented an adaptive scheduling algorithm for flow statistics
collection. We implemented a concrete use case of monitoring
link utilization using the proposed algorithm. We have evaluated and compared its performance with that of Flowsense
and a periodic polling method. We found that the proposed algorithm can achieve higher accuracy of statistics collection than Flowsense, while the incurred messaging overhead is only 50% of the overhead of an equivalent periodic polling strategy. Our long-term goal for this work is to provide an open-source, community-driven monitoring framework for SDN.
This should provide a full-fledged abstraction layer on top
of the SDN control platform for seamless network monitoring
application development.
REFERENCES
[1] Cisco NetFlow site reference. http://www.cisco.com/en/US/products/ps6601/
products white paper0900aecd80406232.shtml.
[2] Floodlight openflow controller. http://www.projectfloodlight.org/floodlight/.
[3] Iperf: TCP/UDP Bandwidth Measurement Tool. http://iperf.fr/.
[4] “Payless” source code. http://github.com/srcvirus/floodlight.
[5] POX OpenFlow Controller. https://github.com/noxrepo/pox.
[6] Traffic Monitoring using sFlow. http://www.sflow.org/.
[7] C. Alippi, G. Anastasi, M. Di Francesco, and M. Roveri. An adaptive
sampling algorithm for effective energy management in wireless sensor
networks with energy-hungry sensors. Instrumentation and Measurement, IEEE Transactions on, 59(2):335–344, 2010.
[8] G. Androulidakis, V. Chatzigiannakis, and S. Papavassiliou. Network
anomaly detection and classification via opportunistic sampling. Network, IEEE, 23(1):6–12, 2009.
[9] B. Gedik, L. Liu, and P. Yu. Asap: An adaptive sampling approach to
data collection in sensor networks. Parallel and Distributed Systems,
IEEE Transactions on, 18(12):1766–1783, 2007.
[10] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker. NOX: Towards an operating system for networks. SIGCOMM Comput. Commun. Rev., 38(3):105–110, 2008.
[11] E. Hernandez, M. Chidester, and A. George. Adaptive sampling for
network management. Journal of Network and Systems Management,
9(4):409–434, 2001.
[12] A. Jain and E. Y. Chang. Adaptive sampling for sensor networks. In Proceedings of the 1st international workshop on Data management for sensor networks: in conjunction with VLDB 2004, pages 10–16. ACM, 2004.
[13] L. Jose, M. Yu, and J. Rexford. Online measurement of large traffic
aggregates on commodity switches. In Proc. of the USENIX HotICE
workshop, 2011.
[14] J. Kho, A. Rogers, and N. R. Jennings. Decentralized control of adaptive
sampling in wireless sensor networks. ACM Transactions on Sensor
Networks (TOSN), 5(3):19, 2009.
[15] H. Kim and N. Feamster. Improving network management with software
defined networking. Communications Magazine, IEEE, 51(2):114–119,
2013.
[16] A. D. Marbini and L. E. Sacks. Adaptive sampling mechanisms in sensor
networks. In London Communications Symposium, 2003.
[17] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner. Openflow: enabling innovation
in campus networks. SIGCOMM Comput. Commun. Rev., 38(2):69–74,
2008.
[18] M. Moshref, M. Yu, and R. Govindan. Resource/Accuracy Tradeoffs
in Software-Defined Measurement. In Proceedings of HotSDN 2013,
August 2013. to appear.
[19] A. C. Myers. JFlow: Practical mostly-static information flow control.
In Proceedings of the 26th ACM SIGPLAN-SIGACT symposium on
Principles of programming languages, pages 228–241. ACM, 1999.
[20] Cisco Systems. Cisco CNS NetFlow Collection Engine. http://www.cisco.com/en/US/products/sw/netmgtsw/ps1964/index.html.
[21] A. Tootoonchian, M. Ghobadi, and Y. Ganjali. OpenTM: traffic matrix
estimator for OpenFlow networks. In Passive and Active Measurement,
pages 201–210. Springer, 2010.
[22] R. Willett, A. Martin, and R. Nowak. Backcasting: adaptive sampling
for sensor networks. In Information Processing in Sensor Networks,
2004. IPSN 2004. Third International Symposium on, pages 124–133,
2004.
[23] C. Yu, C. Lumezanu, Y. Zhang, V. Singh, G. Jiang, and H. V.
Madhyastha. FlowSense: Monitoring Network Utilization with Zero
Measurement Cost. In Passive and Active Measurement, pages 31–41.
Springer, 2013.
FlowCover: Low-cost Flow Monitoring Scheme in
Software Defined Networks
Zhiyang Su, Ting Wang, Yu Xia, Mounir Hamdi
Hong Kong University of Science and Technology
{zsuab, twangah, rainsia, hamdi}@cse.ust.hk
Abstract—Network monitoring and measurement are crucial
in network management to facilitate quality of service routing
and performance evaluation. Software Defined Networking (SDN)
makes network management easier by separating the control
plane and data plane. Network monitoring in SDN is lightweight as operators only need to install a monitoring module into
the controller. Active monitoring techniques usually introduce too much overhead into the network. State-of-the-art approaches utilize sampling methods, flow statistics aggregation and passive measurement techniques to reduce this overhead. However, little work in the literature has focused on reducing the communication cost of network monitoring. Moreover, most of the existing
approaches select the polling switch nodes by sub-optimal local
heuristics. Inspired by the visibility and central control of SDN,
we propose FlowCover, a low-cost high-accuracy monitoring
scheme to support various network management tasks. We
leverage the global view of the network topology and active
flows to minimize the communication cost by formulating the
problem as a weighted set cover, which is proved to be NP-hard. Heuristics are presented to obtain the polling scheme
efficiently and handle flow changes practically. We build a
simulator to evaluate the performance of FlowCover. Extensive
experiment results show that FlowCover reduces the communication cost by roughly 50% without loss of accuracy in most cases.
I. INTRODUCTION
Monitoring resource utilization is a common task in network
management. Recently, with the rapid development of software-defined networking (SDN), network management has become much easier. A typical SDN-based network consists of
many switches and a logically centralized controller which
monitors the whole network state and chooses routing paths.
The separation of the control plane and data plane makes it
possible to track the state of each flow in the control plane.
Low-cost, timely and accurate flow statistics collection is crucial for different management tasks such as traffic engineering,
accounting and intelligent routing.
There are two ways to measure the network performance:
active or passive techniques. Active measurement obtains the
network state by injecting probe packets into the network.
Active measurement is flexible, since the operator can choose exactly what to measure. It estimates the network performance by tracking how the probe packets are treated in the network. The accuracy is closely related to the probe frequency in general. However, the measurement packets disturb the network, especially when measurement traffic is sent at a high frequency. In
contrast to active measurement, passive measurement provides
detailed information about the nodes being measured. For
example, Simple Network Monitoring Protocol (SNMP) and
NetFlow [1] are widely used in network management. Passive
measurement imposes low or even zero overhead on the network; however, it requires full access to the network devices, such as routers and switches. Besides, full access to these devices raises privacy and security issues. As a result, these limitations impede the use of passive measurement in practice.
The flexibility of SDN yields both opportunities and challenges to monitor the network. Traditional network monitoring
techniques such as NetFlow [1] and sFlow [2] support various
kinds of measurement tasks, but the measurement and deployment costs are typically high. For example, deploying NetFlow involves setting up a collector, an analyzer and other services. In contrast, monitoring flow statistics in SDN is
relatively light-weight and easy to implement: the central controller maintains the global view of the network, and is able to
poll flow statistics from any switch at any time. Furthermore,
the boundary between active and passive measurement in SDN
is blurred. The controller proactively polls flow statistics and
learns active flows by passively receiving notifications from
the switches (ofp packet in and ofp flow removed message).
The challenge is that all the monitoring traffic has to be forwarded to the controller, which is likely to result in a bandwidth bottleneck. The situation becomes worse for in-band SDN deployments, where monitoring and routing traffic share bandwidth along the same links.
The existing pull-based measurement approaches such as
OpenTM [3] utilize many switch selection heuristics to gather
the flow statistics. OpenTM generates a single query for each source-destination pair to obtain the traffic matrix. If the number of active flows is large, the extra communication cost for each flow cannot be neglected. In order to reduce the monitoring overhead in SDN, FlowSense [4] was proposed to infer the network utilization by passively capturing and analyzing the flow arrival and expiration messages. However, FlowSense calculates the link utilization only at discrete points in time, after a flow expires. This limitation cannot meet real-time requirements, nor can the accuracy of the results be guaranteed. We argue that the existing approaches are sub-optimal, as they lack global optimization in choosing the polling switches. Moreover, how to reduce the bandwidth consumed by measurement traffic has not been well studied so far.
Fig. 1. Motivation example. There are six switches and five hosts in the network. Six active flows are plotted in different colors: f1: H1−H2; f2: H1−H3; f3: H1−H4; f4: H2−H4; f5: H2−H5; f6: H4−H5. Each switch only holds a partial view of all the flows; the partial view of each switch is given in the rectangles.

To address the aforementioned issues, we propose FlowCover, a low-cost, high-accuracy scheme that collects the flow
statistics across the network in a timely fashion. Our approach
significantly reduces the communication cost of monitoring by
aggregating the polling requests and replies. We leverage the
global view of SDN to optimize the monitoring strategies. The
polling scheme is dynamically changed with real-time traffic
across the network. To the best of our knowledge, this is the first work to formally and globally optimize the SDN monitoring problem.
The primary contributions of our approach are listed below:
• We provide a general framework to facilitate various
monitoring tasks such as link utilization, traffic matrix
estimation, anomaly detection, etc.
• We introduce a globally optimized flow statistics collection scheme. Our approach selects target switches based on the view of all active flows instead of on a per-flow basis.
• Extensive experimental results show that FlowCover reduces the monitoring overhead by roughly 50% without loss of accuracy in most cases.
The rest of this paper is structured as follows. Section II
illustrates the motivation of FlowCover by an example. Section III presents the architecture of FlowCover and formulates
the problem. Section IV evaluates the performance of FlowCover through simulation results. Finally, Section V summarizes
related work and Section VI concludes the paper.
II. MOTIVATION
OpenFlow [5] is an implementation of SDN. Currently,
OpenFlow-based SDN is widely used in both industry and
academia. OpenFlow is the de facto standard communication
interface between the control plane and the data plane. It is an application-layer protocol carried over TCP, so each OpenFlow message on the wire additionally carries an Ethernet header, an IP header and a TCP header. According to the OpenFlow specification 1.0 [6], the message body of an individual flow statistics request and reply message has a minimum length of 56 bytes and 108 bytes, respectively (with at least one flow entry in the reply). Therefore, the minimum lengths of a flow statistics request and reply message on the wire are 122 bytes and 174 bytes, respectively.
Fig. 2. The number of "polling all" switches vs. total communication cost, in a random graph with 100 switches and 20,000 active flows in the network.

Note that the request and reply messages are of almost the same length; hence, it is promising to design polling schemes that reduce the monitoring overhead, especially in
the scenarios with high polling frequency. The key insight
is that we aggregate the request and reply messages by
optimizing the selection of polling switches. The strategy is
to intelligently poll a small number of switches that cover a large fraction of the flows, so as to minimize the monitoring overhead. For simplicity, but without loss of generality, we consider out-of-band deployment of the control network in this paper. An example is shown in Figure 1 to illustrate the problem.
A naive approach to obtain the whole flow statistics is to
query one of the switches along the path for each flow and
merge the results. However, according to the aforementioned
analysis of the length of flow statistics request and reply messages, this strategy imposes too much overhead on the network, as it collects statistics on a per-flow basis and thus repeats
request and reply message headers. In order to poll all flow
statistics with minimum communication cost, we can design
a globally optimized strategy to reduce the request messages
and aggregate the reply messages.
OpenFlow specification [6] defines a match structure to
identify one flow entry or a group of flow entries. A match
structure consists of many fields to match against flows such
as input switch port, Ethernet source/destination address and
source/destination IP. However, it is impractical to select an
arbitrary number of flows with “segmented” fields due to the
limited expression of a single match structure. For instance,
consider the four flows passing through S3 and assume the sources and destinations of these flows are: f1: (H1, H2); f2: (H1, H3); f4: (H2, H4); f5: (H5, H2). Since H1, H2, H3 and H4 belong to different subnets, it is impossible to construct a single match structure that matches both f2 and f4 at the same
time. As a result, the polling method is either polling a single
flow entry with an exact match structure or polling all flow
entries from the switch. In this example, the optimal solution is to query S3 and S6, with a communication cost of Copt = 122 + 472 + 122 + 376 = 1092 bytes. Compared with the cost of the naive per-flow approach, Cper-flow = (122 + 174) · 6 = 1776 bytes, we save about 38.5% of the communication cost. The performance gain is even larger in practice, as the number of flows and the network scale are much larger than in this simple example.
In high-accuracy monitoring systems that require high polling
frequency, such optimization is of great importance to reduce
the monitoring overheads.
Polling as many flow statistics as possible from one switch is in effect an aggregation technique that saves communication cost. However, if this "polling all" strategy is employed excessively, it brings extra overhead due to repeatedly gathering the same flow statistics from different switches. To further explore the problem, we use a simple greedy algorithm that chooses the switches covering the largest number of uncovered flows to collect all the flow statistics. Figure 2 illustrates the trend of the total communication cost as the number of "polling all" switches varies from 0 to 80. The dashed line is the total communication cost of the per-flow method, for comparison. For the aggregation method, the cost falls steadily until the number of "polling all" switches reaches 30. After reaching the bottom, the total communication cost rises gradually until all the active flows have been covered. In summary, our target is to design an efficient model that generates a cost-effective polling scheme reaching the lowest point of the total communication cost.
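The "polling all" exploration above is essentially greedy set cover: repeatedly pick the switch whose flow table covers the most still-uncovered flows. The sketch below implements this greedy rule in its plain, unweighted form for illustration; FlowCover's actual formulation (Section III) is a weighted set cover that also accounts for message sizes, and the smaller switch views in the example below are hypothetical.

def greedy_polling_scheme(switch_flows):
    """switch_flows: dict mapping switch id -> set of flow ids visible at that switch.
    Returns an ordered list of switches to 'poll all' so that every flow is covered."""
    uncovered = set().union(*switch_flows.values())
    scheme = []
    while uncovered:
        # Pick the switch covering the largest number of still-uncovered flows.
        best = max(switch_flows, key=lambda s: len(switch_flows[s] & uncovered))
        gained = switch_flows[best] & uncovered
        if not gained:
            break                      # remaining flows are not visible at any switch
        scheme.append(best)
        uncovered -= gained
    return scheme

# Toy view using only the facts stated in the text: S3 sees f1, f2, f4, f5 and S6 sees f3, f5, f6
# (the views of S2 and S5 below are made up for the example).
example = {
    "S3": {"f1", "f2", "f4", "f5"},
    "S6": {"f3", "f5", "f6"},
    "S2": {"f1", "f5"},
    "S5": {"f3", "f4", "f6"},
}
print(greedy_polling_scheme(example))  # -> ['S3', 'S6'], matching the optimal scheme in the text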
III. SYSTEM DESIGN AND PROBLEM FORMULATION
In this section, we first give an overview of FlowCover
and describe its architecture. The problem formulation and its
corresponding heuristics are presented thereafter. A practical
algorithm to handle the flow changes is proposed as well.
A. Architecture
Basically, the monitoring task in SDN is accomplished by
the controller which is connected to all the switches via a secure channel. The secure channel is usually a TCP connection
between the controller and the switch. The controller collects
the real-time flow statistics from the corresponding switches,
and merges the raw data to provide interfaces for upper-layer
applications.
We elaborate the architecture of FlowCover in Figure 3.
In general, there are three layers in FlowCover: OpenFlow
Network Layer, FlowCover Core Layer and Monitoring Applications Layer. The OpenFlow Network Layer consists of
underlying low-level network devices and keeps connections
between the controller and the switches. The FlowCover Core
Layer is the heart of the monitoring framework. The flow
event handler receives the flow arrival/expiry messages from the switches and forwards them to the routing module and the flow
state tracker. While the routing module calculates the routing
path in terms of the policy defined by the administrator, the
flow state tracker maintains the active flows in the network
in real-time. The routing module and the flow state tracker
report the active flow sets and their corresponding routing
paths to the polling scheme optimizer respectively. Based on
the above information, the polling scheme optimizer computes
a cost-effective polling scheme and forwards it to the flow stat collector. The flow stat collector is responsible for polling the flow statistics from the switches and handling the replies. Finally, the flow stat aggregator gathers the raw flow statistics and provides interfaces for the upper monitoring applications. The Monitoring Applications Layer is a collection of upper-layer monitoring applications, such as link utilization monitoring, traffic matrix estimation and anomaly detection.
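As a rough illustration of the core-layer bookkeeping described above, the flow state tracker essentially maintains, for every active flow, the switches on its routing path; inverting that map yields exactly the per-switch flow views needed by the polling scheme optimizer. The class below is our own minimal sketch, not FlowCover's code:

class FlowStateTracker:
    """Keeps the set of active flows and the switches each flow traverses (illustrative only)."""

    def __init__(self):
        self.active = {}                      # flow id -> list of switch ids on its routing path

    def on_flow_arrival(self, flow_id, path):
        # Called by the flow event handler when a new flow is admitted; 'path' comes from routing.
        self.active[flow_id] = list(path)

    def on_flow_expire(self, flow_id):
        # Called when an ofp_flow_removed message is received for this flow.
        self.active.pop(flow_id, None)

    def switch_views(self):
        # Invert the mapping: switch id -> set of flows visible at that switch,
        # the input required by the polling scheme optimizer (e.g. the greedy cover above).
        views = {}
        for flow_id, path in self.active.items():
            for switch in path:
                views.setdefault(switch, set()).add(flow_id)
        return views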
Fig. 3. The FlowCover architecture: monitoring applications (link utilization, traffic matrix, anomaly detection, etc.) on top of the FlowCover core layer (flow stat aggregator, flow stat collector and related modules) in the controller.