Read research paper and give an overview

Anonymous
Asked: Oct 18th, 2018
$20

Question Description

Basically: the what, why, how, and results of the experiments.

What is the paper about? What is the purpose of the experiments? What was done? How was it done? What were the results?

Please use figures if necessary for explanation. Feel free to compare the experiments with each other; just remember to reference them.

One paper is also here: https://journals.plos.org/plosone/article?id=10.13...

See attachment for more papers.


Unformatted Attachment Preview

OpenNetMon: Network Monitoring in OpenFlow Software-Defined Networks

Niels L. M. van Adrichem, Christian Doerr and Fernando A. Kuipers
Network Architectures and Services, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands
{N.L.M.vanAdrichem, C.Doerr, F.A.Kuipers}@tudelft.nl

Abstract—We present OpenNetMon, an approach and open-source software implementation to monitor per-flow metrics, especially throughput, delay and packet loss, in OpenFlow networks. Currently, ISPs over-provision capacity in order to meet QoS demands from customers. Software-Defined Networking and OpenFlow allow for better network control and flexibility in the pursuit of operating networks as efficiently as possible. Where OpenFlow provides interfaces to implement fine-grained Traffic Engineering (TE), OpenNetMon provides the monitoring necessary to determine whether end-to-end QoS parameters are actually met, and delivers the input for TE approaches to compute appropriate paths. OpenNetMon polls edge switches, i.e. switches with flow end-points attached, at an adaptive rate that increases when flow rates differ between samples and decreases when flows stabilize, to minimize the number of queries. The adaptive rate reduces network and switch CPU overhead while optimizing measurement accuracy. We show that not only local links serving variable bit-rate video streams, but also aggregated WAN links, benefit from an adaptive polling rate to obtain accurate measurements. Furthermore, we verify throughput, delay and packet loss measurements for bursty scenarios in our experiment testbed.

I. INTRODUCTION

Recently, Software-Defined Networking (SDN) has attracted the interest of both research and industry. As SDN offers interfaces to implement fine-grained network management, monitoring and control, it is considered a key element to implement QoS and network optimization algorithms. As such, SDN has received a lot of attention from an academic perspective, enabling researchers to perform experiments that were previously difficult or too expensive to perform. Additionally, industry is already adopting vendor-independent network management protocols such as OpenFlow to configure and monitor their networks.

A key requirement for network management in order to reach QoS agreements and traffic engineering is accurate traffic monitoring. In the past decade, network monitoring has been an active field of research, particularly because it is difficult to retrieve online and accurate measurements in IP networks due to the large number and volume of traffic flows and the complexity of deploying a measurement infrastructure [1]. Many flow-based measurement techniques consume too many resources (bandwidth, CPU) due to the fine-grained monitoring demands, while other monitoring solutions require large investments in hardware deployment and configuration management. Instead, Internet Service Providers (ISPs) over-provision their network capacity to meet QoS constraints [2]. Nonetheless, over-provisioning conflicts with operating a network as efficiently as possible and does not facilitate fine-grained Traffic Engineering (TE). TE, in turn, needs granular real-time monitoring information to compute the most efficient routing decisions. Where recent SDN proposals - specifically OpenFlow [3] - introduce programming interfaces that enable controllers to execute fine-grained TE, no complete OpenFlow-based monitoring proposal has yet been implemented.
We claim that the absence of an online and accurate monitoring system prevents the development of the envisioned TE-capable OpenFlow controllers. Given that OpenFlow presents interfaces that enable controllers to query for statistics and inject packets into the network, we have designed and implemented such a granular monitoring system, capable of providing TE controllers with the online monitoring measurements they need.

In this paper we present OpenNetMon, a POX OpenFlow controller module enabling accurate monitoring of per-flow throughput, packet loss and delay metrics. OpenNetMon, which is published as open-source software at our GitHub repository [4], monitors online per-flow throughput, delay and packet loss in order to aid TE.

The remainder of this paper is structured as follows: In section II, we first discuss existing measuring methods and monitoring techniques used by ISPs. Section III summarizes OpenFlow and the specific options that our implementation uses, as well as previous work in the field of measuring traffic in OpenFlow networks. Our proposal OpenNetMon is presented in section IV and experimentally evaluated in section V. Section VI discusses implementation-specific details regarding the design of our network controller components. Finally, section VII concludes this paper.

II. MONITORING

Traditionally, many different monitoring techniques are used in computer networks. The main types of measurement methods these techniques rely on, and the trade-offs they bring, are discussed in the following two subsections. Every measurement technique has traditionally required a separate hardware installation or software configuration, making implementation a tedious and expensive task. However, OpenFlow provides the interfaces necessary to implement most of the discussed methods without the need for customization. Subsection II-C summarizes several techniques ISPs use to monitor their networks.

A. Active vs. passive methods

Network measurement methods are roughly divided into two groups: passive and active methods. Passive measurement methods measure network traffic by observation, without injecting additional traffic in the form of probe packets. The advantage of passive measurements is that they do not generate additional network overhead, and thus do not influence network performance. Unfortunately, passive measurements rely on installing in-network traffic monitors, which is not feasible for all networks and requires large investments.

Active measurements, on the other hand, inject additional packets into the network and monitor their behavior. For example, the popular application ping uses ICMP packets to reliably determine end-to-end connection status and compute a path's round-trip time.

Both active and passive measurement schemes are useful to monitor network traffic and to collect statistics, but one needs to decide carefully which type of measurement to use. On the one hand, active measurements introduce additional network load, which affects the network and therefore the accuracy of the measurements themselves. On the other hand, passive measurements require synchronization between observation beacons placed within the network, complicating the monitoring process. Subsection II-C discusses both passive and active measurement techniques that are often used by ISPs.
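To make the active-measurement idea concrete, below is a minimal Python sketch of a probe in the spirit of ping. It is a hedged illustration, not anything from the paper: it assumes a hypothetical UDP echo service on the target host (the real ping uses ICMP, which requires raw sockets and privileges), and the host, port and payload are illustrative.

import socket
import time

def probe_rtt(host, port=7, payload=b"probe", timeout=1.0):
    """Actively measure round-trip time to a (hypothetical) UDP echo service."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sent_at = time.monotonic()
        sock.sendto(payload, (host, port))
        sock.recvfrom(1024)                 # wait for the echoed probe
        return time.monotonic() - sent_at   # RTT in seconds
    except socket.timeout:
        return None                         # probe (or its echo) was lost
    finally:
        sock.close()

# A single probe is noisy, so tools typically average several samples:
# samples = [probe_rtt("192.0.2.1") for _ in range(10)]

Note that every probe adds traffic to the path being measured, which is precisely the overhead/accuracy trade-off of active methods discussed above.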
B. Application-layer and network-layer measurements

Network measurements are often performed on different OSI layers. Where measurements on the application layer are preferred to accurately measure application performance, ISPs often do not have access to end-user devices and therefore use network-layer measurements. Network-layer measurements use infrastructure components (such as routers and switches) to obtain statistics. This approach is not considered sufficient, as the measurement granularity is often limited to port-based counters: it lacks the ability to distinguish between different applications and traffic flows. In our proposal in section IV we use the fact that OpenFlow-enabled switches and routers keep per-flow statistics to determine end-to-end network performance.

C. Current measurement deployments

The Simple Network Management Protocol (SNMP) [5] is one of the most used protocols to monitor network status. Among others, SNMP can be used to request per-interface port counters and overall node statistics from a switch. Having been developed in 1988, it is implemented in most network devices. Monitoring using SNMP is achieved by regularly polling the switch, though switch efficiency may degrade with frequent polling due to CPU overhead. Although vendors are free to implement their own SNMP counters, most switches are limited to counters that aggregate traffic for the whole switch and each of its interfaces, preventing insight into the flow-level statistics necessary for fine-grained Traffic Engineering. Therefore, we do not consider SNMP to be suitable for flow-based monitoring.

NetFlow [6] presents an example of scalable passive flow-based monitoring. It collects samples of traffic and estimates overall flow statistics based on these samples, which is considered sufficiently accurate for long-term statistics. NetFlow uses 1-out-of-n random sampling, meaning it stores every n-th packet, and assumes the collected packets to be representative of all traffic passing through the collector. Every configurable time interval, the router sends the collected flow statistics to a centralized unit for further aggregation. One of the major problems of packet sampling is that small flows are underrepresented, if noticed at all. Additionally, multiple monitoring nodes along a path may sample exactly the same packet and therewith over-represent a certain traffic group, decreasing accuracy. cSamp [7] solves these problems by using flow sampling instead of packet sampling and deploys hash-based coordination to prevent duplicate sampling of packets.
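As an illustration of the sampling trade-off just described, the following toy Python sketch estimates per-flow byte counts from 1-out-of-n sampling. The flow representation and the scaling are assumptions for illustration only, not NetFlow's actual record format.

import random
from collections import Counter

def sample_flows(packets, n):
    """Estimate per-flow byte counts from 1-out-of-n packet sampling.

    `packets` is an iterable of (flow_id, size_in_bytes) tuples.
    Each packet is kept with probability 1/n, and every sampled
    byte is scaled up by n to estimate the true volume.
    """
    estimate = Counter()
    for flow_id, size in packets:
        if random.randrange(n) == 0:       # keep roughly every n-th packet
            estimate[flow_id] += size * n  # scale the sample back up
    return estimate

Running this over a mix of one large flow and many small ones shows that a flow of only a handful of packets is frequently missed entirely, which is exactly the under-representation of small flows noted above.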
Skitter [8], a CAIDA project that analyzed the Internet topology and performance using active probing, used geographically distributed beacons to perform traceroutes at a large scale. Its probe packets contain timestamps to compute RTTs and estimate delays between measurement beacons. Where Skitter is suitable to generate a rough estimate of overall network delay, it does not calculate per-flow delays, as not all paths are traversed unless a very high density of beacons is installed. Furthermore, this method introduces additional inaccuracy due to the addition and subtraction of previously existing uncertainty margins.

Measuring packet delay using passive measurements is a bit more complex. IPMON [9] presents a solution that captures the header of each TCP/IP packet, timestamps it and sends it to a central server for further analysis. Multiple monitoring units need to be installed to retrieve network-wide statistics. Where the technique is very accurate (in the order of microseconds), additional network overhead is generated due to the necessary communication with the central server. Furthermore, accuracy depends on accurate synchronization of the clocks of the monitoring units.

III. BACKGROUND AND RELATED WORK

Although SDN is not restricted to OpenFlow, and other control-plane decoupling mechanisms existed before it, OpenFlow is often considered the standard communication protocol to configure and monitor switches in SDNs. OpenFlow-capable switches connect to a central controller, such as POX [10] or Floodlight [11]. The controller can both preconfigure the switch with forwarding rules and reactively respond to requests from switches, which are sent when a packet matching none of the existing rules enters the network. Besides managing the forwarding plane, the OpenFlow protocol is also capable of requesting per-flow counter statistics and injecting packets into the network, features which we use in our proposal presented in section IV.

[Fig. 1: The three-step installation procedure of a new flow. (a) The first packet of a new connection arrives (PacketIn). (b) The installation of forwarding rules (FlowMod). (c) Retransmitting the captured packet (PacketOut).]

More specifically, OpenFlow-capable switches send a PacketIn message to the controller when a new, currently unmatched connection or packet arrives. The controller responds by installing a path using one or more Flow Table Modification (FlowMod) messages and instructs the switch to resend the packet using a PacketOut message. The FlowMod message indicates idle and hard timeout durations and whether the controller should be notified of the flow's removal with a FlowRemoved message. Figure 1 gives a schematic overview of the message exchange during flow setup.

Using the PacketIn and FlowRemoved messages, a controller can determine which active flows exist. Furthermore, the FlowRemoved message contains the duration, packet count and byte count of the recently removed flow, enabling the controller to keep statistics on past flows. Our proposal in section IV uses this information in combination with periodically queried Flow Statistics Request (StatsRequest) messages, as shown in figure 2, to obtain information on running flows, and regularly injects packets into the network to monitor end-to-end path delay.

[Fig. 2: While a flow is active, the controller and switch can exchange messages concerning the state of the flow. (a) While a flow is active, the controller can - e.g. using a timer or other event - query the switch to retrieve flow-specific statistics (FlowStatsReq/FlowStatsReply). (b) The end of a flow is announced by sending a FlowRemoved packet to the controller.]
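This message exchange maps directly onto controller code. As a rough illustration, the following minimal POX module shows the hooks involved: install a rule on PacketIn, resend the packet with PacketOut, request FlowRemoved notifications, and poll flow statistics on a timer. This is a hedged sketch, not the authors' OpenNetMon code; the flooding action, the timeouts, the 5-second fixed interval and the print logging are illustrative choices.

# Minimal POX sketch of the PacketIn/FlowMod/PacketOut exchange plus
# FlowRemoved and flow-statistics polling (illustrative values throughout).
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

def _handle_PacketIn(event):
    # (a)/(b): install a forwarding rule for the unmatched packet
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match.from_packet(event.parsed, event.port)
    fm.idle_timeout = 10
    fm.hard_timeout = 30
    fm.flags = of.OFPFF_SEND_FLOW_REM      # ask for a FlowRemoved at expiry
    fm.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(fm)
    # (c): retransmit the captured packet along the new rule
    po = of.ofp_packet_out(data=event.ofp)
    po.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(po)

def _handle_FlowRemoved(event):
    # Final counters of the removed flow, for statistics on past flows
    print("flow ended: %d bytes, %d packets, %d s"
          % (event.ofp.byte_count, event.ofp.packet_count,
             event.ofp.duration_sec))

def _handle_FlowStatsReceived(event):
    for f in event.stats:                  # counters of currently active flows
        print("active flow: %d bytes over %d s"
              % (f.byte_count, f.duration_sec))

def _poll():
    # FlowStatsReq to every connected switch
    for con in core.openflow.connections:
        con.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
    core.openflow.addListenerByName("FlowRemoved", _handle_FlowRemoved)
    core.openflow.addListenerByName("FlowStatsReceived", _handle_FlowStatsReceived)
    Timer(5, _poll, recurring=True)        # fixed interval; OpenNetMon adapts it

The fixed polling timer here is exactly what OpenNetMon improves upon with its adaptive rate, as section IV describes.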
OpenFlow's openness to switch and per-flow statistics has already been picked up by recent research proposals. OpenTM [12], for example, estimates a Traffic Matrix (TM) by keeping track of statistics for each flow. The application queries switches at regular intervals and stores statistics in order to derive the TM. The paper presents experiments on several polling algorithms and compares their accuracy. Where polling solely each path's last switch gives the most accurate results, other polling schemes, such as selecting a switch round-robin, by least load, or by (non-)uniform random selection, give only slightly less accurate results, with at most 2.3% deviation from the most accurate last-switch selection scheme. Of the alternative polling schemes, non-uniform random selection with a preference for switches near the end of the path behaves most accurately compared to last-switch polling, followed by uniform random selection and round-robin selection of switches, while the least-loaded-switch scheme comes last, still having an accuracy of approximately +0.4 Mbps over 86 Mbps. However, since OpenTM is limited to generating TMs for offline use and does not capture packet loss and delay, we consider it incomplete for online monitoring of flows.

OpenSAFE [13] focuses on distributing traffic to monitoring applications. It uses the fact that every new flow request passes through the network's OpenFlow controller. The controller forwards the creation of new flows to multiple traffic monitoring systems, which record the traffic and analyze it with an Intrusion Detection System (IDS). OpenSAFE, however, requires hardware investments to perform the actual monitoring, while we introduce a mechanism that reuses existing OpenFlow commands to retrieve the aforementioned metrics.

Others suggest designing a new protocol, parallel to OpenFlow, in order to achieve monitoring in SDNs. OpenSketch [14], for example, proposes such an SDN-based monitoring architecture. A new SDN protocol, however, requires an upgrade or replacement of all network nodes, a large investment ISPs will be reluctant to make. Furthermore, standardization of a new protocol has shown to be a long and tedious task. Since OpenFlow is already gaining popularity in datacenter environments and is increasingly being implemented in commodity switches, a solution using OpenFlow requires less investment from ISPs to implement and does not require standardization by a larger community. Therefore, we consider an OpenFlow-compatible monitoring solution, such as our solution OpenNetMon, more likely to succeed.

IV. OPENNETMON

In this section, we present our monitoring solution OpenNetMon, written as a module for the OpenFlow controller POX [10]. OpenNetMon continuously monitors all flows between predefined link-destination pairs on throughput, packet loss and delay. We believe such a granular and real-time monitoring system to be essential for Traffic Engineering (TE) purposes. In the following subsections, we first discuss how our implementation monitors throughput and how we determine the right polling algorithm and frequency, followed by our implementation to measure packet loss and delay. Where one might argue that measuring throughput in OpenFlow SDNs is not new, albeit that we implement it specifically for monitoring instead of Traffic Matrix generation, we are the first to combine it with active per-flow measurements on packet loss and delay.

[Fig. 3: Available bandwidth on a 100 Mbps WAN link [15] (available bandwidth in Mbit/s over time in s). Fig. 4: Available and advertised bandwidth of a HD video flow (actual vs. broadcasted available bandwidth, Mbit/s over time in s).]

A. Polling for throughput

The bandwidth of a single flow, such as the HD video stream shown in figure 4, can vary between 1 and 9 Mbps.
While dampened through multiplexing, this behavior is even visible on aggregated links, as can be seen in the available-bandwidth measurement of a 15-minute packet-level trace of a 100 Mbps Japanese WAN link shown in figure 3.

In order to facilitate efficient traffic engineering and run networks at high utilization to save costs, as discussed in section I, accurate information about the current throughput per link and flow is needed. While a number of different link-state update policies have been proposed in the past decades [16], our experimental measurements indicate that policies based on absolute or relative change, as well as class-based or timer policies, do not capture the dynamics of today's network traffic at a sufficiently detailed level to serve as an input for flow scheduling. Figure 4 contrasts the actual bandwidth on a 10 Mbps access network link with the bandwidth as estimated by a relative-change policy: as the stream rapidly changes demands, the flow's throughput is either grossly under- or overestimated by the network, thereby either oversubscribing and wasting resources, or potentially harming flows. This behavior is the result of current link-state update policies disseminating information based on recent but still historical measurements, in an attempt to balance excessively high update rates against tolerating outdated information.

While this particular trace may in principle be better approximated by tuning the update rate or choosing a different link-state update policy, the fundamental issue exists across all existing approaches: figure 5 shows the average relative estimation error as a function of update policy and update frequency. While reducing staleness, more frequent periodic updates do not necessarily provide better flow information, as the dynamics of a complex flow characteristic as shown in figure 3 or 4 cannot easily be approximated by a static reporting interval without using infinitesimal time intervals and their prohibitive overhead costs. To avoid this issue, the proposed OpenNetMon polls at an adaptive rate, as described below.

To determine throughput for each flow, OpenNetMon regularly queries switches to retrieve Flow Statistics, using the messages described in section III. With each query, our module receives the number of bytes sent and the duration of each flow, enabling it to calculate each flow's effective throughput. Since each flow between any node pair may be assigned a different path by the controller, OpenNetMon polls at regular intervals for every distinct assigned path between every node pair that is designated to be monitored. Even though polling each path's switch randomly or in round-robin is considered most efficient and still sufficiently accurate [ ...
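Based on the description above and the adaptive-rate behavior stated in the abstract, a sketch of the per-flow throughput computation with an adaptive polling interval might look as follows. The FlowMonitor class, thresholds and interval bounds are assumptions for illustration, not values taken from the paper.

import time

class FlowMonitor(object):
    """Derive a flow's rate from successive byte counters and adapt the
    polling interval: poll faster when the rate changes between samples,
    back off when it stabilizes (illustrative thresholds)."""

    def __init__(self, t_min=0.5, t_max=10.0):
        self.t_min, self.t_max = t_min, t_max  # polling interval bounds (s)
        self.interval = t_max
        self.last_bytes = None
        self.last_time = None
        self.last_rate = 0.0

    def on_sample(self, byte_count):
        """Feed one FlowStatsReply byte counter; returns current rate (B/s)."""
        now = time.monotonic()
        if self.last_bytes is not None:
            rate = (byte_count - self.last_bytes) / (now - self.last_time)
            # Significant change: halve the interval; stable: back off
            if abs(rate - self.last_rate) > 0.1 * max(self.last_rate, 1.0):
                self.interval = max(self.t_min, self.interval / 2)
            else:
                self.interval = min(self.t_max, self.interval * 2)
            self.last_rate = rate
        self.last_bytes, self.last_time = byte_count, now
        return self.last_rate

Each FlowStatsReply counter is differenced against the previous sample to obtain a rate; the next query is then scheduled self.interval seconds later, mirroring the adaptive polling the abstract describes.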

Tutor Answer

ProfOscar
School: Duke University

...

Review

Anonymous
awesome work thanks
