Normal TCP/IP operation is for the routing system to select a best path that remains stable for some time, and for TCP to adjust to the properties of this path to optimize throughput. A multipath TCP could either use capacity on multiple paths or dynamically find the best-performing path, and thereby reach higher throughput. By adapting to the properties of several paths through the usual congestion control algorithms, a multipath TCP shifts its traffic to less congested paths, leaving more capacity on the congested paths for traffic that cannot move elsewhere. And when a path fails, TCP can detect and work around the failure much more quickly than the routing system can repair it.
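The load-shifting idea can be illustrated with a toy sketch (not any specific multipath TCP algorithm; the proportional-to-cwnd rule and all numbers are assumptions for illustration): a sender that distributes segments across paths in proportion to each path's congestion window automatically moves traffic away from congested paths, whose windows shrink.

```python
# Toy illustration: allocate segments to paths in proportion to each
# path's congestion window (cwnd). A congested path has a small cwnd,
# so it receives a correspondingly small share of the traffic.

def split_segments(cwnds, total_segments):
    """Allocate total_segments across paths proportionally to cwnd."""
    total_cwnd = sum(cwnds)
    shares = [total_segments * c // total_cwnd for c in cwnds]
    # Give any rounding remainder to the least congested (largest-cwnd) path.
    shares[cwnds.index(max(cwnds))] += total_segments - sum(shares)
    return shares

# Path 0 is congested (cwnd = 4), path 1 is not (cwnd = 16):
# most of the traffic shifts to path 1.
print(split_segments([4, 16], 100))  # → [20, 80]
```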
Existing Swarm-based Peer-to-Peer Streaming (SPS) applications rely on randomly connected overlays among peers, which tend to generate a significant amount of costly inter-ISP traffic. Localizing overlay connectivity within each ISP has therefore received a great deal of attention as a promising approach to reducing the volume of inter-ISP traffic.
This talk gives an overview of how different overlay networks can share information among themselves, and presents some recent results from an implementation tested in an emulation environment.
A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive and often unnecessary transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. Looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Secondly, we go beyond individual case studies and present results for the top 100 ISPs in terms of the number of users represented in our dataset of up to 40K torrents, involving more than 3.9M concurrent peers and more than 20M peers in the course of a day, spread across 11K ASes. We develop scalable methodologies that permit us to process this huge dataset and answer questions such as: What are the minimum and maximum transit traffic reductions across hundreds of ISPs? What are the win-win boundaries for ISPs and their users? What is the maximum amount of transit traffic that can be localized without requiring fine-grained control of inter-AS overlay connections? What is the impact on transit traffic of upgrades to residential broadband speeds?
This talk addresses the problem of providing throughput guarantees in heterogeneous wireless mesh networks. As a first step, it proposes a novel model of the capacity region of a wireless link that, by linearizing this region, has the fundamental property of being very simple while providing a good approximation to the entire region. In a second step, this model is mapped to two of the most prominent wireless technologies today, namely Wireless LAN and WiMAX. The last step addresses the issue of finding optimal routing strategies, which is done by solving an optimization problem subject to the constraints imposed by the linearized capacity region. The performance of the proposed approach has been compared against traditional routing metrics in mesh networks, such as ETT and ETX, and shown to outperform them by approximately a factor of 2.
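To make the linearization idea concrete, here is a minimal sketch under assumed coefficients (the airtime costs below are invented, not from the talk): the capacity region of a link is approximated by a single linear constraint, sum_i a_i * r_i <= 1, where r_i is the rate of flow i and a_i its technology-dependent airtime cost per unit rate. Feasibility of a rate vector then reduces to evaluating one inequality, which is what makes the model simple enough to embed in a routing optimization.

```python
# Linearized capacity region of one wireless link:
#   sum_i a_i * r_i <= 1
# where r_i is flow i's rate (Mbps) and a_i its airtime cost per Mbps.
# A rate vector is feasible iff the combined airtime fits in the link.

def feasible(rates, airtime_costs):
    """Check a rate vector against the linearized capacity region."""
    return sum(a * r for a, r in zip(airtime_costs, rates)) <= 1.0

# Two flows on a WLAN-like link; a slower modulation costs more airtime
# per Mbps (coefficients assumed for illustration).
costs = [0.10, 0.04]
print(feasible([5.0, 10.0], costs))  # 0.5 + 0.4 = 0.9 <= 1 → True
print(feasible([8.0, 10.0], costs))  # 0.8 + 0.4 = 1.2 > 1 → False
```

A routing optimizer would then impose one such inequality per link along each candidate path.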
Many delay-based congestion protocols have been proposed. Some recent studies, based on measurements, have questioned the validity of congestion prediction at end hosts. In this talk, we show that end-host-based delay prediction can be more accurate than previously characterized. We propose PERT (Probabilistic Early Response TCP) to mitigate the uncertainties in end-host-based congestion prediction. PERT emulates the behavior of AQM/ECN in the end hosts' response to congestion.
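A hedged sketch of the emulation idea (the thresholds and the linear ramp below are assumptions for illustration, not PERT's published parameters): just as an AQM queue marks packets with a probability that grows with queue length, the end host can respond to congestion with a probability that grows with its estimate of queuing delay, softening the impact of noisy delay measurements.

```python
# AQM-like probabilistic response at the end host: map the estimated
# queuing delay (smoothed RTT minus base RTT) to a response probability
# that ramps linearly between a low and a high threshold.

def response_probability(srtt, base_rtt, low=0.005, high=0.025, p_max=0.5):
    """Return the probability of an early congestion response.

    srtt, base_rtt, low, high are in seconds; thresholds are assumed
    values, not PERT's actual parameters.
    """
    qdelay = srtt - base_rtt  # queuing-delay estimate
    if qdelay <= low:
        return 0.0            # delay too small: no response
    if qdelay >= high:
        return p_max          # persistent queueing: respond at max rate
    return p_max * (qdelay - low) / (high - low)

print(response_probability(0.098, 0.095))  # 3 ms of queueing  → 0.0
print(response_probability(0.110, 0.095))  # 15 ms of queueing → ≈ 0.25
print(response_probability(0.130, 0.095))  # 35 ms of queueing → 0.5
```

Because the response is probabilistic rather than all-or-nothing, an occasional mispredicted delay sample causes at most a small, proportionate reaction.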
The Theory of Computing is almost a hundred years old now. Its roots can actually be traced to the Entscheidungsproblem posed by David Hilbert in 1928. Another concrete theory in physics, Quantum Mechanics, had its inception almost two hundred years ago (with Thomas Young's double-slit experiment in 1803) but actually took shape in the late 19th century. Today, Quantum Mechanics has proven to be the most successful theory in physics. So the natural question to ask is: if the Church-Turing Thesis is seen as a statement of physics (of the laws of nature), then it should be compatible with Quantum Mechanics.
Existing cloud computing platforms offer virtually unlimited compute resources (virtual machines, bandwidth, storage, etc.) that can be used on demand. Such an on-demand model offers significant elasticity to customers in terms of when and where they use the resources. The existing pricing model, however, is pay-as-you-go, which in turn can lead to unpredictable costs for cloud customers. This talk will discuss two adaptive approaches for resource control under a fixed budget: Distributed Rate Limiting (DRL) and Temporal Rate Limiting (TRL). DRL is a fully decentralized mechanism for resource control over a distributed cloud service that splits the available budget among the participating nodes according to the load each node experiences. TRL, in contrast, splits the budget over a time period, to optimize the performance of a customer whose demand pattern varies over time.
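The budget-splitting idea behind DRL can be sketched as follows (an assumed proportional-share formulation for illustration, not the talk's exact mechanism): a global budget B is divided among the service's nodes in proportion to the demand each node observes, so busy nodes receive larger shares while the aggregate never exceeds B.

```python
# Proportional budget split across the nodes of a distributed service:
# each node's local limit is budget * (its demand / total demand).

def drl_split(budget, demands):
    """Split a global budget among nodes according to observed demand."""
    total = sum(demands)
    if total == 0:
        # No demand anywhere: fall back to equal shares.
        return [budget / len(demands)] * len(demands)
    return [budget * d / total for d in demands]

# A 300 req/s budget over three nodes seeing uneven load.
print(drl_split(300, [10, 20, 30]))  # → [50.0, 100.0, 150.0]
```

In the actual decentralized setting the nodes would estimate total demand by exchanging load summaries rather than reading it centrally; the sketch only shows the allocation rule.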
A lot of attention has been given to multihop wireless networks lately. This attention has motivated an increase in the number of 802.11-based deployments, both indoor and outdoor, used for measurement studies that analyze WLAN performance by means of wireless sniffers that passively capture transmitted frames. In this talk we will introduce some of the major issues that systems researchers have to address when performing such measurements: i) on one hand, the testbed itself requires a significant amount of resources during both its deployment and its maintenance, and it requires a "calibration" phase before running experiments, given that off-the-shelf devices have recently been shown to deviate from the expected behavior; here we summarize a few lessons learned from the deployment of a 28-node wireless testbed; ii) on the other hand, little attention has been given to the fidelity of an individual device, i.e., the ability of a given sniffer to capture all the frames that could have been captured by a more faithful device. We assess this fidelity by running controlled experiments, and show that it varies significantly across sniffers, both quantitatively and qualitatively.