Technical Program

03 June 2009

Registration Opens

04 June 2009

Registration and Breakfast
0830 - 0845
Welcome Address
0845 - 1000
Keynote Session

Discussion Lead: Dongyan Xu
Experiences Building Telepresence Systems
Keynote Speaker: Prof. Henry Fuchs
Federico Gil Professor of Computer Science
Adjunct Professor of Biomedical Engineering
University of North Carolina at Chapel Hill
1000 - 1030
Coffee Break

1030 - 1200
Session 1: New Applications and Services
Discussion Lead: Wei Tsang Ooi (National University of Singapore)
Scribe: Saurabh Ratti (University of Ottawa)

Presented by:
Wanmin Wu
(University of Illinois at Urbana-Champaign)

Wanmin Wu (University of Illinois at Urbana-Champaign)
Zhenyu Yang (Florida International University)
Klara Nahrstedt (University of Illinois at Urbana-Champaign)
Multi-stream/multi-site 3D video collaborative systems are promising as they enable remote users to interact in a 3D virtual space with a sense of co-presence. However, decentralized content dissemination remains a challenge. In this work, we explore approaches to constructing adaptive overlays based on the users' visual interest in the collaborative space. In particular, we consider the practical challenge that a user's interest may change dynamically. We propose, compare, and evaluate three algorithms to handle such view dynamics. Through extensive experiments, we demonstrate that an algorithm that exploits view locality can achieve efficient bandwidth utilization, high topology stability, and good scalability.

Presented by:
Song Lin
(Tsinghua University)

Song Lin (Tsinghua University)
Zhiguo Gao (IBM China Research Laboratory)
Ke Xu (Tsinghua University)
In recent years, web-based online map applications such as Google Maps and Yahoo! Maps have become increasingly popular. Many Web 2.0 techniques, such as mash-ups and AJAX, have been adopted in these map applications to improve the user experience, but little research has been done on traffic analysis of Web 2.0-based online map applications. In this paper, we present our study of features of online map applications that previous work had not covered. We captured map-application-related HTTP traffic in a campus network without violating user privacy, and we report the traffic overview, mash-up, and web-caching characteristics of four map websites (Google Maps, Yahoo! Maps, Sogou Maps, and Baidu Maps). For the first time, the mash-up characteristics of Google Maps traffic are analyzed, using a new method proposed in this paper; the same method can be applied to other mash-up analyses. These results can help optimize future web application designs and CDN-based acceleration solutions.

Presented by:
Yuen Feng
(University of Toronto)

Ye Sun (Hong Kong University of Science & Technology)
Fangming Liu (Hong Kong University of Science & Technology)
Bo Li (Hong Kong University of Science & Technology)
Baochun Li (University of Toronto)
Peer-assisted online storage and distribution systems have recently enjoyed large-scale deployment, gaining increased popularity for multimedia content sharing on the Internet. Such systems typically deploy dedicated servers while effectively leveraging peer bandwidth in a complementary fashion, in order to guarantee adequate levels of service quality and minimize server cost. In this paper, motivated by our recent empirical study of a real-world system, FS2You, we develop a mathematical model to characterize and understand peer-assisted online storage systems serving multiple files of different popularity. Specifically, we examine and compare representative server bandwidth allocation strategies, and investigate the critical performance metrics and factors. We demonstrate that different server strategies may lead to remarkably different service quality in terms of average downloading times, peer satisfaction levels, and service quality differentiation. In particular, the current server strategy in FS2You is able to offer system-wide average downloading times comparable to the theoretical bound derived from our model.
1200 - 1330
Lunch

1330 - 1500
Session 2: P2P Streaming I
Discussion Lead: Ketan Mayer-Patel (University of North Carolina at Chapel Hill)
Scribe: Remo Meier (ETH Zurich)

Presented by:
Hassan Shojania
(University of Toronto)

Xuanjia Qiu (Sun Yat-Sen University)
Chuan Wu (The University of Hong Kong)
Xiaola Lin (Sun Yat-Sen University)
Francis C. M. Lau (The University of Hong Kong)
A fundamental challenge in peer-to-peer (P2P) Video-on-Demand (VoD) streaming is to quickly locate new supplying peers whenever a VCR command is issued, in order to achieve a smooth viewing experience. Most existing commercial systems resort to tracking servers for such neighbor discovery, and the increasing scale of P2P VoD systems has placed a heavy load on these dedicated servers. To avoid overloading the servers and achieve instant neighbor discovery over the self-organizing P2P overlay, we design InstantLeap, a novel method of organizing peers watching the same video that constitutes a lightweight indexing structure supporting efficient streaming and fast neighbor discovery at the same time. InstantLeap achieves O(1) neighbor discovery upon any playback ``leap'' across the media stream in streaming overlays of any size, with a low messaging cost for overlay maintenance. We support our design with rigorous analysis and extensive simulations.

Presented by:
Nazanin Magharei
(University of Oregon)

Nazanin Magharei (University of Oregon)
Reza Rejaie (University of Oregon)
In Swarm-based Peer-to-Peer Streaming (SPS) mechanisms, participating peers form a randomly connected mesh over which they perform swarm-like content delivery. In practice, a subset of participating peers may form clusters in the overlay for various reasons, such as localization of connectivity within edge ISPs. Despite commonly held assumptions, the appearance of such clusters can significantly degrade the quality delivered to participating peers in SPS mechanisms. This paper examines the effect of overlay clustering on the performance of SPS mechanisms for live content. Leveraging the notion of two-phase content delivery in SPS mechanisms, we illustrate the effect of overlay clustering on content delivery. We propose the Overlay Monitoring and Repair (OMR) mechanism as a distributed and scalable approach to maintaining proper overlay connectivity in SPS mechanisms. The key idea is to use the quality delivered to individual peers as an indication of poor connectivity to other regions of the overlay. OMR employs a probabilistic approach to ensure that an adequate number of properly-positioned peers react to detected clustering in the overlay without any coordination. Reacting peers rewire a small number of carefully-selected connections in the overlay to significantly improve the performance of content delivery. Our preliminary evaluations demonstrate that the OMR mechanism can achieve its goals.

Presented by:
Michela Meo
(Politecnico di Torino)

Richard John Lobb (University of Canterbury)
Ana Paula Couto da Silva (Federal University of Juiz de Fora)
Emilio Leonardi (Politecnico di Torino)
Marco Mellia (Politecnico di Torino)
Michela Meo (Politecnico di Torino)
In this paper, we propose a simple and fully distributed mechanism for constructing and maintaining the overlay topology in mesh-based P2P-TV systems. Our algorithm optimizes the topology to better exploit large bandwidth peers, so that they are automatically moved close to the source. This improves the chunk delivery delay so that all peers benefit, not just the high bandwidth ones. A key property of the proposed scheme is its ability to indirectly estimate the upload bandwidth of peers without explicitly knowing or measuring it. Simulation results show that our scheme significantly outperforms overlays with homogeneous properties, achieving up to 50% performance improvement. Moreover, the algorithm is robust to both parameter setting and changing conditions, e.g., peer churning.
1500 - 1530
Coffee Break

1530 - 1730
Session 3: OS and End Systems
Discussion Lead: Kevin Almeroth (UC Santa Barbara)
Scribe: Ishan Vaishnavi (Centrum voor Wiskunde en Informatica)

Presented by:
Hassan Shojania
(University of Toronto)

Hassan Shojania (University of Toronto)
Baochun Li (University of Toronto)
In multi-hop wireless networks, random network coding represents the general design principle of transmitting random linear combinations of blocks in the same ``batch'' to downstream relays or receivers. It has been recognized that random network coding in multi-hop wireless networks may improve unicast throughput in scenarios where multiple paths are simultaneously utilized between the source and the destination. However, the computational complexity of random network coding, and its energy consumption implications, may potentially limit its applicability and practicality on mobile devices. In this paper, we present our real-world implementation of random network coding on the Apple iPhone and iPod Touch mobile platforms, and offer an in-depth investigation of the difficulties of such an implementation, the limitations of the ARM processor and the hardware platform, and our hand-tuning efforts to maximize coding performance on the iPhone platform. With our implementation deployed on both the iPhone 3G and the second-generation iPod Touch, we report its coding performance, energy consumption rates, and CPU usage with multimedia streaming.

Presented by:
Padmanabhan Pillai
(Intel Research Pittsburgh)

Padmanabhan S. Pillai (Intel Research Pittsburgh)
Lily B. Mummert (Intel Research Pittsburgh)
Steven W. Schlosser (Intel Research Pittsburgh)
Rahul Sukthankar (Intel Research Pittsburgh)
Casey J. Helfrich (Intel Research Pittsburgh)
A critical problem in implementing interactive perception applications is the considerable computational cost of current computer vision and machine learning algorithms, which typically run one to two orders of magnitude too slowly to be used interactively. Fortunately, many of these algorithms exhibit coarse-grained task and data parallelism that can be exploited across machines. The SLIPstream project focuses on building a highly-parallel runtime system called Sprout that can harness the computing power of a cluster to execute perception applications with low latency. This paper makes the case for using clusters for perception applications, describes the architecture of the Sprout runtime, and presents two compute-intensive yet interactive applications.

Presented by:
Philip Frey
(IBM Research GmbH)

Philip W. Frey (IBM Research GmbH)
Andreas Hasler (IBM Research GmbH)
Bernard Metzler (IBM Research GmbH)
Gustavo Alonso (ETH Zurich)
Internet usage has changed dramatically in the past few years. Content is no longer dominated by static websites, but comprises an increasing number of multimedia streams. With the widespread availability of broadband connections, the quality of the media provided by video-on-demand as well as streaming services increases constantly. Even though today most videos are still encoded with a rather low bit rate, large Internet service providers already foresee high definition media becoming the predominant format in the near future. However, a larger number of clients requesting media at high bit rates poses a challenge for the server infrastructure. Conventional stream dissemination methods, such as RTP over UDP or HTTP over TCP, result in high server loads due to excessive local data copy, context switching, and interrupt processing overhead. In this paper, we illustrate and discuss this problem in detail through extensive experiments with existing solutions. We then present a new approach based on zero-copy protocol stack implementations in software as well as dedicated RDMA hardware. Our performance experiments indicate that these optimizations allow servers to scale better and remove most of the overhead caused by current approaches.

Presented by:
Damien Le Moal
(Hitachi Ltd.)

Damien Le Moal (Hitachi Ltd.)
Donald Molaro (Hitachi Global Storage Technologies, San Jose Research Center)
Jorge Campello (Hitachi Global Storage Technologies, San Jose Research Center)
Hard-disk drive power consumption reduction methods focus mainly on increasing the amount of time the disk spends in standby mode (spun down) by implementing aggressive data readahead and caching at the operating system and/or application level. However, these methods cannot be applied efficiently to systems with limited memory and high bit-rate requirements, such as digital video recorders handling high-definition video. In this paper, we introduce the Audio/Video File System (AVFS), composed of a file system and a disk I/O scheduler. Compared to traditional methods, the proposed scheduler reduces seek overhead by processing real-time requests to video files in batches built dynamically according to the requests' deadlines. Evaluation results show a significant reduction in disk utilization rates and a reduction of up to 20% in disk power consumption with only 4 MB of data buffer per video stream.
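The abstract does not specify how batches are formed; as a rough sketch only (the request fields, the feasibility test, and the LBA ordering are illustrative assumptions, not AVFS's actual design), a deadline-aware batch builder might look like:

```python
def build_batch(pending, now, service_time):
    """Form one batch of real-time requests: keep only requests whose
    deadline can still be met given the batch service time, then order
    by disk position (LBA) so the head sweeps in a single direction,
    reducing seek overhead."""
    feasible = [r for r in pending if r["deadline"] >= now + service_time]
    return sorted(feasible, key=lambda r: r["lba"])
```

Requests whose deadlines have already expired are dropped rather than serviced, which is one way such a scheduler could save both bandwidth and power.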

05 June 2009

Registration and Breakfast
0830 - 1000
Session 4: Virtual Environments and Games
Discussion Lead: Kuan-Ta Chen (Academia Sinica)
Scribe: Pengpeng Ni (Simula Research Lab and University of Oslo)

Presented by:
Saurabh Ratti
(University of Ottawa)

Mohsen Ghaffari (Sharif University of Technology)
Behnoosh Hariri (Sharif University of Technology and University of Ottawa)
Shervin Shirmohammadi (Sharif University of Technology and University of Ottawa)
This article proposes a new distributed architecture for update message exchange in massively multi-user virtual environments (MMVEs). MMVE applications require delivery of updates among various locations in the virtual environment. The proposed architecture exploits the location addressing of geometric routing in order to alleviate the need for IP-specific queries. However, the use of geometric routing requires a careful choice of overlay to achieve high performance in terms of minimizing delay. At the same time, an MMVE is dynamic, in the sense that users are constantly moving in the 3D virtual space. Our architecture therefore uses a distributed topology control scheme that aims to maintain the required QoS to best support greedy geometric routing, despite user mobility and churn. We demonstrate the functionality and performance of the proposed scheme through both analysis and simulation.

Presented by:
John Miller
(Microsoft Research and University of Cambridge)

John L. Miller (Microsoft Research and University of Cambridge)
Jon Crowcroft (University of Cambridge)
Peer-to-peer distributed virtual environments (DVEs) distribute state tracking and state transitions. Many DVEs, such as online games, require ways to fairly determine the outcome of probabilistic events. While trivial when a trusted third party is involved, resolving these actions fairly between adversaries without a trusted third party is much more difficult. This paper proposes the Pairwise Random Protocol (PRP), which uses secure coin flipping to enable adversaries to fairly determine the result of a probabilistic event without a trusted third party. Three variations of PRP are presented, and their time impact and network overhead are examined. We conclude that PRP enables DVEs to distribute the work of determining probabilistic events between adversaries without loss of security or fairness, and with acceptable overhead.
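The abstract does not detail PRP's construction; the classic commitment-based secure coin flip it builds on can be sketched as follows (an illustration of the general technique under standard assumptions, not PRP itself):

```python
import hashlib
import secrets

def commit(bit, nonce):
    """Hiding, binding commitment to a single bit."""
    return hashlib.sha256(bytes([bit]) + nonce).digest()

def fair_coin_flip():
    # Alice commits to a random bit before seeing Bob's choice.
    a_bit, a_nonce = secrets.randbelow(2), secrets.token_bytes(16)
    a_commitment = commit(a_bit, a_nonce)

    # Bob picks his bit knowing only the (hiding) commitment.
    b_bit = secrets.randbelow(2)

    # Alice opens the commitment; Bob verifies it before accepting.
    assert commit(a_bit, a_nonce) == a_commitment
    # Neither party alone can bias the XOR of the two bits.
    return a_bit ^ b_bit
```

Because Alice is bound to her bit before Bob chooses, and Bob never sees Alice's bit before choosing, neither adversary can steer the outcome.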

Presented by:
Ke Liang
(National University of Singapore)

Ke Liang (National University of Singapore)
Roger Zimmermann (National University of Singapore)
In recent years, integrated spatialized voice services have become an appealing feature of networked virtual environments (NVEs) such as Second Life. With a spatialized voice service, people can identify who is talking when there are several participants in the vicinity. The key challenge in a spatialized audio streaming application is how to disseminate audio streams while observing both the bandwidth limits of end-user computers and tight latency constraints; this can be modeled as an NP-complete problem. In this paper, we propose a heuristic algorithm called CTA for spatialized audio streaming over NVEs in a peer-to-peer manner. The proposed algorithm was applied to real avatar mobility traces collected from Second Life, and the simulation results demonstrate that (a) CTA can achieve a high ratio of successful to candidate receivers and (b) CTA enables most of the successful receivers to enjoy minimum latency.
1000 - 1030
Coffee Break

1030 - 1200
Session 5: Security
Discussion Lead: Klara Nahrstedt (UIUC)
Scribe: Ke Liang (National University of Singapore)

Presented by:
Mohamed Hefeeda
(Simon Fraser University)

Kianoosh Mokhtarian (Simon Fraser University)
Mohamed Hefeeda (Simon Fraser University)
We investigate the problem of securing the delivery of scalable video streams so that receivers can ensure the authenticity (originality and integrity) of the video. Our focus is on recent scalable video coding techniques, e.g., H.264/SVC, that can provide three scalability types at the same time: temporal, spatial, and quality (or PSNR). This three-dimensional scalability offers a great flexibility that enables customizing video streams for a wide range of heterogeneous receivers and network conditions. This flexibility, however, is not supported by current stream authentication schemes in the literature. We propose an efficient authentication scheme that accounts for the full scalability of video streams: it enables verification of all possible substreams that can be extracted and decoded from the original stream. Our evaluation study shows that the proposed authentication scheme is robust against packet losses, adds low communication and computation overheads, and is suitable for live streaming systems as it has short delay.

Presented by:
Tony Thomas
(Nanyang Technological University)

Tony Thomas (Nanyang Technological University)
Sabu Emmanuel (Nanyang Technological University)
Amitabha Das (Nanyang Technological University)
Mohan S. Kankanhalli (National University of Singapore)
For scalability of business, a multiparty multilevel digital rights management (DRM) architecture, in which multimedia content is delivered by an owner to a consumer through several levels of distributors, has been suggested as an alternative to the traditional two-party (buyer-seller) DRM architecture. A combination of cryptographic and watermarking techniques is usually used for secure content delivery and for protecting the rights of the seller and buyer in the two-party DRM architecture. In a multiparty multilevel DRM architecture, the cryptographic and watermarking mechanisms need to ensure the secure delivery of the content as well as address the security concerns of the owner, the multiple levels of distributors, and the consumer. In this paper, we propose a mechanism that addresses these security issues for delivering multimedia content through a multiparty multilevel DRM architecture.

Presented by:
Philip Branch
(Swinburne University of Technology)

Philip A. Branch (Swinburne University of Technology)
Amiel Heyde (Swinburne University of Technology)
Grenville J. Armitage (Swinburne University of Technology)
In this paper we present results of experimental work using machine learning techniques to rapidly identify Skype traffic. We show that Skype traffic can be identified by observing 5 seconds of a Skype traffic flow, with recall and precision better than 98%. We found the most effective features for classification were characteristic packet lengths less than 80 bytes, statistics of packet lengths greater than 80 bytes and inter-packet arrival times. Our classifiers do not rely on observing any particular part of a flow. We also report on the performance of classifiers built using combinations of two of these features and of each feature in isolation.
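The features named above can be extracted from a captured flow window; the sketch below assumes a simple `(timestamp, length)` representation and illustrative feature definitions (the paper's exact feature computation is not given here):

```python
from statistics import mean, stdev

def flow_features(packets):
    """Features over a 5-second flow window; packets is a list of
    (timestamp_seconds, length_bytes) tuples."""
    small = [length for _, length in packets if length < 80]
    large = [length for _, length in packets if length >= 80]
    times = [t for t, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "frac_small": len(small) / len(packets),      # packets < 80 bytes
        "large_mean": mean(large) if large else 0.0,  # stats of >= 80 bytes
        "large_std": stdev(large) if len(large) > 1 else 0.0,
        "iat_mean": mean(gaps) if gaps else 0.0,      # inter-packet arrivals
    }
```

A feature vector like this would then be fed to the trained classifier; because it uses no payload bytes, it works on encrypted Skype traffic and needs no particular part of the flow.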
1200 - 1330
Lunch

1330 - 1500
Session 6: Understanding and Improving User Experience
Discussion Lead: Mohamed Hefeeda (Simon Fraser University)
Scribe: Wei Cheng (National University of Singapore)

Presented by:
Kuan-Ta Chen
(Academia Sinica, Taiwan)

Chen-Chi Wu (National Taiwan University)
Kuan-Ta Chen (Academia Sinica)
Chun-Ying Huang (National Taiwan Ocean University)
Chin-Laung Lei (National Taiwan University)
VoIP playout buffer dimensioning has long been a challenging optimization problem, as the buffer size must maintain a balance between conversational interactivity and speech quality. The conversational quality may be affected by a number of factors, some of which may change over time. Although a great deal of research effort has been expended in trying to solve the problem, how the research results are applied in practice is unclear. In this paper, we investigate the playout buffer dimensioning algorithms applied in three popular VoIP applications, namely, Skype, Google Talk, and MSN Messenger. We conduct experiments to assess how the applications adjust their playout buffer sizes. Using an objective QoE (Quality of Experience) metric, we show that Google Talk and MSN Messenger do not adjust their respective buffer sizes appropriately, while Skype does not adjust its buffer at all. In other words, they could provide better QoE to users by improving their buffer dimensioning algorithms. Moreover, none of the applications adapts its buffer size to the network loss rate, which should also be considered to ensure optimal QoE provisioning.
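For context on what "adjusting the playout buffer" involves, the classic adaptive playout estimator (exponentially weighted delay mean and variation, with the playout delay set a few deviations above the mean) can be sketched as below; this is the textbook algorithm, not the mechanism of any of the three applications studied:

```python
def update_playout(delay_est, var_est, sample, alpha=0.998, beta=4.0):
    """Update exponentially weighted estimates of network delay and its
    variation from one new delay sample; the playout delay is the mean
    estimate plus a safety factor of beta deviations."""
    delay_est = alpha * delay_est + (1 - alpha) * sample
    var_est = alpha * var_est + (1 - alpha) * abs(sample - delay_est)
    playout_delay = delay_est + beta * var_est
    return delay_est, var_est, playout_delay
```

Raising `beta` trades interactivity for fewer late-packet losses, which is precisely the balance the abstract describes; note that this sketch, like the applications studied, does not react to the network loss rate.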

Presented by:
Pengpeng Ni
(Simula Research Lab and University of Oslo)

Pengpeng Ni (Simula Research Laboratory and University of Oslo)
Alexander Eichhorn (Simula Research Laboratory)
Carsten Griwodz (Simula Research Laboratory and University of Oslo)
Pål Halvorsen (Simula Research Laboratory and University of Oslo)
Scalable video is an attractive option for adapting the bandwidth consumption of streaming video to the available bandwidth. Fine-grained scalability can adapt most closely to the available bandwidth, but this comes at the cost of a high compression penalty. In the context of VoD streaming to mobile end systems, we have therefore explored whether a similar adaptation to the available bandwidth can be achieved by performing layer switching in coarse-grained scalable videos. In this approach, enhancement layers of a video stream are switched on and off to achieve any desired longer-term bandwidth. We performed user studies to evaluate the idea, and came to the far-from-obvious conclusion that layer switching is a viable way to achieve bit-rate savings and fine-grained bit-rate adaptation, even for rather short times between layer switches.

Presented by:
Ishan Vaishnavi
(Centrum voor Wiskunde en Informatica)

Ishan Vaishnavi (Centrum voor Wiskunde en Informatica)
Dick C. A. Bulterman (Centrum voor Wiskunde en Informatica)
This paper presents a new scheduling algorithm for real-time network delivery of packets over Diffserv networks for delay-sensitive applications. We call networks that support this algorithm Estimated Service (Estserv) networks. For real-time packets, these networks estimate the probability of a packet meeting its deadline and schedule it according to this estimate. Given this estimation mechanism, the paper validates the better performance of the scheduling algorithm over traditional solutions. We show that using Estserv for delay-sensitive applications, we can provide out-of-band scheduling, save bandwidth on packets with expired deadlines, and handle bursts without losing the scalability of Diffserv. With the help of an implementation in the IP-forwarding path of the Linux kernel, we show that, given the estimation value, Estserv performs better than Diffserv in terms of meeting deadlines, while still saving bandwidth.
1500 - 1530
Coffee Break

1530 - 1700
Session 7: P2P Streaming II
Discussion Lead: Dongyan Xu (Purdue University)
Scribe: Amir Hassan Rasti Ekbatani (University of Oregon)

Presented by:
Yih-Farn Robin Chen
(AT&T Laboratories - Research)

Yih-Farn Robin Chen (AT&T Laboratories - Research)
Rittwik Jana (AT&T Laboratories - Research)
Daniel Stern (AT&T Laboratories - Research)
Bin Wei (AT&T Laboratories - Research)
Mike Yang (AT&T Laboratories - Research)
Hailong Sun (Beihang University)
P2P file transfers and streaming have already seen tremendous growth in Internet applications. With the rapid growth of IPTV, the need to efficiently disseminate large volumes of Video-on-Demand (VoD) content has prompted IPTV service providers to consider peer-assisted VoD content delivery. This paper describes Zebroid, a VoD solution that uses IPTV operational data on an ongoing basis to determine how to pre-position popular content in customer set-top boxes during idle hours, so that these peers can assist the VoD server in content delivery during peak hours. The latest VoD request distribution, set-top box availability, and capacity data on network components are all taken into consideration in determining the parameters used in Zebroid's striping algorithm. We show, by both simulation and emulation on a realistic IPTV testbed, that the VoD server load can be reduced by 50-80% during peak hours using Zebroid.

Presented by:
Remo Meier
(ETH Zurich)

Thomas Locher (ETH Zurich)
Remo Meier (ETH Zurich)
Roger Wattenhofer (ETH Zurich)
Stefan Schmid (TU Munich)
Data dissemination in decentralized networks is often realized by using some form of swarming technique. Swarming enables nodes to gather dynamically in order to fulfill a certain task collaboratively and to exchange resources (typically pieces of files or packets of a multimedia data stream). As in most distributed systems, swarming applications face the problem that the nodes in a network have heterogeneous capabilities or act selfishly. We investigate the problem of efficient live data dissemination (e.g., TV streams) in swarms. The live streams should be distributed in such a way that only nodes with sufficiently large contributions to the system are able to fully receive it -- even in the presence of freeloading nodes or nodes that upload substantially less than required to sustain the multimedia stream. In contrast, uncooperative nodes cannot properly receive the data stream as they are unable to fill their data buffers in time, incentivizing a fair sharing of resources. If the number of selfish nodes increases, our emulation results reveal that the situation steadily deteriorates for them, while obedient nodes continue to receive virtually all packets in time.

Presented by:
Lisong Xu
(University of Nebraska-Lincoln)

Miao Wang (University of Nebraska-Lincoln)
Lisong Xu (University of Nebraska-Lincoln)
Byrav Ramamurthy (University of Nebraska-Lincoln)
Most of the literature on peer-to-peer (P2P) live streaming focuses on how to provide best-effort streaming quality by efficiently using the system bandwidth; however, there is no guarantee about the provided streaming quality. This paper considers how to provide statistically guaranteed streaming quality to a P2P live streaming system. We study a class of admission control algorithms which statistically guarantee that a P2P live streaming system has sufficient overall bandwidth. Our results show that there is a tradeoff between the user blocking rate and user-behavior insensitivity (i.e., whether the system performance is insensitive to the fine statistics of user behaviors). We also find that the system performance is more sensitive to the distribution change of user inter-arrival times than to that of user lifetimes.
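The core of any such admission control is a bandwidth-sufficiency test at peer arrival; the sketch below is a deliberately simplified illustration of the idea (the capacity model, the margin parameter, and its value are assumptions, not the paper's algorithm):

```python
def admit(total_upload_capacity, stream_rate, n_peers, safety_margin=1.2):
    """Admit a new peer only if aggregate upload bandwidth (server plus
    currently admitted peers) covers the streaming demand of all peers
    after admission, with a margin to absorb statistical fluctuations
    in peer arrivals and departures.

    total_upload_capacity, stream_rate in the same units (e.g. kbps).
    Returns True to admit, False to block."""
    demand_after = (n_peers + 1) * stream_rate * safety_margin
    return total_upload_capacity >= demand_after
```

Blocking some arrivals when the margin would be violated is exactly the price paid for the statistical guarantee, which is the blocking-rate/insensitivity tradeoff the abstract describes.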
1700 - 1715
Concluding Remarks
