Jaime Llorca

New York University, USA

Biography

Jaime Llorca is a Research Associate Professor with the New York University Tandon School of Engineering, Brooklyn, NY. He previously held a Senior Research Scientist position with the Network Algorithms Group at Nokia Bell Labs, NJ, a Communication Networks Research Scientist position with the End-to-End Networking Group at Alcatel-Lucent Bell Labs, NJ, and a post-doctoral position with the Center for Networking of Infrastructure Sensors, College Park, MD. He received M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Maryland, College Park. His research interests are in the field of network algorithms, optimization, and control, with applications to next generation communication networks, distributed cloud networking, and content distribution. He has authored more than 100 peer-reviewed publications and 20 patents. He is a recipient of the 2007 IEEE ISSNIP Best Paper Award, the 2016 IEEE ICC Best Paper Award, and the 2015 Jimmy H.C. Lin Award for Innovation. He currently serves as an Associate Editor for the IEEE/ACM Transactions on Networking.

Keynote: “End-to-End Service Optimization and Control in Next-Generation Cloud-Integrated Networks”

Abstract: Distributed cloud networking builds on software-defined networking and network function virtualization to enable the deployment of a wide range of services in the form of elastic software functions instantiated over general-purpose hardware at distributed cloud locations and interconnected by a programmable network fabric. Operators can then configure end-to-end network slices within a common physical infrastructure, each tailored to services and applications with different requirements. Such an attractive scenario will enable the efficient delivery of a new breed of resource- and interaction-intensive applications (e.g., industrial automation, autonomous transportation, augmented reality) that will transform the way we live, work, and interact with the physical world. The main challenge is to jointly orchestrate the allocation of storage, computation, and communication resources in order to meet end-user QoE requirements while minimizing the use of the shared physical infrastructure. This keynote presents an overview of the challenges and opportunities in the integrated evolution of networks and clouds towards a highly distributed universal computing fabric, and of the most promising algorithmic approaches for the optimization and control of next-generation services over cloud-integrated networks.
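
As a rough illustration only (the notation below is ours, not taken from the talk), this joint orchestration task can be read as a resource-cost minimization under quality-of-experience and capacity constraints:

\[
\min_{\mathbf{y}} \;\; \sum_{u \in \mathcal{V}} \left( w_u^{\mathrm{cpu}} y_u^{\mathrm{cpu}} + w_u^{\mathrm{sto}} y_u^{\mathrm{sto}} \right) \;+\; \sum_{(u,v) \in \mathcal{E}} w_{uv}\, y_{uv}
\quad \text{s.t.} \quad \mathrm{QoE}_d(\mathbf{y}) \ge q_d \;\; \forall d, \qquad \mathbf{y} \le \mathbf{c},
\]

where \( y_u^{\mathrm{cpu}} \), \( y_u^{\mathrm{sto}} \), and \( y_{uv} \) denote the computation and storage resources allocated at cloud node \( u \) and the communication resources allocated on link \( (u,v) \), \( w \) their unit costs, \( \mathbf{c} \) the physical capacities, and \( q_d \) the QoE target of end-user demand \( d \).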

Tutorial: “Cloud Network Flow: Understanding Information Flow in Tomorrow’s Cloud-Integrated Networks”

Abstract: The convergence of IP networks and IT clouds into highly distributed cloud-integrated networks will form a ubiquitous shared computing infrastructure that can host content and applications close to information sources and end users, providing rapid response, analysis, and delivery of augmented information in real time. Next-generation resource-intensive and latency-sensitive services (e.g., automation, telepresence, augmented reality) can then be deployed in the form of multiple software functions instantiated at different cloud locations and elastically scaled to meet changing service requirements. The cloud service distribution problem is to find the placement of service functions, the routing of network flows, and the associated allocation of cloud and network resources that together meet service demands while minimizing use of the physical infrastructure. This tutorial will review state-of-the-art results for the design of cloud service optimization and control algorithms and will present a novel cloud-network flow model that enables the joint optimization of cloud and network resources for services with arbitrary functional relationships and any combination of unicast and multicast flows. Efficient algorithms for both static and dynamic versions of the problem will be described. Results in the context of NFV, IoT, and real-time stream processing services will illustrate the significant efficiency improvements that can be obtained via the joint end-to-end optimization of real-time computation services over distributed cloud-integrated networks.
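
To make the cloud-network flow idea concrete, here is a minimal, self-contained sketch, a toy model under our own assumptions rather than the model presented in the tutorial. A single-function service with one unicast demand from s to t is cast as a min-cost flow over a two-layer copy of a small network: layer 0 carries unprocessed traffic, layer 1 carries processed traffic, and a "processing edge" at each cloud node moves flow between layers at a cost and capacity that stand in for compute resources. All node names, costs, capacities, and rates are invented for illustration.

```python
# Toy cloud-network flow sketch: joint function placement + flow routing
# as a single min-cost flow over a layered graph (illustrative only).
import networkx as nx

links = [("s", "a"), ("a", "b"), ("b", "t"), ("s", "b"), ("a", "t")]
link_cost = {("s", "a"): 1, ("a", "b"): 1, ("b", "t"): 1, ("s", "b"): 3, ("a", "t"): 3}
link_cap = 10
compute = {"a": (4, 6), "b": (2, 6)}   # cloud node -> (cost per processed unit, processing capacity)
demand = 5                             # service input rate to be processed and delivered to t

G = nx.DiGraph()
for (u, v) in links:
    for layer in (0, 1):               # layer 0: unprocessed flow, layer 1: processed flow
        G.add_edge((u, layer), (v, layer), weight=link_cost[(u, v)], capacity=link_cap)
for u, (cost, cap) in compute.items(): # "processing edge": executing the function at cloud node u
    G.add_edge((u, 0), (u, 1), weight=cost, capacity=cap)

G.add_node(("s", 0), demand=-demand)   # unprocessed input enters the network at s
G.add_node(("t", 1), demand=demand)    # processed output must reach t

flow = nx.min_cost_flow(G)             # placement and routing decided jointly
print("total cloud + network cost:", nx.cost_of_flow(G, flow))
for u in compute:
    print(f"units processed at {u}:", flow[(u, 0)][(u, 1)])
```

Because placement and routing come out of a single min-cost flow computation, cheap compute at a poorly connected node is automatically traded off against more expensive compute closer to the destination. The cloud-network flow model discussed in the tutorial generalizes this picture to services with arbitrary functional relationships and mixed unicast/multicast flows.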

All sessions by Jaime Llorca

PhD School – December 3 – Morning

03 Dec 2020
09:30 - 13:30
