Modern-day communication networks are prominent examples of highly complex large-scale networked systems that are of critical importance to society. Because of their vital role, these networks need to be designed to achieve consistently high levels of performance and reliability, and yet be cost-effective to operate. This poses huge challenges, especially since communication networks are subject to inherent uncertainty and random variation in demand and supply.
These topics will be discussed by four very prominent researchers. Peter Glynn (Stanford) will deliver the keynote lecture, while the other lectures are given by Nelly Litvak (University of Twente), Laurent Massoulie (Microsoft Research - INRIA Joint Centre) and Mark Squillante (IBM Thomas J. Watson Research Center).
Stanford University, California
Top-down Statistical Modeling for Queues
Queueing theory has produced an extensive set of limit theorems that provide insight into the behavior of queues in various asymptotic regimes. One key insight from this literature has, however, been largely ignored over the years: the theory also identifies the key time scales and statistical parameters of the traffic that affect performance in these regimes. This suggests that statistical analysis of such traffic should focus on those time scales and on accurately measuring those statistical parameters. In many real-world settings, this means that one should focus statistical attention on measuring (for example) long time-scale autocorrelation structure, while fine-scale structure (like marginals of inter-arrival times) can largely be ignored. We illustrate this “top-down” approach in the context of high-intensity arrivals associated with call center data sets. The talk will also discuss a suite of statistical tools that we are building that leverage this top-down perspective. (Joint work with Harsha Honnappa, Xiaowei Zhang, and Zeyu Zheng.)
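As a purely illustrative sketch (not the authors' actual tools), the top-down idea of measuring long time-scale autocorrelation rather than fine-scale inter-arrival structure can be mimicked on synthetic arrival counts: the slowly varying AR(1) "rate" component below is a hypothetical stand-in for call-center intensity.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of the series x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / n
    return cov / var

random.seed(1)

# Synthetic per-interval arrival counts: a slowly varying AR(1) component
# (long time-scale structure) plus fine-scale noise (illustrative only).
counts, level = [], 0.0
for _ in range(5000):
    level = 0.98 * level + random.gauss(0, 1)   # long time-scale component
    counts.append(level + random.gauss(0, 1))   # fine-scale noise

# The top-down view: estimate autocorrelation at coarse lags instead of
# modeling the marginals of individual inter-arrival times.
acf = {lag: autocorr(counts, lag) for lag in (1, 10, 50)}
```

The estimated autocorrelation decays slowly across coarse lags, which is the kind of long time-scale statistic the top-down approach targets.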
University of Twente, Enschede
Average nearest neighbor degree and finite size effects in scale-free networks
Dependencies between the degree of a node and its neighbors, known as degree-degree correlations, or network assortativity, affect many important properties of networks, e.g. their robustness to attacks and spreading processes. In applications, standard methods such as Pearson’s correlation coefficient are often used to measure degree-degree correlations. Despite their popularity, such measures are misleading when the degrees follow a power law distribution with infinite variance, which is a realistic assumption in many real-life networks, e.g., on-line social networks and the World Wide Web. In this talk I will focus on a commonly used correlation measure – the average nearest neighbor degree (ANND) of a node of degree k, as a function of k. I will discuss convergence properties of the ANND as the graph size goes to infinity, and its limitations.
In particular, consider the configuration model (CM), where nodes are connected at random into a (multi-)graph. In the case of infinite-variance degrees, we prove that the ANND fails to converge to a constant but instead obeys a stable-law central limit theorem. We propose an alternative correlation measure, the average nearest neighbor rank (ANNR), and prove its point-wise convergence to a deterministic function for a general class of random graphs with finite expected degrees.
We next turn to the erased configuration model (ECM), a simple graph obtained as follows: first, all nodes are connected at random, and then multiple edges are merged and self-loops are removed. The physics literature often mentions ‘finite-size effects’ or ‘structural negative correlations’ that arise in simple graphs because large nodes can have only a limited number of large neighbors. We prove that most of the convergence results for the ANNR in the CM continue to hold in the ECM, but we do observe interesting finite-size effects for very large degrees. I will devote part of the talk to numerical results and open questions.
(joint work with Pim van der Hoorn and Dong Yao)
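The ANND and a rank-based variant in the spirit of the ANNR can be sketched on a small configuration model. Everything below is a hypothetical toy (the degree values, the rank convention, and the graph size are all made up for illustration), not the construction from the paper.

```python
import random

random.seed(7)

n = 400
# Hypothetical degree sequence mixing many small and a few large degrees.
degrees = [random.choice([1, 2, 3, 10]) for _ in range(n)]
if sum(degrees) % 2:          # total stub count must be even
    degrees[0] += 1

# Configuration model: pair degree "stubs" uniformly at random
# (multi-edges and self-loops are kept, as in the multigraph CM).
stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
random.shuffle(stubs)

neighbors = [[] for _ in range(n)]
for i in range(0, len(stubs), 2):
    u, v = stubs[i], stubs[i + 1]
    neighbors[u].append(v)
    neighbors[v].append(u)

sorted_deg = sorted(degrees)
def rank(d):
    # A simple rank convention: number of nodes with degree <= d.
    return sum(1 for x in sorted_deg if x <= d)

def ann(k, score):
    """Average, over nodes of degree k, of the mean neighbour score."""
    vals = [sum(score(w) for w in neighbors[v]) / k
            for v in range(n) if degrees[v] == k]
    return sum(vals) / len(vals)

annd_1 = ann(1, lambda w: degrees[w])            # ANND at k = 1
annd_10 = ann(10, lambda w: degrees[w])          # ANND at k = 10
annr_1 = ann(1, lambda w: rank(degrees[w]) / n)  # rank-based analogue
```

Replacing neighbour degrees by (normalized) ranks bounds the summands in [0, 1], which is the intuition behind why the ANNR escapes the heavy-tail fluctuations that destabilize the ANND.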
Microsoft Research - INRIA Joint Centre, Paris
Rapid Mixing of Local Graph Dynamics
Graph dynamics arise naturally in many contexts. For instance, in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However, it has remained an open question whether such rules can also ensure fast convergence to equilibrium.
In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.
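A generic two-hop rewiring step of the kind described above can be sketched as follows. This is an illustrative local rule on an undirected graph, not necessarily the specific rule analysed in the talk: node u drops its edge to a neighbour v and connects instead to w, a neighbour of v.

```python
import random

random.seed(3)

# Start from a connected ring on n nodes, stored as adjacency sets.
n = 20
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}

def rewire_once(adj):
    u = random.choice(list(adj))
    if not adj[u]:
        return False                        # isolated node: nothing to rewire
    v = random.choice(sorted(adj[u]))
    candidates = adj[v] - adj[u] - {u}      # two-hop targets not already linked
    if not candidates:
        return False                        # no valid target: skip this step
    w = random.choice(sorted(candidates))
    adj[u].discard(v); adj[v].discard(u)    # drop the old edge (u, v)
    adj[u].add(w); adj[w].add(u)            # add the new edge (u, w)
    return True

done = sum(rewire_once(adj) for _ in range(200))
edge_count = sum(len(s) for s in adj.values()) // 2
```

Note that each successful step removes one edge and adds one, so the total edge count is invariant; the question studied in the talk is how quickly such dynamics mix, measured in rewirings per node.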
IBM Thomas J. Watson Research Center, Yorktown Heights
Optimization and Optimal Control of Networks under Uncertainty
The proliferation of data and the advances in statistical and machine learning methods create tremendous opportunities for optimization and optimal control under uncertainty to address fundamental problems arising in the highly complex large-scale systems of modern-day communication networks. We consider a general approach, building on data and statistical/machine learning and moving from stochastic models of uncertainty to optimization and/or optimal control around these stochastic models, that provides a mathematical foundation for the optimal design and control of highly complex large-scale networked systems. A few examples will be presented to illustrate instances of our general approach, each involving general classes of fundamental problems in networked systems. These include one case in which we devise a simulation-optimization framework that yields optimal resource allocations across large-scale stochastic networks in a very efficient and effective manner, and another case in which we derive an optimal control policy that renders easily and efficiently implementable algorithms for governing dynamic resource allocations over time. Computational experiments demonstrate the significant benefits (e.g., high performance, high reliability, cost-effectiveness) of our approach over previously published results. This research is part of a recently formed Center for Optimization under Uncertainty Research (COUR, pronounced "kor") across all of IBM Research (including joint work with A.B. Dieker, S. Ghosh and X. Gao, Y. Lu, M. Sharma, J.W. Bosman).
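The simulation-optimization idea can be conveyed with a deliberately tiny toy (an illustrative stand-in, not the framework from the talk): split a hypothetical budget of B servers between two stations with given arrival rates, evaluate each candidate split by a crude queue simulation under fixed seeds (acting as common random numbers), and pick the split with the lowest simulated cost.

```python
import random

B = 12                      # hypothetical total server budget
arrival_rates = (4.0, 6.0)  # hypothetical arrival rates at two stations

def mean_queue(lam, mu, horizon=5000, seed=0, dt=0.01):
    """Crude discrete-time simulation of a single-server queue."""
    rng = random.Random(seed)
    q = total = 0
    for _ in range(horizon):
        if rng.random() < lam * dt:   # arrival in this slot
            q += 1
        if q and rng.random() < mu * dt:  # service completion
            q -= 1
        total += q
    return total / horizon

def cost(alloc):
    # Fixed seeds act as common random numbers across candidate allocations.
    return sum(mean_queue(lam, mu, seed=i)
               for i, (lam, mu) in enumerate(zip(arrival_rates, alloc)))

# Enumerate all splits of B servers and keep the cheapest simulated one.
best = min(((c, B - c) for c in range(1, B)), key=cost)
```

Real instances replace the enumeration with a proper stochastic-optimization search, but the pattern is the same: a stochastic model supplies the simulated cost, and the optimization layer sits around it.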