Smart communicating devices, with their sensing, computing, and control capabilities, promise to make our society more intelligent, energy-efficient, safe, and secure. To achieve these goals, the main challenge is to exploit these powerful, but unstructured, computational resources. The most natural, and widely implemented, centralized computation model is not applicable to this extremely complex system, in which processors are spatially distributed and interact through asynchronous and unreliable communication. Thus, a novel peer-to-peer distributed computational model is emerging as a new opportunity, in which a problem is solved cooperatively by peers rather than by a unique provider that knows and owns all data. Optimization is a mathematical framework spanning several research areas, ranging from Engineering, Computer Science, and Mathematics to Economics and Social Science. In particular, estimation, learning, decision, and control problems arising in modern scenarios from these areas need to face this computational revolution. A novel peer-to-peer distributed optimization setting is emerging, in which processors with local memory, computation, and communication capabilities cooperatively solve large-scale, structured optimization problems without relying on any central coordinator.
The OPT4SMART project has an ambitious twofold goal: (i) to set up a novel, comprehensive methodological framework to solve distributed optimization problems in peer-to-peer networks, and (ii) to provide numerical methods and toolboxes, based on this framework, to solve distributed estimation, learning, decision, and control problems in complex cyber-physical networks. We aim to pursue this twofold objective by developing interdisciplinary methodologies arising from a synergistic combination of optimization, control-systems, and graph theories. Specifically, the project addresses four main open challenges, whose solution represents a breakthrough for the treatment of distributed optimization over networks. First, cyber-physical networks are intrinsically characterized by devices communicating at different time-scales, appearing and disappearing from the network, and by possibly unreliable communication channels. Thus, distributed optimization algorithms have to be designed to explicitly deal with asynchronous and (possibly) unreliable communication. Second, most of the attention in the distributed optimization literature has been devoted to real-valued convex problems. In this project we consider more general classes of optimization problems, such as nonconvex, mixed-integer, and combinatorial problems, arising in many estimation, learning, decision, and control applications. Third, this century is without doubt the age of massive data. Scientists and analysts agree that today's real challenge is to get full value from this wealth of available information. Optimization is clearly a significant part of this process, so very-large-scale and big-data optimization problems need to be solved. Finally, many problems arising in cyber-physical networks involve a dynamic scenario, in which data change with time and a real-time computation is repeatedly needed.
Thus, distributed algorithms, possibly also addressing the previous challenges (e.g., on communication and problem size), need to be designed for these dynamic optimization set-ups.
The activities of OPT4SMART have focused, as planned, on the development of distributed optimization methods taking into account communication limitations (mainly asynchronous protocols), general set-ups beyond classical real-valued convex problems, large-scale and big-data problems, and dynamic optimization set-ups. Moreover, estimation, learning, decision, and control problems involving distributed optimization have been investigated.
As for the development of a distributed optimization framework, we can group the most relevant contributions into four main parts.
A first set of contributions deals with the design of distributed algorithms for networks with asynchronous communication, thus going beyond existing methods typically tailored for synchronous networks (often with fixed graphs). We have considered a class of “cost-coupled” optimization problems in which the cost function is the sum of terms, all depending on a common decision variable.
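As an illustrative sketch (our notation, not necessarily the project's), the cost-coupled set-up can be written as:

```latex
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{N} f_i(x),
```

where $f_i$ is the cost known only to node $i$ and $x$ is the common decision variable; every node must converge to a minimizer of the sum while exchanging information only with its neighbors.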
Distributed algorithms based on dual decomposition have been proposed for fairly general problems including both composite cost functions and local constraints, and, in particular, for networks with asynchronous communications. Specifically, in the asynchronous version each node performs the computation only when triggered by its local timer or by a message from a neighbor, so that no common, central clock is needed. Moreover, for the proposed algorithms no common parameter needs to be set centrally; each node has its own step-size that can be chosen locally. A key distinctive feature of the algorithm analysis is the combination of duality theory, randomized coordinate-descent methods, and properties of the proximal operator when applied to conjugate functions.
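As a minimal toy sketch of these ingredients (not the project's actual algorithm), the following example applies randomized dual decomposition to a quadratic consensus problem: each edge constraint carries a multiplier that is updated only when that edge is randomly triggered, with a locally chosen step-size and no central clock. All data, graph, and step-size choices here are illustrative assumptions.

```python
import random

# Illustrative toy (not OPT4SMART's algorithm): randomized dual
# decomposition for min sum_i 0.5*(x_i - a[i])**2 subject to x_i = x_j
# on every edge of a path graph. Each edge constraint is dualized with
# a multiplier lam[e]; at each tick a single edge "wakes up" and takes
# a dual ascent step with its own locally chosen step-size.

a = [1.0, 2.0, 3.0, 4.0]          # private data of the 4 nodes
edges = [(0, 1), (1, 2), (2, 3)]  # path graph
lam = [0.0] * len(edges)          # one multiplier per edge constraint
step = [0.4] * len(edges)         # local step-sizes (no central tuning)

def primal(i):
    # Closed-form minimizer of 0.5*(x - a[i])**2 + x * (net multiplier at i).
    s = sum(l for (u, _), l in zip(edges, lam) if u == i)
    s -= sum(l for (_, v), l in zip(edges, lam) if v == i)
    return a[i] - s

random.seed(0)
for _ in range(5000):
    e = random.randrange(len(edges))             # asynchronous trigger
    i, j = edges[e]
    lam[e] += step[e] * (primal(i) - primal(j))  # local dual ascent

x = [primal(i) for i in range(4)]  # all estimates approach the mean of a
```

The edge updates are exactly randomized coordinate ascent on the (concave) dual function, which is why randomized coordinate-descent theory is a natural analysis tool for such schemes; here the iterates converge to the consensus optimum, the mean of the local data.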
To deal with optimization problems with a nonconvex cost function and (local) constraints, a distributed Asynchronous Method of Multipliers (ASYMM) has been proposed in collaboration with a group at the University of Siena. In the proposed scheme, nodes perform, only when awake, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. The key element, allowing nodes to decide when to switch from the descent step to the ascent one, is an asynchronous distributed logic-AND algorithm. The algorithm has been shown to effectively solve widely investigated problems arising in estimation and learning.
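Schematically (with illustrative notation: local equality constraints $h_i(x)=0$ and penalty parameter $\rho$; the project's exact formulation may differ), an awake node $i$ works on its local augmented Lagrangian

```latex
\mathcal{L}_{\rho,i}(x,\mu_i) \;=\; f_i(x) \;+\; \mu_i^{\top} h_i(x) \;+\; \frac{\rho}{2}\,\|h_i(x)\|^2 ,
```

performing descent steps $x \leftarrow x - \gamma \nabla_x \mathcal{L}_{\rho,i}(x,\mu_i)$ until the distributed logic-AND certifies that all nodes have (approximately) finished their descent, after which it switches to the multiplier ascent step $\mu_i \leftarrow \mu_i + \rho\, h_i(x)$.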
Primal methods have also been proposed in collaboration with groups at the University of Padova and Luleå University of Technology. Specifically, Newton-based distributed algorithms have been designed for unreliable asynchronous networks in which packet losses are explicitly taken into account. Under mild conditions on the (asynchronous) node updates, link failures, connectivity of the communication graph, step-size, and smoothness of the cost function, we show that the resulting distributed optimization algorithm renders the global solution locally exponentially stable.
A second set of contributions regards (cost-coupled) big-data, possibly nonconvex, problems in which the dimension of the decision variable is “huge” and possibly depends on the network size. As a result, performing local computations on the whole decision variable would be too costly, and broadcasting the entire solution estimate to neighbors would incur an unaffordable communication overhead. In a first batch of works, a special class of partitioned big-data problems has been considered, in which each local cost function is coupled only with the variables of the agent's neighbors. Although special, this class has been shown to model interesting estimation and decision problems. Asynchronous dual-decomposition-based algorithms have been customized and extended in order to solve (convex) partitioned big-data problems. Then, a class of asynchronous primal algorithms, based on successive convexifications, has been proposed for nonconvex problems. In a second batch of works, in collaboration with a gr
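In a sketch (our notation), the partitioned structure reads

```latex
\min_{x_1,\dots,x_N} \;\sum_{i=1}^{N} f_i\big(x_i, \{x_j\}_{j \in \mathcal{N}_i}\big),
```

where $x_i$ is the block of the decision variable stored at node $i$ and $\mathcal{N}_i$ its neighborhood: each node needs to store, update, and exchange only its own block and those of its neighbors, rather than the full (huge) decision variable.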
Several significant scientific contributions to the state of the art have already been made. In particular, OPT4SMART has proposed solutions to address the four main challenges identified in the proposal and reported in the previous sections.
Asynchronous distributed optimization algorithms have been proposed that go beyond, or extend, algorithms available in the literature (which were suited for synchronous communication). Moreover, some of the proposed algorithms also work under unreliable communication (including, for example, packet losses or link failures). All these (asynchronous) methods have significantly improved the applicability of existing methodologies to real scenarios. Indeed, handling asynchronous (and possibly unreliable) communication is a fundamental requirement when dealing with concrete application scenarios in unstructured cyber-physical network systems.
More general problems than the (smooth) real-valued convex problems available in the literature have been addressed. To meet the challenges coming from applications in modern cyber-physical networks, nonsmooth, nonconvex, mixed-integer, and uncertain problems have been considered. Moreover, distributed methods have been proposed for big-data optimization problems. Proposing and addressing a distributed big-data optimization set-up is an important contribution per se. Indeed, while big-data optimization has received significant attention in recent years in the area of centralized and parallel optimization, almost no contributions were available in the distributed literature. The interest in the parallel optimization literature, as well as the important scenarios identified in the project, are clear indications of the relevance of this framework and, thus, of the proposed methodologies. In some works, several of the project challenges have been addressed simultaneously; e.g., distributed methods for nonconvex big-data optimization problems have been proposed. In other contributions, distributed algorithms working in directed, asynchronous, and unreliable networks have been proposed for general classes of problems such as mixed-integer (possibly uncertain) ones.
From the point of view of concrete application scenarios in estimation, learning, decision, and control, we are pursuing the ambitious goal of providing both general theoretical frameworks and numerical tools (based on distributed optimization methods) to address and solve problems of interest in modern cyber-physical network systems. Important results have already been obtained.
We have set up a general probabilistic estimation and learning framework, based on a generalized Empirical Bayesian approach, which allows agents in a network to estimate, learn, or classify local unknown quantities by properly fusing all the network information. As opposed to the available estimation and learning literature, which concentrates on reaching consensus on a common estimate of unknown parameters, in our framework estimating common parameters is only one block of a more general, hierarchical scheme in which agents are able to estimate or learn local unknowns by borrowing strength from the ensemble. This general framework has already been successfully applied to traffic estimation problems, interaction-based learning problems (which are extremely relevant in several modern cyber-physical and social network contexts), and the estimation of spatial fields (which is relevant in several applications involving spatially distributed smart sensors).
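A generic Empirical Bayes sketch of this “borrowing strength” mechanism (illustrative, not the project's exact model): local data $y_i$ and local unknowns $\theta_i$ are tied together by a shared hyperparameter $\beta$,

```latex
y_i \sim p(y_i \mid \theta_i), \qquad \theta_i \sim p(\theta_i \mid \beta),
\qquad
\hat{\beta} \in \arg\max_{\beta} \prod_{i=1}^{N} \int p(y_i \mid \theta_i)\, p(\theta_i \mid \beta)\, d\theta_i ,
```

so that each agent then recovers its local unknown from the posterior $p(\theta_i \mid y_i, \hat{\beta})$, which fuses its own data with information from the whole ensemble.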
Moreover, we have started to propose a set of numerical methods and toolboxes, relying on distributed optimization, to address resource allocation decision problems and other control problems involving coupling constraints, which play a key role in cooperative decision and control scenarios as, e.g., in the context of smart grids and robotic networks. The key distinctive feature of these methods is that they are designed by explicitly taking into account the coupling constraints.
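In an illustrative form (our notation), the constraint-coupled set-up behind these resource allocation problems is

```latex
\min_{x_1,\dots,x_N} \;\sum_{i=1}^{N} f_i(x_i)
\quad \text{subject to} \quad
x_i \in X_i, \;\; \sum_{i=1}^{N} g_i(x_i) \le b,
```

where each node owns its cost $f_i$, local constraint set $X_i$, and contribution $g_i$ to the shared budget $b$; the coupling constraint links all the decisions and must be handled without a central coordinator.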
More info: http://www.opt4smart.eu/.