Distributed and decentralized optimization is becoming increasingly relevant. One reason is the growing number of multi-agent systems, such as energy systems, smart factories, or machine learning environments with distributed data, which require solving optimization problems efficiently, with convergence guarantees, and without full data exchange. Many of these applications are governed by nonlinear models that appear in the constraints of the optimization problems. These nonlinear constraints make the problems non-convex, so classical distributed algorithms are often not guaranteed to converge.
In this talk, we show how to distribute the computation of Newton steps, which can be used to design distributed second-order optimization algorithms. These algorithms have the advantage of fast local convergence guarantees even for non-convex problems. In particular, we derive a distributed interior point method and a distributed sequential quadratic programming variant, and we showcase their practical applicability in power systems, sensor networks, and model predictive control.
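To illustrate the basic idea behind distributing a Newton step, consider the following minimal sketch. It is a hypothetical example, not the method presented in the talk: each agent holds a private quadratic objective on a shared variable, and a Newton step on the sum of objectives only requires the agents' local gradients and Hessians to be aggregated, never the raw local data.

```python
import numpy as np

# Hypothetical setup: agent i holds f_i(x) = 1/2 (x - a_i)^T Q_i (x - a_i),
# where Q_i and a_i are private local data. The global objective is
# f(x) = sum_i f_i(x), and its Newton step needs only the sums of the
# local Hessians and gradients.
rng = np.random.default_rng(0)
n, n_agents = 3, 4
Q = [np.diag(rng.uniform(1.0, 2.0, n)) for _ in range(n_agents)]
a = [rng.standard_normal(n) for _ in range(n_agents)]

x = np.zeros(n)
# Each agent computes its local gradient and Hessian at x; only these
# aggregates are exchanged, not the data (Q_i, a_i) itself.
g = sum(Qi @ (x - ai) for Qi, ai in zip(Q, a))  # aggregated gradient
H = sum(Q)                                       # aggregated Hessian
x = x - np.linalg.solve(H, g)                    # global Newton step

# For a quadratic objective, a single Newton step reaches the minimizer
# x* = (sum Q_i)^{-1} (sum Q_i a_i).
x_star = np.linalg.solve(sum(Q), sum(Qi @ ai for Qi, ai in zip(Q, a)))
print(np.allclose(x, x_star))  # True
```

For general nonlinear, non-convex constrained problems the step computation is considerably more involved, which is where the methods of the talk come in.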
Zoom Link: https://argonne.zoomgov.com/j/1604979841
See all upcoming talks at http://wordpress.cels.anl.gov/lans-seminars/