Several applications, including worst-case uncertainty analysis and the modeling of thermodynamic equilibria, give rise to optimization problems that must be solved to global optimality. Deterministic methods for global optimization proceed by generating successively tighter upper and lower bounds on the unknown optimal objective value, and their convergence rates are typically limited by the quality of these bounds.
This presentation describes recent progress in automatically constructing effective bounds on nonconvex systems for this purpose, including the generation of differentiable convex underestimators, effective incorporation of subgradients, and nontrivial extensions to dynamic systems. This being an informal seminar, some of this work is preliminary. En route, we will also stumble across some small results in convex analysis that may surprise you.
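To make the bounding idea concrete, here is a small illustrative sketch (not taken from the talk itself) of the classical McCormick convex underestimator for a bilinear term f(x, y) = x·y on a box: the pointwise maximum of two affine supports underestimates x·y everywhere on the box, and minimizing it yields a valid lower bound on the true minimum. The function name and box values are assumptions for illustration.

```python
# Illustrative sketch: McCormick convex underestimator of the bilinear
# term f(x, y) = x*y on the box [xL, xU] x [yL, yU]. It is the pointwise
# max of two affine functions, so it is convex and lies below x*y.

def mccormick_under(x, y, xL, xU, yL, yU):
    """Convex underestimator of x*y on the box (max of two affine supports)."""
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

# Check the underestimating property on a grid of points in the box.
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
pts = [(xL + i * (xU - xL) / 10, yL + j * (yU - yL) / 10)
       for i in range(11) for j in range(11)]
assert all(mccormick_under(x, y, xL, xU, yL, yU) <= x * y + 1e-12
           for x, y in pts)

# Minimizing the convex underestimator over the box gives a lower bound
# on min x*y; here the true minimum is xL*yU = -3, attained at a vertex.
lb = min(mccormick_under(x, y, xL, xU, yL, yU) for x, y in pts)
print(lb)  # -3.0
```

In a branch-and-bound global optimizer, this lower bound is compared against an upper bound from any feasible point; the talk concerns tighter, differentiable underestimators of this kind and the subgradient information they provide.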