Past Event: Babuška Forum
Qiang Liu, Professor, Department of Computer Science, UT Austin
10 – 11AM
Friday Nov 19, 2021
POB 6.304 & Zoom
**This seminar will be presented LIVE in person in POB 6.304 and streamed live via Zoom.**
Although machine learning (ML) and deep learning (DL) are typically framed as unconstrained optimization of a single objective function, most practical ML/DL tasks actually involve trading off two or more objective functions and constraints. For example, constrained optimization is needed to impose trustworthiness and safety constraints on ML models designed for human-related, high-stakes tasks, in addition to minimizing the usual data-fitting loss. Multi-objective optimization is needed to systematically explore the Pareto front of conflicting objective functions in multi-task learning. Lexicographic optimization is useful when we want to incorporate secondary auxiliary losses without hurting the optimization of the main objective function.

Unfortunately, efficient, off-the-shelf tools for these challenging optimization tasks are still largely missing, especially for the non-convex problems that arise in deep learning. This is in sharp contrast with standard unconstrained optimization, for which off-the-shelf tools based on variants of (stochastic) gradient descent are well developed for both DL and traditional ML tasks. In this talk, we will introduce a family of simple yet efficient algorithms for constrained, lexicographic, and multi-objective optimization. These methods are natural extensions of gradient descent: at each iteration they find the best local descent direction, one that decreases the main loss as fast as possible while respecting the special constraints imposed by the specific problem.
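To make the "best local descent direction" idea concrete, here is a minimal NumPy sketch in the lexicographic setting: an auxiliary loss is allowed to shape the update only along directions that do not conflict with descending the main loss. The projection rule is a simple, well-known gradient-surgery-style heuristic, and the function and toy losses are illustrative assumptions, not the speaker's actual algorithm.

```python
import numpy as np

def lexicographic_direction(grad_f, grad_g):
    """Combine the main-loss gradient grad_f with an auxiliary-loss
    gradient grad_g. If grad_g opposes the main descent direction,
    project out the conflicting component, so the combined step never
    slows progress on the main loss. (Illustrative sketch only.)"""
    conflict = np.dot(grad_g, grad_f)
    if conflict < 0:
        grad_g = grad_g - (conflict / np.dot(grad_f, grad_f)) * grad_f
    # The returned d satisfies d . grad_f >= ||grad_f||^2 > 0,
    # so -d is always a descent direction for the main loss.
    return grad_f + grad_g

# Toy problem: main loss f(x) = ||x - a||^2, auxiliary g(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
x, lr = np.zeros(2), 0.1
for _ in range(100):
    d = lexicographic_direction(2 * (x - a), 2 * (x - b))
    x = x - lr * d
print(x)  # converges to (approximately) a: the auxiliary loss acts
          # only along directions that do not conflict with f
```

In this toy run the iterate settles at the main-loss minimizer `a` rather than a compromise point, which is exactly the lexicographic behavior the abstract describes: the secondary loss is incorporated without hurting the main objective.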
Qiang Liu is a professor of computer science at UT Austin. His research interests are in machine learning, approximate inference, reinforcement learning, and deep learning.