Deep learning (DL) is causing revolutions in computer perception, signal restoration/reconstruction, signal synthesis, natural language understanding, and process control. DL is increasingly used to provide approximate solutions to PDEs and non-linear optimization problems, with applications in cosmology, materials science, high-energy physics, and fluid dynamics. But one of the most direct impacts of DL on the scientific computing community has been to provide flexible software platforms for numerical problems, such as PyTorch and TensorFlow, with built-in support for multi-dimensional arrays, GPU acceleration, parallelism, and automatic differentiation.
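To make the last point concrete, here is a toy forward-mode automatic-differentiation sketch built on dual numbers. It is purely illustrative of the capability these frameworks expose; PyTorch and TensorFlow actually use reverse-mode differentiation over recorded computation graphs, and all names below are hypothetical.

```python
# Toy forward-mode automatic differentiation via dual numbers.
# Illustrates what frameworks like PyTorch/TensorFlow provide built in
# (they use reverse-mode AD over computation graphs, not this scheme).

class Dual:
    """A number carrying its value and its derivative w.r.t. one input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# d/dx (3x^2 + 2x) at x = 4 is 6x + 2 = 26
value, grad = derivative(lambda x: 3 * x * x + 2 * x, 4.0)
```

The same seed-and-propagate idea generalizes to vectors and to the reverse-mode pass that makes training deep networks tractable.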
While DL has become a new tool in the toolbox of the applied mathematician, it also relies heavily on the tools of applied mathematics, notably large-scale non-linear, non-convex optimization. Yet our understanding of the landscape of the objective function and of the convergence properties of DL systems remains superficial.
One of the key questions in Machine Learning today is how to learn predictive models of the world in a self-supervised manner, a bit like humans and animals do. A class of methods that can predict high-dimensional signals, such as video, under uncertainty will be presented. It is based on capturing dependencies between variables by shaping an energy function.
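As a minimal illustration of the energy-shaping idea (not the talk's actual video-prediction method, which is far richer): an energy function E(x, y) is trained to be low on compatible pairs, and prediction becomes inference, y* = argmin_y E(x, y). All functions and parameters below are hypothetical.

```python
# Minimal energy-based model sketch. Training shapes E(x, y) to be low on
# observed pairs; prediction minimizes the energy over y by gradient descent.

def make_energy(w):
    # Toy quadratic energy: low when y matches the linear prediction w * x.
    return lambda x, y: (y - w * x) ** 2

def predict(energy, x, y0=0.0, lr=0.1, steps=200):
    """Inference: descend the energy in y, using a finite-difference gradient."""
    y, eps = y0, 1e-5
    for _ in range(steps):
        grad = (energy(x, y + eps) - energy(x, y - eps)) / (2 * eps)
        y -= lr * grad
    return y

def train(pairs, w=0.0, lr=0.05, epochs=100):
    """Shape the energy so observed (x, y) pairs get low energy."""
    for _ in range(epochs):
        for x, y in pairs:
            # dE/dw for E = (y - w*x)^2 is -2*x*(y - w*x)
            w -= lr * (-2 * x * (y - w * x))
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # learns y ≈ 2x
y_star = predict(make_energy(w), 4.0)            # inference at x = 4
```

Handling genuine uncertainty, as in video, requires shaping the energy to admit multiple low-energy predictions rather than the single minimum this toy model has.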
Yann LeCun, Facebook, U.S.