This event has been canceled.
Bio: Jayaraman J. Thiagarajan is a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory (California, USA). He received his Ph.D. in electrical engineering from Arizona State University in 2013. His research interests span machine learning, computer vision, natural language modeling, data analysis, and signal processing. He has published over 100 peer-reviewed conference and journal articles and has co-authored two books and multiple book chapters. He leads DOE and Office of Science projects on AI for science, robust and reliable deep learning, healthcare AI, and explainable machine learning. He serves on the applied math visioning committee of the Advanced Scientific Computing Research program.
Abstract: With the promise of AI being of crucial scientific and commercial value in the coming years, it is imperative to better understand and re-evaluate our objectives in building intelligent machines. This is particularly important in high-impact applications such as science and healthcare, where there is growing interest in integrating machine learning tools into decision workflows, with the hope that the models are reliable and meaningful. At the core, the continued success of machine learning (ML) solutions depends on understanding and incorporating constraints from the real world. These constraints encompass a myriad of challenges arising from both the data and the environments in which the models operate, and building models that respect one or more of these requirements often demands rethinking ML solutions. While in some cases the limitations and prior knowledge about the environment can be posed as constraints on learning, in other scenarios components of the underlying computational engine, for example deep neural networks, must be redesigned to meet the application requirements. For instance, known physical laws about the environment can be utilized in the form of constraints, whereas endowing models with uncertainty estimates requires designing probabilistic variants of the solutions. This talk will broadly discuss key research directions toward making machine learning models survive the challenges of the real world.