In August 2026, I will join the Oden Institute and ASE/EM faculty at UT. Prospective Ph.D. students and postdocs are welcome to contact me by email.
Klarman Fellow
Department of Mathematics
Cornell University
I am joining UT Austin as an Assistant Professor in August 2026. My research group has several openings at the Ph.D. and postdoc levels.
Welcome! I am a Klarman Fellow in the Department of Mathematics at Cornell University, where I am hosted by Prof. Alex Townsend and Prof. Yunan Yang. Broadly, my research interests lie at the intersection of computational mathematics and statistics. Using rigorous analysis and domain-specific insight, I develop novel data-driven machine learning methods for high- or infinite-dimensional problems, establish theoretical guarantees on the reliability and trustworthiness of these methods, and apply them in the physical and information sciences. My work blends operator learning with ideas from inverse problems, generative modeling, and uncertainty quantification. A current focus of my research is data science tasks formulated in the space of probability distributions.
Previously, I was an NSF Postdoctoral Fellow in the Department of Mathematics at MIT. I received my Ph.D. from Caltech in 2024, where I was fortunate to be advised by Prof. Andrew M. Stuart and supported by the Amazon AI4Science Fellows Program and an NSF Graduate Research Fellowship. My doctoral dissertation was awarded two "best thesis" prizes, one in applied mathematics and another in engineering. I obtained my M.Sc. from Caltech in 2020 and my B.Sc. (Mathematics), B.S.M.E., and B.S.A.E. degrees from Oklahoma State University in 2018.
nnelsen [at] cornell [dot] edu
2026/02 (new): I will deliver invited talks in the Analysis Seminar at the University of Oklahoma and in a Special Applied Mathematics Seminar at the University of Washington.
2026/01 (new): I gave a talk in the Joint Seminar on Inverse Problems and Learning Theory. Thanks for the invitation!
2025/12: Our survey chapter on "Operator learning meets inverse problems" has been accepted for publication in the Handbook of Numerical Analysis, Vol. 27: Machine Learning Solutions for Inverse Problems. I am excited to present this and recent work on approximating EIT in my plenary talk at the Inverse Days 2025 conference in Helsinki, Finland.
2025/11: Neural operators are universal, but can they approximate data-to-parameter solution maps of nonlinear inverse problems? In a new preprint on the extension and neural operator approximation of the electrical impedance tomography inverse map, we provide an affirmative answer both theoretically and numerically, even when the measurements are noisy. This is joint work with Maarten de Hoop, Nikola Kovachki, and Matti Lassas.
2025/10: In new work on bilevel optimization for learning hyperparameters, we develop an efficient Gauss–Newton-based bilevel scheme for hyperparameter optimization in scientific machine learning. It enjoys closed-form inner updates that avoid repeated PDE solves and enhance the accuracy, robustness, and scalability of Gaussian process models for solving nonlinear PDEs and inverse problems. This is joint work with Houman Owhadi, Andrew Stuart, Xianjin Yang, and Zongren Zou from Caltech.