News Archive
2024
2024/12: I will deliver an invited talk at the D3S3: Data-Driven and Differentiable Simulations, Surrogates, and Solvers Workshop at NeurIPS '24 in Vancouver, British Columbia, Canada.
2024/10 [In the media]: I was featured in an Oklahoma State University alumni highlight story about my research career and future plans. Thanks for the write-up!
2024/09: I am attending a workshop on the Statistical Aspects of Non-Linear Inverse Problems at the University of Cambridge.
2024/08: I am happy to announce that Andrew Stuart and I received a 2024 SIGEST Award from SIAM for our 2021 paper on operator learning using random features. This award recognizes an exceptional paper of general interest published in the SIAM Journal on Scientific Computing in the last few years. An expanded version of the article is now published online in the SIGEST section in SIAM Review.
2024/08: I had a productive research visit at the University of Washington Department of Applied Mathematics. Thanks to Bamdad Hosseini for hosting!
2024/08: I am giving an invited talk on operator learning for parameter-to-observable maps at the University of Bath Machine Learning in Infinite Dimensions Workshop in Bath, UK. Our paper on this topic was accepted for publication in the AIMS journal "Foundations of Data Science".
2024/07: Our new preprint, "Hyperparameter Optimization for Randomized Algorithms (A Case Study for Random Features)," develops a framework for tuning randomized algorithms with black-box, derivative-free, particle-based optimizers. We demonstrate that random features can be a robust and practical replacement for Gaussian processes in high-dimensional regression problems; a minimal sketch of the idea follows below. Thanks to Oliver Dunbar and Maya Mutic for the great collaboration!
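For intuition on the item above, here is a minimal, self-contained sketch of random feature ridge regression standing in for a Gaussian process regressor. It is illustrative only, not the preprint's implementation; the function names and defaults are mine, and the hyperparameters M (feature count), ell (lengthscale), and lam (ridge strength) are exactly the kind of quantities a black-box, derivative-free optimizer could tune.

```python
# Sketch: random Fourier feature (RFF) ridge regression as a stand-in for a
# Gaussian process with an RBF kernel. Illustrative only; names are mine.
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, W, b):
    """Map inputs X of shape (N, d) to M random Fourier features."""
    M = W.shape[1]
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

def fit_rff_ridge(X, y, M=512, ell=1.0, lam=1e-4):
    """Ridge regression on random features approximating an RBF-kernel GP mean."""
    d = X.shape[1]
    W = rng.standard_normal((d, M)) / ell      # frequencies ~ N(0, 1/ell^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)  # random phases
    Phi = rff_features(X, W, b)                # (N, M) feature matrix
    alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)
    return lambda X_new: rff_features(X_new, W, b) @ alpha

# Toy high-dimensional regression problem.
d, N = 20, 400
X = rng.standard_normal((N, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)
predict = fit_rff_ridge(X, y, M=1024, ell=2.0, lam=1e-3)
print(predict(rng.standard_normal((5, d))))
```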
2024/07: I am joining the Department of Mathematics at MIT as an NSF Mathematical Sciences Postdoctoral Research Fellow. The following year, I will join the Department of Mathematics at Cornell University as a Klarman Fellow.
2024/06: My Caltech Ph.D. thesis on the "Statistical Foundations of Operator Learning" won the W.P. Carey and Co. Prize for Best Thesis in Applied Mathematics and the Centennial Prize for the Best Thesis in Mechanical and Civil Engineering (MCE).
2024/06 [In the media]: I was interviewed about my research program and how it connects with the public sphere in Caltech Magazine's #SoCaltech section, and I was also featured in SIAM's student spotlight video, where I discuss my graduate research and associated skills.
2024/05: My paper on linear operator learning is the most-read article in the SIAM/ASA Journal on Uncertainty Quantification (JUQ) as of May 2024.
2024/04: I successfully defended my Ph.D. thesis on the Statistical Foundations of Operator Learning! Thanks to everyone for the support.
2024/02: My new preprint provides "An operator learning perspective on parameter-to-observable maps" (with Daniel Z. Huang and Margaret Trautner). This work introduces and implements Fourier Neural Mappings, a principled extension of FNOs for learning maps with finite-dimensional inputs and/or outputs. For the task of predicting finite-dimensional quantities of interest (QoIs), a theoretical analysis explores the relative difficulty of full-field operator learning versus end-to-end learning of the QoIs; see the schematic below. The accompanying code is publicly available here.
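In schematic form, the two learning strategies compared in that analysis can be written as follows; the symbols here are illustrative shorthand, not necessarily the paper's notation.

```latex
% Illustrative shorthand, not necessarily the paper's notation:
% \Psi^\dagger : a \mapsto u is the parameter-to-solution (full-field) map,
% \mathcal{O} : u \mapsto q \in \mathbb{R}^p extracts the finite-dimensional QoI.
\[
  \underbrace{a \,\mapsto\, q = \mathcal{O}\bigl(\Psi^{\dagger}(a)\bigr)}_{\text{end-to-end learning of the QoI map}}
  \qquad \text{versus} \qquad
  \underbrace{a \,\mapsto\, u = \Psi^{\dagger}(a)}_{\text{full-field learning}}
  \ \text{followed by}\ u \,\mapsto\, q = \mathcal{O}(u).
\]
```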
2024/02: I am presenting my spotlight work on function-valued random features in MS146: Learning High-Dimensional Functions: Approximation, Sampling, and Algorithms at the SIAM Conference on Uncertainty Quantification (UQ24) in Trieste, Italy. There, I am also co-organizing a minisymposium on Recent Advances in Scalable Active Learning and Optimal Experimental Design.
2024/02: I presented my work on the "Foundations of Data-Efficient and Uncertainty-Aware Scientific Machine Learning" at the Joint ASE/Oden Institute Seminar at UT Austin and the MAE Colloquium at Cornell University.
2024/01: I gave a talk at the Cornell Scientific Computing and Numerics (SCAN) seminar.
2023
2023/12: I am presenting a poster on our spotlight paper at NeurIPS 2023 in New Orleans, Louisiana.
2023/10: I am delivering an invited talk in MS23: Advances in V&V, Uncertainty Quantification, and Data-Driven Modeling at the Advances in Computational Mechanics (ACM 2023) conference in Austin, TX.
2023/09: Our work on Error Bounds for Learning with Vector-Valued Random Features was accepted as a NeurIPS 2023 Spotlight paper!
2023/09: I am giving an invited talk in MS05: Numerical Meet Statistical Methods in Inverse Problems at the 11th Applied Inverse Problems Conference (AIP23) to be held in Göttingen, Germany.
2023/08: I am co-organizing the minisymposium MS831: Randomization for Simplified Machine Learning - Random Features and Reservoir Computers at the 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023) in Tokyo, Japan.
2023/07: I am presenting in MS416: Recent Developments in Operator Networks at the 17th U.S. National Congress on Computational Mechanics (USNCCM17) in Albuquerque, NM.
2023/06: I am participating in the INdAM Learning for Inverse Problems workshop in Rome, Italy and the BIRS workshop on Scientific Machine Learning at the Banff Centre in Alberta, Canada.
2023/05: Our new preprint establishes state-of-the-art Error Bounds for Learning with Vector-Valued Random Features (joint work with Samuel Lanthaler). The theory holds in a general infinite-dimensional setting (applying to operator learning in particular) and is developed with a matrix-free analysis. This yields the sharpest rates to date (free of logarithmic factors) for random feature ridge regression; the generic form of the estimator is sketched below.
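For context, the estimator treated in this line of work is random feature ridge regression in its standard form, written here with generic notation and up to normalization conventions (the symbols are chosen for illustration):

```latex
% Generic vector-valued random feature ridge regression; notation illustrative.
% \varphi(\cdot\,;\theta_m) are Y-valued random features with i.i.d. parameters
% \theta_m, (x_n, y_n) are the N training pairs, and \lambda > 0 is the ridge
% penalty strength.
\[
  \hat{f}(x) = \frac{1}{M} \sum_{m=1}^{M} \hat{\alpha}_m \, \varphi(x; \theta_m),
  \qquad
  \hat{\alpha} \in \operatorname*{arg\,min}_{\alpha \in \mathbb{R}^{M}}
  \frac{1}{N} \sum_{n=1}^{N}
  \Bigl\| y_n - \frac{1}{M} \sum_{m=1}^{M} \alpha_m \varphi(x_n; \theta_m) \Bigr\|_{\mathcal{Y}}^{2}
  + \frac{\lambda}{M} \, \| \alpha \|_{2}^{2} .
\]
```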
2023/05: My paper on linear operator learning was published in the SIAM/ASA Journal on Uncertainty Quantification.
2023/05: I am giving an invited talk in the Level Set Seminar at the UCLA Department of Mathematics.
2023/04: I am speaking at the Oden Institute's inaugural Workshop on Scientific Machine Learning at UT Austin, the Workshop on Establishing Benchmarks for Data-Driven Modeling of Physical Systems at USC, and the Southern California Applied Mathematics Symposium (SoCAMS) at UC Irvine.
2023/03: I have been selected as a 2022-2023 Amazon/Caltech AI4Science Fellow! The program recognizes researchers who have had a remarkable impact in artificial intelligence and machine learning, and in their application to fields beyond computer science.
2023/02: I am giving an invited talk about "Learning the Electrical Impedance Tomography Inversion Operator" in MS46: Goal-Oriented and Context-Aware Scientific Machine Learning, part of SIAM CSE23 in Amsterdam, The Netherlands. There, I am also co-organizing MS370 and MS406: Operator Learning in the Physical and Data Sciences, Parts I & II.
2022
2022/12: I participated in the International Conference on New Trends in Computational and Data Sciences at Caltech.
2022/11: Our paper on the theory of linear operator learning was accepted for publication in the SIAM/ASA Journal on Uncertainty Quantification.
2022/09: Professor Joel Tropp's course lecture notes on "Matrix Analysis" are now publicly available and include Chapter III.8, which I wrote, on the topic of "Operator-Valued Kernels."
2022/09: I am giving an invited talk about "Scalable Uncertainty Quantification with Random Features" in MS85: Recent Advances in Kernel Methods for Computing and Learning, part of SIAM MDS22 in San Diego, CA. There, I am also co-organizing MS81: Provable Guarantees for Learning Dynamical Systems.
2022/08: I am giving an invited virtual talk about my joint work on operator learning in MS1714: Advances in Scientific Machine Learning for High-Dimensional Many-Query Problems, part of WCCM-APCOM in Yokohama, Japan.
2022/06: An improved version of my work on linear operator learning is now available on arXiv. In it, three fundamental principles reveal the types of linear operators, training data, and distribution shift that lead to reduced sample size requirements for supervised learning in infinite dimensions; the underlying data model is sketched below.
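The underlying data model can be summarized schematically; the notation here is mine, chosen for illustration. One observes N noisy input-output pairs generated by an unknown linear operator between Hilbert spaces and seeks to estimate that operator from the pairs:

```latex
% Schematic data model for linear operator learning; notation illustrative.
% L^\dagger is the unknown linear operator, the x_n are random input
% functions, and the \varepsilon_n are observational noise in the output space.
\[
  y_n = L^{\dagger} x_n + \varepsilon_n , \qquad n = 1, \dots, N .
\]
```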
2022/06: I am giving an invited talk about our work on learned surrogates for parametric PDEs in MS210: Reduced-Order and Surrogate Models for Mechanics of Porous Media, part of the Engineering Mechanics Institute Conference at Johns Hopkins University, Baltimore, MD.
2022/05: I am giving an invited virtual talk about "Noisy Linear Operator Learning as an Inverse Problem" in WS3: PDE-constrained Bayesian Inverse Problems, part of the Computational Uncertainty Quantification thematic programme at the Erwin Schrödinger Institute in Vienna, Austria.
2022/05: I am giving an invited virtual talk about "Bayesian Posterior Contraction for Linear Operator Learning" at the AMS Spring Western Sectional Meeting Special Session on Mathematical Advances in Bayesian Statistical Inversion and Markov Chain Monte Carlo Sampling Algorithms.
2022/04: I am co-organizing a minisymposium on Operator Learning in PDEs, Inverse Problems, and UQ at SIAM UQ22 in Atlanta, GA, where I will also be speaking about our recent work on "Convergence Rates for Learning Linear Operators from Noisy Data".
2022/01: This year I am co-organizing the CMX Student/Postdoc Seminar in the Caltech Department of Computing and Mathematical Sciences.
2021
2021/11: I am virtually attending the Deep Learning and Partial Differential Equations workshop as a part of the Mathematics of Deep Learning programme at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK.
2021/10: I am virtually attending the Statistical Aspects of Non-Linear Inverse Problems workshop hosted by the Banff International Research Station for Mathematical Innovation and Discovery (BIRS) as an invited participant from October 31st to November 5th.
2021/10: I am giving an invited talk at the Caltech CMX Student Seminar on some of my Ph.D. work on operator regression.
2021/09: My paper on Banach space random feature methods, joint work with A.M. Stuart, was published in the SIAM Journal on Scientific Computing.
2021/09: I am virtually attending the Deep Learning and Inverse Problems workshop as a part of the Mathematics of Deep Learning programme at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK.
2021/08: My new preprint on "Convergence Rates for Learning Linear Operators from Noisy Data," joint with M.V. de Hoop, N.B. Kovachki, and A.M. Stuart, is now available. In it, we prove that a class of compact, bounded, and even unbounded operators can be stably estimated from noisy input-output pairs.
2021/07: I gave a SIAM AN21 talk on July 19th titled "Function Space Random Feature Methods for Learning Parametric PDE Solution Operators," with particular emphasis on fast learned surrogates for Bayesian inverse problems.
2021/06: I presented my forthcoming joint work on "Learning Unbounded Operators" to the Geo-Mathematical Imaging Group at Rice University on June 15th.
2021/04: I was admitted to candidacy for the Ph.D. degree.
2021/03: At SIAM CSE21, I co-organized (with Nathaniel Trask and Ravi Patel) the virtual minisymposia "Learning Operators from Data" and "Machine Learning for Surrogate Model and Operator Discovery."
2021/01: I was invited to speak at the SIAM Annual Meeting (AN21) virtual minisymposium "Deep Learning for High-Dimensional Parametric PDEs" in July; looking forward to it!
2020
2020/12: I participated in the virtual Workshop on Mathematical Machine Learning and Applications hosted by the CCMA at Penn State.
2020/11: I virtually gave an invited talk about my work on random feature methods for parametric PDEs in the numerical analysis and machine learning reading group seminar at the Courant Institute of Mathematical Sciences, New York University.
2020/09: I virtually gave an invited talk (both live and pre-recorded) in the Kernel Methods session of the Second Symposium on Machine Learning and Dynamical Systems at The Fields Institute, Toronto, Canada.
2020/07: I presented a poster at the virtual GAMM Juniors’ Summer School on Applied Mathematics and Mechanics, "Learning Models from Data: Model Reduction, System Identification and Machine Learning," held at the Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany. I also attended the MSML2020 online conference at Princeton University.
2020/04: I virtually attended Workshop II: PDE and Inverse Problem Methods in Machine Learning at the IPAM High-Dimensional Hamilton-Jacobi PDEs long program at UCLA, Los Angeles, CA.
2020/02: I participated in the Inverse Problems: Algorithms, Analysis and Applications workshop at Caltech, hosted by the CMX group in the CMS department.