Research Portfolio

Himanshu Singh

AI Researcher with a background in Pure and Applied Mathematics.

I work on mechanistic interpretability, sparse and efficient learning, neural operators, graph foundation models, kernel methods, and mathematically grounded AI systems for scientific and structured domains.

Mechanistic Interpretability · Sparse Learning (ℓ₀/ℓ₁) · Neural Operators (FNO) · Graph Foundation Models · Kernel Methods · Scientific AI

Current Focus

I’m interested in understanding model reasoning, extracting circuits, and building efficient learning systems grounded in mathematics.

  • Transformer circuit tracing + activation patching
  • Efficient and sparse architectures
  • Operator learning for PDEs and dynamical systems
  • Kernel methods and mathematical representations
  • Rigor, reproducibility, and research-grade code

Research & Ideas

Ongoing mathematical and conceptual work across scientific machine learning, operator learning, and AI.

Epistemological Asymmetry

Why AI systems succeed differently in empirical science and formal mathematics, and what this reveals about learning and proof.

Read Essay →

Koopman Lap-KeDMD

A kernel-based framework for sparse reconstruction and operator-theoretic analysis of dynamical systems.

View Project →

Research Themes

My work sits at the intersection of mathematical structure, scientific machine learning, and modern AI systems.

Scientific Machine Learning

Operator learning, kernel methods, graph-based representations, and data-driven modeling of dynamical systems, with an emphasis on mathematically grounded structure.

Mechanistic & Efficient AI

Mechanistic interpretability, sparse and efficient neural computation, and research-grade experimentation for modern learning systems.

Research Posters

Selected research posters shown in chronological order, highlighting work in scientific machine learning, mathematical AI, and graph-based learning for structured systems.

2024 · SIAM MDS

SIAM MDS 2024: Mathematical Machine Learning for Scientific Systems

A research poster presented at the SIAM Conference on Mathematics of Data Science, centered on scientific machine learning, sparse learning, neural operators, and mathematically grounded approaches to structured scientific data.

This poster reflects a broader research direction in scientific AI, efficient learning, and mathematical structure in modern machine learning.

SIAM MDS 2024 · Scientific ML · Sparse Learning · Neural Operators
2025 · SciFM

GIFT-KASTL: Graph Foundation Models for Fracture Network Learning

A research poster on graph foundation models for scientific machine learning, exploring how fracture networks and structured physical systems can be modeled using graph-based learning architectures.

This project sits at the intersection of graph AI, scientific machine learning, and foundation-model thinking for structured scientific data.

SciFM 2025 · Graph Foundation Models · Fracture Networks · Poster Artifact

Publications & Preprints

Selected papers, preprints, and public research artifacts spanning kernel methods, operator learning, and scientific machine learning.

Machine Learning Application of Generalized Gaussian Radial Basis Function and Its Reproducing Kernel Theory

Mathematics (MDPI), 2024 · Feature Paper

  • Introduces the Generalized Gaussian Radial Basis Function (GGRBF) kernel.
  • Studies the induced RKHS and orthonormal basis structure.
  • Connects theory with practical machine learning applications.
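The GGRBF's closed form is not reproduced here, but the way a reproducing kernel enters a practical learner can be sketched with the standard Gaussian RBF that the GGRBF generalizes. The following minimal kernel ridge regression example is illustrative only; `eps` and `lam` are hypothetical hyperparameters, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, eps=1.0):
    """Standard Gaussian RBF Gram matrix: K[i, j] = exp(-eps^2 * ||x_i - y_j||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-eps**2 * np.maximum(sq, 0.0))

def kernel_ridge_fit(X, y, eps=1.0, lam=1e-4):
    """Solve (K + lam*I) alpha = y; the fitted function lives in the kernel's RKHS."""
    K = rbf_kernel(X, X, eps)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, eps=1.0):
    return rbf_kernel(X_new, X_train, eps) @ alpha

# Toy regression: recover sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.01 * rng.normal(size=40)
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, np.array([[0.5]]))
```

Swapping in a different kernel (such as the GGRBF) changes only `rbf_kernel`; the RKHS machinery around it is unchanged, which is what makes the kernel's theoretical structure directly usable in practice.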

Kernel Dynamic Mode Decomposition for Sparse Reconstruction of Closable Koopman Operators

arXiv preprint · Collaborative research

  • Studies RKHS-based Koopman analysis and closability of Koopman operators.
  • Investigates Laplacian kernels in data-driven dynamical systems.
  • Applies kernel dynamic mode decomposition to spatiotemporal reconstruction.
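For intuition, here is a minimal sketch of exact (non-kernel) dynamic mode decomposition on snapshot pairs; the kernel variant studied in the preprint replaces the snapshot inner products with RKHS kernel evaluations. This is a generic DMD illustration, not code from the paper.

```python
import numpy as np

def dmd(X, Y, r=None):
    """Exact DMD: approximate the linear map A in Y ≈ A X from snapshot pairs.

    X, Y: (n_features, n_snapshots), with Y the one-step-advanced snapshots.
    Returns approximate Koopman eigenvalues and exact DMD modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project A onto the POD basis: A_tilde = U* Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k: DMD should recover A's spectrum.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
S = np.array(snaps).T
eigvals, modes = dmd(S[:, :-1], S[:, 1:])
# eigvals recover the spectrum of A, {0.9, 0.8}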

Featured Paper

A feature paper on the Generalized Gaussian Radial Basis Function (GGRBF), connecting reproducing kernel Hilbert space theory, orthonormal bases, and machine learning applications.

Feature Paper · Mathematics 2024

Generalized Gaussian Radial Basis Function for Artificial Intelligence

This work develops the Generalized Gaussian Radial Basis Function (GGRBF), studies its reproducing kernel Hilbert space, and connects it to mathematical structures such as orthonormal bases and Hermite-type eigen-observables.

The project bridges kernel methods, RKHS theory, and machine learning applications, showing how a mathematically enriched radial basis framework can support both theoretical analysis and practical modeling.

Feature Paper · RKHS Theory · Kernel Methods · Hermite Structures

Selected Projects

Representative technical projects spanning interpretability, operator learning, sparse neural computation, and mathematical modeling for AI systems.

Mechanistic AI Interpretability

Circuit tracing, activation patching, and causal interventions for transformer models.

  • Built layer/head patching pipelines to localize causal pathways in transformer reasoning.
  • Developed metrics for clean vs corrupted behavior and normalized recovery analysis.
  • Packaged experiments for reproducibility with configs, utilities, and interpretable outputs.
TransformerLens · Activation Patching · Causal Tracing · Evals
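The patching idea behind this project can be shown on a toy network: cache activations from a clean run, overwrite ("patch") them into a corrupted run, and measure how much of the clean behavior is restored. Real pipelines do this per layer and per head via TransformerLens hooks; this NumPy sketch only illustrates the intervention and the normalized recovery metric.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

def forward(x, patch=None):
    """Two-layer toy network; optionally overwrite the hidden layer (the 'patch')."""
    h = np.tanh(W1 @ x)
    if patch is not None:
        h = patch  # causal intervention: swap in a cached activation
    return W2 @ h

x_clean = np.array([1.0, 0.5, -0.2])
x_corr = np.array([-1.0, 0.1, 0.9])

h_clean = np.tanh(W1 @ x_clean)           # cache the clean activation
y_clean = forward(x_clean)
y_corr = forward(x_corr)
y_patch = forward(x_corr, patch=h_clean)  # patch clean activation into corrupted run

# Normalized recovery: 1 = patch fully restores clean behavior, 0 = no effect.
recovery = np.linalg.norm(y_patch - y_corr) / np.linalg.norm(y_clean - y_corr)
```

Patching the entire (only) hidden layer trivially restores the clean output, so recovery is exactly 1 here; the interesting experiments patch individual components to localize which ones carry the causal signal.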

Learning Fourier Neural Operators (FNO)

Operator learning for PDE surrogate modeling and scientific machine learning.

  • Implemented FNO training and evaluation pipelines with configurable experiments.
  • Benchmarked generalization across physical regimes and resolution scales.
  • Focused on reproducible scientific ML with datasets, seeds, and result reporting.
Neural Operators · PDEs · Scientific ML · PyTorch
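The core FNO building block is a spectral convolution: transform to Fourier space, multiply a truncated set of low modes by learned complex weights, and transform back. A minimal 1D NumPy sketch (a real FNO stacks these with pointwise linear layers and nonlinearities, typically in PyTorch):

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    """FFT -> keep the lowest n_modes -> learned complex multiply -> inverse FFT.

    x: real signal of shape (n_points,); weights: complex array of shape (n_modes,).
    """
    x_hat = np.fft.rfft(x)
    out_hat = np.zeros_like(x_hat)
    out_hat[:n_modes] = weights * x_hat[:n_modes]  # act only on retained Fourier modes
    return np.fft.irfft(out_hat, n=len(x))

# With identity weights the layer is a low-pass filter: a low-frequency
# signal passes through unchanged, high frequencies are zeroed out.
n = 64
grid = np.linspace(0, 2 * np.pi, n, endpoint=False)
x = np.sin(3 * grid)  # Fourier mode k = 3
y = spectral_conv_1d(x, np.ones(8, dtype=complex), n_modes=8)
```

Because the weights act on Fourier modes rather than grid points, the same learned layer can be evaluated at different resolutions, which is what enables the cross-resolution benchmarking mentioned above.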

Sparse Neural Network

Learned hard-mask sparsification using trainable mask parameters.

  • Implemented hard-threshold masking to learn sparse linear transformations.
  • Logged sparsity–accuracy tradeoffs and visualized sparse connectivity patterns.
  • Packaged training and evaluation scripts with configs for repeatable runs.
ℓ₀-style Sparsity · Pruning · Efficiency · Visualization
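The masking mechanism can be sketched in a few lines: each weight gets a trainable score, and the forward pass keeps only weights whose score clears a threshold. This NumPy sketch shows the forward masking and the sparsity measurement; in training, the non-differentiable threshold is typically handled with a straight-through estimator, which is omitted here.

```python
import numpy as np

def masked_linear(x, W, scores, tau=0.5):
    """Hard-threshold masking: keep W[i, j] only where |scores[i, j]| > tau.

    `scores` stands in for the trainable mask parameters; gradients through
    the hard threshold would use a straight-through estimator in practice.
    """
    mask = (np.abs(scores) > tau).astype(W.dtype)
    return (W * mask) @ x, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
scores = rng.normal(size=(4, 8))
x = rng.normal(size=8)
y, mask = masked_linear(x, W, scores, tau=1.0)
sparsity = 1.0 - mask.mean()  # fraction of pruned connections
```

Raising `tau` prunes more connections, which is the knob behind the sparsity–accuracy tradeoff curves logged in the project.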

Graph Foundation Models for Scientific Systems

Graph learning and foundation-model ideas for fracture networks and structured physical systems.

  • Explored graph-based learning pipelines for fracture-network representations.
  • Studied structured scientific datasets through graph foundation model perspectives.
  • Developed poster-based research artifacts for public scientific communication.
Graph AI · Scientific Systems · Foundation Models · Research Posters

What I’m Looking For

Research and engineering environments where theory, experimentation, and carefully built systems reinforce one another.

Frontier AI Research & Engineering

Research and engineering roles focused on interpretability, reasoning, efficient learning, and scientific AI—especially environments that value deep technical work, reproducibility, and mathematically grounded modeling.

Preferred Collaboration Style

High-trust, research-driven teams where I can iterate quickly: run experiments, write clean code, document results, and ship tools that enable others. I prioritize clarity, rigor, and measurable impact.

Reproducibility

I aim for repositories that are understandable, runnable, and useful to other researchers.

Where applicable, repositories include requirements.txt, configuration files, fixed seeds, and result plots. If you encounter any issue reproducing results, please reach out.

Collaboration & Contact

Open to research collaborations, research engineering roles, and conversations around scientific machine learning, interpretability, and mathematically grounded AI.

Current Interests

I am particularly interested in collaborations involving scientific machine learning, operator learning, kernel methods, sparse representations, graph learning, and mechanistic interpretability.