Himanshu Singh

AI Researcher with a background in Pure and Applied Mathematics. I work on mechanistic interpretability, sparse/efficient learning, and neural operators for scientific AI.

Mechanistic Interpretability · Sparse Learning (ℓ₀/ℓ₁) · Neural Operators (FNO) · Mathematical ML · Foundation Models

Current Focus

I’m interested in understanding model reasoning, extracting circuits, and building efficient learning systems grounded in math.

  • Transformer circuit tracing + activation patching
  • Efficient & sparse architectures
  • Operator learning for PDEs and dynamical systems
  • Rigor, reproducibility, and research-grade code

Selected Projects

Mechanistic AI Interpretability

Circuit tracing, activation patching, and causal interventions for transformer models.

  • Built layer/head patching pipelines to localize causal pathways in transformer reasoning (see the sketch below this project).
  • Developed metrics for clean vs corrupted behavior and normalized recovery analysis.
  • Packaged experiments for reproducibility (configs, utilities, and clear results).
TransformerLens · Activation Patching · Causal Tracing · Evals
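For context, here is a minimal sketch of the kind of residual-stream patching loop involved, written against the public TransformerLens API. The prompts, metric, and hook point are illustrative, not the project's actual experimental setup.

```python
# Minimal activation-patching sketch (TransformerLens-style); illustrative only.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

clean_tokens = model.to_tokens("The Eiffel Tower is in")
corrupt_tokens = model.to_tokens("The Colosseum is in")

# Cache clean activations once, then patch them into corrupted runs layer by layer.
_, clean_cache = model.run_with_cache(clean_tokens)
paris = model.to_single_token(" Paris")
rome = model.to_single_token(" Rome")

def logit_diff(logits):
    # Toy metric: how much the model prefers " Paris" over " Rome" at the last position.
    return (logits[0, -1, paris] - logits[0, -1, rome]).item()

results = []
for layer in range(model.cfg.n_layers):
    name = utils.get_act_name("resid_pre", layer)

    def patch_hook(resid, hook, name=name):
        # Overwrite only the final-position residual stream with its clean counterpart,
        # so the two prompts need not tokenize to the same length.
        resid[:, -1, :] = clean_cache[name][:, -1, :]
        return resid

    patched_logits = model.run_with_hooks(corrupt_tokens, fwd_hooks=[(name, patch_hook)])
    results.append((layer, logit_diff(patched_logits)))

print(results)  # per-layer recovery of the clean behavior
```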

Learning Fourier Neural Operators (FNO)

Operator learning for PDE surrogate modeling (e.g., Burgers’ equation).

  • Implemented FNO training + evaluation pipelines with configurable experiments (the core spectral layer is sketched below this project).
  • Benchmarked generalization across Reynolds regimes and grid resolutions.
  • Focused on reproducible scientific ML: datasets, seeds, and result reporting.
Neural Operators · PDEs · Scientific ML · PyTorch
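For context, a minimal sketch of the 1D spectral convolution at the heart of an FNO layer, in the standard PyTorch formulation; channel counts and the mode cutoff are illustrative, not the values used in these experiments.

```python
# Minimal 1D spectral convolution sketch (core of an FNO block); illustrative sizes.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes retained
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):              # x: (batch, in_channels, n_grid)
        x_ft = torch.fft.rfft(x)       # FFT along the grid dimension
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device
        )
        # Multiply the retained modes by learned complex weights; higher modes are dropped.
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

# Usage: one FNO block combines this with a pointwise linear path and a nonlinearity.
x = torch.randn(8, 32, 256)                  # batch of 1D fields on a 256-point grid
y = SpectralConv1d(32, 32, modes=16)(x)      # same grid resolution out
```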

Sparse Neural Network

Learned hard-mask sparsification using trainable mask parameters.

  • Implemented hard-threshold masking to learn sparse linear transformations (see the sketch below this project).
  • Logged sparsity/accuracy tradeoffs and visualized sparse connectivity.
  • Packaged training/evaluation scripts with configs for repeatable runs.
ℓ₀-style Sparsity · Pruning · Efficiency · Visualization
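For illustration, a minimal sketch of one common way to train a learned hard mask: a straight-through estimator over trainable mask scores. The threshold, initialization, and layer sizes here are illustrative, not the project's exact design.

```python
# Learned hard-mask linear layer via a straight-through estimator; illustrative sketch.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, threshold=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Trainable real-valued scores; thresholding them yields a hard 0/1 mask.
        # Initialized slightly above the threshold so the layer starts dense.
        self.mask_scores = nn.Parameter(torch.full((out_features, in_features), 0.01))
        self.threshold = threshold

    def forward(self, x):
        hard_mask = (self.mask_scores > self.threshold).float()
        # Straight-through estimator: the forward pass uses the hard mask,
        # while gradients flow through sigmoid(mask_scores).
        soft_mask = torch.sigmoid(self.mask_scores)
        mask = hard_mask.detach() - soft_mask.detach() + soft_mask
        return nn.functional.linear(x, self.weight * mask, self.bias)

    def sparsity(self):
        # Fraction of weights currently masked out.
        return 1.0 - (self.mask_scores > self.threshold).float().mean().item()

layer = MaskedLinear(128, 64)
print(layer(torch.randn(4, 128)).shape, layer.sparsity())
```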

Kernel Methods, Reduced-Order Models & Dynamical Systems

Reproducing kernels, sparse representations, and learning in high-dimensional dynamics.

  • Developed RKHS-based models for nonlinear approximation and operator regression (a kernel ridge regression sketch follows below).
  • Explored sparsity constraints (ℓ₀/ℓ₁) for interpretable, data-efficient learning.
  • Connected theoretical insights (generalization / information constraints) to practical pipelines.
RKHS · Sparsity · Optimization · Scientific ML
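As a small illustration of the RKHS setting, a minimal kernel ridge regression sketch with a Gaussian (RBF) kernel; the kernel width, regularization strength, and toy data are illustrative defaults rather than results from this work.

```python
# Kernel ridge regression in an RKHS with an RBF kernel; illustrative sketch.
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    # k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale**2))

def fit_krr(X, y, lam=1e-3, lengthscale=0.5):
    # Representer theorem: the minimizer lies in span{k(x_i, .)},
    # so solving (K + lam * n * I) alpha = y gives the coefficients.
    K = rbf_kernel(X, X, lengthscale)
    n = len(X)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict_krr(X_train, alpha, X_test, lengthscale=0.5):
    return rbf_kernel(X_test, X_train, lengthscale) @ alpha

# Toy 1D regression example.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_krr(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict_krr(X, alpha, X_test))
```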

What I’m looking for

Frontier AI Research & Engineering

Roles focused on interpretability, reasoning, efficient learning, and scientific AI, bridging theory and practice.

Preferred Collaboration Style

High-trust, research-driven teams where I can iterate quickly: run experiments, write clean code, document results, and ship tools that enable others. I prioritize clarity, rigor, and measurable impact.

Reproducibility

I aim for “clone-and-run” projects: where applicable, repositories include a requirements.txt, configuration files, fixed seeds, and result plots. If you encounter any issues reproducing results, please reach out.
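As an example of the kind of seed handling involved, here is a minimal sketch; the exact utility names and options in each repository may differ.

```python
# Illustrative seed-fixing helper for reproducible runs.
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```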