Spectral bias in equivariant GNNs
How do group-equivariant constraints reshape the frequency content a network is willing to fit? A spectral perspective on inductive bias and generalisation in geometric deep learning.
→I work on the spectral structure of equivariant networks — how symmetry, frequency, and inductive bias shape what a model can learn. Joint research with ETH and MIT, targeting NeurIPS 2026.
I am a Master's student in Electrical Engineering & Computer Science at ETH Zürich, with a confirmed visit to UCL's Gatsby Computational Neuroscience Unit. My research lives at the intersection of geometric deep learning, spectral analysis, and statistical physics.
Before research, I built facadetool.com, a proptech product now used by ~10,000 people, and published on dynamic 3D scene graphs at CVPR. I taught optimal transport under Alessio Figalli, profiled Llama 3 on a GH200, and shipped behavioural-cloning policies on a real robot arm.
I care about niche scientific contribution — ideas that are precise, beautiful, and a little unfashionable.
Joint work with Manasa Kaniselvan (Luisier group, ETH) and a Smidt-adjacent postdoc at MIT, with compute on an NVIDIA GH200 node. It started as a semester project and was elevated to a full submission once the collaboration was confirmed.
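The spectral-bias question can be made concrete with a standard NTK-regime toy calculation (a sketch with assumed eigenvalues, not this project's actual model): under gradient descent, kernel regression decouples into eigenmodes, and each mode's residual decays geometrically at a rate set by its eigenvalue, so modes with small eigenvalues — typically high frequencies — are fit far more slowly.

```python
import numpy as np

# Toy spectral-bias calculation in the NTK regime.
# Assumption: kernel eigenvalues decay as 1/k^2 with mode frequency k
# (an illustrative choice, not measured from any real network).
freqs = np.arange(1, 6)      # mode indices k = 1..5
lam = 1.0 / freqs**2         # assumed eigenvalue per mode
lr = 0.5                     # learning rate

def residual(t):
    # Gradient descent on kernel regression gives, per eigenmode,
    # residual_k(t) = (1 - lr * lam_k)^t * residual_k(0), with unit init.
    return (1 - lr * lam) ** t

r = residual(50)
# Low-frequency residuals shrink orders of magnitude faster than
# high-frequency ones: r is monotonically increasing in k.
```

Equivariance constraints change which eigenmodes exist at all, which is exactly the lever the project studies.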
→Modelling first-person video as evolving 3D scene graphs. Published at CVPR; demonstrates how relational structure, not just appearance, drives action understanding.
→An independent proptech product for building façade analysis. Designed, built, shipped, and maintained end-to-end — from CV pipeline to billing. A profitable sideline that funds the rest.
→Predicting electronic-structure Hamiltonians with E(3)-equivariant networks for materials. The spectral learning dynamics observed here seeded the current NeurIPS line of work.
→A modular VLA pipeline: a small quantised LLM for instruction parsing, a CLIP ViT-B for grounding, and an MLP policy for control. Behavioural cloning on real teleoperation data reaches ~76% success.
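The modular structure of such a pipeline can be sketched as three swappable components behind a single `act` call. Everything below is an illustrative placeholder — the class names, interfaces, and toy embeddings are assumptions, not the project's actual code; the real modules would run a quantised LLM and CLIP ViT-B.

```python
import numpy as np

class InstructionParser:
    """Stand-in for the quantised LLM: maps text to a task embedding."""
    def __init__(self, dim=16):
        self.dim = dim
    def __call__(self, instruction: str) -> np.ndarray:
        # Toy hash-seeded embedding; the real module would run an LLM.
        rng = np.random.default_rng(abs(hash(instruction)) % (2**32))
        return rng.standard_normal(self.dim)

class VisualGrounder:
    """Stand-in for CLIP ViT-B: maps an image to a feature vector."""
    def __init__(self, dim=16):
        self.dim = dim
    def __call__(self, image: np.ndarray) -> np.ndarray:
        # Toy channel pooling; the real module would encode with CLIP.
        feat = image.mean(axis=(0, 1))
        return np.resize(feat, self.dim)

class MLPPolicy:
    """Small MLP mapping [task ; visual] features to a 7-DoF action."""
    def __init__(self, in_dim=32, hidden=64, out_dim=7, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, out_dim)) * 0.1
    def __call__(self, x: np.ndarray) -> np.ndarray:
        return np.tanh(x @ self.W1) @ self.W2

def act(instruction, image, parser, grounder, policy):
    # One control step: parse, ground, concatenate, act.
    x = np.concatenate([parser(instruction), grounder(image)])
    return policy(x)
```

The point of the interface is that each stage can be replaced independently — a larger LLM, a different vision backbone, or a diffusion policy — without touching the others.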
→Profiled forward-pass FFN GEMMs on an NVIDIA GH200 with Nsight Systems. Full roofline characterisation of memory- vs. compute-bound regimes across batch and sequence dimensions.
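The roofline classification behind that characterisation is a short back-of-envelope calculation: compare a GEMM's arithmetic intensity (FLOPs per byte moved) against the machine balance (peak FLOP/s over peak bandwidth). The peak numbers below are illustrative placeholders, not GH200 datasheet values — substitute measured peaks for your part.

```python
def gemm_intensity(M, N, K, bytes_per_el=2):
    """Arithmetic intensity of a (M,K) x (K,N) GEMM, naive data movement."""
    flops = 2 * M * N * K                            # multiply + add per MAC
    bytes_moved = bytes_per_el * (M * K + K * N + M * N)  # read A, B; write C
    return flops / bytes_moved

def classify(M, N, K, peak_flops, peak_bw, bytes_per_el=2):
    """Memory- vs compute-bound under the roofline model."""
    ridge = peak_flops / peak_bw                     # machine balance, FLOP/byte
    ai = gemm_intensity(M, N, K, bytes_per_el)
    return "compute-bound" if ai >= ridge else "memory-bound"

# With placeholder peaks (1 PFLOP/s, 4 TB/s -> ridge = 250 FLOP/byte):
# a batch-1 decode GEMM (M=1, N=K=4096) has AI ~ 1  -> memory-bound,
# a large prefill GEMM (M=N=K=4096) has AI ~ 1365   -> compute-bound.
```

This is why the batch and sequence dimensions decide the regime: they scale FLOPs cubically but weight traffic stays fixed, so small-M decode is bandwidth-limited while large-M prefill saturates compute.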
→Open to research collaborations, conversations about geometric deep learning, and the occasional good problem.