me :)

Thomas Dooms

Interpretability Researcher

Hi, I'm Thomas! I'm a second-year PhD student working on mechanistic interpretability.

My academic interest lies in understanding neural networks directly from their weights.
I believe this to be the most promising avenue for understanding what a model has learned.
Tensor decompositions of the weights help us understand what is important to the model.

I am guided by curiosity, strive for simplicity, and try to be effective.

Outside of work, I love to go on long bike rides, game with friends, go skiing, and wear shorts.