Ruth C. Fong

I am a teaching faculty member in the Department of Computer Science at Princeton University, where I teach introductory CS courses and conduct research in computer vision and machine learning with a focus on explainable AI. At Princeton, I primarily collaborate with Professor Olga Russakovsky and the Visual AI Lab.

I completed my PhD in the Visual Geometry Group at the University of Oxford, where I was advised by Andrea Vedaldi and funded by the Rhodes Trust and Open Philanthropy. Also at Oxford, I earned a master's degree in Neuroscience, working with Rafal Bogacz, Ben Willmore, and Nicol Harper. I received a bachelor's degree in Computer Science from Harvard University, where I worked with David Cox and Walter Scheirer.

Email  /  CV  /  Bio  /  Google Scholar  /  GitHub

News
  • I am excited to announce that I am joining Princeton's CS department as a teaching faculty member starting July 2021.
  • My PhD thesis, "Understanding Convolutional Neural Networks," can be found here. All chapters except the technical chapters 3-6 were written with accessibility in mind for readers with less experience.
  • We have a new report out on "Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims." webpage | arXiv
  • We just released TorchRay, a PyTorch interpretability library. The initial release focuses on attribution, re-implementing popular methods and benchmarks to encourage reproducible research; a minimal usage sketch follows this list. Resources: tutorial slides | colab notebook
  • Slides for a VGG tutorial on interpretability that I recently gave can be found here.
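
As a quick illustration, the snippet below (adapted from TorchRay's README; entry points may have shifted in later releases) computes a Grad-CAM attribution map for one of the library's bundled examples:

    from torchray.attribution.grad_cam import grad_cam
    from torchray.benchmark import get_example_data, plot_example

    # Obtain an example model, image, and target category.
    model, x, category_id, _ = get_example_data()

    # Run Grad-CAM at an intermediate convolutional layer.
    saliency = grad_cam(model, x, category_id, saliency_layer='features.29')

    # Visualize the input alongside its attribution map.
    plot_example(x, saliency, 'grad-cam backprop', category_id)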
Research

My research interests are in computer vision, machine learning, deep learning, and explainable AI. I am interested in developing novel techniques for understanding AI models post hoc, designing new deep learning architectures that are interpretable by design, and introducing paradigms for finding and correcting failure points in AI models. Representative papers are highlighted.

There and Back Again: Revisiting Backpropagation Saliency Methods
Sylvestre-Alvise Rebuffi*, Ruth Fong*, Xu Ji*, and Andrea Vedaldi
CVPR, 2020
arxiv | code | bibtex

We outline a novel framework that unifies many backpropagation saliency methods. Furthermore, we introduce NormGrad, a saliency method that considers the spatial contribution of the gradients of convolutional weights. We also systematically study the effects of combining saliency maps at different layers. Finally, we introduce a class-sensitivity metric and a meta-learning inspired technique that can be applied to any saliency method to improve class sensitivity.
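
A rough sketch of the basic NormGrad computation without a bias term (the model, layer, and class index below are illustrative stand-ins, not the paper's exact setup):

    import torch
    import torchvision

    # Stand-in model and layer; the saliency map can be computed at any spatial layer.
    model = torchvision.models.resnet50(pretrained=True).eval()
    layer = model.layer4

    store = {}
    layer.register_forward_hook(lambda m, i, o: store.update(a=o.detach()))
    layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0].detach()))

    x = torch.randn(1, 3, 224, 224)  # stand-in preprocessed image
    model(x)[0, 207].backward()      # backprop the score of an arbitrary class

    # Per-location saliency: the norm of the activation times the norm of the
    # gradient, i.e., the Frobenius norm of their outer product.
    saliency = store['a'].norm(dim=1) * store['g'].norm(dim=1)  # (1, H, W)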

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth Fong*, Mandela Patrick*, and Andrea Vedaldi
ICCV, 2019 (Oral)
arxiv | supp | poster | code (TorchRay) | bibtex

We introduce extremal perturbations, a novel attribution method that highlights "where" a model is "looking." We improve upon Fong and Vedaldi, 2017 by separating regularization on the size and smoothness of a perturbation mask from the attribution objective of learning a mask that maximally affects a model's output; we also extend our method to intermediate channel representations.
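
Extremal perturbations are implemented in TorchRay; the snippet below is adapted from the library's README (argument details may have changed since the initial release), where areas gives the target mask size as a fraction of the image:

    from torchray.attribution.extremal_perturbation import extremal_perturbation, contrastive_reward
    from torchray.benchmark import get_example_data, plot_example

    # Obtain an example model, image, and target category.
    model, x, category_id, _ = get_example_data()

    # Learn a smooth mask covering ~12% of the image that maximally affects the
    # model's output for the target category.
    masks, _ = extremal_perturbation(
        model, x, category_id,
        reward_func=contrastive_reward,
        areas=[0.12],
    )

    plot_example(x, masks, 'extremal perturbation', category_id)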

Occlusions for Effective Data Augmentation in Image Classification
Ruth Fong and Andrea Vedaldi
ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, 2019
paper | bibtex | code (coming soon)

We introduce a simple paradigm based on batch augmentation for leveraging input-level occlusions (both stochastic and saliency-based) to improve ImageNet image classification. We also demonstrate the necessity of batch augmentation and quantify the robustness of different CNN architectures to occlusion via ablation studies.
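
A minimal sketch of the two ingredients (patch size and number of copies are illustrative, not the paper's settings): Cutout-style stochastic occlusion, and batch augmentation that stacks each clean minibatch with several occluded copies of itself.

    import torch

    def occlude(x, patch=64):
        # Zero out one random square patch per image (stochastic occlusion).
        x = x.clone()
        n, _, h, w = x.shape
        for i in range(n):
            top = int(torch.randint(0, h - patch + 1, (1,)))
            left = int(torch.randint(0, w - patch + 1, (1,)))
            x[i, :, top:top + patch, left:left + patch] = 0
        return x

    def batch_augment(x, y, copies=2):
        # Train on the clean batch plus `copies` occluded versions of it.
        xs = [x] + [occlude(x) for _ in range(copies)]
        return torch.cat(xs), y.repeat(copies + 1)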

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth Fong and Andrea Vedaldi
CVPR, 2018 (Spotlight)
arxiv | supp | bibtex | code | talk | slides

We investigate how human-interpretable visual concepts (e.g., textures and objects) are encoded across the hidden units of a convolutional neural network (CNN) layer, as well as across CNN layers.
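
A minimal sketch of the core idea, with stand-in data and without the paper's exact protocol (e.g., activations are thresholded there): a concept is embedded as a learned weighting over a layer's filters whose per-location response predicts the concept's segmentation mask.

    import torch
    import torch.nn.functional as F

    K, H, W = 512, 14, 14                             # illustrative layer shape
    acts = torch.rand(100, K, H, W)                   # stand-in layer activations
    masks = (torch.rand(100, 1, H, W) > 0.9).float()  # stand-in concept masks

    w = torch.zeros(1, K, 1, 1, requires_grad=True)   # concept weights over filters
    opt = torch.optim.SGD([w], lr=1e-2)

    for _ in range(100):
        logits = (acts * w).sum(dim=1, keepdim=True)  # per-location concept logit
        loss = F.binary_cross_entropy_with_logits(logits, masks)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, w is the concept's embedding in "filter space".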

Interpretable Explanations of Black Box Algorithms by Meaningful Perturbation
Ruth Fong and Andrea Vedaldi
ICCV, 2017
arxiv | supp | bibtex | code | book chapter (extended) | chapter bibtex

We develop a theoretical framework for learning "explanations" of black-box functions like CNNs, as well as saliency methods for identifying "where" a computer vision algorithm is looking.
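
A minimal sketch of the deletion formulation, under simplified assumptions (the full objective's total-variation term, mask upsampling details, and hyperparameters are omitted; the model and class index are illustrative): learn a mask m in [0, 1] such that blurring the regions where m is small maximally drops the target class score, while an L1 term keeps the deleted region small.

    import torch
    import torch.nn.functional as F
    import torchvision

    model = torchvision.models.vgg16(pretrained=True).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    x = torch.randn(1, 3, 224, 224)  # stand-in preprocessed image
    x_blur = torchvision.transforms.GaussianBlur(21, sigma=10.0)(x)
    target = 207                     # illustrative class index

    m_logit = torch.zeros(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([m_logit], lr=0.1)

    for _ in range(300):
        m = F.interpolate(torch.sigmoid(m_logit), size=x.shape[-2:],
                          mode='bilinear', align_corners=False)
        x_pert = m * x + (1 - m) * x_blur           # m = 1 keeps, m = 0 blurs
        score = model(x_pert).softmax(dim=1)[0, target]
        loss = score + 0.05 * (1 - m).abs().mean()  # drop score, delete little
        opt.zero_grad()
        loss.backward()
        opt.step()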

Using Human Brain Activity to Guide Machine Learning
Ruth Fong, Walter Scheirer, and David Cox
Scientific Reports, 2018
arxiv | supp | Harvard thesis | bibtex

We introduce a biologically informed machine learning paradigm for object classification that biases models to better match the learned, internal representations of the visual cortex.
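
A minimal sketch of one instantiation, assuming a per-example weighted hinge loss (the features, labels, and brain-derived weights below are stand-ins): each training example's misclassification penalty is scaled by a weight derived from brain activity measurements.

    import torch

    feats = torch.randn(100, 512)                           # stand-in image features
    labels = (torch.randint(0, 2, (100,)) * 2 - 1).float()  # labels in {-1, +1}
    brain_w = torch.rand(100)                               # stand-in brain-derived weights

    w = torch.zeros(512, requires_grad=True)
    opt = torch.optim.SGD([w], lr=1e-2)
    for _ in range(100):
        margins = labels * (feats @ w)
        loss = (brain_w * torch.clamp(1 - margins, min=0)).mean()  # weighted hinge
        opt.zero_grad()
        loss.backward()
        opt.step()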

Modelling Blind Single-Channel Sound Separation Using Predictive Neural Networks
Ruth Fong (advised by Ben Willmore and Nicol Harper)
M.Sc. Dissertation #2

I developed an unsupervised learning paradigm for sound separation, using fully connected and recurrent neural networks to predict future cochleagram frames from past ones.
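
A minimal sketch of the predictive setup, with stand-in data and illustrative shapes (batch, time, frequency channels): a recurrent network is trained to predict the next cochleagram frame from past frames.

    import torch
    import torch.nn as nn

    cochleagram = torch.rand(8, 100, 64)  # stand-in cochleagram data
    rnn = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
    readout = nn.Linear(128, 64)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

    past, future = cochleagram[:, :-1], cochleagram[:, 1:]
    for _ in range(100):
        hidden, _ = rnn(past)
        pred = readout(hidden)            # predicted next frame at each time step
        loss = nn.functional.mse_loss(pred, future)
        opt.zero_grad()
        loss.backward()
        opt.step()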

Optimizing Deep Brain Stimulation to Dampen Tremor
Ruth Fong (advised by Rafal Bogacz)
M.Sc. Dissertation #1
Tutorial | Demo | MATLAB Rayleigh statistics toolbox

I developed a computational oscillator model of the tremor-dampening effects of phasic deep brain stimulation and evaluated it on experimental data.

Teaching

NJ Governor's School in Engineering & Technology: Mathematics in the World - Summer 2015

Harvard CS121: Intro to Theory of Computation - Fall 2014

Harvard CS20: Intro to Discrete Math - Spring 2014

Harvard CS50: Intro to CS I - Fall 2012


This ubiquitous CS researcher website template spawned from here.
Last updated: July 2021