Ruth C. Fong

I am a teaching faculty member in the Department of Computer Science at Princeton University, where I teach introductory CS courses and conduct research in computer vision and machine learning with a focus on explainable AI. At Princeton, I primarily collaborate with Professor Olga Russakovsky and the Visual AI Lab.

I completed my PhD in the Visual Geometry Group at the University of Oxford, where I was advised by Andrea Vedaldi and funded by the Rhodes Trust and Open Philanthropy. Also at Oxford, I earned a Master's in Neuroscience, where I worked with Rafal Bogacz, Ben Willmore, and Nicol Harper. I received a Bachelor's in Computer Science from Harvard University, where I worked with David Cox and Walter Scheirer.

Email  /  CV  /  Bio  /  Google Scholar  /  GitHub

News
  • I am excited to announce that I am joining Princeton's CS department as a teaching faculty member starting July 2021.
  • My PhD thesis on "Understanding Convolutional Neural Networks" can be found here. For readers with less research experience, all chapters except Chapters 3-6 were written with accessibility in mind.
  • We have a new report out on "Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims." webpage | arXiv
  • We just released TorchRay, a PyTorch interpretability library. The initial release focuses on attribution and re-implements popular methods and benchmarks to encourage reproducible research. Resources: tutorial slides | colab notebook
  • Slides for VGG tutorial on interpretability that I gave recently can be found here.
Research

My research interests are in computer vision, machine learning, deep learning, and explainable AI. I am interested in developing novel techniques for understanding AI models post-hoc, designing new deep learning architectures that are interpretable by design, and introducing paradigms for finding and correcting existing failure points in AI models. Representative papers are highlighted; see Google Scholar for the most up-to-date list of papers.

On Compositions of Transformations in Contrastive Self-Supervised Learning
Mandela Patrick*, Yuki M. Asano*, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, and Andrea Vedaldi
ICCV, 2021 (to appear) 
code | bibtex

We give transformations the prominence they deserve by introducing a systematic framework for composing them in contrastive self-supervised learning. By learning which transformations to be (in)variant to, we achieve state-of-the-art video representation learning.
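
For readers unfamiliar with contrastive learning, here is a minimal, generic InfoNCE sketch of the kind of objective such frameworks build on; the encoder, the transformation choices, and the loss used in the paper differ, and all names below are illustrative.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """Generic InfoNCE loss: rows of z1 and z2 are embeddings of two
        transformed views of the same clip; matching rows are positives."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature      # (N, N) similarity matrix
        targets = torch.arange(z1.size(0))      # positives on the diagonal
        return F.cross_entropy(logits, targets)

    # Toy usage: stand-ins for embeddings of two transformed views.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    loss = info_nce(z1, z2)

The paper's framework decides which transformations should map to the same embedding (invariance) and which should remain distinguishable (variance), rather than treating all augmentations as invariances.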

Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
Iro Laina, Ruth Fong, and Andrea Vedaldi
NeurIPS, 2020
arxiv | supp | bibtex

We introduce two novel human evaluation metrics for quantifying the interpretability of clusters discovered via self-supervised methods. We also outline how to partially approximate one of the metrics using a group captioning model.

Debiasing Convolutional Neural Networks via Meta Orthogonalization
Kurtis Evan David, Qiang Liu, and Ruth Fong
NeurIPS Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability (AFCI), 2020
arxiv | supp | poster | bibtex

We introduce a novel paradigm for debiasing CNNs by encouraging salient concept vectors to be orthogonal to class vectors in the activation space of an intermediate CNN layer (e.g., orthogonalizing gender and oven concepts in conv5).
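
As a rough illustration of the idea only (not the paper's actual training procedure), an orthogonality penalty between a concept direction and per-class directions might look like the following; the vectors here are random stand-ins.

    import torch
    import torch.nn.functional as F

    def orthogonality_loss(concept_vec, class_vecs):
        """Penalize alignment between a concept direction (e.g., gender) and
        per-class directions in an intermediate activation space."""
        c = F.normalize(concept_vec, dim=0)    # (D,)
        W = F.normalize(class_vecs, dim=1)     # (C, D)
        cos = W @ c                            # cosine similarity per class
        return (cos ** 2).mean()               # zero when fully orthogonal

    concept = torch.randn(512, requires_grad=True)  # hypothetical concept direction
    classes = torch.randn(10, 512)                  # hypothetical class vectors
    loss = orthogonality_loss(concept, classes)     # added to the task loss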

Contextual Semantic Interpretability
Diego Marcos, Ruth Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, and Devis Tuia
ACCV, 2020
arxiv | supp | code | bibtex

We introduce an interpretable-by-design machine vision model that learns sparse groupings of interpretable concepts and demonstrate the utility of our novel architecture on scenicness prediction.
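
As a hedged sketch of the interpretable-by-design flavor (a generic concept bottleneck with a sparsity penalty, not the paper's exact architecture), the model forces predictions to pass through a small set of concept activations:

    import torch
    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        """Hypothetical sketch: map backbone features to named concept
        activations, keep them sparse, and predict from concepts alone."""
        def __init__(self, feat_dim=512, n_concepts=32):
            super().__init__()
            self.to_concepts = nn.Linear(feat_dim, n_concepts)
            self.predict = nn.Linear(n_concepts, 1)  # e.g., a scenicness score

        def forward(self, features):
            concepts = torch.relu(self.to_concepts(features))
            sparsity = concepts.abs().mean()  # add to the loss so few concepts fire
            return self.predict(concepts), sparsity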

There and Back Again: Revisiting Backpropagation Saliency Methods
Sylvestre-Alvise Rebuffi*, Ruth Fong*, Xu Ji*, and Andrea Vedaldi
CVPR, 2020
arxiv | code | bibtex

We outline a novel framework that unifies many backpropagation saliency methods. Furthermore, we introduce NormGrad, a saliency method that considers the spatial contribution of the gradients of convolutional weights. We also systematically study the effects of combining saliency maps at different layers. Finally, we introduce a class-sensitivity metric and a meta-learning inspired technique that can be applied to any saliency method to improve class sensitivity.
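
A minimal sketch of the NormGrad idea, assuming activations and gradients have already been captured at a layer (e.g., with forward and backward hooks); this corresponds to the virtual identity-layer variant, where the Frobenius norm of the activation-gradient outer product at each location factorizes into a product of norms:

    import torch

    def normgrad(acts, grads):
        """acts, grads: (N, C, H, W) activations and back-propagated
        gradients at a chosen layer; returns an (N, H, W) saliency map."""
        return acts.norm(dim=1) * grads.norm(dim=1)

    acts, grads = torch.randn(1, 256, 14, 14), torch.randn(1, 256, 14, 14)
    saliency = normgrad(acts, grads)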

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage*, Shahar Avin*, Jasmine Wang*, Haydn Belfield*, Gretchen Krueger*, … ,
Ruth Fong, et al.
arXiv, 2020
arxiv | project page | bibtex

This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The authors believe the implementation of such mechanisms can help make progress on one component of the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth Fong*, Mandela Patrick*, and Andrea Vedaldi
ICCV, 2019 (Oral)
arxiv | supp | poster | code (TorchRay) | bibtex

We introduce extremal perturbations, a novel attribution method that highlights "where" a model is "looking." We improve upon Fong and Vedaldi, 2017 by separating the regularization on the size and smoothness of a perturbation mask from the attribution objective of learning a mask that maximally affects a model's output; we also extend our work to intermediate channel representations.
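
A usage sketch based on TorchRay's example scripts (exact signatures may have shifted between releases):

    from torchray.attribution.extremal_perturbation import (
        extremal_perturbation, contrastive_reward)
    from torchray.benchmark import get_example_data, plot_example

    model, x, category_id, _ = get_example_data()
    masks, _ = extremal_perturbation(
        model, x, category_id,
        reward_func=contrastive_reward,
        areas=[0.12],  # constrain the mask to roughly 12% of the image
    )
    plot_example(x, masks, 'extremal perturbation', category_id)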

Occlusions for Effective Data Augmentation in Image Classification
Ruth Fong and Andrea Vedaldi
ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, 2019
paper | bibtex | code (coming soon)

We introduce a simple paradigm based on batch augmentation for leveraging input-level occlusions (both stochastic and saliency-based) to improve ImageNet image classification. We also demonstrate the necessity of batch augmentation and quantify the robustness of different CNN architectures to occlusion via ablation studies.
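
An illustrative sketch of the paradigm (the patch size and occlusion scheme here are placeholders): occlude a random patch per image and train on originals and occluded copies in the same batch.

    import torch

    def occlude(images, size=56):
        """Zero out one random square patch per image (stochastic occlusion)."""
        out = images.clone()
        n, _, h, w = images.shape
        for i in range(n):
            top = torch.randint(0, h - size + 1, (1,)).item()
            left = torch.randint(0, w - size + 1, (1,)).item()
            out[i, :, top:top + size, left:left + size] = 0
        return out

    images = torch.randn(16, 3, 224, 224)
    batch = torch.cat([images, occlude(images)], dim=0)  # duplicate labels likewise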

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth Fong and Andrea Vedaldi
CVPR, 2018 (Spotlight)
arxiv | supp | bibtex | code | talk | slides

We investigate how human-interpretable visual concepts (e.g., textures and objects) are encoded across the hidden units of a convolutional neural network (CNN) layer as well as across CNN layers.
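
A toy sketch of the Net2Vec-style objective, assuming filter activations and binary concept masks are already in hand (all tensors below are random placeholders): learn one weight per filter so that a weighted combination of filters segments the concept.

    import torch
    import torch.nn.functional as F

    acts = torch.rand(100, 256, 7, 7)                 # K=256 filter activations
    masks = (torch.rand(100, 1, 7, 7) > 0.9).float()  # concept masks (e.g., "dog")

    w = torch.zeros(1, 256, 1, 1, requires_grad=True) # the concept's filter weights
    opt = torch.optim.SGD([w], lr=1e-2)
    for _ in range(100):
        pred = torch.sigmoid((acts * w).sum(dim=1, keepdim=True))
        loss = F.binary_cross_entropy(pred, masks)    # per-pixel segmentation loss
        opt.zero_grad(); loss.backward(); opt.step()

The learned weight vector acts as an embedding of the concept over the layer's filters, quantifying how the concept is distributed across units.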

Interpretable Explanations of Black Box Algorithms by Meaningful Perturbation
Ruth Fong and Andrea Vedaldi
ICCV, 2017
arxiv | supp | bibtex | code | book chapter (extended) | chapter bibtex

We developed a theoretical framework for learning "explanations" of black box functions like CNNs as well as saliency methods for identifying "where" a computer vision algorithm is looking.
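
A minimal sketch of the mask-optimization idea, with a stand-in blur, a toy classifier, and illustrative hyperparameters (the paper's perturbation operator and regularizers differ):

    import torch
    import torch.nn.functional as F

    def meaningful_perturbation(model, x, target, steps=300, lam=0.05):
        """Learn a low-resolution mask in [0, 1] that blends x toward a
        blurred copy so as to delete the evidence for `target`, while an
        L1-style term keeps the deleted region small."""
        blurred = F.avg_pool2d(x, 11, stride=1, padding=5)  # stand-in for Gaussian blur
        m = torch.zeros(1, 1, 28, 28, requires_grad=True)
        opt = torch.optim.Adam([m], lr=0.1)
        for _ in range(steps):
            mask = torch.sigmoid(m)
            up = F.interpolate(mask, size=x.shape[2:], mode='bilinear',
                               align_corners=False)
            x_pert = up * blurred + (1 - up) * x            # mask=1 deletes evidence
            score = model(x_pert).softmax(dim=1)[0, target]
            loss = score + lam * mask.mean()                # drop score, keep mask small
            opt.zero_grad(); loss.backward(); opt.step()
        return torch.sigmoid(m).detach()

    # Toy usage with a stand-in classifier:
    model = torch.nn.Sequential(torch.nn.Conv2d(3, 10, 3, padding=1),
                                torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
    mask = meaningful_perturbation(model, torch.randn(1, 3, 224, 224), target=0, steps=10)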

Using Human Brain Activity to Guide Machine Learning
Ruth Fong, Walter Scheirer, and David Cox
Scientific Reports, 2018
arxiv | supp | Harvard thesis | bibtex

We introduce a biologically informed machine learning paradigm for object classification that biases models to better match the learned, internal representations of the visual cortex.
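
As a hypothetical sketch of this flavor of paradigm (the real weighting is derived from measured neural data; everything below is a placeholder), per-example weights might enter a hinge loss like so:

    import torch

    def weighted_hinge_loss(scores, labels, activity_weights):
        """Weight each example's hinge loss by a scalar derived from brain
        activity, so strongly represented examples shape the decision
        boundary more. labels are in {-1, +1}."""
        margins = (1 - labels * scores).clamp(min=0)
        return (activity_weights * margins).mean()

    scores = torch.randn(32)                             # classifier outputs
    labels = torch.randint(0, 2, (32,)).float() * 2 - 1
    weights = torch.rand(32)                             # stand-in activity weights
    loss = weighted_hinge_loss(scores, labels, weights)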

Modelling Blind Single Channel Sound Separation Using Predictive Neural Networks
Ruth Fong (advised by Ben Willmore and Nicol Harper)
M.Sc. Dissertation #2

I developed an unsupervised learning paradigm for sound separation using fully connected and recurrent neural networks to predict the future from past cochleagram data.
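
A toy sketch of the prediction setup with illustrative shapes (not the dissertation's exact architecture): a recurrent network predicts the next cochleagram frame from the past.

    import torch
    import torch.nn as nn

    class CochleagramPredictor(nn.Module):
        def __init__(self, n_freq=64, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(n_freq, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_freq)

        def forward(self, frames):        # (N, T, n_freq)
            h, _ = self.rnn(frames)
            return self.out(h)            # at step t, predict frame t+1

    model = CochleagramPredictor()
    frames = torch.rand(4, 100, 64)                       # random cochleagrams
    loss = nn.functional.mse_loss(model(frames[:, :-1]),  # predict from the past...
                                  frames[:, 1:])          # ...score against the future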

Optimizing Deep Brain Stimulation to Dampen Tremor
Ruth Fong (advised by Rafal Bogacz)
M.Sc. Dissertation #1
Tutorial | Demo | MATLAB Rayleigh statistics toolbox

I developed a computational oscillator model of the tremor-dampening effects of phasic deep brain stimulation and evaluated it on experimental data.
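
As a loose toy illustration only (a generic Kuramoto-style model, not the dissertation's), one can simulate coupled phase oscillators whose synchrony stands in for tremor and deliver stimulation only near a chosen phase of the population rhythm:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, dt = 200, 1.5, 1e-3
    stim_gain, stim_phase = 0.5, np.pi            # illustrative parameters
    theta = rng.uniform(0, 2 * np.pi, N)          # oscillator phases
    omega = rng.normal(2 * np.pi * 5, 0.5, N)     # ~5 Hz tremor band

    for _ in range(20000):
        z = np.exp(1j * theta).mean()             # mean field
        psi, r = np.angle(z), np.abs(z)           # its phase; r = synchrony ("tremor")
        drift = omega + K * r * np.sin(psi - theta)
        # Phasic stimulation: pulse only near the target mean-field phase.
        near = abs((psi - stim_phase + np.pi) % (2 * np.pi) - np.pi) < 0.05
        stim = stim_gain * np.sin(theta) if near else 0.0
        theta += (drift + stim) * dt

    print('final synchrony (tremor proxy):', abs(np.exp(1j * theta).mean()))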

Teaching

NJ Governor's School in Engineering & Technology: Mathematics in the World - Summer 2015

Harvard CS121: Intro to Theory of Computation - Fall 2014

Harvard CS20: Intro to Discrete Math - Spring 2014

Harvard CS50: Intro to CS I - Fall 2012


This ubiquitous CS researcher website template spawned from here.
Last updated: August 2021