Ruth C. Fong

I am a second-year Ph.D. student in the Visual Geometry Group (VGG) at the University of Oxford, where I am advised by Professor Andrea Vedaldi and funded by the Rhodes Trust. Also at Oxford, I completed an M.Sc. in Neuroscience, working with Professor Rafal Bogacz as well as Dr. Ben Willmore and Dr. Nicol Harper in the Oxford Auditory Neuroscience Group (ANG).

I received a bachelor's degree in Computer Science from Harvard University, where I worked with Professors David Cox and Walter Scheirer. I also spent lovely summers at Microsoft, Apple, and D.E. Shaw.

Email  /  CV  /  Google Scholar  /  GitHub


I'm interested in computer vision, machine learning, and computational neuroscience. Much of my current research is about explaining what deep neural networks are actually learning (e.g., where does an image classifier "look" in an image for evidence of its predicted class? how are human-interpretable concepts encoded across CNN filters? how do CNN representations compare to those learned by mammalian visual systems?). Representative papers are highlighted.

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth Fong and Andrea Vedaldi
CVPR, 2018 (to appear)
supp / bibtex / code

Investigating how human-interpretable visual concepts (e.g., textures and objects) are encoded across the hidden units of a convolutional neural network (CNN) layer as well as across CNN layers.
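
The core idea can be sketched in a few lines: learn one weight per filter so that a weighted combination of filter activation maps predicts a concept's segmentation mask. Below is a minimal PyTorch sketch, not the released code; loader, acts, and masks are hypothetical stand-ins for precomputed activations and binary concept masks.

    import torch
    import torch.nn.functional as F

    K = 256                                  # filters in the probed layer (illustrative)
    w = torch.zeros(K, requires_grad=True)   # one learned weight per filter
    opt = torch.optim.SGD([w], lr=1e-2)

    for acts, masks in loader:               # acts: N x K x H x W, masks: N x H x W
        logits = torch.einsum('nkhw,k->nhw', acts, w)  # combine activation maps
        loss = F.binary_cross_entropy_with_logits(logits, masks.float())
        opt.zero_grad()
        loss.backward()
        opt.step()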

Interpretable Explanations of Black Box Algorithms by Meaningful Perturbation
Ruth Fong and Andrea Vedaldi
ICCV, 2017
arxiv / supp / bibtex / code

We developed a theoretical framework for learning "explanations" of black-box functions like CNNs, as well as saliency methods for identifying "where" a computer vision algorithm is looking.
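
In its simplest form, the method optimizes a mask that blurs as little of the image as possible while driving down the classifier's score for the target class. A minimal sketch, assuming a trained classifier model, an image tensor img (1 x 3 x 224 x 224), its blurred copy blurred, and a target class index c (all names and constants illustrative):

    import torch

    mask = torch.full((1, 1, 224, 224), 0.5, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=0.1)

    for _ in range(300):
        perturbed = mask * img + (1 - mask) * blurred  # mask = 1 keeps pixels
        score = model(perturbed).softmax(dim=1)[0, c]  # target class probability
        area = (1 - mask).abs().mean()                 # penalize how much is deleted
        loss = score + 0.05 * area                     # (the paper adds regularizers)
        opt.zero_grad()
        loss.backward()
        opt.step()
        mask.data.clamp_(0, 1)                         # keep the mask in [0, 1]

Regions where the learned mask goes to zero are the ones the classifier relied on for its prediction.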

Using Human Brain Activity to Guide Machine Learning
Ruth Fong, Walter Scheirer, David Cox
Scientific Reports, 2017 (in press)
arxiv / supp / Harvard thesis / bibtex / code (coming soon)

We introduce a biologically informed machine learning paradigm for object classification that biases models to better match the internal representations learned by the visual cortex.
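
One simple way to realize such a bias, sketched here under the assumption that each training example carries a nonnegative weight derived from recorded brain activity, is to weight each example's contribution to the loss:

    import numpy as np

    def weighted_hinge_loss(scores, labels, weights):
        # scores: real-valued classifier outputs; labels: +1/-1;
        # weights: one nonnegative weight per example (assumed here to come
        # from a brain-activity-derived measure of each example's difficulty)
        margins = np.maximum(0.0, 1.0 - labels * scores)
        return float(np.mean(weights * margins))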

Modelling Blind Single-Channel Sound Separation Using Predictive Neural Networks
Ruth Fong (advised by Ben Willmore and Nicol Harper)
M.Sc. Dissertation #2

I developed an unsupervised learning paradigm for sound separation that uses fully connected and recurrent neural networks to predict future cochleagram frames from past ones.
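
The training signal comes from the data itself: each network sees a window of past frames and is asked to predict the next one. A minimal PyTorch sketch of the fully connected variant (sizes and names are illustrative):

    import torch
    import torch.nn as nn

    F_BINS, CONTEXT = 64, 10                    # frequency bins, past frames used
    net = nn.Sequential(nn.Linear(CONTEXT * F_BINS, 256), nn.ReLU(),
                        nn.Linear(256, F_BINS))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    coch = torch.rand(1000, F_BINS)             # stand-in for a real T x F cochleagram
    for t in range(CONTEXT, coch.shape[0]):
        past = coch[t - CONTEXT:t].reshape(1, -1)                # flatten the window
        loss = nn.functional.mse_loss(net(past), coch[t:t + 1])  # predict next frame
        opt.zero_grad()
        loss.backward()
        opt.step()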

Optimizing Deep Brain Stimulation to Dampen Tremor
Ruth Fong (advised by Rafal Bogacz)
M.Sc. Dissertation #1
Tutorial / Demo / MATLAB Rayleigh statistics toolbox

I developed a computational oscillator model of the tremor-dampening effects of phasic deep brain stimulation and validated it against experimental data.

Course Projects

Ensuring Privacy for Genomics Data with K Disease Categories
Ruth Fong and Louis Li, 2014

For CS 227r: Differential Privacy, we generalized differentially private data-release mechanisms to genomics data linked to K possible disease categories.

Human-Animal Look-Alikes: Exploring Measures of Similarity Across Object Categories
Ruth Fong, 2013

For CS 283: Computer Vision, I developed a matching paradigm for pairing similar-looking human and animal faces.


Teaching

NJ Governor's School in Engineering & Technology: Mathematics in the World - Summer 2015

Harvard CS121: Intro to Theory of Computation - Fall 2014

Harvard CS20: Intro to Discrete Math - Spring 2014

Harvard CS50: Intro to CS I - Fall 2012

This website is based on the ubiquitous CS researcher template from here.
Last updated: February 2018