Interactive Similarity Overlays

An interactive tool for understanding what neural networks consider similar and different.

Hover over different parts of the above images. This interactive visualization shows how similar (or different) a neural network considers other image patches to be to the current image patch (highlighted in yellow). Try hovering over animal features (e.g., noses, eyes, faces) and background regions.

This article is best viewed in Google Chrome.

As digital technology has evolved over the past few decades, the ways we interact with it have also evolved. We have moved from typing on a keyboard and viewing a terminal console, to using a mouse and graphical user interface, to employing a variety of touchscreen gestures and voice commands. However, despite the rapid progress in deep learning over the past few years, the ways we interact with research artifacts have remained largely unchanged: most visualizations used in research are non-responsive plots, images, and videos.

In this work, we formalize interactive similarity overlays — an interactive visualization that highlights how a convolutional neural network (CNN) "sees" different image patches as similar or different. Our method builds on prior work on interactive visualizations for understanding CNNs and on self-similarity image descriptors (see existing surveys of visualizations for machine learning and deep learning). A number of prior works use interactivity to navigate a visual interface, focus on visualizing a single model or dataset deeply for pedagogical purposes, treat a model as a black box by visualizing model inputs and outputs (not internal components), and/or explore other kinds of models or feature representations (e.g., GANs, RNNs). In contrast, our work is an interactive visualization that can be used to explain the internal representation of any CNN model. We also demonstrate how similarity overlays can be combined with other visualization techniques, such as non-negative matrix factorization and interactive charts, to more richly explore the learned representations of CNNs. We design our similarity overlays so that they can be easily extended or dropped into a machine learning researcher's existing workflow. To that end, we release a lightweight package (github.com/ruthcfong/interactive_overlay) that abstracts away the web development aspects of the visualization; with it, researchers can easily generate similarity overlays for any CNN from a Python interface (e.g., TensorFlow, PyTorch).

Interactive similarity overlays allow a user to hover over an image patch and visualize how similar (or different) other image patches are in a CNN representation (see right figure). More precisely, let $s(\mathbf{z}_1, \mathbf{z}_2): \mathbb{R}^{D} \times \mathbb{R}^{D} \to \mathbb{R}$ be a similarity function and let $f_l(\mathbf{x}): \mathbb{R}^{3 \times H \times W} \to \mathbb{R}^{D_l \times H_l \times W_l}$ be a function that takes an input image and returns a 3D tensor (i.e., a CNN truncated at layer $l$). Unless otherwise stated, we use the cosine similarity function, $s(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \lVert \mathbf{b} \rVert}$, and GoogLeNet's mixed4d layer as $f_l(\mathbf{x})$. We chose cosine similarity because it is the normalized dot product of two vectors and thus quantifies the angle between them (i.e., it captures their directional similarity). Now, let $\mathbf{z} = f_l(\mathbf{x})$ and let $\mathbf{z}_{i,j}$ denote the activation at the $(i,j)$-th spatial location in $\mathbf{z}$. Then, we visualize the similarity between a given spatial location $(i,j)$ and every other location $(u,v)$, which is given by $s(\mathbf{z}_{i,j}, \mathbf{z}_{u,v})$. This yields a simple and intuitive visualization that allows for easy exploration of different phenomena, which we explore in the rest of this article.
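To make this concrete, the snippet below is a minimal sketch (not the released package's API) of the core computation: given an activation tensor from a forward pass, it returns the cosine-similarity map for one highlighted spatial location. The function name and example tensor shape are illustrative.

```python
import torch
import torch.nn.functional as F

def similarity_map(acts: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """Cosine similarity between the activation z_{i,j} and every other location.

    acts: activation tensor of shape (D_l, H_l, W_l) from a CNN layer.
    Returns an (H_l, W_l) map with values in [-1, 1].
    """
    d, h, w = acts.shape
    z = F.normalize(acts.reshape(d, h * w), dim=0)  # unit-norm activation vectors (columns)
    ref = z[:, i * w + j]                           # the highlighted patch (i, j)
    return (ref @ z).reshape(h, w)                  # dot products = cosine similarities

# Example usage with a random tensor standing in for f_l(x)
# (mixed4d on a 224x224 input has roughly this shape):
acts = torch.randn(528, 14, 14)
overlay = similarity_map(acts, 7, 7)
```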

With this technique, we can compare similarities of spatial locations across images, as shown in the splash figure above. Within this set of images, we notice that simple background scenes (e.g., those for the dog and cat, flowers, and bird images) are similarly activated despite being visually different. We also observe that a few features, such as eyes, are common across object classes (i.e., different species). Taken together, these observations suggest that CNNs are capable of learning broad and flexible semantic concepts.

Hover over image. The intensity of other patches (lighter denotes more similar) captures the similarity to the highlighted image patch (in yellow).

This multi-image example also highlights the main benefit of interactive similarity overlays: they allow users to digest a large amount of similarity data in an interpretable way. For $N$ images, the full set of pairwise similarities between all image patches has size $\mathcal{O}(N^2 \times H_{l}^2 \times W_{l}^2)$. By displaying similarities interactively, we show only $\mathcal{O}(N \times H_l \times W_l)$ similarity scores at any given moment, thereby making the data easier to digest.

In the rest of the article, we demonstrate the utility of our interactive similarity overlays in several case studies.

Exploring Different Layers' Representations

First, we consider how interactive similarity overlays help us explore the representations of different CNN layers.

Most prior works have explored layer representations from two perspectives: (1) by exploring how the representation in a layer corresponds with different kinds of semantic concepts (from low-level concepts like colors and textures to high-level concepts like objects and scenes), and (2) by visualizing the preferred stimuli of a neuron in specific layers (e.g., activation maximization) or the stimuli that best correspond with a reference activation tensor (e.g., representation inversion, caricatures). The former approach typically requires access to manual annotations that define semantic concepts; these are used to "test" a CNN representation and are limited by the breadth and quality of the annotations. The latter approach produces a static visualization that often trades off how interpretable a visualization is with how accurately it explains a CNN. In the context of feature visualizations, interpretability refers to how easily a visualization can be understood (e.g., some feature visualizations are highly unnatural and hard to reason about), while fidelity refers to how accurately a visualization explains a given model component, such as a neuron or an activation tensor (some feature visualizations rely on strong priors to be more interpretable at the cost of fidelity). Prior work discusses this tradeoff in more detail. In contrast, our interactive similarity visualization does not require manual annotations. Furthermore, thanks to its interactive nature, our visualization accurately renders information about activation similarities in an interpretable interface.

To compare representations of the same input image at different layers, we compute similarity scores within each layer and synchronize the spatial location being explained across layers (i.e., the highlighted image patch in yellow). Using this synchronization trick, we first explore the representations of layers with different spatial resolutions. Consistent with some prior work, we find that earlier layers seem to capture lower-level features like edges, while later layers tend to highlight higher-level, semantic features like objects. We also notice that the representations of later layers appear smoother, in the sense that neighboring spatial locations receive similar similarity scores.

Layers with different spatial resolutions.
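The synchronization step itself is simple to sketch. Assuming activations for several layers have already been collected in a dictionary, and reusing the illustrative similarity_map function from the earlier sketch, the highlighted image position (given as fractions of the image size) is mapped into each layer's grid before computing that layer's similarity map. The function below is an illustration, not the released package's implementation.

```python
def synced_similarity_maps(acts_by_layer, y_frac, x_frac):
    """Compute one similarity map per layer for the same highlighted image position.

    acts_by_layer: dict mapping layer name -> activation tensor (D_l, H_l, W_l).
    y_frac, x_frac: position of the highlighted patch as fractions in [0, 1).
    """
    maps = {}
    for name, acts in acts_by_layer.items():
        _, h, w = acts.shape
        i = min(int(y_frac * h), h - 1)  # row index in this layer's grid
        j = min(int(x_frac * w), w - 1)  # column index in this layer's grid
        maps[name] = similarity_map(acts, i, j)
    return maps
```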

We also explore the representations of layers with the same spatial resolution, that is, layers that output tensors with the same spatial dimensions $H_l$ and $W_l$. In this exploration, we confirm our intuition that layer depth affects both how smooth a representation is and how semantic it is.

Mixed4 Layers.

Similarities Across Images

We can also use our visualization to explore representational similarities across images of the same class. One interesting application is to compare correspondences between natural images and generated ones. To that end, we compute similarity scores across several images, including ones generated to be classified as the same object class. These generated images were constrained by brush strokes and inks to appear like modern art; see the original "Perceptual Engines" project for more details about the generation process (the prints are available for purchase here). We observe that there seem to be a few correspondences (e.g., awareness of spatial position on an object, such as the handle vs. the nozzle of a blow dryer).

To enhance our visualization and suggest a few corresponding features, we combined our similarity overlays with another visualization tool: matrix factorization, which groups instances into a small number of components that best explain the variation in a set. In the following example, we use non-negative matrix factorization to group activation vectors at different spatial locations (and in different images) into discrete groups (see "Implementation Details" at the end of the article for more information about how this figure was generated). By combining these two visualization techniques, we glean more information about the CNN representations being visualized than if we were to use either technique alone. Now we are able to notice and confirm more interesting correspondences (e.g., the correspondence to abstract strokes in a generated image, such as free black strokes corresponding to cords in the blow dryer example).

Comparison with generated images and matrix factors.
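As a rough sketch of this kind of grouping (not the exact procedure behind the figure; see "Implementation Details" for that), one can stack the spatial activation vectors from all images into a single matrix, run non-negative matrix factorization on it, and assign each patch to its strongest component. scikit-learn's NMF is used here purely for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_groups(acts_list, n_components=4):
    """Group spatial activation vectors from several images into NMF components.

    acts_list: list of activation arrays, each of shape (D_l, H_l, W_l)
               (e.g., 10 real images plus 1 generated image for one class).
    Returns a list of (H_l, W_l) label maps assigning each patch to a component.
    """
    shapes = [a.shape[1:] for a in acts_list]
    # Stack all spatial locations as rows: (sum_i H_i*W_i, D_l).
    flat = np.concatenate([a.reshape(a.shape[0], -1).T for a in acts_list], axis=0)
    flat = np.maximum(flat, 0)                       # NMF requires non-negative input
    weights = NMF(n_components=n_components, max_iter=400).fit_transform(flat)
    labels = weights.argmax(axis=1)                  # hard-assign each patch to a group
    # Split the flat label vector back into per-image maps.
    maps, start = [], 0
    for (h, w) in shapes:
        maps.append(labels[start:start + h * w].reshape(h, w))
        start += h * w
    return maps
```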

Sensitivity to Geometric Transformations

In a final example, we demonstrate how our interactive similarity overlays help us explore how sensitive or invariant a representation is to geometric transformations (e.g., rotation, scale). By systematically transforming an image (e.g., rotating it in fixed increments) and visualizing similarity scores across the transformed images, we can visually inspect the impact of a given transformation. We can also combine our overlays with an interactive chart visualization.

In the rotation example, we show a line chart that displays the similarity scores of the highlighted image patch as well as the corresponding patch in the other transformed images. By leveraging both visualizations, we can quickly notice that more discriminative, oriented features (e.g., an animal nose) are more sensitive to rotation than texture-based background features (e.g., grass). We also discover rotational sensitivity at image borders; this is likely an artifact of zero padding at the boundaries. Highlight a boundary pixel and move inward towards the center of the image: the ripple effect in similarity scores shown in the line chart becomes smaller as your cursor approaches the image center. Due to large receptive fields, patches between the image border and center may still be partially affected by boundary effects.

This example illustrates the utility of combining visualizations that operate at different levels of abstraction: a qualitative view of similarities across all image patches alongside a quantitative view of a subset of relevant patches. Prior work discusses the benefits of combining multiple layers of abstraction in more detail.

Rotate.
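One way to compute data for such a rotation curve is sketched below; this is an illustration, not the code behind the figure. The model_to_layer callable (an assumed way of exposing the truncated network $f_l$) and the coordinate-mapping conventions may need adjusting for a particular setup.

```python
import math
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def rotation_similarity_curve(model_to_layer, img, i, j, angles=tuple(range(0, 360, 30))):
    """Similarity of a reference patch to its geometric counterpart in rotated copies.

    model_to_layer: callable mapping an image batch (1, 3, H, W) to activations
                    (1, D_l, H_l, W_l), i.e., the truncated network f_l.
    img: image tensor of shape (3, H, W); (i, j) indexes the reference patch.
    """
    with torch.no_grad():
        ref_acts = model_to_layer(img[None])[0]       # (D_l, H_l, W_l)
    _, h, w = ref_acts.shape
    ref = F.normalize(ref_acts[:, i, j], dim=0)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    scores = []
    for angle in angles:
        with torch.no_grad():
            acts = model_to_layer(TF.rotate(img, float(angle))[None])[0]
        # Map (i, j) to its location in the rotated grid; the sign convention
        # may need flipping depending on the rotation function's direction.
        t = math.radians(angle)
        dy, dx = i - cy, j - cx
        ri = int(round(cy + dy * math.cos(t) - dx * math.sin(t)))
        rj = int(round(cx + dy * math.sin(t) + dx * math.cos(t)))
        ri, rj = max(0, min(h - 1, ri)), max(0, min(w - 1, rj))
        scores.append(float(ref @ F.normalize(acts[:, ri, rj], dim=0)))
    return list(angles), scores
```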

In the scale example, we observe that the spatial relationship of similarities between different features is preserved across scales (e.g., moving the mouse around in one image generates similar "movements" in the other images). However, by plotting the similarity scores of the highlighted feature across scales, we see more clearly and quantitatively that similarity scores are somewhat sensitive to large scale changes. This seems to be true for both discriminative and background features, though texture-based, background features may be less sensitive (e.g., background grass vs. a cat's nose).

Scale.
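A similar sketch works for the scale sweep: resize the image to a few scales, recompute activations (the truncated network is fully convolutional, so the spatial grid simply grows or shrinks), and track the patch at the same fractional position. Again, model_to_layer is an assumed interface rather than the released package's API.

```python
import torch
import torch.nn.functional as F

def scale_similarity_curve(model_to_layer, img, y_frac, x_frac,
                           scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Similarity of a reference patch across rescaled copies of the same image.

    model_to_layer: callable mapping (1, 3, H, W) -> (1, D_l, H_l, W_l).
    y_frac, x_frac: fractional position in [0, 1) of the tracked feature.
    """
    def patch_vector(image):
        with torch.no_grad():
            acts = model_to_layer(image[None])[0]     # (D_l, H_l, W_l)
        _, h, w = acts.shape
        i, j = min(int(y_frac * h), h - 1), min(int(x_frac * w), w - 1)
        return F.normalize(acts[:, i, j], dim=0)

    ref = patch_vector(img)                           # reference at the original scale
    _, h, w = img.shape
    scores = []
    for s in scales:
        scaled = F.interpolate(img[None], size=(int(h * s), int(w * s)),
                               mode='bilinear', align_corners=False)[0]
        scores.append(float(ref @ patch_vector(scaled)))
    return list(scales), scores
```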

Conclusion

In summary, we introduce a simple interactive visualization, interactive similarity overlays, which allows a user to investigate the representational similarity of various image patches within and across images. Thanks to its interactive nature, our visualization is both interpretable and faithful to the model being explained. We highlighted how our visualization enables the exploration of a few CNN properties, as well as how it can be thoughtfully combined with other techniques to yield further insights.

With the recent movement towards supporting deep learning in JavaScript and machine learning research articles with interactive figures, we eagerly anticipate further work on interactive visualizations for understanding CNNs that can be easily combined with existing tools. Currently, there are a few open-source packages (e.g., TensorFlow's Lucid and PyTorch's Captum) that implement several CNN visualization methods and support combining techniques. To that end, we also release a small package (github.com/ruthcfong/interactive_overlay) that allows anyone to easily use our interactive similarity overlays without needing to know JavaScript. We hope more work is done to empower machine learning practitioners and researchers to easily explore the behavior of their models.

Additional Resources

Code: ruthcfong/interactive_overlay
Open-source implementation of our techniques on GitHub.

Notebooks:
Direct links to ipynb notebooks corresponding to the respective sections of this paper.

Further Notebook:
Direct link to an ipynb notebook demonstrating how to use our interactive similarity overlays in other applications using PyTorch.

Acknowledgments

We are deeply grateful to the following people for helpful conversations: Tom White, David Bau, Been Kim, Xu Ji, Sam Albanie, Mandela Patrick, Ludwig Schubert, Gabriel Goh, and Nick Cammarata. We are also thankful to the discussion groups organized by Xu Ji within the VGG group and organized by Chris Olah within the Distill Slack workspace. We are also particularly grateful to Tom White for his permission to use his "Perceptual Engines" generated images and to Been Kim for open-sourcing the Gestalt dataset for this project. On the Distill side, we are especially grateful to Ludwig Schubert for his JavaScript debugging expertise. Ruth Fong is also grateful for the support of the Open Philanthropy Project.

Lastly, this work was made possible by many open source tools, for which we are grateful. In particular, all of our experiments were based on TensorFlow, PyTorch, and Lucid. We built our interactive visualizations using Svelte and Chart.js. We make our results reproducible using Colab notebooks.

Author Contributions

Research: Alex came up with the initial idea of cosine similarity overlays. Ruth developed its applications to interrogate different layers, geometric transformations, etc. Andrea and Chris suggested helpful research directions; in particular, Chris suggested combining similarity overlays with other visualization techniques.

Writing & Diagrams: The text was initially drafted by Ruth and refined by the other authors. The interactive diagrams were designed by all authors. The final notebooks were primarily created by Ruth, based on earlier code and notebooks by Alex and Chris.

Implementation Details

General: Unless otherwise stated, we use the cosine similarity function, $s(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \lVert \mathbf{b} \rVert}$, to compute overlays and use GoogLeNet's mixed4d layer as $f_l(\mathbf{x})$.
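For readers who want to reproduce $f_l(\mathbf{x})$ in PyTorch, one possibility (a sketch, not the notebooks' exact code) is to register a forward hook on torchvision's GoogLeNet; we assume its inception4d block corresponds to the mixed4d layer named above, and the input is expected to be preprocessed with the usual ImageNet normalization.

```python
import torch
import torchvision

def googlenet_mixed4d():
    """Return a callable mapping an image batch to mixed4d-like activations.

    Assumes torchvision's `inception4d` block corresponds to the mixed4d layer
    used in this article; requires torchvision >= 0.13 for the `weights` argument.
    """
    model = torchvision.models.googlenet(weights="IMAGENET1K_V1").eval()
    captured = {}

    def hook(_module, _inputs, output):
        captured["acts"] = output       # (N, D_l, H_l, W_l)

    model.inception4d.register_forward_hook(hook)

    def f_l(x):                         # x: ImageNet-normalized batch (N, 3, H, W)
        with torch.no_grad():
            model(x)
        return captured["acts"]

    return f_l

# Example: mixed4d-like activations for a random 224x224 "image".
f_l = googlenet_mixed4d()
acts = f_l(torch.randn(1, 3, 224, 224))   # roughly (1, 528, 14, 14)
```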

Non-negative matrix factorization (NNMF): For each object class (e.g., blow dryer), 10 (out of 50) real images from the ImageNet validation set for that class were selected. NNMF was computed on the set of 11 images (10 real images and 1 generated image). By default, 4 components were computed; upon visual inspection, this was reduced to 3 for a few object classes. We found that 10 real images was a suitable number for computing salient components, and we chose images that were somewhat visually similar to one another (e.g., simple backgrounds for the blow dryer example). We then selected 5 of the 10 real images to be shown in the figure.

Citation

For attribution in academic contexts, please cite this work as

Fong et al., "Interactive Similarity Overlays", VISxAI 2021. Retrieved from https://www.ruthfong.com/projects/interactive_overlay/

BibTeX citation

@InProceedings{fong_interactive_2021,
  author={Fong, Ruth and Mordvintsev, Alexander and Vedaldi, Andrea and Olah, Chris},
  title={Interactive Similarity Overlays},
  booktitle={VISxAI},
  year={2021},
  url={https://www.ruthfong.com/projects/interactive_overlay/},
}