Note the UNUSUAL DAY and LOCATION
APPLIED MATH SEMINAR
Speaker: David van Dijk, Yale
Date: Wednesday, October 10, 2018
Time: 3:45 p.m. Refreshments (AKW, 1st Floor Break Area)
4:00 p.m. Seminar (LOM 214)
Title: Understanding neural networks inside and out: designing constraints to enhance interpretability
Deep neural networks can learn meaningful representations of data. However, these representations are hard to interpret. In this talk I will present three ongoing projects in which I use specially designed constraints on the latent representations of neural networks to make those representations more interpretable.

First, I will present SAUCIE (Sparse Autoencoder for Clustering, Imputation, and Embedding), a framework for performing several data analysis tasks on a unified data representation. In SAUCIE we constrain the latent dimensions to be amenable to clustering, batch correction, imputation, and visualization.
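As a rough sketch of what a constrained latent representation can look like in code (not the speaker's implementation: the PyTorch framing, the layer sizes, and the L1 sparsity penalty standing in for SAUCIE's actual clustering, batch-correction, and imputation regularizers are all illustrative assumptions):

    # Sketch of a SAUCIE-style constrained autoencoder. The architecture and
    # the penalty are illustrative, not the published SAUCIE design.
    import torch
    import torch.nn as nn

    class ConstrainedAutoencoder(nn.Module):
        def __init__(self, n_features, n_latent=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 128), nn.ReLU(),
                nn.Linear(128, n_latent),       # low-dim bottleneck usable for visualization
            )
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 128), nn.ReLU(),
                nn.Linear(128, n_features),     # reconstruction; on noisy or missing
            )                                   # entries it acts as imputation

        def forward(self, x):
            z = self.encoder(x)
            return z, self.decoder(z)

    def loss_fn(x, z, x_hat, sparsity_weight=1e-3):
        recon = nn.functional.mse_loss(x_hat, x)   # stay faithful to the data
        sparsity = z.abs().mean()                  # L1 stand-in constraint that pushes
        return recon + sparsity_weight * sparsity  # codes toward cluster-friendly sparsity

    # Illustrative usage:
    # x = torch.randn(64, 100)
    # model = ConstrainedAutoencoder(100)
    # z, x_hat = model(x)
    # loss_fn(x, z, x_hat).backward()

The design point is that each analysis task is expressed as an extra term in a single loss, so every task is answered by one shared learned representation.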
Next, I will present a novel class of regularizations, termed Graph Spectral Regularizations, that impose graph structure on the otherwise unstructured activations of latent layers. By considering the activations as signals on this graph, we can use graph signal processing, and specifically graph filtering, to constrain them. I will show that, among other uses, this allows us to extract topological structure, such as clusters and progressions, from the data. Further, I will show that when the imposed graph is a 2D grid with a smoothing penalty, the latent encodings become image-like. The imposed grid structure also allows for the addition of convolutional layers, even when the input data is naturally unstructured.

Finally, in the third project, I propose a neural network framework, termed DyMoN (Dynamics Modeling Network), that learns stochastic dynamical processes. I will show that a DyMoN can learn the harmonic and chaotic behavior of single and double pendula, respectively, and can give insight into the dynamics of biological systems.
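To make the graph-spectral constraint above concrete: graph signal processing measures the smoothness of a signal a on a graph with the Laplacian quadratic form a^T L a, which is small exactly when a varies slowly across the graph's edges. The sketch below is an assumption-laden illustration, not the seminar's exact formulation (the grid size and the use of the combinatorial Laplacian are arbitrary choices); adding such a penalty to a network's training loss is what nudges the latent encodings toward image-like smoothness.

    # Sketch of a graph spectral (smoothness) regularizer on latent activations.
    # The imposed graph here is a small 2-D grid; details are illustrative.
    import numpy as np

    def grid_laplacian(rows, cols):
        """Combinatorial Laplacian L = D - A of a rows x cols grid graph."""
        n = rows * cols
        A = np.zeros((n, n))
        for r in range(rows):
            for c in range(cols):
                i = r * cols + c
                if c + 1 < cols:                      # edge to the right neighbor
                    A[i, i + 1] = A[i + 1, i] = 1.0
                if r + 1 < rows:                      # edge to the neighbor below
                    A[i, i + cols] = A[i + cols, i] = 1.0
        return np.diag(A.sum(axis=1)) - A

    def smoothness_penalty(activations, L):
        """Sum over the batch of a^T L a: near zero when each activation
        vector varies smoothly over the imposed grid."""
        return np.einsum('bi,ij,bj->', activations, L, activations)

    # Illustrative usage: impose an 8x8 grid on a 64-unit latent layer.
    L = grid_laplacian(8, 8)
    acts = np.random.randn(32, 64)                    # batch of 32 activation vectors
    penalty = smoothness_penalty(acts, L)             # add weight * penalty to the loss

Expanding the activations in the Laplacian's eigenbasis shows that this penalty weights each graph-frequency component by its eigenvalue, so minimizing it suppresses high graph frequencies; in effect it low-pass filters the latent layer, which is what makes the grid-structured encodings image-like.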