Hi Everyone,

Tomorrow Jacob will be giving the group meeting talk. Please see below for the title and abstract of his talk.

Jennifer

-----------------------------------

Title: Uncovering and Exploiting Sparsity in Neural Networks

Abstract: Neural networks exhibit several kinds of sparsity. The most obvious is structural: relatively few neurons are synaptically linked to each other. I will begin by reviewing an experimental proposal to recover this sparse connectivity matrix by randomly expressing green fluorescent protein in different neurons and then applying compressed sensing to infer the connections. Moving beyond structural sparsity, I will discuss "functional sparsity": the notion that while a neural network is governed by a vast number of microscopic parameters, only a few combinations of those parameters may be relevant to its macroscopic behavior. In particular, I will propose that the recent idea of parameter space compression could be applied to neurons to find a minimal set of macroscopic characteristics required to describe a neural network, and I will illustrate how this might work in the context of a model for learning and memory known as spike-timing-dependent plasticity.
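
For anyone unfamiliar with the compressed-sensing step mentioned above, here is a minimal sketch (not taken from Jacob's talk) of the general recovery idea: a sparse connectivity vector is reconstructed from far fewer random linear measurements than unknowns by solving an L1-regularized least-squares problem. The random measurement matrix stands in for the random labeling patterns, and the solver used here (ISTA, iterative soft-thresholding) is just one simple choice; the actual proposal may formulate the problem differently.

    # Sketch: compressed-sensing recovery of a sparse connectivity vector.
    # Assumptions (not from the talk): noiseless linear measurements,
    # Gaussian measurement matrix, ISTA solver for the L1 problem.
    import numpy as np

    rng = np.random.default_rng(0)

    n_neurons = 200   # length of the unknown connectivity vector
    n_measure = 60    # number of random measurements (much less than n_neurons)
    n_links = 8       # number of nonzero synaptic weights (sparsity)

    # Ground-truth sparse connectivity for one postsynaptic neuron.
    x_true = np.zeros(n_neurons)
    support = rng.choice(n_neurons, size=n_links, replace=False)
    x_true[support] = rng.normal(size=n_links)

    # Random measurement matrix and the resulting measurements.
    A = rng.normal(size=(n_measure, n_neurons)) / np.sqrt(n_measure)
    y = A @ x_true

    # ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(n_neurons)
    for _ in range(5000):
        grad = A.T @ (A @ x - y)             # gradient of the least-squares term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

With far fewer measurements than unknowns, the L1 penalty is what makes recovery possible: it favors solutions with few nonzero entries, matching the assumed sparsity of the connectivity.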