Representational Explanation in Computational Neuroscience

Much of the recent literature in the philosophy of neuroscience takes explanations in neuroscience to describe causal mechanisms in the brain (Bechtel 2008; Craver 2007). Although this mechanistic approach has been applied to several subfields, such as neurophysiology, computational neuroscience has received little attention. In computational neuroscience, mathematical models and computer simulations are used to explain how the brain processes information (Dayan and Abbott 2001). An interesting feature of these explanations is that they sometimes abstract away from the causal mechanisms that give rise to a phenomenon. So even if there are mechanistic explanations to be found in computational neuroscience, the question arises whether there are also non-mechanistic explanations.
To answer this question, I focus on sparse coding explanations in computational neuroscience. A striking feature of the mammalian visual cortex is that it contains neurons called “simple cells” that are selectively tuned to edges of a particular size, orientation, and location in the visual field. Why do simple cells behave this way? Olshausen and Field (1996) answer this question using a sparse coding explanation. Sparse coding is a method of representing a data set (e.g. a set of images) in terms of a dictionary of features, such that any particular piece of data (e.g. a particular image) can be represented using a small subset of features from the dictionary. Olshausen and Field demonstrate that the optimal features for sparse coding of natural images are oriented, localized edges that strongly resemble simple cell receptive fields. So the behavior of simple cells can be explained in terms of the hypothesis that the visual cortex is using a sparse code for images.
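The core idea, that a stimulus is represented by activating only a small subset of dictionary features, can be made concrete with a minimal sketch. The snippet below infers sparse coefficients for a signal given a fixed dictionary using ISTA (iterative soft-thresholding on a least-squares-plus-L1 objective). This is only an illustration of sparse coding inference, not Olshausen and Field's actual learning procedure: they also learn the dictionary from natural images, whereas here the dictionary is random, and all variable names and parameter values (`lam`, `n_iter`, the 16×32 dictionary) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrinks coefficients toward zero,
    # setting small ones exactly to zero (this is what produces sparsity).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer coefficients a with x ≈ D @ a by ISTA, minimizing
    0.5 * ||x - D a||^2 + lam * ||a||_1 over a."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # step size from the Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)            # gradient of the reconstruction term
        a = soft_threshold(a - step * grad, step * lam)
    return a

# Toy setup: a random unit-norm dictionary and a signal built from
# only three active features.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(32)
a_true[[3, 17, 25]] = [1.5, -2.0, 1.0]
x = D @ a_true

a_hat = sparse_code(x, D, lam=0.05)
n_active = int(np.sum(np.abs(a_hat) > 0.1))  # count of active coefficients (small)
```

In Olshausen and Field's setting, `x` would be an image patch and the columns of the learned `D` turn out to be oriented, localized edge filters, which is precisely the resemblance to simple cell receptive fields that the explanation trades on.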
Note that the sparse coding explanation does not describe causal mechanisms in the brain. After all, this explanation need not say anything about the causal mechanism that the visual system uses to find the sparse code, or even about the causal role that the sparse representation plays in visual processing. For this and other reasons, the explanation is not easily assimilated to the mechanistic approach to explanations in neuroscience. Instead of describing causal mechanisms, the sparse coding explanation describes how simple cells represent information (namely, using a sparse representation of an image). On the basis of these considerations, I characterize the explanation as representational rather than mechanistic in character. Representational explanations make sense of a neural system's behavior by describing how that system represents information. As I will argue, recognizing representational explanation as a distinctive kind of explanation allows us to get clearer on the assumptions that must be made to get many explanations in computational neuroscience off the ground.
Bechtel, William. Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. Taylor & Francis, 2008.
Craver, Carl F. Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford University Press, 2007.
Dayan, Peter, and Laurence F. Abbott. Theoretical Neuroscience. Cambridge, MA: MIT Press, 2001.
Olshausen, Bruno A., and David J. Field. "Emergence of simple-cell receptive field properties by learning a sparse code for natural images." Nature 381.6583 (1996): 607–609.
Simon Fraser University