Cellular-resolution optogenetics reveals attenuation-by-suppression in visual cortical neurons

Authors

  • PK LaFosse
  • Z Zhou
  • HN Mulholland
  • JF O'Rawe
  • NG Friedman
  • VM Scott
  • Y Deng
  • MH Histed

Abstract

The relationship between neurons’ input and spiking output is central to brain computation. Studies in vitro and in anesthetized animals suggest nonlinearities emerge in cells’ input-output (activation) functions as network activity increases, yet how neurons transform inputs in vivo has been unclear. Here, we characterize cortical principal neurons’ activation functions in awake mice using two-photon optogenetics and imaging. We find that responses to a fixed optogenetic input are nearly unchanged as neurons are excited, reflecting a linear response regime above neurons’ resting point. In contrast, responses are dramatically attenuated by suppression. This attenuation is a powerful means to filter inputs arriving at suppressed cells, privileging other inputs arriving at excited neurons. In addition, the function we measure is best described not by a ReLU or power law (as in the stabilized supralinear network (SSN), Ahmadian et al., 2013), but by the Ricciardi function that arises in recurrent integrate-and-fire networks in the balanced state (Sanzeni et al., 2020). We are currently examining the theoretical implications for the SSN model. Control analyses based on within-cell variability further support this picture: while the slope (gain) of individual neurons’ IO functions may vary, all cells show nearly linear responses when activated. These data have two major implications. First, neural activation functions in vivo accord with the activation functions used in recent machine learning systems. Second, neurons’ IO functions can enhance sensory processing by attenuating some inputs while leaving others unchanged: attenuation-by-suppression.
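For readers unfamiliar with the candidate input-output functions named above, the following Python sketch compares them. It is illustrative only, not the authors' analysis code: the function names and parameter values are our assumptions, and the Ricciardi curve is the standard first-passage-time firing-rate expression for a leaky integrate-and-fire neuron driven by white-noise input (Ricciardi, 1977).

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def relu(x, gain=1.0):
    # Rectified-linear IO function: zero below threshold, linear above.
    return gain * np.maximum(x, 0.0)

def power_law(x, k=1.0, n=2.0):
    # Supralinear (power-law) IO function, as used in SSN-style models.
    return k * np.maximum(x, 0.0) ** n

def ricciardi(mu, sigma, tau_m=0.02, tau_rp=0.002, v_th=0.02, v_reset=0.01):
    # Mean firing rate (Hz) of a leaky integrate-and-fire neuron receiving
    # white-noise input with mean mu and standard deviation sigma (volts).
    # Membrane/refractory time constants and voltage bounds are illustrative
    # assumptions, not values from the study.
    lower = (v_reset - mu) / sigma
    upper = (v_th - mu) / sigma
    integral, _ = quad(lambda u: np.exp(u ** 2) * (1.0 + erf(u)), lower, upper)
    return 1.0 / (tau_rp + tau_m * np.sqrt(np.pi) * integral)

# Example: firing rate as mean input is swept from suppressing to exciting.
for mu in np.linspace(0.0, 0.03, 4):
    print(f"mu = {mu:.3f} V  ->  rate = {ricciardi(mu, sigma=0.005):.1f} Hz")

Near and above its resting point the Ricciardi curve is close to linear, while well below it the same added input produces a much smaller rate change, which is the qualitative shape the abstract describes as attenuation-by-suppression.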

Scientific Focus Area: Neuroscience
