High-level description

The brain receives sensory information from a stimulus and generates choice behavior based on that information. In the context of such sensory-based decision-making, the decision signal is likely computed in real time, without averaging neural activity across time or across trials. In this project, we attempt to capture the brain's readout of the upcoming behavioral choice (about the match vs. non-match of stimuli) from neural responses in the primary visual cortex. We design a linear model that decodes the choice from spike trains of neural populations in real time (single-trial and time-dependent).
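A minimal sketch of such a time-dependent linear readout, in NumPy. The array shapes, the Poisson surrogate data, and the cumulative readout rule are illustrative assumptions for this sketch, not the fitted model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: spike counts of shape (trials, neurons, time bins).
n_trials, n_neurons, n_bins = 200, 50, 100
spikes = rng.poisson(2.0, size=(n_trials, n_neurons, n_bins)).astype(float)

# One read-out weight per neuron (here random, standing in for learned
# weights). The decoded signal is a weighted sum of spike counts,
# evaluated at every time bin of every single trial -- no averaging
# across time or across trials.
w = rng.normal(size=n_neurons)
signal = np.einsum('n,tnb->tb', w, spikes)   # shape: (trials, time bins)

# A running (cumulative) readout turns the signal into a momentary
# choice prediction at each point in the trial.
cumulative = np.cumsum(signal, axis=1)
predicted_choice = (cumulative[:, -1] > np.median(cumulative[:, -1])).astype(int)
```

The key property this illustrates is that the decoded signal exists per trial and per time bin, so the choice can in principle be read out as the trial unfolds.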

Our decoding model predicts the choice of the animal ("same" vs. "different") from spiking responses in the primary visual cortex (V1). Intelligent systems presumably generalize their learning to tasks for which they have not been explicitly trained. We suggest that, to solve this simple binary task about the matching/non-matching of stimuli, the primary visual cortex learns only a single set of weights that is informative about both the stimulus class and the choice of the animal. Thus, learning read-out weights in correct trials, where there is information about both the stimulus and the choice, generalizes to trials that differ only in the choice but not in the stimulus. Moreover, we find that both the temporal structure and the across-neuron structure of neural responses are important for predicting the choice: the decoded choice signal strongly decreases, or collapses to zero, when these types of information are removed from the data. Finally, we find that subnetworks of neurons that are useful for the decoder are also more strongly correlated, and that noise correlations are stronger between neurons with similar decoding selectivity. Following this evidence, we suggest that noise correlations between neurons with similar selectivity strengthen the decision signal in a time-dependent population code.
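The two structure-removal controls mentioned above can be sketched as data shuffles. This is a generic illustration of the idea, assuming spike counts in a (trials, neurons, time bins) array; the exact shuffling procedure used in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate data: (trials, neurons, time bins) spike counts.
n_trials, n_neurons, n_bins = 100, 20, 50
spikes = rng.poisson(1.5, size=(n_trials, n_neurons, n_bins)).astype(float)

# Control 1: remove temporal structure by permuting time bins
# independently for each trial and each neuron. Per-trial spike counts
# are preserved, but the timing of spikes is destroyed.
temp_shuffled = spikes.copy()
for t in range(n_trials):
    for n in range(n_neurons):
        temp_shuffled[t, n] = rng.permutation(temp_shuffled[t, n])

# Control 2: remove across-neuron structure (noise correlations) by
# permuting trial order independently for each neuron. Each neuron's
# single-trial statistics are preserved, but trial-by-trial covariation
# between neurons is destroyed.
corr_shuffled = spikes.copy()
for n in range(n_neurons):
    corr_shuffled[:, n, :] = corr_shuffled[rng.permutation(n_trials), n, :]

# Trial-summed counts per neuron, before and after the trial shuffle.
counts = spikes.sum(axis=2)
counts_shuf = corr_shuffled.sum(axis=2)
```

Running the same decoder on `temp_shuffled` and `corr_shuffled` data, and comparing to the original, quantifies how much each type of structure contributes to the decoded choice signal.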

Link to bioRxiv pre-print

Link to GitHub code repository