Authors: Veronika Koren, Ariana R. Andrei, Ming Hu, Valentin Dragoi, Klaus Obermayer

High-level description

Oftentimes, animals make decisions based on observation of sensory evidence. In such sensory-based decision-making, the decision signal is likely computed over time as the sensory evidence unfolds, without averaging neural activity across time or across trials. While the computation of the decision signal presumably takes place in real time, the strength of synaptic connections (synaptic weights) between a neural network and readout neurons is shaped over longer time epochs, while the animal learns the task. Here, we separate these two processes and propose a decoding model in which the neural network learns synaptic weights from patterns of spike counts. These decoding synaptic weights are then used to compute a time-dependent, single-trial decision signal about the upcoming correct choice of the animal.
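The two-stage scheme described above can be sketched in code. The following is a minimal illustration, not the paper's actual pipeline: it simulates spike trains for two conditions, learns readout weights from trial spike counts with a simple least-squares fit (the paper's learning rule may differ), and then projects exponentially filtered single-trial spike trains through those weights to obtain a time-dependent decision signal. All names, sizes, and the kernel time constant are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: N neurons, K trials per condition, T time bins ---
N, K, T = 20, 50, 400
dt = 0.002  # assumed bin width (s)

# Simulated Bernoulli spike trains; condition-1 neurons fire somewhat more
rates0 = rng.uniform(5, 15, N)             # Hz, condition 0
rates1 = rates0 + rng.uniform(0, 10, N)    # Hz, condition 1
spikes0 = rng.random((K, N, T)) < rates0[None, :, None] * dt
spikes1 = rng.random((K, N, T)) < rates1[None, :, None] * dt

# --- Stage 1 (slow): learn decoding weights from trial spike COUNTS ---
X = np.vstack([spikes0.sum(2), spikes1.sum(2)]).astype(float)  # (2K, N) counts
y = np.r_[-np.ones(K), np.ones(K)]                             # condition labels
Xc = X - X.mean(0)                                             # center features
w = np.linalg.lstsq(Xc, y, rcond=None)[0]                      # readout weights (N,)

# --- Stage 2 (fast): time-dependent decision signal in a single trial ---
# Filter each neuron's spike train with an exponential kernel, then project
# the population activity through the learned weights at every time bin.
tau = 20                                   # kernel time constant in bins (assumed)
kernel = np.exp(-np.arange(5 * tau) / tau)

def decision_signal(trial):
    """trial: (N, T) binary spike array -> (T,) decision signal."""
    filtered = np.array([np.convolve(s, kernel)[:T] for s in trial.astype(float)])
    return w @ filtered

sig1 = decision_signal(spikes1[0])         # signal on one condition-1 trial
sig0 = decision_signal(spikes0[0])         # signal on one condition-0 trial
```

The key point the sketch illustrates is the separation of timescales: the weights `w` are fit once, over many trials, while `decision_signal` accumulates evidence within a single trial, bin by bin.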

Published in: PLOS ONE | Link to the paper | Link to the code repository on GitHub