Subspace locally competitive algorithms

Published in Neuro-Inspired Computational Elements, 2020

Paper link

We introduce subspace locally competitive algorithms (SLCAs), a family of novel network architectures for modeling latent representations of natural signals with group sparse structure. SLCA first-layer neurons are derived from locally competitive algorithms, which produce responses and learn representations that are well matched to both the linear and non-linear properties observed in simple cells in layer 4 of primary visual cortex (area V1). SLCA incorporates a second layer of neurons that produce approximately invariant responses to signal variations that are linear in their corresponding subspaces, such as phase shifts, resembling a hallmark characteristic of complex cells in V1. We provide a practical analysis of training parameter settings, explore the features and invariances learned, and finally compare the model to single-layer sparse coding and to independent subspace analysis.
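The second-layer invariance described above can be illustrated with a minimal sketch in the style of independent subspace analysis: each second-layer unit pools the L2 norm of its subspace's first-layer coefficients, so any rotation within the subspace (e.g. a phase shift) leaves the response unchanged. The details here (2-D subspaces, norm pooling, the `subspace_amplitudes` helper) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def subspace_amplitudes(a, group_size=2):
    """Pool first-layer coefficients `a` into per-subspace amplitudes.

    Illustrative assumption: coefficients are grouped into contiguous
    subspaces of size `group_size`, and the second-layer response is
    the L2 norm over each group.
    """
    a = np.asarray(a, dtype=float)
    groups = a.reshape(-1, group_size)          # one row per subspace
    return np.sqrt((groups ** 2).sum(axis=1))   # L2 norm per subspace

# A phase shift acts as a rotation within a subspace; the pooled
# amplitude is invariant to it:
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
a1 = np.array([1.0, 0.0, 0.3, 0.4])
a2 = np.concatenate([rot @ a1[:2], rot @ a1[2:]])
print(np.allclose(subspace_amplitudes(a1), subspace_amplitudes(a2)))  # True
```

Because the norm discards within-subspace angle, the second layer responds to *what* feature is present while remaining insensitive to *where* it sits along the subspace's linear variation.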

Recommended citation:
Dylan M. Paiton, Steven Shepard, Kwan Ho Ryan Chan, and Bruno A. Olshausen. “Subspace Locally Competitive Algorithms.” In Proceedings of the Neuro-inspired Computational Elements Workshop (NICE ‘20). Association for Computing Machinery, New York, NY, USA, Article 9, 1–8. DOI:

@inproceedings{paiton2020subspace,
  author={Paiton, Dylan M. and Shepard, Steven and Chan, Kwan Ho Ryan and Olshausen, Bruno A.},
  title={Subspace Locally Competitive Algorithms},
  year={2020},
  publisher={Association for Computing Machinery},
  address={New York, NY, USA},
  booktitle={Proceedings of the Neuro-Inspired Computational Elements Workshop},
  articleno={9},
  numpages={8},
  keywords={sparse coding, subspace image coding, invariance, unsupervised learning, neural network architectures, group sparse coding},
  location={Heidelberg, Germany},
  series={NICE '20}
}