Harvard University
Papers and research supported in part or in whole by The Swartz Foundation
Pakman, A., Nejatbakhsh, A., Gilboa, D., Makkeh, A., Mazzucato, L., Wibral, M., Schneidman, E.,
"Estimating the Unique Information of Continuous Variables in Recurrent Networks", NeurIPS 2021.
Wang, T., Buchanan, S., Gilboa, D., Wright, J.,
"Deep Networks Provably Classify Data on Curves", NeurIPS 2021.
Buchanan, S., Gilboa, D., Wright, J.,
"Deep Networks and the Multiple Manifold Problem", ICLR 2021.
Gilboa, D., Pakman, A., Vatter, T.,
"Marginalizable Density Models", Arxiv preprint, 2021.
Farrell, M., Bordelon, B., Trivedi, S., Pehlevan, C.,
"Capacity of Group-Invariant Linear Readouts from Equivalent Representations: How Many Objects Can Be Linearly Classified Under All Possible Views?" Arxiv preprint, 2021.
Steinberg, J., Advani, M., Sompolinsky, H.,
"A New Role for Circuit Expansion for Learning in Neural Networks", Physical Review E 103(2):022404, 2021.
Advani, M. S., Saxe, A. M., Sompolinsky, H.,
"High-Dimensional Dynamics of Generalization Error in Neural Networks", Neural Networks 132:428-446, 2020.
Shaham, N., Chandra, J., Kreiman, G., Sompolinsky, H.,
"Continual Learning, Replay and Consolidation in a Forgetful Memory Network Model", Cosyne Abstracts, 2020.
Steinberg, J.,
"Associative Memory of Structured Knowledge", APS March Meeting, 2020.