Dr. Robert Legenstein

Associate Professor

Institute Head

Institute for Theoretical Computer Science

Graz University of Technology



[54] Y. Yan, D. Kappel, F. Neumärker, J. Partzsch, B. Vogginger, S. Höppner, S. Furber, W. Maass, R. Legenstein, and C. Mayr.
Efficient reward-based structural plasticity on a SpiNNaker 2 prototype.
IEEE Transactions on Biomedical Circuits and Systems, 13(3):579-591, June 2019. (PDF).

[53] G. Bellec, F. Scherr, E. Hajek, D. Salaj, R. Legenstein, and W. Maass.
Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets.
arXiv:1901.09049, January 2019. (PDF).

[52] C. Liu, G. Bellec, B. Vogginger, D. Kappel, J. Partzsch, F. Neumärker, S. Höppner, W. Maass, S. B. Furber, R. Legenstein, and C. G. Mayr.
Memory-efficient deep learning on a SpiNNaker 2 prototype.
Frontiers in Neuroscience, 2018. (PDF).

[51] G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass.
Long short-term memory and learning-to-learn in networks of spiking neurons.
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montreal, Canada; arXiv:1803.09574, 2018. (PDF).

[50] R. Legenstein, W. Maass, C. H. Papadimitriou, and S. S. Vempala.
Long term memory and the densest K-subgraph problem.
In Proc. of Innovations in Theoretical Computer Science (ITCS), 2018. (PDF).

[49] C. Pokorny, M. J. Ison, A. Rao, R. Legenstein, C. Papadimitriou, and W. Maass.
Associations between memory traces emerge in a generic neural circuit model through STDP.
bioRxiv:188938, 2017. (PDF).

[48] G. Bellec, D. Kappel, W. Maass, and R. Legenstein.
Deep rewiring: training very sparse deep networks.
International Conference on Learning Representations (ICLR), 2018. (PDF).

[47] R. Legenstein, Z. Jonke, S. Habenschuss, and W. Maass.
A probabilistic model for learning in cortical microcircuit motifs with data-based divisive inhibition.
arXiv:1707.05182, 2017. (PDF).

[46] Z. Jonke, R. Legenstein, S. Habenschuss, and W. Maass.
Feedback inhibition shapes emergent computational properties of cortical microcircuit motifs.
Journal of Neuroscience, 37(35):8511-8523, 2017. (PDF).

[45] D. Kappel, R. Legenstein, S. Habenschuss, M. Hsieh, and W. Maass.
A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning.
eNeuro, April 2018. DOI: 10.1523/ENEURO.0301-17.2018. (PDF).

[44] M. A. Petrovici, S. Schmitt, J. Klähn, D. Stöckel, A. Schroeder, G. Bellec, J. Bill, O. Breitwieser, I. Bytschok, A. Grübl, M. Güttler, A. Hartel, S. Hartmann, D. Husmann, K. Husmann, S. Jeltsch, V. Karasenko, M. Kleider, C. Koke, A. Kononov, C. Mauch, P. Müller, J. Partzsch, T. Pfeil, S. Schiefer, S. Scholze, A. Subramoney, V. Thanasoulis, B. Vogginger, R. Legenstein, W. Maass, R. Schüffny, C. Mayr, J. Schemmel, and K. Meier.
Pattern representation and recognition with accelerated analog neuromorphic systems.
arXiv:1703.06043, 2017. (PDF).

[43] S. Schmitt, J. Klähn, G. Bellec, A. Grübl, M. Güttler, A. Hartel, S. Hartmann, D. Husmann, K. Husmann, S. Jeltsch, V. Karasenko, M. Kleider, C. Koke, A. Kononov, C. Mauch, E. Müller, P. Müller, J. Partzsch, M. A. Petrovici, S. Schiefer, S. Scholze, V. Thanasoulis, B. Vogginger, R. Legenstein, W. Maass, C. Mayr, R. Schüffny, J. Schemmel, and K. Meier.
Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS Wafer-Scale System. In
IEEE International Joint Conference on Neural Networks (IJCNN) 2017, pages 2227-2234, 2017. (PDF).

[42] W. Maass, C. H. Papadimitriou, S. Vempala, and R. Legenstein.
Brain computation: A computer science perspective.
Draft of an invited contribution to Springer Lecture Notes in Computer Science, vol. 10000, 2017. (PDF).

[41] R. Legenstein, C. H. Papadimitriou, S. Vempala, and W. Maass.
Assembly pointers for variable binding in networks of spiking neurons.
arXiv preprint arXiv:1611.03698, 2016. (PDF).

[40] A. Serb, J. Bill, A. Khiat, R. Berdan, R. Legenstein, and T. Prodromakis.
Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses.
Nature Communications, 7:12611, 2016. (PDF).

[39] Z. Yu, D. Kappel, R. Legenstein, S. Song, F. Chen, and W. Maass.
CaMKII activation supports reward-based neural network optimization through Hamiltonian sampling.
arXiv:1606.00157v2, 2016. (PDF).

[38] D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass.
Synaptic sampling: A Bayesian approach to neural network plasticity and rewiring. In
Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, pages 370-378. Curran Associates, Inc., 2015. (PDF).

[37] J. Bill, L. Buesing, S. Habenschuss, B. Nessler, W. Maass, and R. Legenstein.
Distributed Bayesian computation and self-organized learning in sheets of spiking neurons with local lateral inhibition.
PLOS ONE, 10(8):e0134356, 2015. (PDF).

[36] R. Legenstein.
Nanoscale connections for brain-like circuits.
Nature, 521:37-38, 2015. (PDF).

[35] D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass.
Network plasticity as Bayesian inference.
PLOS Computational Biology, 11(11):e1004485, 2015. (PDF).

[34] R. Legenstein.
Recurrent network models, reservoir computing. In
Encyclopedia of Computational Neuroscience, pages 1-5. Springer New York, 2014.

[33] J. Bill and R. Legenstein.
A compound memristive synapse model for statistical learning through STDP in spiking neural networks.
Frontiers in Neuroscience, 8(214):1-18, 2014. (PDF).

[32] R. Legenstein and W. Maass.
Ensembles of spiking neurons with noise support optimal probabilistic inference in a dynamically changing environment.
PLOS Computational Biology, 10(10):e1003859, 2014. (PDF).

[31] A. V. Blackman, S. Grabuschnig, R. Legenstein, and P. J. Sjöström.
A comparison of manual neuronal reconstruction from biocytin histology or 2-photon imaging: morphometry and computer modeling.
Frontiers in Neuroanatomy, 8, 2014. (PDF).

[30] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, and T. Prodromakis.
Integration of nanoscale memristor synapses in neuromorphic computing architectures.
Nanotechnology, 24:384010, 2014. (PDF).

[29] G. M. Hoerzer, R. Legenstein, and W. Maass.
Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning.
Cerebral Cortex, 24:677-690, 2014. (PDF). (Supplementary material PDF)

[28] R. Legenstein and W. Maass.
Branch-specific plasticity enables self-organization of nonlinear computation in single neurons.
The Journal of Neuroscience, 31(30):10787-10802, 2011. (PDF). (Commentary by R. P. Costa and P. J. Sjöström in Frontiers in Synaptic Neuroscience PDF)

[27] R. Legenstein, N. Wilbert, and L. Wiskott.
Reinforcement learning on slow features of high-dimensional input streams.
PLOS Computational Biology, 6(8):e1000894, 2010. (PDF).

[26] M. Jahrer, A. Töscher, and R. Legenstein.
Combining predictions for accurate recommender systems. In
KDD '10: Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 693-702, New York, NY, USA, 2010. ACM. (PDF).

[25] R. Legenstein, S. M. Chase, A. B. Schwartz, and W. Maass.
A reward-modulated Hebbian learning rule can explain experimentally observed network reorganization in a brain control task.
The Journal of Neuroscience, 30(25):8400-8410, 2010. (PDF).

[24] R. Legenstein, S. A. Chase, A. B. Schwartz, and W. Maass.
Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning. In
Proc. of NIPS 2009: Advances in Neural Information Processing Systems, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, volume 22, pages 1105-1113. MIT Press, 2010. (PDF).

[23] L. Buesing, B. Schrauwen, and R. Legenstein.
Connectivity, dynamics, and memory in reservoir computing with binary and analog neurons.
Neural Computation, 22(5):1272-1311, 2010. (PDF).

[22] B. Schrauwen, L. Buesing, and R. Legenstein.
On computational power and the order-chaos phase transition in reservoir computing. In
Proc. of NIPS 2008, Advances in Neural Information Processing Systems, volume 21, pages 1425-1432. MIT Press, 2009. (PDF).

[22b] B. Schrauwen, L. Buesing, and R. Legenstein.
Supplementary material to: On computational power and the order-chaos phase transition in reservoir computing. In
Proc. of NIPS 2008, Advances in Neural Information Processing Systems, volume 21. MIT Press, 2009. (PDF).

[21] A. Töscher, M. Jahrer, and R. Legenstein.
Improved neighborhood-based algorithms for large-scale recommender systems. In
KDD-Cup and Workshop. ACM, 2008. (PDF).

[20] R. Legenstein, D. Pecevski, and W. Maass.
A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback.
PLOS Computational Biology, 4(10):e1000180, 2008. (PDF).

[19] R. Legenstein, D. Pecevski, and W. Maass.
Theoretical analysis of learning with reward-modulated spike-timing-dependent plasticity. In
Proc. of NIPS 2007, Advances in Neural Information Processing Systems, volume 20, pages 881-888. MIT Press, 2008. (PDF).

[18] S. Klampfl, R. Legenstein, and W. Maass.
Spiking neurons can learn to solve information bottleneck problems and extract independent components.
Neural Computation, 21(4):911-959, 2009. (PDF).

[17] R. Legenstein and W. Maass.
On the classification capability of sign-constrained perceptrons.
Neural Computation, 20(1):288-309, 2008. (PDF).

[16] S. Klampfl, R. Legenstein, and W. Maass.
Information bottleneck optimization and independent component extraction with spiking neurons. In
Proc. of NIPS 2006, Advances in Neural Information Processing Systems, volume 19, pages 713-720. MIT Press, 2007. (PDF).

[15] R. Legenstein and W. Maass.
Edge of chaos and prediction of computational performance for neural circuit models.
Neural Networks, 20(3):323-334, 2007. (PDF).

[14] R. Legenstein and W. Maass.
What makes a dynamical system computationally powerful? In
New Directions in Statistical Signal Processing: From Systems to Brains, S. Haykin, J. C. Principe, T.J. Sejnowski, and J.G. McWhirter, editors, pages 127-154. MIT Press, 2007. (PDF).

[13] R. Legenstein, C. Naeger, and W. Maass.
What can a neuron learn with spike-timing-dependent plasticity?
Neural Computation, 17(11):2337-2382, 2005. (PDF).

[13a] R. Legenstein and W. Maass.
Additional material to the paper "What can a neuron learn with spike-timing-dependent plasticity?". Technical report, Institute for Theoretical Computer Science, Graz University of Technology, 2004. (PDF).

[12] R. Legenstein and W. Maass.
A criterion for the convergence of learning with spike timing dependent plasticity. In
Advances in Neural Information Processing Systems, Y. Weiss, B. Schölkopf, and J. Platt, editors, volume 18, pages 763-770. MIT Press, 2006. (PDF).

[11] T. Natschläger, N. Bertschinger, and R. Legenstein.
At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks. In
Advances in Neural Information Processing Systems 17, L. K. Saul, Y. Weiss, and L. Bottou, editors, pages 145-152. MIT Press, Cambridge, MA, 2005. (PDF).

[10] W. Maass, R. Legenstein, and N. Bertschinger.
Methods for estimating the computational power and generalization capability of neural microcircuits. In
Advances in Neural Information Processing Systems, L. K. Saul, Y. Weiss, and L. Bottou, editors, volume 17, pages 865-872. MIT Press, 2005. (PDF).

[9] R. A. Legenstein and W. Maass.
Wire length as a circuit complexity measure.
Journal of Computer and System Sciences, 70:53-72, 2005. (PDF).

[8] R. Legenstein, H. Markram, and W. Maass.
Input prediction and autonomous movement analysis in recurrent circuits of spiking neurons.
Reviews in the Neurosciences (Special Issue on Neuroinformatics of Neural and Artificial Computation), 14(1-2):5-19, 2003. (PDF).

[7] W. Maass, R. Legenstein, and H. Markram.
A new approach towards vision suggested by biologically realistic neural microcircuit models. In
Biologically Motivated Computer Vision. Proc. of the Second International Workshop, BMCV 2002, Tuebingen, Germany, November 22-24, 2002, H. H. Buelthoff, S. W. Lee, T. A. Poggio, and C. Wallraven, editors, volume 2525 of
Lecture Notes in Computer Science, pages 282-293. Springer (Berlin), 2002. (PDF).

[6] R. A. Legenstein.
The Wire-Length Complexity of Neural Networks. PhD thesis, Graz University of Technology, 2002. (PDF).

[5] R. A. Legenstein and W. Maass.
Neural circuits for pattern recognition with small total wire length.
Theoretical Computer Science, 287:239-249, 2002. (PDF).

[4] R. A. Legenstein.
On the complexity of knock-knee channel routing with 3-terminal nets.
Technical Report, 2002. (PDF).

[3] R. A. Legenstein and W. Maass.
Optimizing the layout of a balanced tree.
Technical Report, 2001. (PDF).

[2] R. A. Legenstein and W. Maass.
Foundations for a circuit complexity theory of sensory processing. In
Proc. of NIPS 2000, Advances in Neural Information Processing Systems, T. K. Leen, T. G. Dietterich, and V. Tresp, editors, volume 13, pages 259-265, Cambridge, 2001. MIT Press. (PDF).

[1] R. A. Legenstein.
Effizientes Layout von Neuronalen Netzen (Efficient Layout of Neural Networks). Master's thesis, Graz University of Technology, September 1999.

[-] R. Legenstein, S. A. Chase, A. B. Schwartz, and W. Maass.
A model for learning effects in motor cortex that may facilitate the brain control of neuroprosthetic devices.
38th Annual Conference of the Society for Neuroscience, Program 517.6, 2008.

[-] R. Legenstein and W. Maass.
An integrated learning rule for branch strength potentiation and STDP.
39th Annual Conference of the Society for Neuroscience, Program 895.20, Poster HH36, 2009.


Dr. Robert Legenstein
Institute for Theoretical Computer Science
Inffeldgasse 16b/I
8010 Graz

+43 / 316 873 5824