A Few Research Directions in Systems Neuroscience

Amos Tversky (1937-1996)

Being a graduate student is hard. You have to be a little lucky to be at the right place at the right time and to work with the right professor. It is also about preparation and background, of course. Ultimately, though, it is about hard work: if you work hard, you will find your way. My son is a first-year student in a Neuroscience PhD program. Like all graduate students, he is dealing with many uncertainties. More urgently, though, he is looking for a research direction. Empathizing with him, I thought about what I would do, what research direction I would pursue in Neuroscience. I decided to share the three research directions that seem most promising to me.

Research direction 1:

Progress Towards a Useful Theory of Neural Computation

Neuroscientists do not yet have a useful theory of biological neural computation. I wrote a blog post about this recently. Any contribution in this area is likely to have a big impact.

Research direction 2:

Similarity Measures of Neural Networks

This is an important area in all fields of science, technology, and even finance. Yes… it is that important. Developing similarity metrics is a well-studied subject in AI, so making a significant contribution will be hard. On the other hand, there is room for everyone. The more the merrier!

How can Neural Network Similarity Help Us Understand Training and Generalization

Similarity Learning with (or without) Convolutional Neural Network

Neural Similarity Learning
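
To make the idea of a network-similarity metric concrete, here is a minimal sketch of one widely used measure of representation similarity, linear centered kernel alignment (CKA). The data shapes and random matrices below are made up for illustration; this is not the method of any particular paper listed above.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, n_features)."""
    # Center each feature (column) to zero mean.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 normalized by ||X^T X||_F * ||Y^T Y||_F.
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))                   # activations of layer A
Q = np.linalg.qr(rng.standard_normal((32, 32)))[0]   # a random rotation
print(round(linear_cka(X, X @ Q), 4))  # 1.0 — CKA is rotation-invariant
print(linear_cka(X, rng.standard_normal((100, 32)))) # much lower for unrelated features
```

The attraction of a metric like this is that it ignores "nuisance" differences (rotations, feature orderings) that do not change what a layer represents, which is exactly the property one wants when comparing two trained networks, or a network against neural recordings.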

Speaking of similarity metrics, we should all pay our respects to the memory of Amos Tversky. I wrote a post about him and mentioned the wonderful book “Preference, Belief, and Similarity” (edited by Eldar Shafir), which collects selected writings of Amos Tversky. The chapters on similarity are:

  • Features of Similarity
  • Additive Similarity Trees
  • Studies of Similarity
  • Weighting Common and Distinctive Features in Perceptual and Conceptual Judgments
  • Nearest Neighbor Analysis of Psychological Spaces
  • On the Relation between Common and Distinctive Feature Models

Research direction 3:

Discovering the Invariants of Neural Networks

This research direction could also be called “Discovering the Symmetries of Neural Networks”. In physics, elementary particles have invariant quantities such as charge and spin. Physicists do not know how to explain these invariants in terms of more fundamental quantities, but they use them to build predictive models. There may be analogous invariants of neural networks, and we can develop algorithms, possibly AI-based, to discover them. Such invariants could be very useful for building models of cognition.

Can Deep Networks Learn Invariants?

Machine Learning Topological Invariants with Neural Networks

Measuring Invariances in Deep Networks

Network Motif Discovery Using Subgraph Enumeration and Symmetry-Breaking
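
One symmetry of neural networks is already well known and can serve as a sanity check for any discovery algorithm: permuting the hidden units of a layer (together with the corresponding weights) leaves the network's function unchanged. The tiny two-layer network below is a made-up example demonstrating this.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer MLP: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1 = rng.standard_normal((8, 4)); b1 = rng.standard_normal(8)
W2 = rng.standard_normal((3, 8)); b2 = rng.standard_normal(3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden units: reorder the rows of W1 and b1,
# and the columns of W2 to match. This is a symmetry of the
# network — it computes exactly the same function.
perm = rng.permutation(8)
x = rng.standard_normal(4)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(y_original, y_permuted))  # True
```

An algorithm for discovering invariants would, at minimum, have to rediscover this permutation symmetry from examples; the more interesting question is what other, less obvious invariants it might turn up.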

This entry was posted in brain, Neuroscience, science.