Once in a while, we hear that the Physics world is overjoyed by the discovery of a new elementary particle, often decades after theoretical models predicted its existence. Neuroscience news, however, has a different flavour, with novel insight emerging mainly through experiments. Can theoretical approaches also provide testable predictions about how the brain works?
Throughout his scientific career, Daniel McNamee has been walking a tightrope between Theoretical Neuroscience and data-driven approaches. He believes that his new lab at the Champalimaud Centre for the Unknown can strike the right balance between the two. In this interview, he talks about his plans to develop Neuroscience models that make concrete, testable predictions and the sinuous path that led him here.
How did you become interested in theoretical neuroscience?
I’m originally from a small village north of Dublin called Malahide. For my undergraduate and master’s degrees, I headed south to study Theoretical Physics at Trinity College in Dublin. The mathematics was really beautiful, which is what attracted me to the field initially. Still, the questions felt too abstract and disconnected from the real world, so I decided to change track.
While at Trinity, I had done a senior thesis in Neuroscience and found that it was a wide-open field. I felt that there was a lot of scope for interesting work and interaction. One could also empirically validate theories, which was not possible for many of the questions I came across in high-energy Physics.
After graduating, I embraced a more empirical approach by undertaking a PhD in Computation & Neural Systems at Caltech [California Institute of Technology]. It was a fantastic experience. Caltech is a kind of monastery at the base of the mountains of Angeles National Forest. There, I worked on machine learning applications to different types of neural data, including fMRI and electrophysiology.
The work was, in practice, to take existing machine learning models and try to fit them to the data. This is an increasingly common approach in the field, whereby algorithms developed within an artificial intelligence framework are used to study naturally intelligent systems. The idea is that the same principles that give rise to machine intelligence may also apply to animals and humans.
By the end of my PhD, even though it was a great experience overall, I felt like I had tilted too far into the empirical side and wanted to tilt back into theory. I also realised that the approach we were implementing wasn’t the best. Scientists develop machine learning algorithms within a specific context. Consequently, they impose computational constraints that are not really concerns of the natural domain, resulting in predictions and outputs that often fail to capture the richness of our internal experience.
What was the outcome of this realisation?
I decided that for my postdoctoral research, I was going to work on theories based on the idea that the agent has an internal model of the world and that it uses this internal model to think, rather than just react to whatever it experiences. But since I didn’t find much inspiration within existing machine learning models, I thought that, naively, I could contribute to these algorithms myself.
However, one faces two main challenges when coming up with new models. The first is developing the model itself, which is difficult. It also means that you run an extra risk, because you have to convince people of the validity and value of your model. So it has to be really good. The other challenge is to design a good experiment to test your model and then fine-tune the equations according to the experimental results. So it’s a much lengthier and more involved process than just applying existing models to data. As I’m describing this, it sounds kind of wild. It really was a risky project.
I ended up doing this work at Cambridge University. It’s a very strong place theoretically and has a great mix of machine learners and computational neuroscientists. I also got a Wellcome Fellowship, which meant that I could operate entirely independently within the Computational and Biological Learning Lab, collaborating with Daniel Wolpert and Máté Lengyel.
What question did you tackle with your model?
I addressed a challenging cognitive problem – planning. For example, imagine that an animal has gone foraging for food and is now planning a long route home over treacherous terrain. What’s the best navigation plan? And how should it change if the animal is interrupted by a predator or a sudden storm?
My aim was to develop a set of equations that would capture the best, or “optimal”, solution to this problem. I asked, amongst all possible planning algorithms, which is the optimal one? How would a super-intelligent agent plan? Optimal solutions sound strange in the context of human or animal behaviour, which is necessarily noisy and prone to inaccuracies. Still, optimal models help us understand natural behaviour because they establish a benchmark against which to compare real-life performance. This way, you can see how animals and people approximate optimal strategies and identify which variables play critical roles.
Specifically, I chose to focus on optimisation with respect to time. This is because, out of all the variables that play a role in planning, time is fundamental. Even if you were a super-intelligent being with limitless memory and problem-solving skills, you still wouldn’t be able to escape it. Also, suppose that you had no time limit to execute your plan. In that case, there would be no point in planning, or at least, the optimal planning process would become infinitely slow. Thus time became the only limitation, and the focus of this work was to study planning under time constraints.
The model itself targeted two specific questions. First, given any amount of time, what is the best, or “optimal”, plan I can come up with? Second, what is the shortest amount of time I need to reach any particular level of performance? [The overall performance level being the difference between the total reward, which could be food for example, and cumulative costs, such as effort or distance.]
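The performance criterion described here, total reward minus cumulative cost, evaluated under a time budget, can be illustrated with a toy sketch. To be clear, the routes, rewards, and costs below are invented for illustration; they are not taken from the model itself, which is formulated as a set of equations rather than an enumeration of routes.

```python
# Hypothetical toy world: a foraging animal weighing up routes home.
# Performance = total reward - cumulative cost, as described above.
# Route names, rewards, and costs are invented example data.
routes = {
    "short_but_steep": {"reward": 10.0, "costs": [2.0, 3.0, 4.0]},
    "long_but_flat": {"reward": 10.0, "costs": [1.0, 1.0, 1.0, 1.0, 1.0]},
    "detour_to_berries": {"reward": 14.0, "costs": [1.0, 2.0, 2.0, 2.0]},
}

def performance(route):
    """Net value of a route: total reward minus cumulative cost."""
    return route["reward"] - sum(route["costs"])

def best_plan(routes, time_budget):
    """Best route among those executable within the time budget
    (assuming, for simplicity, one time unit per step)."""
    feasible = {name: r for name, r in routes.items()
                if len(r["costs"]) <= time_budget}
    if not feasible:
        return None
    return max(feasible, key=lambda name: performance(feasible[name]))

print(best_plan(routes, time_budget=10))  # ample time: detour_to_berries
print(best_plan(routes, time_budget=3))   # tight time: short_but_steep
```

With ample time, the rewarding detour wins; under a tight budget, only the short route is feasible, which is the kind of trade-off a time-constrained planner must resolve.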
Did the risk pan out? Did the model meet your expectations?
I remember a mentor of mine once told me that there are three things you should think about when developing a model: first, is it novel; second, is it intuitive, meaning, can you easily explain it to other people; and third, is it predictive.
I think the model I developed checks all the boxes. First, it is certainly novel. No one else has formulated an optimal planning solution. And that’s surprising, actually, since optimal models exist for many cognitive processes, such as perception. Yet even though planning is such an engaging problem, there was no optimal model for it when I began. Second, it is conceptually intuitive (hopefully!). And finally, it is predictive, since the patterns in which it weighs up different possible actions are highly distinctive and so could be fairly easily teased apart experimentally.
Will you continue this line of research at Champalimaud? How are you planning to bridge the theoretical aspect of your work with a more data-driven approach?
This model was one big thrust of my postdoc, which I think about at the theoretical level. The other big thrust was on the more empirical level. In particular, I got really interested in the hippocampal-entorhinal network. This network creates internal representations of the external world. It has also been implicated in cognitive functions like imagination, simulation and episodic memory. This is a rich and complex system, but most models of hippocampal function only focus on how it keeps track of the animal’s location in the environment. I felt that this was an impoverished view of the system.
To make a long story short, I ended up having a nice collaboration with Kim Stachenfeld and Matt Botvinick from DeepMind, and Sam Gershman from Harvard University. Together, we came up with a new perspective on the entorhinal-hippocampal network and compared our new model against a large number of neural datasets. The hippocampus is known for producing sequences of representations across internal cognitive maps. Our results help explain how variations in population activity across this network may cause the hippocampus to enter different regimes of sequence generation, thus supporting specific cognitive functions.
I think the two outcomes of my postdoctoral work put me in a good position to combine the theoretical and empirical approaches. Specifically, one direction I’d like to move in is to understand how the planning model I developed may be implemented in the entorhinal-hippocampal circuit dynamics we’ve described.
The central value of my lab will be to take very seriously the development of models that make concrete predictions for experiments. I’m also committed to developing falsifiable models. If the experiments prove that our models are wrong, we would just get rid of them and think of new ones. This approach flourishes through collaboration with neuroscience experimentalists, of whom there are many at Champalimaud Research. This way, teams can make strong contributions that broaden the scientific effort.