Florian Engert: “If you say computers will never beat a human… you’re going to lose”

(Photo credit: Tor Stensola)

Will we ever understand the brain? Should we be afraid of artificial intelligence? Will humankind be surpassed by machines? Harvard neuroscientist Florian Engert shares his views on these matters.

If you happened to walk by Florian Engert in the street, you would never, ever, imagine what he does for a living. His sleeveless black T-shirt and muscular, tanned arms would certainly fool you (could he be a biker? a surfer? a rock star?).

But at the end of last October, for someone attending the latest Champalimaud Neuroscience Symposium, watching the remarkable ease with which Engert made his entrance into the auditorium of the Champalimaud Centre for the Unknown – the way he was surrounded and greeted by scientists, the way he greeted Champalimaud president Leonor Beleza, kissing her on both cheeks – there could be no room for doubt: he must be a well-known neuroscientist.

And indeed, this Munich-born, smiling, rather short and stocky man of 52, whose lab at Harvard University studies the brain and behavior of zebrafish larvae, was one of the invited speakers at this year’s Symposium on theoretical neuroscience.

A couple of times, he became the life of the party, making unconventional claims and asking provocative questions. “Will we understand the brain a small step at a time or through giant leaps of fundamental discoveries?”, he asked his colleagues during a round-table on artificial intelligence (AI) and deep learning networks. And he made everyone say what they thought – including the audience, who had to vote on these options (by the way, the “small steps” hypothesis won).

At the close of the symposium, it was our turn to ask him some questions.

What is your research about?

We try to understand how brains work. We do this in a small model organism: the zebrafish larva. The general idea is to understand how brains generate adaptive behaviors.

You were the first to image all active neurons in the zebrafish larva brain at the same time, using a technique called two-photon laser scanning microscopy. Do you think that being able to record brain-wide neural activity is important for understanding the brain?

No. Two-photon laser scanning microscopy is certainly a useful tool, but I don’t think it’s sufficient (and I’m not even sure it’s necessary) to understand the brain. Having access to all the neurons in the brain just makes it easier and quicker to identify the relevant subcircuits that are involved in a certain behavior. Otherwise, it would just take much longer.

What exactly does “understanding the brain” mean to you?

I really think you have to divide it into three different questions, three levels of understanding. This is not my own idea; David Marr already wrote it down.

The first level of understanding the brain is to understand why it produces the behaviors that you can observe. That means understanding the brain in an evolutionary context: why animals behave the way they do, what the purpose of their behavior is in this context, and what good that behavior does them.

The second level is to understand the computation the brain is performing. What’s the nature of the input, what’s the nature of the output – both of these are observable. The important aspect of understanding at this level is to identify the algorithms and the computations that link inputs to outputs.

Finally, the third level concerns the understanding of how these computations are implemented in the brain. How they are being performed by the elements that the brain has at its disposal, namely the neurons and the synapses.

If you want to claim understanding, you need to cover all three different levels.

Do you think we will ever understand the brain completely? That there will be one unified formula that beautifully describes everything it does?

I don’t think one can understand the brain, because that doesn’t exist. Brains come in very different flavors, have evolved for very specific niches and they solve very different tasks.

So the first question is which brain we want to understand: the larval zebrafish brain, or the monkey brain, or the human brain?

“I don’t think one can understand the brain, because that doesn’t exist. Brains come in very different flavors”

Also, I don’t know if we can understand it in all possible contexts. Animals have a relatively limited set of tasks to solve in order to survive, and I think that’s approachable. We can pick different model systems, and then the first question you have to answer is, again, what are the different problems that these animals have to solve.

And there’s a difference between describing and explaining. An explanation would require generating a mechanistic, reduced and realistic circuit model that explains how the activity in particular neurons arises given the stimulus.

Actually, what you need to explain is the behavior of the animal – the behavioral output. If your model can take the sensory input world of the animal, explain how the neurons and the synapses compute, process and transform this information, and then produce behavioral output that is statistically indistinguishable from that of a real animal, then you are closer to having a complete model of what the animal does.

I think that the process of understanding means putting together certain more fundamental rules and then suddenly realizing how they come together in a realistic circuit model. Then you can explain how the behavior of the animal is generated.

Like in our case, where we’re looking at decision-making in larval zebrafish: they see a random-dot motion stimulus and have to decide whether the motion is going to the left or to the right – and the question is how the brain does that. Once we understand how the brain is doing that, the challenge is to identify the processing units. In this case, it turns out to be a group of neurons that integrate motion information until it reaches a certain threshold. When the threshold is reached, the behavior is executed.

If you put all of this together, you can claim some form of understanding.
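To make the integrate-to-threshold idea concrete, here is a minimal sketch of an evidence-accumulation (drift-diffusion) model of such a left/right motion decision. The function name, parameters and values are illustrative assumptions for this article, not taken from Engert’s experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate_to_threshold(coherence, gain=0.1, noise=1.0,
                            threshold=10.0, max_steps=2000):
    """Integrate noisy motion evidence until a decision threshold is crossed.
    Positive coherence biases the evidence towards "right", negative towards "left".
    Returns the choice and the number of time steps it took."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        # Each time step adds one noisy sample of motion evidence.
        evidence += gain * coherence + noise * rng.standard_normal()
        if evidence >= threshold:
            return "right", t
        if evidence <= -threshold:
            return "left", t
    # No threshold crossing within the trial: guess from the sign of the evidence.
    return ("right" if evidence > 0 else "left"), max_steps

# Weak rightward motion (coherence=0.5): mostly "right" choices, but slow and error-prone.
choices = [accumulate_to_threshold(coherence=0.5)[0] for _ in range(20)]
print(choices.count("right"), "of 20 choices were 'right'")
```

In a model like this, stronger motion coherence produces faster and more accurate choices – the kind of behavioral signature that such integrator models are typically compared against.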

But how do you know that those algorithms are actually what the brain uses to go from input to output?

It’s true that there are many ways, many different algorithms that the brain can use. But you can exclude algorithms that are not being used, or that just don’t apply – that is, you can constrain these different algorithmic models as much as possible – by looking at the animal’s behavior. And the way to do that is by putting the animal into different behavioral contexts.

Getting better all the time

You have an algorithm, you have a machine, you feed it the input, you get the output, and it behaves like it should in the world. But the fact that a machine’s answers are indistinguishable from a human’s doesn’t mean that it is built like a brain. The architecture of a machine that could pass the Turing test would not be the same as that of the brain.

Well, we don’t have such a machine yet. But let’s say we do. I think once we have a machine that passes the Turing test, it will actually look very, very similar to a brain. That is, in my view, where deep networks are going. I think that ultimately we will reach a point where they become indistinguishable from humans.

At that point, we will stop worrying about things like consciousness and feelings, because once machines can perfectly fake it, we have to accept the fact that it’s the same thing.

But if you took the machine apart, it wouldn’t necessarily be the same thing – or would it? They won’t have neurons…

I think they do, kind of. The basic property of neurons – that they receive information and then send it onwards – is present in these machines. They’re just not made out of lipids and proteins; they’re made out of silicon chips. But I think they actually do mimic properties of neurons. And the networks that are being made look more and more like brains. We’re not there yet, not even close, but it’s going in that direction.
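As a rough illustration of that parallel, here is a minimal sketch of the kind of artificial neuron used in deep networks: it weights its inputs, sums them, and sends the result onwards. The specific inputs, weights and nonlinearity are assumptions chosen for the example:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Weight the incoming signals, sum them, and pass the total through a
    nonlinearity: a loose analogue of synaptic integration and firing."""
    summed_input = np.dot(weights, inputs) + bias  # "synaptic" summation
    return max(0.0, summed_input)                  # ReLU: the output sent onwards

# Three hypothetical input signals and their synaptic weights.
output = artificial_neuron(np.array([0.2, 0.8, 0.5]),
                           np.array([0.4, -0.1, 0.9]),
                           bias=0.1)
print(output)
```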

So you think there will be AIs that are indistinguishable from the human brain.

Oh, yes.

Will they be better than us?

Yes! They are better already. Humans are being beaten by machines in all aspects. You can pick any topic… If you say computers will never beat a human… you’re going to lose.

But you’re talking about very particular capabilities.

They are getting more general now. One thing that humans have is that they can generalize a lot better than machines at the moment, but I wouldn’t bet anything that this is going to stay like that. So this whole insistence that humans are special, that there are a few things machines can never achieve – I think that’s an extremely dangerous assumption.

Rise of the machines

People find it scary to think that there will be machines like that. In sci-fi movies, the machines are always inherently bad; they want to harm us.

I think these people are projecting. If they are scared of intelligent machines it’s because they’re projecting from their own mental states onto machines and they assume that machines would behave like they would behave if they became all-powerful.

I’m not worried about that for several reasons. The main thing people worry about comes from our fear of death. Why would machines become dangerous to us? They could if they were afraid to die – namely, if they didn’t want to get switched off. Another thing is the fear of aggression, the fear that they become aggressive towards us. The third one is the fear that they become like humans in their innately competitive behavior and want to be better than others.

Those three features are programmed in us by evolution because we needed them. Those animals that didn’t have these features went extinct. But none of the existing machines have any of these features programmed in, and there’s also no evolutionary pressure, and that’s why I think machines will lack all of these aspects that make people evil.

We could of course make machines that incorporate those features if we wanted to, but at the moment we’re not trying to; the current efforts are not going in that direction. So the machines we are generating are benign, not evil – unless we program them to be, and I don’t think we should. It’s not happening right now, so there’s no worry about that.

There is another aspect: should we be worried about machines being better than us? They are, but that’s usually a good thing.

What if machines started going to university and got better grades than humans?

I think they can do this already. Computers can outperform most students on almost any topic. But why should we be worried, why is this a concern? I don’t see why it’s a problem. That’s the same worry as finding that machines can build better cars than humans, that they do a better job at welding, at painting. That’s not a concern, it’s a reason for optimism.

“I think machines will lack all of these aspects that make people evil”  

One other argument is that people are scared of losing their jobs. But that is just industrialization, not intelligent machines, and what needs to happen is that jobs will have to be redistributed. It’s a short-term problem, because what we have is a political problem about the redistribution of wealth, with few people controlling a large amount of it. That is a problem and it needs solving, but I don’t think the problem is machines doing work that humans don’t want to do anyway – that’s actually a good thing. If machines can do work that humans don’t like doing, the long-term solution is that people will just have to do less work, or work that is more fulfilling, and we have to figure out how much work people should do and what kind of work.

At one point you were very vocal against the European Human Brain Project, arguing that simulating a whole brain didn’t make sense.

That’s because right now, we don’t know enough about the human brain. We don’t know anything about the algorithms; we don’t have enough information. I’m generally doubtful and skeptical about collecting large amounts of data without knowing exactly how to analyze it and what you will do with it. In my experience, this does not lead to useful insights. And the Human Brain Project, and a lot of others, have a far too strong bias towards data collection – and therefore towards technology development – rather than towards thinking carefully about what you want to understand. Phrasing the questions carefully and designing the experiments carefully is much more powerful than developing new technology and applying it in a high-throughput manner, waiting for a pattern to emerge.

Have there been important discoveries in neuroscience in the last 100 or 150 years?

There were many meaningful advances, but no fundamental discoveries or breakthroughs in neuroscience. The discovery of how DNA replicates was a fundamental one – it meant suddenly understanding things you hadn’t understood before and suddenly thinking in a very different way. Nothing like this has happened in neuroscience.

Do you think this is because the brain is too complex?

No. I think that what you said at the beginning – this idea that there is one unified formula that beautifully describes everything – doesn’t exist in the life sciences. There’s nothing of that nature to be discovered in the brain. What the brain is doing is not one unified thing; it is a bunch of hacks, of shortcuts, that evolution has come up with to selectively solve different problems. So there is no unified theory to be discovered. There are many different little tricks that are interesting to discover, but the sense that there is some single thing out there waiting to be discovered – it’s just not there.


 


Ana Gerschenfeld works as a Science Writer at the Science Communication Office at the Champalimaud Neuroscience Programme

 


 

Edited by: Catarina Ramos (Science Communication Office). Photo: Tor Stensola.

 

