
How to build an AI that more closely resembles the biological brain
27 June 2025
1 July 2025 – Sanne van den Berg’s doctoral research on artificial intelligence brings us one step closer to a biologically plausible model. These developments could contribute to more capable and effective models for neuroscientific research.
Artificial Intelligence (AI) is taking the world by storm. It is used online, in our homes, and even in the cars we drive. But how intelligent is AI really? And can this new technology tell us anything about our own brains? To get there, we first need to rethink how we build these models.
The terms ‘intelligence’ and ‘neural networks’ might give the false impression that AI resembles something biological. “It might be inspired by biology, but the comparison stops there”, explains Sanne van den Berg, PhD student in the groups of Pieter Roelfsema (Netherlands Institute for Neuroscience) and Sander Bohte (Research Centre for Mathematics and Computer Science in the Netherlands).
“Our brains consist of different types of brain cells in a structured network, all communicating through localised signals. If our brains were like an AI, each cell would have information about what all the other cells are currently doing and have done in the past. That is, of course, very complex and impressive, but not biologically feasible”.
One of the reasons AI differs so much from a biological brain is that it doesn’t learn how to learn. Instead, it is trained on large datasets in one go. “Imagine you want an AI to identify specific images. Typically, you’d give it the entire dataset, consisting of thousands of images and corresponding descriptions, and leave the AI to its own devices. Now imagine a human trying to learn this way: it would be a nightmare.”
Van den Berg built on Roelfsema and Bohte’s earlier work and looked at how animals typically learn different tasks. “We don’t just learn through trial and error; we learn dynamically and remember things implicitly. We gain a level of general understanding that an AI isn’t given the opportunity to develop. If an AI makes a mistake, it has nothing to fall back on”.
Van den Berg compares it to a trip to a new supermarket. “Imagine searching for apples in a supermarket you have never been to. You would probably still find them quite quickly, because you know to look for the fresh fruit aisle. This isn’t the case for AI. An AI doesn’t know that it can learn from earlier experiences, so every new supermarket is completely unknown territory”.
So how does one help an AI learn more naturally? Van den Berg used a simple task that is also used in animal learning experiments. First, the animal (or, in this case, the AI) looks at a blank screen. A symbol then appears briefly, followed by a short delay, after which the AI has to respond differently depending on the symbol it was shown.
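For readers who want to see what such a trial looks like, the Python sketch below spells out the procedure just described. It is only an illustration: the symbols, responses and function names are assumptions made for this example and are not taken from Van den Berg’s thesis.

```python
# Minimal sketch of the trial structure described above (illustrative only):
# blank screen, a brief symbol, a delay, then a response given from memory.
import random

CUES = ["A", "B"]                                # two possible symbols
CORRECT_RESPONSE = {"A": "left", "B": "right"}   # assumed symbol-response mapping

def run_trial(agent):
    """One trial: blank screen, brief cue, delay, response, reward."""
    agent.observe("blank")        # fixation on an empty screen
    cue = random.choice(CUES)
    agent.observe(cue)            # the symbol appears briefly
    agent.observe("blank")        # delay: the symbol is gone again
    response = agent.respond()    # the agent must answer from memory
    return 1.0 if response == CORRECT_RESPONSE[cue] else 0.0   # reward signal
```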
In Van den Berg’s experiment, the AI is never explicitly told how to perform the task, so it starts by responding at random and performing quite poorly. If, during these random responses, the AI happens to act correctly, it is given a reward. As the rewards accumulate, the AI learns which response is correct, just as an animal would.
In a biological brain, a reward comes in the form of dopamine, but how do you reward an AI? Easy: Van den Berg has already programmed it in beforehand. “While building the model, we can define certain numbers to be more pleasurable to the AI. We should be careful about ascribing human characteristics to machines, but you could almost imagine it as giving the AI a sweet treat”.
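That “sweet treat” really is just a number. Building on the toy trial above, the sketch below shows how such a scalar reward can steer an initially random agent toward the correct responses through trial and error. The tabular learner here is a deliberately simple stand-in chosen for illustration; the thesis itself trains a deep network with a biologically plausible reinforcement learning rule, not a lookup table.

```python
# Illustrative trial-and-error learner for the toy task above (uses run_trial
# from the previous sketch). It starts out responding at random; a scalar
# reward for correct responses gradually shifts its preferences.
import random
from collections import defaultdict

class TabularAgent:
    def __init__(self, actions=("left", "right"), lr=0.1, explore=0.1):
        self.actions = actions
        self.lr = lr                      # how strongly a reward nudges a preference
        self.explore = explore            # fraction of responses kept random
        self.pref = defaultdict(float)    # preference for each (symbol, response) pair
        self.last_cue = None
        self.last_action = None

    def observe(self, stimulus):
        if stimulus != "blank":
            self.last_cue = stimulus      # remember the symbol across the delay

    def respond(self):
        prefs = {a: self.pref[(self.last_cue, a)] for a in self.actions}
        if random.random() < self.explore or len(set(prefs.values())) == 1:
            self.last_action = random.choice(self.actions)   # initially random
        else:
            self.last_action = max(prefs, key=prefs.get)     # pick the preferred response
        return self.last_action

    def receive_reward(self, reward):
        # The "sweet treat": a number that pulls the chosen response's
        # preference toward the reward it earned.
        key = (self.last_cue, self.last_action)
        self.pref[key] += self.lr * (reward - self.pref[key])

agent = TabularAgent()
for _ in range(1000):
    agent.receive_reward(run_trial(agent))
```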
Having a more biologically plausible AI offers two major advantages. The first is that it is more capable and efficient. “When people started building planes, they looked to the birds”, Van den Berg chuckles. “Biological brains have had millions of years to evolve. I think we can still learn a lot from them. Imagine not needing to cram all the information onto a single chip. We could focus only on the necessary information.”
Additionally, a more biologically plausible AI could offer a lot of new research opportunities for neuroscientists. “Models can be a useful tool for us to explore more abstract questions. We could rebuild simplified models, easily make adjustments, and then check the effects. Ultimately, this would help us form more focused research questions. It just gives us more to build on”, she adds.
Does this mean that the next AI model will be like a human brain? Absolutely not. Van den Berg’s model is incredibly simple and still required a supercomputer. “I receive questions about this quite often”, she responds, “but there is still so much to be done before we even come close to this. The largest issue is scalability, and I don’t see us solving that any time soon. Our model is just a small piece of a huge biological puzzle”.
Sanne van den Berg: Biologically Plausible Reinforcement Learning of Deep Cognitive Processing. Supervisors: prof. dr. S.M. Bohte and prof. dr. P.R. Roelfsema. The defence will take place on Tuesday 1 July at 13:00 in the Agnietenkapel (Oudezijds Voorburgwal 229–231), Amsterdam.