Author: Blaise Aguera y Arcas
Reading time: 27 minutes
Synopsis
What Is Intelligence? (2025) argues that AI is not a scary alien mind. Instead, it is a natural part of life’s long story. Life has always changed, worked together, and tried to guess what happens next. The book connects bacteria, brains, cities, and computer networks. It shows that intelligence appears whenever a system learns about itself and its surroundings. It looks at AI’s past, present, and future, and explains our role in it all.
What’s in it for me? Get a fresh perspective on the evolution of artificial intelligence.
This summary will look at what intelligence means. But there’s more. The author says that today’s AI is not just a smart copy. It is a true form of intelligence. It works on the same basic idea that all living brains use: guessing what will happen next. The main idea is that intelligence is more about computing than about biology. If we see ‘guessing what will happen next’ as the core of intelligence, many things become clearer. This includes how life started, how living things stay alive, and why big AI programs work so well now.
This idea helps us stop arguing about whether AI is “really” smart. Instead, we can look at the bigger picture: how ‘guessing what will happen next’ grew from tiny particles to brains, and now also works in machines. These new ‘partner’ intelligences might change our world. Let’s start by going back four billion years, to when life first began.
Blink 1 – How it all began
Let’s go back 4.6 billion years to Earth’s early days. The world looked very different then. This time is called the Hadean Eon. The planet had seas of lava. The sky was full of volcanic gases. Asteroids hit the surface with enough force to boil the air. Yet, in this wild time, life’s basic chemicals started to form.
We may never know how life truly started. But one popular idea points to the deep sea. Hot vents there form tall mineral towers, warmed from below. Inside their small, porous chambers, water, gas, and rock mixed. These pores naturally made tiny ‘batteries’. They powered chemical reactions. These reactions created cycles that kept themselves going, much like the ‘engine’ that works inside every cell today. It’s almost as if our cells remember those first oceans.
Life grew in big, fast steps. For example, a bacterium was once swallowed by another type of cell (an archaeal cell). But it wasn’t eaten. Instead, they joined forces. This created mitochondria. This then led to life with many cells, nerve systems, animals, and finally, us.
Working together (symbiosis) and big changes are not only in biology. Technology also changed quickly in similar ways. New ideas came when different parts were put together. For example, people combined stone, wood, and animal parts to make hunting tools.
You might not think life and computers are alike. But early computer experts like Alan Turing and John von Neumann saw a connection. They thought of machines that could read rules, change them, and even make copies of themselves. This is exactly how DNA makes copies. So, it’s fair to say that a cell is both chemistry and computing.
Simply put: for life to make copies, it needs computing. In nature, millions of tiny workers like ribosomes and enzymes work at the same time. They use chance and simple rules. Together, their parallel work adds up to one reliable result: a faithful copy.
The same idea was tested in recent experiments. In 2023, scientists built a digital ‘soup’ meant to resemble the early Earth. They wanted to see whether self-copying patterns could appear on their own. They put thousands of simple digital ‘tapes’ into this soup. These tapes held basic code and data, and they could change themselves. After millions of random interactions, something shifted: the soup went from pure noise to patterns that copied themselves.
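Here is a toy sketch of that kind of experiment in Python. The instruction set, soup size, and mutation rate below are invented for illustration – the study described above reportedly used a richer, Brainfuck-style language – but the shape of the loop is the same: random tapes, random encounters, occasional mutations.

```python
import random

SOUP_SIZE = 1024    # number of tapes in the soup (illustrative)
TAPE_LEN = 64       # bytes per tape (illustrative)
MUTATION = 0.0001   # per-byte mutation rate per epoch (illustrative)

def interact(a, b, max_steps=512):
    """Concatenate two tapes and run the result as a tiny program.
    Instructions move two read/write heads and copy bytes between
    them, so a tape whose bytes happen to encode 'copy me' can
    overwrite its partner with its own code."""
    tape = bytearray(a + b)
    n = len(tape)
    h0 = h1 = 0
    for ip in range(min(max_steps, n)):
        op = chr(tape[ip] % 128)
        if op == '>':   h0 = (h0 + 1) % n
        elif op == '<': h0 = (h0 - 1) % n
        elif op == '}': h1 = (h1 + 1) % n
        elif op == '{': h1 = (h1 - 1) % n
        elif op == '.': tape[h1] = tape[h0]   # copy byte: head0 -> head1
        elif op == ',': tape[h0] = tape[h1]   # copy byte: head1 -> head0
    return tape[:TAPE_LEN], tape[TAPE_LEN:]

soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(SOUP_SIZE)]
for epoch in range(1000):
    random.shuffle(soup)
    for i in range(0, SOUP_SIZE, 2):          # random pairwise encounters
        soup[i], soup[i + 1] = interact(soup[i], soup[i + 1])
    for tape in soup:                         # background mutation
        for j in range(TAPE_LEN):
            if random.random() < MUTATION:
                tape[j] = random.randrange(256)
```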
From chance, things that can copy themselves appear. Once they exist, evolution starts. In the next part, we’ll see how this growing complexity can lead to intelligence.
Blink 2 – Intelligence at its most basic level
Simply put, intelligence helps living things survive. Imagine looking very closely at a single bacterium swimming in water. You can see the start of intelligence there. It’s not the intelligence we usually think of, like brain cells working or ideas forming. Instead, it’s a very simple form. Its only job is to keep the cell alive each moment.
Life can’t survive by copying alone. It must build itself, keep itself working, and do all this in a world that is always changing. This changing world makes things complex. So, living things need a border – a membrane – to protect their sensitive inside parts. But this border can’t be fully closed. Energy and material must move in and out for survival.
Taking things in and letting things out is part of being alive. But everything taken in is also information. Every living thing needs to know what to eat and what to stay away from. This is the first sign of intelligence: using information to survive.
Inside a bacterium, there is a tiny control system. It’s like a chemical ‘brain’ made of genes and proteins. It checks hunger, energy, and many other tiny details. These inside signals mix with outside ones to guide what the bacterium does. Its actions are simple, but you could fairly call them intelligent. Many bacteria swim using a small spinning tail called a flagellum. They swim towards food by comparing what is happening now with what happened just a moment ago.
If the smell of food gets stronger, they swim straight for longer. If it gets weaker, they tumble and turn more often. For food, heat, or light, bacteria do the same thing: they extract a pattern from small, noisy events over time. In other words, they solve problems statistically.
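This run-and-tumble strategy is simple enough to sketch in a few lines of Python. Everything here is illustrative – the gradient, step size, and tumble probabilities are invented numbers, not measured biology.

```python
import math
import random

def concentration(x, y):
    """Assumed food gradient: strongest at the origin (illustrative)."""
    return 1.0 / (1.0 + x * x + y * y)

def step(x, y, heading, last_reading):
    """One run-and-tumble step: compare 'now' with 'a moment ago'."""
    reading = concentration(x, y)
    # Things improving -> keep swimming straight; worsening -> tumble more.
    p_tumble = 0.1 if reading > last_reading else 0.5
    if random.random() < p_tumble:
        heading = random.uniform(0.0, 2.0 * math.pi)  # new random direction
    x += 0.05 * math.cos(heading)
    y += 0.05 * math.sin(heading)
    return x, y, heading, reading

x, y, heading, last = 3.0, 3.0, 0.0, 0.0
for _ in range(2000):
    x, y, heading, last = step(x, y, heading, last)
print(round(x, 2), round(y, 2))   # tends to drift toward the food at (0, 0)
```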
The main aim is homeostasis. This means keeping things inside the body stable and balanced. We can describe this with a simple model that combines three things: outside information (X), inside condition (H), and possible actions (O). The joint model, P(X, H, O), captures how the living thing sees its whole world. It guesses not only what is outside. It also guesses what could happen if it swims, turns, or switches a gene on or off.
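A minimal sketch of this idea, assuming a toy model with one set point and three possible actions. All names and numbers below are invented for illustration.

```python
SET_POINT = 0.5   # ideal internal energy level (illustrative)

def predict_next_H(X, H, O):
    """Toy stand-in for the joint model P(X, H, O): given the outside
    signal X (food detected, 0..1) and internal state H (energy, 0..1),
    predict the next internal state for a candidate action O."""
    if O == "swim": return H - 0.10 + 0.30 * X   # costly, but may find food
    if O == "turn": return H - 0.05              # cheap repositioning
    return H - 0.02                              # "rest": slow energy drain

def choose_action(X, H):
    """Pick whichever action keeps the predicted state nearest the set point."""
    return min(("swim", "turn", "rest"),
               key=lambda O: abs(predict_next_H(X, H, O) - SET_POINT))

print(choose_action(X=0.9, H=0.4))   # low energy + food nearby -> "swim"
```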
This is where evolution becomes important. Over many generations, living things improve their internal models. These models become simpler, wider-ranging, and better at guessing. When a living thing has a model of the world that includes itself – a model that guides its actions and shapes its future – it has a goal: to stay alive. This goal is the start of having a purpose. This purpose is the beginning of intelligence. It is also the base for everything that follows.
Blink 3 – How computers got smart
Let’s jump to the 1700s. A famous thinker, Gottfried Wilhelm Leibniz from Germany, helped start the time of mechanical calculators.
Leibniz was among the first to see that calculations could be used for bigger, more important things. He dreamed of a step-by-step method that could answer any question with ‘true’ or ‘false’. He hoped that one day, thinkers could solve arguments as easily as accountants check numbers.
But for the next few hundred years, people focused more on business, and Leibniz’s dream faded into the background. In the 1700s and 1800s, calculation and early computing machines helped drive the Industrial Revolution. Machines like Charles Babbage’s Analytical Engine were designed in this spirit. These mechanical computers became like extra parts of the factory.
However, a few people saw that computers could do much more. Ada Lovelace, a British mathematician, believed computers like the Analytical Engine could show us a big theory. This theory would connect machines, math, and the ways life itself works. She imagined a time when we might find the “math of the nervous system.”
By the early 1900s, brain science caught up. Tests showed electrical signals in the nervous system. This led to a bold idea: maybe brain cells (neurons) were like simple switches (logic gates). And maybe the brain was like a calculating machine. In 1943, two US scientists, Warren McCulloch and Walter Pitts, strongly supported this idea. They drew a model of a neuron as a logic gate. This model later helped design digital computers.
These computers became fast, exact, and steady machines. They worked with ‘on’ or ‘off’ (binary) logic. This led to GOFAI, or ‘good old-fashioned AI’. In this view, machines were logical, rule-following thinkers. Sensing and feeling were left to humans.
Later, scientists studied the human brain more closely. They found that neurons didn’t act like simple logic gates. They worked like complex systems that changed and reacted in different ways. The brain seemed less like a machine that proves rules. It seemed more like the complex, living thing it was a part of.
But the GOFAI model was simple to use, so it became popular. The more lifelike approach to AI survived in a less famous field called cybernetics. This approach saw brains and machines as ‘guessing’ systems. They learn from what happens, control themselves, adapt to the unknown, and use feedback from the world to adjust. This way of thinking was based on life, actions, reactions, and prediction – not on perfect logic.
Cybernetic systems didn’t process data in batches. Instead, they ran on constant feedback. These were systems that sensed the world, acted, and kept adjusting their behavior. Gun-aiming systems, bombsights, and early flight simulators all used this method.
The ideas of cybernetics were big. But it took time for technology to become good enough to use them. Even so, the main idea of cybernetics never disappeared. This idea was that intelligence comes from guessing, reacting, learning, and constant changes.
Blink 4 – It’s all about prediction
How did the first, unsure steps of cybernetics lead to today’s AI progress? We can’t explain everything, but we must talk about the perceptron. The perceptron is the simplest type of artificial brain cell. You can put many of them together. Then you can give them lots of pictures and ask them to find things, like bananas.
A machine’s power to learn about a banana, by trying again and again, comes from computing. Each artificial brain cell adds up its weighted inputs, and a simple rule decides whether it ‘fires’. This rule is called an activation function. A common one is ReLU, which stands for Rectified Linear Unit: below zero the cell outputs nothing, and above zero it passes the signal through. Learning means nudging the weights after every guess, so that good choices become more likely.
The name ReLU might sound confusing, but it is a powerful tool for complex problems. It helps a machine recognize a banana no matter the light, the angle, how ripe it is, or how it was photographed. In biological terms, it’s like brain cells that only fire when they detect a strong enough version of the pattern they like.
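Here is a minimal sketch of one such artificial brain cell in Python, with ReLU as its activation function and a simple weight-nudging loop standing in for real training. The inputs, target, starting bias, and learning rate are all illustrative choices, not anything from the book.

```python
import numpy as np

def relu(z):
    """ReLU: output nothing below zero, pass the signal through above it."""
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # weights: what actually gets learned
b = 0.5                  # positive starting bias so the unit begins active

def neuron(x):
    """One artificial brain cell: weighted sum of inputs, then ReLU."""
    return relu(w @ x + b)

# One tiny training loop: nudge the weights after every guess.
x, target = np.array([0.2, 0.9, 0.1, 0.4]), 1.0
for _ in range(200):
    pre = w @ x + b
    error = relu(pre) - target
    grad = error if pre > 0 else 0.0   # ReLU passes gradient only when firing
    w -= 0.1 * grad * x
    b -= 0.1 * grad

print(float(neuron(x)))   # approaches the target of 1.0
```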
When this ‘guessing’ process happens many times over, ‘transfer learning’ becomes possible. The AI has learned general patterns, and it can reuse some of them for totally new things. This is why you can teach it to recognize an apple with only a few examples.
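What this looks like in miniature: below, pretrained_features is a hypothetical stand-in for an already-trained network body (its ‘frozen’ weights are just random here). Only a tiny nearest-prototype rule is added for the new apple task, which is why three examples per class are enough. All names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W_frozen = rng.standard_normal((8, 4))   # pretend these weights were learned

def pretrained_features(x):
    """Hypothetical stand-in for a frozen, pretrained network body: the
    same pattern detectors get reused unchanged for the new task."""
    return np.maximum(0.0, W_frozen @ x)   # ReLU features

# Few-shot 'apple' learning: average the features of three examples per
# class, then classify new inputs by the nearest class prototype.
apples = [np.array([1.0, 0.9, 0.1, 0.0]) + 0.05 * rng.standard_normal(4)
          for _ in range(3)]
others = [np.array([0.0, 0.1, 0.9, 1.0]) + 0.05 * rng.standard_normal(4)
          for _ in range(3)]
apple_proto = np.mean([pretrained_features(x) for x in apples], axis=0)
other_proto = np.mean([pretrained_features(x) for x in others], axis=0)

def is_apple(x):
    f = pretrained_features(x)
    return np.linalg.norm(f - apple_proto) < np.linalg.norm(f - other_proto)

print(is_apple(np.array([0.95, 1.0, 0.0, 0.05])))   # likely True
```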
This leads to ‘unsupervised learning’. Instead of telling the AI what is in millions of pictures, you hide random parts. Then you ask the network to fill in the blanks. It has to guess what is missing. If it does this well, it has learned about the world without being told. It learns about edges, colors, objects, how deep things are, and groups of things. It has created the mental pictures needed for almost any future job. Language programs use the same trick with missing words. Guessing and making models are very similar.
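The fill-in-the-blanks objective itself fits in a few lines. This is a sketch of the idea, not any particular library’s API; the mask_tokens name and the 15 percent masking rate are illustrative choices.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask="[MASK]"):
    """Hide random tokens; the hidden originals become training targets."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            inputs.append(mask)     # the network sees a blank here...
            targets.append(tok)     # ...and must guess this original token
        else:
            inputs.append(tok)
            targets.append(None)    # no loss on visible tokens
    return inputs, targets

print(mask_tokens("the bowling ball broke the violin so it needed fixing".split()))
```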
Our eyes and brain constantly learn in a similar way. Much of what we “see” is blurry, unclear, or hidden. But we move our eyes a lot. We check our guesses and fill in missing parts. This helps us build an amazing picture of the world inside our heads. What we truly become aware of is this built-up picture, not just the raw things our eyes take in.
The human brain has tens of billions of neurons. But only a small fraction of them are active at any one time. This sparseness saves energy, and it also makes the brain work well. The brain sends signals forward, but learning depends on information flowing ‘backward’. Muscles, senses, and brain chemicals all send feedback, and this feedback changes how brain cells behave.
Learning by evolution, learning by brain cells, and machine learning are all the same idea. They are about stable, shared ‘guessing’ that keeps updating itself. Living things stay alive by guessing the future, guessing what they will do, and guessing what others will do, over both long and short timescales. That’s where intelligence comes from: not from labels or one-off feats, but from always building a picture of the world as it changes.
Blink 5 – The intelligence behind today’s AI
Before we finish this summary, let’s look at AI that most of us know: language models.
Things like ChatGPT seemed like a big step forward. But let’s not treat language as too special. Of course, humans use language in a uniquely complex way. But many animals also communicate well. Dolphins, whales, parrots, and even prairie dogs have very detailed alarm calls. What is truly important about language is its job: it lets different minds share their thoughts. It’s like a social way to compress information. Your brain takes your big, complex inner world, full of feelings, and turns it into a small code that someone else can unpack. Sometimes this code is a shout of pain. Sometimes it’s a deep play like Hamlet.
Without language, we might guess what others think. But with it, we can share memories, plans, ideas, and even think about what others are thinking (theory-of-mind).
Language has key building blocks. The hardest are combining pieces to make new meanings (compositionality), and referring to abstract ideas. But once a system can use symbols for things like “me,” “you,” “next week,” “maybe,” or “if this happens, then that happens,” it can describe almost any thought world it can imagine.
Before, language programs used grammar rules or explicit logic. But this didn’t work well, because natural language is not always neat. Neural networks changed everything, especially when they learned by predicting the next word. This means the program guesses the next word in a sentence, based on the words before it. This way of guessing forces a program to learn a surprising amount about the world. For example, to guess the missing word in “I dropped the bowling ball on the violin, so I had to get it (blank),” a program needs to know about both language and real-world physics. It must know which object needs fixing.
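At its smallest, next-word prediction is just counting which word tends to follow which. The toy model below captures the objective; real language models learn vastly richer statistics than these simple pair counts.

```python
from collections import Counter, defaultdict

# A toy next-word model: count which word follows which, then predict
# the most common continuation for any given word.
corpus = ("i dropped the bowling ball on the violin "
          "so i had to get it fixed").split()
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

print(predict_next("i"))     # 'dropped' (ties broken by first occurrence)
print(predict_next("the"))   # 'bowling'
```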
AI language models have moved from RNNs to Transformers. RNNs (recurrent neural networks) read text one word at a time and had trouble remembering things over long texts. Transformers replaced them and solved this problem with a new way of ‘paying attention’: the model can link each word to any other word in the text. This allows complex and varied ways of reasoning. Interestingly, Transformers weren’t designed to copy biology, yet their methods turned out to resemble parts of the brain, like the hippocampus.
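The ‘paying attention’ step can be sketched directly. Below is the standard scaled dot-product attention calculation in miniature, with toy sizes. Every position scores every other position, which is what lets any word link to any other word.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position scores every other
    position, then takes a weighted mix of their values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

# Five token positions, eight-dimensional representations (toy sizes).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(attention(Q, K, V).shape)   # (5, 8): each position mixes all others
```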
But Transformers have a strange quirk: they don’t remember how they figured something out. Each word is generated afresh, with no ongoing internal memory between steps. That’s why a model might get a math answer right but then explain it wrongly. If this happens, just tell the model to “think step by step.” Like a student showing their work, the model does better when it writes out its reasoning.
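In practice, that advice is just a change of prompt. The two strings below are invented examples of the pattern, not any product’s official interface.

```python
# Two invented prompts showing the "think step by step" pattern.
prompt_direct = "What is 17 * 24?"
prompt_stepwise = (
    "What is 17 * 24? Think step by step, "
    "writing out each partial product before the final answer."
)
# With the second prompt the model externalizes its working, e.g.
# 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408, which typically helps
# because each written step conditions the next one.
```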
This shows that language helps build thoughts. Thinking step-by-step makes AI work better. It also helps humans gather knowledge.
Blink 6 – The good and bad of intelligence breakthroughs
Let’s finish by looking at what might happen with AI in the future.
Today, AI is not as smart as humans. But it is quite similar to the intelligence we see in other living things.
For instance, bees show many smart skills. They can learn flexibly, apply knowledge widely, and remember things for a long time. They can even control their impulses and wait for rewards. They build and repair their homes in ways that fit odd situations. They recognize shapes and patterns across different senses. They make careful choices based on past bad experiences. Simply put, they do something with their small brains that looks a lot like reasoning. Transformers (a type of AI) can also do all these things. Even small AI programs can read short stories and reason about them. This puts their intelligence in roughly the same league as a bee’s.
When people talk about AI’s future, they often imagine three clear steps. First, AI for one task. Then, AI that can do many tasks. Finally, super-smart AI and a big change called the Singularity. But in reality, AI quietly became ‘general’ some time ago – when we started training huge language models and talking to them.
Once that changed, progress didn’t happen in sudden jumps. Instead, it became a long, fast climb upwards. We’ve seen this pattern before. Early computers were made for specific jobs. Then came the time of digital computers that could be programmed for anything. After that, everything sped up quickly and smoothly. AI is doing the same thing. The big change has already happened. What we see now is the time when things speed up.
But maybe a more interesting question is: what does this mean for life on Earth? These big changes in technology change everything. Farming, cities, factories, electricity – each new step changes how we live. It changes who relies on whom, and how knowledge moves.
AI might be the next big change. Not because machines will take our place, but because humans have joined together into a bigger, shared way of thinking. Each big change has made single humans less able to live alone. They rely more on systems – sometimes worryingly so. Someone who hunts and gathers can live alone in nature for days. Someone in a city can’t last a day without electricity, water, supplies, and ways to communicate. Every new layer of technology we work with makes us more capable, but it also makes us more vulnerable. AI’s growth follows this pattern: huge new powers come with new dependencies we don’t fully understand yet.
Perhaps we should see AI in this way: it’s a change in our story of evolution that is already happening. Intelligence is becoming more shared, more mixed, and spread out more widely. The important questions now are not about some imaginary super-smart AI appearing suddenly. They are about how this new way of thinking changes our dangers, changes our systems, and changes what we can create together.
Final summary
The main idea of this summary of What Is Intelligence? by Blaise Aguera y Arcas is this: intelligence is not a strange quality only for humans. It’s a natural result of life always trying to guess, change, and stay balanced. Over time, the need to survive led to better nervous systems and more flexible actions. This finally created the social, self-aware minds we see in ourselves. Intelligence is simply building up better guessing, clearer inner models, and more complex ways of getting information back. Today’s AI systems grew from these same ideas. Cybernetics and early computers led engineers to make machines that feel the world, act on it, and change their behavior. This was like nature’s own solutions. Perceptrons and neural networks later continued this idea. They gave us designs that can understand general rules and work with the complex, changing ways real thinking happens. Today’s AI systems still can’t learn continuously or adapt instantly. But they show the same pattern that shaped natural intelligence. AI is not a strange enemy. It is the newest part of a very old story of how things change. This story connects tiny particles, animals, societies, and machines through the simple idea of ‘guessing what will happen next’.
Source: https://www.blinkist.com/en/books/what-is-intelligence-en