Author: Alex Pentland
Reading time: 21 minutes
Synopsis
Shared Wisdom (2025) explores the connection between technology and human nature, showing how new tools like AI can be harnessed for everyone’s benefit. Drawing lessons from past technological revolutions, the book explains how, used well, these tools can make us collectively smarter and help us tackle the world’s biggest problems.
What’s in it for me? Learn how to use new technology to make human lives better.
We live in hard times. Our institutions seem unable to act on climate change, disease, and social problems. Meanwhile, new technologies like artificial intelligence may reshape society even further, and many people doubt that will be a good thing.
But what if there’s a chance for good change in all this trouble? History shows that big new ideas, like cities or scientists checking each other’s work, have often helped humanity move forward in important times.
If you work with technology, make rules, or just care, this summary will give you a proven way to improve our old systems. It will also show how to create technology that truly helps people.
Blink 1 – Humanity’s hidden superpower
What if humanity’s greatest invention wasn’t the wheel, but talking around a campfire?
It turns out that the stories we share do much more than just socialize us. Think about it: you make most choices based on stories. When choosing a restaurant, starting a new habit, or solving a problem at work, you rarely consult scientific reports. Instead, you trust what worked for a friend, what you heard, or what your community considers right. Human society has always moved forward this way.
Stories are our species’ special power. They pass on important knowledge across time and distance. When our ancestors sat around campfires sharing tales of where to find food or which paths were dangerous, they were creating what we can call “group smarts.” The best stories, the ones that proved consistently useful, became common knowledge. This knowledge helped groups make choices and work together.
Australian Aboriginal communities, for example, kept knowledge for survival for more than 7,000 years using “songlines.” These were rhythmic stories that showed where to find water, what plants to eat, and how to travel across the land. Different communities had different stories. This created different cultures, which helped some groups survive diseases, climate changes, and disasters.
Today, stories are just as important. One study looked at how 1,700 expert financial advisors made investment decisions, and something surprising appeared. The experts who relied only on data and mathematical models did slightly better at first. But when Brexit hit, these solitary experts lost heavily, while those who stayed in touch with colleagues, trading informal advice, weathered the shock well.
Whether you’re an early human or a financial advisor, the way to get ahead is the same: you mix what your community has learned with your own experience. Researchers found this method leads to choices with the “least regret” – that is, the best possible choice given the information available.
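This blending of community knowledge and personal experience can be sketched in code. The model below is purely illustrative (all names, numbers, and the fixed blending weight are invented, not from the book): a decision-maker averages a private estimate of each option with the community’s shared estimate, then picks the best blend.

```python
def blended_choice(personal, community, weight=0.5):
    """Pick the option with the highest blend of personal and community estimates.

    personal, community: dicts mapping option -> estimated payoff (0 to 1).
    weight: how much to trust the community relative to oneself.
    """
    blend = {
        option: (1 - weight) * personal[option] + weight * community[option]
        for option in personal
    }
    return max(blend, key=blend.get)

# Invented example: choosing a restaurant.
personal = {"noodles": 0.9, "tacos": 0.4, "sushi": 0.6}   # my own past experience
community = {"noodles": 0.3, "tacos": 0.8, "sushi": 0.7}  # what friends report

print(blended_choice(personal, community, weight=0.0))  # trust only myself: noodles
print(blended_choice(personal, community, weight=1.0))  # trust only the crowd: tacos
print(blended_choice(personal, community, weight=0.5))  # mix the two: sushi
```

Notice that the mixed choice differs from both extremes: neither my favorite nor the crowd’s favorite, but the option both sources rate reasonably well – a toy version of the “least regret” outcome.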
For thousands of years, humans have survived through group smarts based on shared experiences. And this basic truth should change how we think about AI.
Blink 2 – Communal wisdom through technology
Stories are the basis of human progress. That’s why, throughout history, humans made their biggest advances after new technologies improved how stories are shared. Three major shifts have altered our path in this way.
First, regular meetings gave groups places to share their experiences every day. Today, this seems normal. But for early humans, sitting around campfires to share information was not natural. It was a social skill they learned. Yet it greatly sped up how fast useful knowledge spread in a group.
Second, when cities emerged, around 11000 BCE, different groups could exchange ideas with one another. Living close together – despite problems like disease and scarce resources – allowed stories to move between cultures. This exchange between communities was so valuable that populations grew larger than ever before.
Third, scientific groups made story-sharing formal. They did this by writing things down and giving credit. This gave people strong reasons to use and improve each other’s work. Beginning around 1500, scholars started sending letters to share their observations and ideas. Groups like the British Royal Society recorded these exchanges and started the custom of citing other people’s work. The result was large networks of stories that kept evolving as people with shared interests added what they had learned.
Many successful older AI tools—like map apps, flight booking systems, and internet search—also help us share stories better. Rather than replacing what humans decide, they link humans to what other humans know.
New AI models today have the same potential. Generative models like ChatGPT tell stories. They bring together human stories and patterns. It’s true that if made badly, they could stop the social learning that builds trust, common understanding, and group effort.
But if made with a good goal, they could greatly improve how information moves, helping everyone. They could, for example, connect us to what our own groups are doing, help us find people like us, and support group choices without changing what we decide. Simply put, AI could greatly improve the story-sharing networks that have kept humans alive through many big dangers.
Blink 3 – Truly representative democracy
Think about how your country is governed. If you live in a Western democracy, you probably choose people to make decisions for you, right? Even though that seems modern, here’s a hard truth about these systems: they still work much like the elite-run systems of ancient Rome. We’ve just become better at hiding it.
When Japan opened up to the West in the late 1870s, Japanese leaders were stung that Europeans considered their old system “uncivilized”. They also noticed that wealthy families largely ran British and American democracies. So they found a clever workaround: they simply changed the words for their government, calling “fiefdoms” “companies” and “serfs” “lifetime employees”. Suddenly, Japan could pass as a modern democracy fit for Western company. But the same powerful people stayed in charge.
This unfair amount of power held by a few rich people is still common in today’s systems where we elect representatives. And it’s very bad for true group smarts. When only a few powerful people make choices, we miss out on many different ideas that help make good decisions. The World Bank thinks that this power in a few hands costs the world economy 5 to 20 percent of its total value each year. This is often because rules help those in power more than others.
But there is another way that has been slowly changing the world since the 1600s: agreement networks. The first scientific groups were mainly agreement networks. Once scientists started citing each other’s work, success didn’t depend on central authorities endorsing your ideas. Instead, it came from other experts in your field finding your work important enough to cite.
This same way of working still drives science, technology patents, and legal precedent. For example, doctors sharing newborn-care practices with one another have cut infant death rates tenfold. Open-source software communities built much of our digital world by collaborating freely. These groups form around shared interests, not geography or hierarchy. People earn credit by producing work the group values, not by climbing a strict ladder or buying influence.
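The credit mechanism in an agreement network can be sketched as a toy citation graph. This is an invented illustration (the author names and citation links are made up, and real systems use far richer measures than raw counts): each member’s standing is simply how often peers cite their work, with no central authority involved.

```python
from collections import Counter

# Invented toy network: who cites whom (citer -> list of cited authors).
citations = {
    "ada":   ["grace", "alan"],
    "grace": ["alan"],
    "alan":  ["grace"],
    "mary":  ["grace", "ada", "alan"],
}

def credit(citations):
    """Rank authors by how many peers cite them - a stand-in for consensus credit."""
    counts = Counter(cited for cited_list in citations.values() for cited in cited_list)
    for author in citations:          # everyone appears, even if never cited
        counts.setdefault(author, 0)
    return counts.most_common()

print(credit(citations))
# grace and alan rank highest (3 citations each), ada has 1, mary has 0
```

The point of the sketch is the incentive structure: the only way to rise in the ranking is to produce work that peers choose to build on.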
Now imagine if your government was run this way: as a truly representative, democratic network that is built on sharing all the different ideas in a community. This might sound like a perfect dream. But we already have the digital tools to create agreement networks for how we are governed. We just need to use them correctly.
Blink 4 – Considering risks over rewards
We can imagine many ways new technologies like modern AI could fix society’s problems. But if we want this to really happen, it’s just as important to see how past technologies have made our problems worse.
Since the 1950s, we’ve seen three big AI booms. Each showed a worrying pattern. The technology worked brilliantly for specific tasks—like finding the best delivery routes, making loan decisions automatically, or personalizing search results. But under these successes, they slowly started to damage society.
The first AI systems, in the 1960s, used logic and math to solve well-defined problems, such as finding the best delivery routes. Companies saved a lot of money, and this success fed bigger ambitions. The Soviet Union used a Nobel Prize-winning optimization method to manage its entire economy, aiming to allocate every resource optimally. The plan failed badly and helped push the country toward collapse: the systems simply couldn’t capture how human societies actually work and change.
In the 1980s, new US banking systems promised to make loans fairer and cheaper. They replaced human loan officers with automatic, standard decisions. The technology did make things more efficient. But it also ruined over half of the local banks across the country within a few decades. Local credit unions were gone. Bankers who knew your family’s needs were no longer there. What was left were ATMs and call centers. People there followed strict rules and couldn’t deal with the real, complex lives of individuals.
Then, in the 2000s, the internet boom generated huge amounts of user data. Companies learned to predict what people would do by comparing them to others with similar habits. This way of linking similar people built giants like Google and Facebook. But it did so by creating “echo chambers,” in which people only saw content that others like them had already liked. Even worse, the algorithms amplified the voices of those who were best at grabbing attention. These people gained huge followings because success fed further success.
Today’s AI is very different from earlier types. It doesn’t just make things better or guess what will happen. It creates stories and pictures that directly change what people believe. Whether it makes human communities stronger or weaker depends completely on how we choose to build and use it.
Blink 5 – Can AI save democracy?
AI carries as many dangers as promises. And as we’ve seen, our democracies and systems of rules were not built for today’s digital world. They are still using old ways of organizing to fix 21st-century problems, and we can see the results. Trust in government has fallen from 80 percent in 1960 to less than 20 percent today. Communities feel powerless, managed by distant professionals who don’t understand local needs.
But even the strictest systems can change. This happens when they allow decisions to be made by many different people. Take the US Army in Iraq in 2003. They faced fast, irregular fighters. The old way of giving orders from the top couldn’t keep up. General Stanley McChrystal made a big change. He created “teams of teams.” This meant front-line groups could make their own choices. They followed the general’s main goal, not strict orders. Using digital networks to share information fast, the Army became truly flexible.
We could use this same idea to make our democratic systems stronger. We could truly get the benefits of new technologies. The main thing is to give power back to communities. These are the people who are really affected by decisions. Taiwan already uses a digital platform called Polis for policy debates. Citizens share ideas and vote on what others say. But they cannot reply directly. This stops arguments from getting out of control. The system shows where people agree and where they still disagree. What makes it work are Taiwan’s strong local traditions. These give real reasons for people to find things they can agree on.
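The agreement-mapping step described above can be sketched in code. To be clear, this is a minimal toy, not Polis’s actual algorithm – the real platform clusters voters in opinion space, while this invented version just tallies, for each statement, what share of cast votes agree, then labels where the group has consensus and where it is still divided.

```python
# Toy Polis-style tally (invented data): each statement maps to one vote per voter,
# where +1 = agree, -1 = disagree, 0 = pass.
votes = {
    "statement A": [+1, +1, +1, +1, 0],
    "statement B": [+1, -1, +1, -1, -1],
    "statement C": [-1, -1, -1, 0, -1],
}

def consensus_report(votes, threshold=0.8):
    """Label each statement by the share of non-pass votes that agree."""
    report = {}
    for statement, ballots in votes.items():
        cast = [v for v in ballots if v != 0]
        agree = sum(1 for v in cast if v > 0) / len(cast)
        if agree >= threshold:
            label = "broad agreement"
        elif agree <= 1 - threshold:
            label = "broad disagreement"
        else:
            label = "still divided"
        report[statement] = (round(agree, 2), label)
    return report

print(consensus_report(votes))
```

Surfacing the “broad agreement” statements while merely mapping the divided ones is what lets such a system highlight common ground without hosting direct arguments.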
This method fits with the work of Nobel winner Elinor Ostrom. She studied how to manage shared resources. Her findings are clear: good governance needs three things. Communities must rule themselves. They need clear limits on what they control. They need clear information to see results and make leaders responsible. And there must be a true match between what people give and what they get.
Today’s systems, where power sits in the center and we elect representatives, break all three of these rules. But digital tools make it possible to spread power out, and doing so is cheaper than having one central group manage everything. Citizen Stack, a data platform built by an Indian non-profit, shows this at scale. It lets over a billion people control their own data through local groups, much as credit unions manage money. These groups permit certain uses while keeping ownership, and they successfully compete with big tech companies.
The way forward is not more central control. It is giving power back to communities. At the same time, we use AI and digital networks to help them work together, learn from each other, and be responsible.
Blink 6 – Not-so-new rules and regulations
Two years ago, world leaders met at the Club de Madrid. An interesting difference of opinion appeared. High-ranking EU officials wanted strict, central control over AI. They wanted new laws and systems for each type of technology. But the past presidents and prime ministers thought differently. They suggested something simpler: ways to track decisions and rules about who is responsible if things go wrong. Let companies create new things, but make them responsible when there are problems.
This difference captures our central problem with regulating AI. We want to predict and prevent every possible harm before it occurs. But AI changes too fast, and comes in too many forms, for that approach to work. The urge to control everything can overpower good practical judgment.
There’s a better way. It already exists in places you might not expect. Think about how the internet works worldwide. Or how global money rules stop cheating. Or how the World Health Organization helps with disease outbreaks. These groups that work by agreement don’t have power to force rules. Yet, they work very well. Countries work together because it helps them.
These same ways of working can help us manage AI. We need three main things: clear data showing what AI systems truly do, regular checks to find problems early, and rules that work together across countries. This stops companies from just moving to places with fewer rules.
The idea of holding someone responsible is not new. It’s how we’ve controlled physical products since the 1960s. If your toaster starts a fire, the company that made it is responsible. Why should AI be different? Make companies keep detailed records of AI decisions. Create officials who can check those records. And let civil law decide who is responsible when harm happens.
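What might “detailed records of AI decisions” that officials can check look like? Here is one minimal sketch, assuming a hash-chained append-only log (this design and all the example data are the summarizer’s illustration, not something the book specifies): each entry includes a hash of the previous one, so an auditor can detect any later tampering.

```python
import hashlib
import json

class DecisionLog:
    """Append-only log of AI decisions; each entry hashes the one before it,
    so any later tampering breaks the chain an auditor can verify."""

    def __init__(self):
        self.entries = []

    def record(self, decision):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(decision, sort_keys=True)  # canonical form for hashing
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"decision": decision, "prev": prev_hash, "hash": entry_hash})

    def verify(self):
        """Recompute the whole chain; any edited entry breaks it."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["decision"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

log = DecisionLog()
log.record({"input": "loan application 123", "output": "approved"})
log.record({"input": "loan application 124", "output": "denied"})
print(log.verify())  # True while the log is intact

log.entries[0]["decision"]["output"] = "denied"  # simulated tampering
print(log.verify())  # now False
```

The design choice matters less than the principle: records like these give auditors and courts something concrete to examine when assigning responsibility, just as maintenance logs do for physical products.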
The story of the internet offers a useful warning here. The internet began in military and university networks where everyone was trusted, so security was never built in from the start. As a result, we are now struggling to retrofit security onto a global system.
With AI, we have a chance to do it right from the start. Not by guessing and controlling, but by making people responsible and being able to change. Then, we can build a system that truly helps human connection and sharing stories, instead of destroying it.
Final summary
The main idea from this summary of Shared Wisdom by Alex Pentland is that new technologies should be made by groups of people, for groups of people.
Human progress has always come from sharing stories and group knowledge, from old campfire talks to today’s scientific communities. New AI might stifle this social learning by isolating people, much as earlier technology booms weakened community institutions. But made with care, AI could greatly improve how we share knowledge and work together. The answer is to give power back to communities while using digital tools to strengthen connections, not replace them. Instead of strict top-down rules, we need clear records of actions and ways to hold companies responsible for any harm they cause. Grounded in the group smarts that have always helped people thrive, we can create a technological revolution that truly serves human needs.
That’s it for this summary. We hope you liked it. Please leave us a rating if you can – we always value your thoughts. See you in the next summary.
Source: https://www.blinkist.com/en/books/shared-wisdom-en