Read to Learn
The Means of Prediction – How AI Really Works (and Who Benefits)

Posted on January 10, 2026 by topWriter

Author: Maximilian Kasy


Reading time: 17 minutes

Synopsis

The Means of Prediction (2025) argues that artificial intelligence (AI) is not an uncontrollable force. It is a tool, and the people who control it decide what it does. The book explains in simple terms how AI works. It shows that the real conflict is not humans fighting machines. It is a fight between those who own the technology and everyone else. The book argues that we need to bring this technology under fair, democratic control now, before powerful people lock in their advantage.


What’s in it for me? Understand what AI really is and who is in charge of it.

Many people are worried about artificial intelligence. They fear killer robots, losing jobs, or systems we can’t understand or stop. It sounds like a big problem is coming.

But what if this way of thinking is wrong? This summary shows what AI really is: a technology whose direction is set by whoever owns the resources needed to build it. You will learn how AI works without difficult words. You will see that the important questions are not about what machines can do. They are about who has power.

Find out what things control where AI goes. Learn why public control of AI is more important than new discoveries. This will help you understand AI and feel more in control in a world full of it.

Blink 1 – Who really controls the machines?

Think of movies like The Matrix or Terminator. For many years, Hollywood has shown stories about humans fighting against very smart machines. AI is growing fast. This makes people more afraid that this future is coming soon. Even tech leaders make these fears stronger. People like Elon Musk say AI could be as dangerous as a nuclear war.

But this dramatic idea is wrong about the real problem. The real fight is not between humans and machines. It is between different groups of people who want different things.

Consider how AI actually works. Every AI system tries to reach a goal that someone set. Someone must tell the system what results are most important. The main question is not if the system works well. It’s about who decides what its goals are.

Now, people who have the things needed to build AI – like data, computers, and skills – set its goals. In our system (capitalism), this usually means the goals are about making the most money, not helping everyone.

For example, social media AI tries to get you to click on ads, even when this makes society angrier and more divided. AI for hiring might screen out job applicants who have family care duties, because doing so makes work faster in the short term. In both examples, the AI systems do exactly what they were told to do, for the people who gain from it.

So, it’s not useful to worry about machines controlling us. We should worry about who controls the machines. Knowing this can give us power. The tech industry says AI is very hard to understand, but its basic ideas are not that difficult. When you understand how it works, you can help decide how it should be used. So, let’s learn more. 

Blink 2 – From delicious pizza to deadly predictions

Why do you see the same ads online? How does Netflix always suggest TV shows you like? To know this, we need to understand how AI works. We also need to see why the people who control AI decide everything that happens.

AI is a simple idea: it is about systems that make automatic choices to achieve a goal they were given. These systems need four parts: possible actions to choose from, a goal to reach, some prior knowledge, and data to learn from. Machine learning helps by finding patterns in large amounts of information instead of following fixed rules written by humans.

Every AI system has an important balance: to explore new things or to use what it already knows. This idea is easy to see if you think about choosing dinner. Imagine you like pizza but have never eaten Ethiopian food. If you eat pizza, you use what you know to get a good meal. If you try Ethiopian food, you risk it, but you might find something even better for the future. AI systems always deal with this. They “hope for the best when things are not certain.” This means they try new choices, thinking they might be better.

Facebook and Google make a lot of money using this same idea. When you are online, their systems are always trying small tests. They find out which ads you look at. They try new ways and also use old ways that work. Whether choosing dinner or making money from ads, the basic math is the same.
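This explore-or-exploit logic can be sketched with one of the simplest bandit strategies, called "epsilon-greedy." The dinner options, the enjoyment probabilities, and the optimistic starting guesses below are all invented for illustration; this is not the specific method any company uses, just the basic idea:

```python
import random

random.seed(0)

# Hypothetical chance you enjoy each dinner (unknown to the chooser).
true_chance = {"pizza": 0.7, "ethiopian": 0.9}

# Start every option with one optimistic result: "hope for the best
# when things are not certain."
observed = {name: [1.0] for name in true_chance}

def choose(observed, epsilon=0.1):
    """Mostly exploit the best-looking option; sometimes explore at random."""
    if random.random() < epsilon:
        return random.choice(list(observed))  # explore a random option
    # exploit: pick the option with the highest average result so far
    return max(observed, key=lambda name: sum(observed[name]) / len(observed[name]))

for _ in range(1000):
    pick = choose(observed)
    enjoyed = 1.0 if random.random() < true_chance[pick] else 0.0
    observed[pick].append(enjoyed)

# After many evenings, the better option has been chosen far more often.
counts = {name: len(meals) - 1 for name, meals in observed.items()}
print(counts)
```

Most of the time the chooser exploits the best-looking option, but the occasional exploration (plus the optimistic start) means the genuinely better option gets discovered. An ad system running small tests on which ads you click follows the same basic math.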

But what these systems try to achieve changes everything. During the Gaza conflict, an AI system named “Lavender” guessed who was linked to Hamas. It was allowed to be wrong 10% of the time. Another system called “Where’s Daddy?” guessed when people would be home. This was to make bombs work better, often when families were there. This was not a system that went wrong. It was working exactly as it was told to do.

This sad example shows why public talks should not focus on small technical details about which AI method is used. We need to check what goals these systems try to reach. If they have enough data, different AI methods will find similar answers. The most important questions are about what we tell AI to predict, and who decides this.

Blink 3 – The hidden human cost of AI

When you ask ChatGPT a question, you are not just using smart software. You are using the hard work of millions of hidden people. You are using knowledge from many creators. And you are using computer power mostly owned by a few big companies. 

To understand who has power in AI, look at what they use. Four key things give control: data, computer systems, technical skills, and energy. Whoever controls these parts decides what AI focuses on. Right now, it’s about company profits, not making life better for people.

Think about the hidden workers who help make AI so good. A famous collection of data that changed how computers recognize images needed people to sort over 14 million pictures. These workers, mostly in poorer countries, worked for very little money on Amazon’s platform. They built the base for AI systems worth billions. But their important work is mostly not seen.

This way of taking things is like a bad time in history. Before factories were common, English farmers used shared land for farming. Rich landowners took these shared lands. They turned them into private fields for sheep to make money. Farmers who lost their land had to work in new factories. Today, big tech companies are doing something similar. They are not taking land, but digital shared spaces. Things from Wikipedia, free computer code, and creative work are all taken. Then they are put into AI products to be sold.

This power is highly focused. One company controls 9 out of 10 special computer chips for AI. Also, the energy needed for AI computers already uses a lot of the world’s electricity. This amount is expected to grow a lot.

Who can stop this focus of power? Software engineers often can’t do much because their bosses want to make money. Many engineers only think about their own next job that pays well. Real change can come when workers act together. It can come from active citizens, democratic groups, and good rules. These rules should see AI as a social issue, not just a technical problem.

Blink 4 – Why democratic technology matters

Think about Amazon’s warehouse systems. They make deliveries faster and workers more productive. At the same time, the US Department of Labor found many back injuries among workers, caused by constant lifting, awkward movements, and working too fast. Both outcomes came from systems working exactly as planned: they served the company owners and did not take public well-being into account.

Now, imagine if Amazon workers controlled these systems themselves. They would probably make them best for safety, stopping injuries, and good working conditions. It’s the same technology, but with very different results. This depends on who has the power. 

This shows that these systems carry the priorities of the people who design them. Workers, shoppers, and citizens have different main goals than the companies that own these AI systems now. Purely technical answers miss this important truth: you cannot solve a power problem with technology alone.

The problem also affects privacy in surprising ways. Even if your personal data is perfectly safe, it’s not enough. This is because machine learning finds patterns from many people. If your neighbor shares health data with an insurance company, AI can guess your risks. It might then say no to your insurance, even without seeing your personal data. Personal rights cannot fix problems that affect everyone.
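A toy sketch makes this concrete. Every name and number below is invented; the point is only that a model trained on other people's shared records still produces a prediction for someone who shared nothing:

```python
# (age, smoker) -> observed health-claim risk, for people who DID share data.
shared_records = [
    ((34, 0), 0.10),
    ((36, 1), 0.40),
    ((35, 1), 0.45),
    ((60, 0), 0.30),
    ((62, 1), 0.70),
]

def predict_risk(person, records, k=3):
    """Average the risk of the k most similar people who shared their data."""
    def distance(a, b):
        # Crude similarity: age gap, plus a big penalty for a smoking mismatch.
        return abs(a[0] - b[0]) + 30 * abs(a[1] - b[1])
    nearest = sorted(records, key=lambda rec: distance(rec[0], person))[:k]
    return sum(risk for _, risk in nearest) / k

# Alex (age 35, smoker) never shared a health record -- and still gets a score.
alex = (35, 1)
print(round(predict_risk(alex, shared_records), 2))  # → 0.52
```

Alex's risk score comes entirely from his neighbors' data, which is why protecting only your own records cannot fix problems that affect everyone.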

Three useful ways can help change this balance. First: rules. For example, governments could charge money for harmful ways of collecting data, or offer money to companies for good ways. This would make companies consider the social costs of their AI when they count their profits.

Second, there are shared data trusts. These are groups where people put their information together and vote on how it is used. You could share your health data for medical research. But you could clearly stop insurance companies from using it to decide your insurance. 

Third, we need better rules about showing information. This means companies must explain what their AI systems are really trying to achieve. This does not mean understanding very complex computer programs. But it does mean telling us the basic goals: Does your university admissions AI try to get the highest test scores, or help people from different backgrounds? Is your social media feed made to keep you online longer, or to help good talks about public matters? We can only have real talks about whether these goals help everyone when we can see them.

Blink 5 – Ancient Athens and modern AI

When we think about how to control AI with rules, there is a surprising answer. It is found in the same democratic tradition that lets you serve on a jury. And it is thousands of years old.

AI is changing very fast. But the basic problems are not new. We are still asking old questions: How do we learn? How should we behave? What makes a fair society? The problem now is that people who control data, computers, and skills also decide what AI tries to do. 

Telling engineers to “be good” won’t fix this. Not when they work for companies that only want to make money. True change means giving power back to the people who are affected by AI’s choices. But how can we do this?

Don’t only think about voting and choosing politicians. Think about “sortition.” This is when citizens are chosen randomly to make decisions, like for jury duty. Long ago, Athens used this for ruling. Today, this could mean choosing a group of people by chance. This group would be like the whole population. They would get paid time off work. Then, they would discuss questions about AI rules. No professional politicians would be needed.

Another choice is “liquid democracy.” Lewis Carroll (who wrote Alice in Wonderland) created this idea in the 1800s. Everyone gets a vote. But they can give their vote to experts they trust for certain topics. And they can take it back whenever they want. Some political parties in Europe already use computer programs like LiquidFeedback to do this online.
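The core mechanism of liquid democracy fits in a few lines. The voter names and the ballot below are invented; the sketch only shows how delegations chain together and resolve to a direct vote (revoking a delegation is just deleting an entry):

```python
# Each voter either votes directly or delegates to someone they trust.
delegations = {"bea": "ana", "carl": "bea"}   # carl -> bea -> ana
direct_votes = {"ana": "yes", "dan": "no"}

def resolve(voter, seen=None):
    """Follow the delegation chain until a direct vote is found."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:
        return None  # delegation cycle or abstention
    seen.add(voter)
    return resolve(delegations[voter], seen)

tally = {}
for voter in ["ana", "bea", "carl", "dan"]:
    choice = resolve(voter)
    if choice is not None:
        tally[choice] = tally.get(choice, 0) + 1

print(tally)  # → {'yes': 3, 'no': 1}
```

Ana's direct vote carries the delegated weight of Bea and Carl, while Dan votes alone. Real systems like LiquidFeedback add per-topic delegation and safeguards, but the resolution idea is the same.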

Countries in Scandinavia found another good way to control new technology at work. In the 1970s, they started “participatory design.” This gave workers real power over decisions about technology at their workplaces. They did this through strong worker unions and laws that allowed workers to share power. 

It’s clear that just small amounts of participation are not enough. If democratic ideas are ignored when they go against powerful people, then people will stop taking part. Real democratic control means truly sharing power, not just asking for opinions.

Whether it’s through choosing people randomly, liquid democracy, or democracy at work, the way forward means creating groups where people affected by AI truly control it. The ways to do this already exist. They come from ancient Athens and from modern work rules in Scandinavia. Now, we just need to want to use them together.

Final summary

In this summary of The Means of Prediction by Maximilian Kasy, you looked at the real social and political issue at the center of AI.

The real danger is not machines that can think for themselves. It’s about who guides the technology. Every AI system is made to reach goals chosen by the people who built it. Now, these goals are set by people with the most money and resources. Usually, they choose goals that help companies make money more than they help everyone.

At its base, AI is simple: systems that make automatic choices to achieve goals they were given. The real problem is taking power away from a few big tech companies. We need to give it to the communities affected by these systems. This means smart rules, groups that share data, clear information rules, and democratic methods. Examples are choosing citizens randomly or letting workers help design technology.

Instead of being afraid of smart machines, we should focus on democratic control over the systems that shape our lives. AI itself is not good or bad. The power behind it decides what effect it will have.

This is the end of this summary. We hope you liked it. If you have time, please give us a rating. We always like to hear what you think. See you soon.


Source: https://www.blinkist.com/en/books/the-means-of-prediction-en
