Author: Katharina Zweig
Reading time: 18 minutes
Synopsis
In Weiß die KI, dass sie nichts weiß? (Does the AI Know That It Knows Nothing?, 2025), you will learn how AI models work today. You will find out how they get their answers, what they can do, and where you can use them safely.
What you will get: Learn why AI is sometimes great, but often makes mistakes.
You have probably tried AI tools like ChatGPT, Gemini, Claude, or Copilot. You might have seen that these virtual helpers do some tasks very well. But they fail badly at other tasks that seem easy. Why does this happen? And what about the promise that AI will soon act alone? Will it book a whole trip to Budapest for your family with just one request?
In this summary, we will look inside these LLMs. These are the main AI models used today. We will explain how they create the answers you see on your screen. With this knowledge, you can make better decisions: you will know which tasks AI can handle, when it might give you bad information, and when you should be careful.
Blink 1 – A world of meaningless building blocks
We call it artificial intelligence, or AI. At first, it seems smart. Conversations with tools like ChatGPT and Gemini are often very good. AI can tell jokes, help you with car warning lights, translate, write texts, and more. It seems almost human, but not quite. On the surface, AI’s answers can look like they come from a friend or colleague, depending on how you use it. But deep down, it is not thinking the way you do.
The AI models we use in 2025 are called LLMs, or Large Language Models. They work like this: First, they get a lot of information from the internet and other places. Then, they break down language into tokens. A token can be a word, a part of a word, or even a letter. After that, they learn which tokens often come after others.
When you ask an LLM a question, it builds the answer like a puzzle. It picks the token most likely to come first after your question. Then it picks the next, and so on. So, at the start of a sentence, the AI “knows” nothing about how it will end. It “knows” nothing at all, if we mean understanding. It truly has no idea what it is talking about.
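To make this concrete, here is a tiny sketch in Python. This is not how a real LLM works inside (real models use neural networks trained on billions of tokens); it only counts which token follows which in a made-up corpus and then builds an “answer” one most-likely token at a time.

```python
from collections import Counter, defaultdict

# Toy training corpus. A real LLM learns from billions of tokens,
# and a token can be a whole word, a word piece, or a single letter.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which (a so-called bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    """Build an 'answer' one most-likely token at a time."""
    tokens = [start]
    for _ in range(length):
        candidates = follows[tokens[-1]]
        if not candidates:
            break
        # Always pick the single most likely next token.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))  # prints: the cat sat on the cat sat
```

The output looks like language, but the program has no idea what a cat or a mat is. It only knows which tokens tend to come next.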
Imagine you learn all Chinese characters by heart. But you only know how to draw them, not what they mean. If you then read all Chinese books and remember them, you could put characters together like dominoes. These combinations might make sense. But you still wouldn’t know if it’s a recipe, a phone number, or a love poem.
Blink 2 – Vectors in multidimensional spaces
How do LLMs figure out which token is most likely to come after others? This is quite interesting, but also complex, so it needs a bit more explanation.
Imagine you are in a planetarium. In the middle, there are several projectors. Each projector shows one word on the dome. For example, “cafe”, “building”, “house”, “dog”, “cat”, “crocodile”.
Now, you ask different people to point the projectors. They should make words with similar meanings close together on the dome. Most of us would put “dog” and “cat” very close. Both are mammals and pets. “House” and “building” would also be close. Maybe “cafe” would be between them, as a cafe can be in a house or building. A dog or cat could also be in a cafe. But a crocodile probably not. You see the point: In our minds, we have a map of how words connect. For example, we know that Madrid is to Spain like Rome is to Italy. In AI, each word’s place on such a map is written as a list of numbers, and this list is called a vector. The connections between words then become distances and directions between vectors.
AI does something similar when it sorts tokens from its training data. But AI can calculate faster than us. So, it doesn’t do this in a planetarium. It does it in a “multidimensional space.” This space can have hundreds of dimensions. Don’t try to imagine it; it will only give you a headache. In this space, the AI places all the tokens it knows. How close or far apart the tokens sit shows how similarly they are used, and this is what the AI uses to judge which tokens are likely to follow each other.
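Here is a minimal sketch of that idea in Python. The vectors below are invented, 3-dimensional numbers chosen only for illustration; real models learn vectors with hundreds of dimensions from their training data.

```python
import math

# Made-up word vectors. Think of each as the direction a projector
# points at on the planetarium dome, written as three numbers.
vectors = {
    "dog":       [0.9, 0.8, 0.1],
    "cat":       [0.9, 0.7, 0.1],
    "crocodile": [0.8, 0.1, 0.1],
    "house":     [0.1, 0.2, 0.9],
    "building":  [0.1, 0.1, 0.9],
    "cafe":      [0.3, 0.3, 0.7],
}

def cosine_similarity(a, b):
    """Close to 1.0 = pointing the same way; close to 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["dog"], vectors["cat"]))        # ~1.0: very close
print(cosine_similarity(vectors["dog"], vectors["crocodile"]))  # ~0.83: further apart
print(cosine_similarity(vectors["dog"], vectors["building"]))   # ~0.24: far apart
```

With numbers like these, the AI can “sort” its whole vocabulary on the dome without ever knowing what a dog or a building actually is.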
That’s wild, right? It’s hard for us to picture this in our 3D world. So, let’s get back to simple facts. Let’s look at something we all know well: language.
Blink 3 – The limits of words
Can AI think, feel, or decide? Companies say yes. But science says no. Thinking and feeling are complex. We need a body, direct feedback from our world, and social contact. A dog or a chicken can “think” more like humans than AI can.
AI can learn to copy human actions. So, on the surface, it seems to think, write, and decide like us. But it’s a very different way to get a similar result.
So, we should choose our words carefully when talking about AI. The author shows the changed meaning of AI words by adding a tilde (~). For example, AI ~understands, ~thinks, or ~decides. But this is not easy when we speak or listen to a summary.
Many AI words are used wrongly. This is often for marketing or just out of habit. Of course, companies say their AI understands you. But should you use the same word? This is a good question to think about.
In some ways, we just don’t have the right words for what AI does. That is okay. We also lack words to describe what we do. Think about riding a bike without training wheels for the first time. Suddenly, you understood how to balance. But can you explain it in words so someone else can do it right away? No, you can’t.
One example of a wrong word for AI is “hallucinations”. Everyone knows this word. It seems logical: AI says silly things, so it “hallucinates”. But “hallucinations” is a word from medicine and psychology. It means seeing something that is not there. AI only “sees” what is there: the prompts, the tokens, and its programming. So, psychologists now say we should use confabulations instead. In psychology, someone “confabulates” when they invent false things but believe them to be true at that moment.
Let’s look at this behaviour more closely in the next part. What is happening when AI confidently shares untrue things?
Blink 4 – Invented, lied, or just wrong?
Let’s be clear: AI cannot lie. To lie, we need to understand what we are saying. We also need a reason or a plan to trick someone. AI has none of these.
Basically, LLMs cannot even make mistakes. Like a calculator, the program always does what it was made to do. It puts tokens together in the best possible way. If those tokens then add up to a meaning that does not match our reality, we, as users, are unhappy. But this does not mean the AI made a mistake. The result is only “wrong” for us. For the algorithm, it is right. Or as developers would say: confabulating is not a bug, it is a feature.
The main question is not why AI says false things. You should already know the answer to this by now. Much more important is how we, as users and society, deal with untrue AI statements. This is a big danger. AI says nonsense, and someone believes it. If it’s just for a school project, it might not be too bad. But if AI is used as a financial advisor, for health questions, or as a judge, it can become very dangerous fast.
So, we should always leave some tasks to humans.
Blink 5 – Where AI fails
Let’s quickly remember what AI actually does. It doesn’t talk to you. It just plays a game, like Minecraft, but with words and syllables instead of blocks. It doesn’t feel. It doesn’t learn from experience. If it’s wrong, there are no bad effects for it, maybe just a ‘thumbs down’ from you.
This fact explains almost everything AI cannot do. For example, multiplying numbers. It has read many math problems online. So, for small numbers, it can find the right answer from its training data. With bigger numbers, it just guesses the most likely answer. But math doesn’t work that way. We learned this the hard way in school.
Today’s LLMs are useful. But they will not be full agents soon, even if their makers promise it. Do you really want to let an algorithm book a whole trip if it can’t even do math? Probably not, right? For even a simple trip, you often need to give so much information that it’s faster to book it yourself. For example, is quiet important in your bedroom? Do you prefer to sit facing forward on the train? And so on; the list is endless.
Basically, AI can only work in two ways here. Either you give it very exact details, but then you might as well do it yourself. Or you give it freedom, but then it will make many mistakes. We see this problem with chatbots that big companies use to help their call centers. The risk of them making false promises is too high, and the companies would have to pay for these mistakes. So, chatbots often have no freedom to decide. They just repeat useless text like parrots and waste customers’ time.
Blink 6 – Where AI shines
Okay, you might be thinking now: What can I use AI for that is truly useful and reliable? The answer might seem a bit disappointing at first. But there is more power in it than you think. AI is excellent at what it was made for: playing puzzles with words.
This is very helpful, especially for ideas or creative tasks. Do you need ideas for a presentation? Or for your next short story? AI is a perfect helper. Do you only have mustard, quail eggs, and corn in your fridge? AI will tell you how to make a meal from them.
AI is also often very good as a smarter search engine. What was the name of that movie with the Italian song and two handsome gangsters that was in cinemas a few years ago? For questions like this, ChatGPT can often help you better than your best friend who saw it with you.
AI is also a great help for first drafts of texts. It can make your emails sound friendlier or more professional. It can even write code, but you must test that code carefully.
For everything else, remember this: Only use AI if you can check the results yourself. For example, only let it translate texts to or from languages you understand. This is important if the result needs to be truly correct. In short, AI can do tasks that you can check faster than you can do them yourself.
By the way, there is a great tool for creative work with AI texts. Not many people know or use it. In many LLMs, you can set the temperature. This has nothing to do with hot or cold. It changes how likely the AI is to pick certain tokens. The range usually goes from 0 to 2. At a temperature of 0, the AI always picks the most likely token. At 0.5, it might pick tokens that are likely, but not the most likely. And at a temperature of 2, it will pick tokens from its data more or less at random and string them together.
What does this mean? If you set the temperature to 0, you will always get the exact same answer for the exact same question. The higher you set the temperature, the more creative, unusual, and wild the answers will be. So, if you want to brainstorm or let your imagination run free, look online for how to change your model’s temperature and try it out.
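If you want to see what temperature does under the hood, here is a small Python sketch. The token scores below are invented for illustration; in a real LLM, they come out of the network at each step.

```python
import math
import random

def sample_next_token(scores, temperature):
    """Pick the next token from raw model scores ('logits').

    temperature 0  -> always the single most likely token
    temperature 1  -> the model's learned probabilities
    temperature 2  -> flattened probabilities, much wilder picks
    """
    if temperature == 0:
        return max(scores, key=scores.get)  # deterministic: same answer every time
    # Dividing by the temperature flattens (or sharpens) the distribution.
    scaled = {tok: s / temperature for tok, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for the token after "The cat sat on the ...".
scores = {"mat": 2.0, "sofa": 1.0, "rug": 0.5, "crocodile": -1.0}
for t in (0, 0.5, 1.0, 2.0):
    print(t, sample_next_token(scores, t))
```

At temperature 0, the function returns “mat” every time. At 2, even “crocodile” gets roughly a one-in-ten chance, which is exactly where the wild, creative answers come from.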
Conclusion
We hope we have helped you understand which tasks you can trust AI with, and which ones you cannot.
When you have a new task and wonder if AI can do it, remember how AI works. Make sure you can check the results yourself. And always think about what could go wrong if AI gives a wrong answer.
Finally, here is one last and very important tip. It comes at the end so you will surely remember it: Never give your credit card details to an AI. It’s that simple.
Source: https://www.blinkist.com/de/books/weiss-die-ki-dass-sie-nichts-weiss-de