Think AI “knows” what it’s doing? Well, think again


Man meets machine. Image by Tim Sandle.

Calling AI platforms “smart”, or saying that a current form of AI “knows” something, might sound harmless, yet such wording quietly misleads people about what AI actually does and what it is capable of. How careful are journalists when describing the potential – or otherwise – of AI?

Across parts of the media we see the hype cycle at work (a pattern Digital Journal seeks to avoid). Defined by Gartner, the hype cycle illustrates how emerging technologies like AI rise on waves of inflated expectations, often leading to a subsequent period of disillusionment. The cycle typically involves an initial surge of enthusiasm, followed by a realisation of the technology’s limitations and, eventually, a more grounded understanding of its practical applications.

Building on this concept, a new study from Iowa State University shows that news writers are more careful than expected, rarely using strongly human-like language. When they do, the usage falls on a spectrum: sometimes describing simple requirements, other times hinting at human traits.

Think, know, understand, remember

‘Think’, ‘know’, ‘understand’ and ‘remember’ are everyday words that people use to describe what goes on in the human mind. When these same terms are applied to artificial intelligence, however, they unintentionally make machines seem more human than they really are.

In other words, we use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them. In doing so, however, we risk blurring the line between what humans and AI can do.

Journalists, too, sometimes describe AI using human-like language. This type of wording, known as anthropomorphism, assigns human traits to non-human systems.

Why human-like language about AI is misleading

According to the researchers, using mental verbs to describe AI can create a false impression. Words such as “think,” “know,” “understand,” and “want” suggest that a system has thoughts, intentions, or awareness. In reality, AI does not possess beliefs or feelings. It produces responses by analysing patterns in data, not by forming ideas or making conscious decisions.

This kind of language can overstate what AI is capable of. Phrases like “AI decided” or “ChatGPT knows” can make systems seem more independent or intelligent than they actually are. This can lead to unrealistic expectations about how reliable or capable AI is.

There is also a broader concern. When AI is described as if it has intentions, it can distract from the humans behind it. Developers, engineers, and organisations are responsible for how these systems are built and used.

How journalists actually use AI language

To better understand how often this kind of language appears, the researchers analysed the News on the Web (NOW) corpus. This massive dataset contains more than 20 billion words from English-language news articles published in 20 countries.

The researchers focused on how frequently mental verbs such as “learns,” “means,” and “knows” were used alongside terms like AI and ChatGPT. The study found that news writers do not frequently pair AI-related terms with mental verbs. While anthropomorphism is common in everyday speech, it appears far less often in news writing.

Among the examples identified, the word “needs” appeared most often with AI, showing up 661 times. For ChatGPT, “knows” was the most frequent pairing, but it appeared only 32 times.
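To give a sense of the method, below is a minimal, illustrative sketch of how such verb pairings might be tallied over a plain-text collection of articles. The NOW corpus is accessed through its own query interface, so the file name, the verb list, and the simple one-word window here are assumptions for illustration, not the researchers’ actual pipeline.

```python
import re
from collections import Counter

# Hypothetical inputs: a plain-text file of news sentences and a small
# list of mental verbs of the kind examined in the study.
SUBJECTS = {"ai", "chatgpt"}
MENTAL_VERBS = {"thinks", "knows", "understands", "wants", "learns", "means", "needs"}

def count_pairings(path):
    """Count how often a subject term is immediately followed by a mental verb.

    A crude one-word window; real corpus tools use part-of-speech tagging
    and wider collocation windows.
    """
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = re.findall(r"[a-z]+", line.lower())
            for subj, verb in zip(tokens, tokens[1:]):
                if subj in SUBJECTS and verb in MENTAL_VERBS:
                    counts[(subj, verb)] += 1
    return counts

if __name__ == "__main__":
    for (subj, verb), n in count_pairings("articles.txt").most_common():
        print(f"{subj} {verb}: {n}")
```

Even a naive count like this makes the study’s central caveat visible: raw frequencies say nothing about whether a pairing such as “AI needs” is actually anthropomorphic, which is why the researchers examined each use in context.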

The researchers also noted that editorial standards may play a role. Associated Press guidelines, which discourage attributing human emotions or traits to AI, could be influencing how journalists write about these technologies. Even when mental verbs were used, they were not always anthropomorphic.

For instance, the word “needs” often described basic requirements rather than human-like qualities. Phrases such as “AI needs large amounts of data” or “AI needs some human assistance” are similar to how people describe non-human systems like cars or recipes. In these cases, the language does not imply that AI has thoughts or desires.

In other cases, “needs” was used to express what should be done, such as “AI needs to be trained” or “AI needs to be implemented.” Researcher Aune explained that these examples were often written in the passive voice, which shifts responsibility back to human actors rather than the technology itself.
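As a rough illustration of that distinction (a hypothetical heuristic, not the study’s method), a simple pattern check can separate passive constructions such as “needs to be trained” from other uses of “needs”:

```python
import re

# Hypothetical heuristic: "needs to be <past participle>" signals the
# passive, deontic use ("AI needs to be trained"), which points back to
# human actors; other complements of "needs" may state plain requirements.
PASSIVE = re.compile(r"\bneeds to be \w+(?:ed|en)\b", re.IGNORECASE)

examples = [
    "AI needs large amounts of data",
    "AI needs some human assistance",
    "AI needs to be trained",
    "AI needs to be implemented",
]

for sentence in examples:
    label = "passive/deontic" if PASSIVE.search(sentence) else "requirement or other"
    print(f"{label}: {sentence}")
```

A real analysis would still require part-of-speech tagging and human judgement – precisely the kind of contextual reading the researchers applied.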

Anthropomorphism – the spectrum

The study also showed that not all uses of mental verbs are equal. Some phrases move closer to suggesting human-like qualities. For example, statements like “AI needs to understand the real world” can imply expectations tied to human reasoning, ethics, or awareness. These uses go beyond simple descriptions and begin to suggest deeper capabilities.

Why language choices about AI matter

Overall, the researchers found that anthropomorphism in news coverage is both less frequent and more nuanced than many might assume. The findings highlight the importance of context. Simply counting words is not enough to understand how language shapes meaning.

The research team also emphasised that these insights can help professionals think more carefully about how they describe AI in their work. In particular, the findings should prompt technical and professional communication practitioners to reflect both on how they use AI technologies as tools in their writing process and on how they write about AI.

As AI continues to develop, the way people talk about it will remain important.

The research appears in the journal Technical Communication Quarterly, titled “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT.”

A recent AI model called Centaur seemed to offer a breakthrough, claiming it could mimic human thinking across 160 different cognitive tasks. But new research challenges that bold claim, suggesting the model is not truly “thinking” at all: it is just memorising patterns.

See: “Can Centaur truly simulate human cognition? The fundamental limitation of instruction understanding”, published in the journal National Science Open.


