Real people need to come back: Business bots are increasingly rebuffed by Internet users
Image: © AFP/File, Cole BURSTON
Nearly half of Internet users suspect that some of the people they interact with online may not be human, according to a new global survey.
Based on responses from 6,792 Internet users worldwide, the research highlights how growing exposure to automated accounts is reshaping everyday digital trust across social media, messaging apps, online communities, and marketplaces. The survey was conducted by ClarityCheck.
Specifically, 47% of users believe they have interacted with a bot while assuming it was a real person. Many users are wary of this: 41% report having taken steps to verify someone’s identity before continuing a conversation.
AI bots (or chatbots) are software applications that use generative AI, natural language processing (NLP), and large language models (LLMs) to simulate human-like text or voice conversations. Some are more convincing than others.
Detection, however, is becoming tougher: 57% say automated accounts are harder to spot than they were two years ago. As AI-generated content advances, with more realistic profile images and improved conversational automation, distinguishing between real and synthetic accounts appears to be growing more difficult.
Younger users appear to be more alert to online bots. Among respondents aged 18 to 29, 62% said automated accounts now seem significantly more convincing than in the past; among users aged 40 and older, that figure fell to 48%.
This gap may reflect higher exposure to fast-moving social platforms, creator ecosystems, and app-based messaging environments where unsolicited contact is more common.
Checking out the bot
More sophisticated automated profiles are beginning to reshape how trust forms in digital communication. For example, 34% of respondents said they had searched for additional information about someone before continuing a conversation, while 29% said they had ended an interaction after suspecting the account might be automated.
One weakness of AI bots is their inability to perform sums, according to YouGov. Research has found that chatbots are poor at mathematical problems, simply because that is not what they are designed to do: language models are built to communicate effectively, which is a very different function from performing calculations.
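To illustrate the distinction, a common mitigation in practice is to route arithmetic out of the language model entirely and into a deterministic evaluator, so the answer is computed rather than predicted. The sketch below is a hypothetical example of such an evaluator, not something described in the survey:

```python
import ast
import operator

# Deterministic arithmetic evaluator a chat system could call instead of
# letting a language model "guess" the result of a calculation.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str):
    """Evaluate a basic arithmetic expression exactly, without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12345 * 6789"))  # exact: 83810205
```

Unlike a language model, which produces a statistically plausible string of digits, this evaluator returns the same exact answer every time.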
These findings indicate how AI is reshaping online authenticity and ordinary Internet behaviour, especially as users become more sceptical about who (or what) is really behind the screen.
Given the lengths some consumers will go to in order to avoid communicating with AI, there may be lessons for businesses: either put people back into customer-facing roles or invest in more sophisticated, conversationally astute AI.