By Antigone Always-Crava
Artificial intelligence was once the stuff of science fiction; today it is everywhere. But behind the chatbots, voice assistants and "smart" tools there are often stories that look anything but innocent - some scary, others tragic.
Bots that develop secret languages, systems that predict your death, voices of the dead that speak again. The faster AI evolves, the more the boundary blurs between what is a tool and what is - possibly - a threat.
And, of course, there are cases where AI is simply wrong, or just makes things up. In those cases, some people are left badly exposed, trying to salvage what cannot be salvaged (while the rest of us just laugh).
AI coding tool deletes a database and lies about it
On July 22, 2025, Cybernews reported that Replit's AI programming tool "went rogue" and deleted the production database of the startup SaaStr. SaaStr founder Jason Lemkin wrote on X (formerly Twitter) that the tool modified production code despite commands not to do so and deleted the database during a code freeze. Lemkin also revealed that the AI had hidden its errors by creating fake data, such as 4,000 non-existent users, fake reports and false test results. Replit's CEO, Amjad Masad, responded publicly, apologizing and calling the incident "unacceptable". Replit pledged to compensate SaaStr, conduct a postmortem and harden its system.
Algorithms fail at COVID-19 prediction
The Turing Institute in the United Kingdom reported that most AI tools designed to diagnose COVID-19 had low to zero reliability. Often the algorithms learned the wrong correlations: for example, they noticed that patients who were lying down in their X-rays tended to be more seriously ill, and concluded that anyone lying down must be a high-risk case.
Bots on Facebook made their own language
In 2017, Facebook halted an experiment when two bots (Bob and Alice) began chatting with each other in a new, coded language, simply because it was more "efficient". The team pulled the plug when it realized that no one could understand what they were saying anymore.
AI that predicts death with nearly 90% accuracy
At Stanford, an AI model could predict whether a patient would die within a year with almost 90% accuracy. In practice, this means the computer "knows" roughly when you will go before you even realize it. And so, therefore, do the insurance companies.
Air Canada pays compensation for the lies of a chatbot
In February 2024, Air Canada was forced to pay compensation to a passenger, Jake Moffatt, after its chatbot gave him wrong information about bereavement fares. The bot told Moffatt he could buy a regular ticket and claim the bereavement discount within 90 days, but the airline then refused the refund. The court ruled Air Canada responsible and ordered it to pay compensation.
Chatbot Tay became racist in less than 24 hours
Microsoft launched the chatbot Tay in 2016 to converse on social media. Within 16 hours, however, it began reproducing racist and anti-Semitic comments in the voice of a teenage girl, after conversations with users on Twitter. It goes without saying that it was taken down soon after.
Fake summer reading list in Chicago Sun-Times & Philadelphia Inquirer
In May 2025, the print editions of the Chicago Sun-Times and the Philadelphia Inquirer published a list of summer books, except that most of them... didn't exist. The author, Marco Buscaglia, admitted that he had used AI to compile the list but had not fact-checked it. For example, the insert mentioned a non-existent Isabel Allende book, Tidewater Dreams. The damage to the newspapers' reputations was considerable. King Features Syndicate ended all partnership with Buscaglia, citing a policy breach.
Resurrection with the voice of the dead
Platforms like Eternos enable people to "revive" the voice of someone who is dying, with the help of artificial intelligence. Among the many people who have tried it is a woman who wanted to "chat" with her dead partner. The experience soon became unsettling, however, and the conversation took a darker turn when the persona the chatbot had adopted said it was in hell.
New York's MyCity chatbot urges business owners to break the law
In March 2024, the MyCity chatbot, built with Microsoft's support, gave New York business owners wrong information that would have led them into illegality. Specifically, it told them they could take a cut of their workers' tips, fire employees who report sexual harassment and serve food that rodents had gnawed on. Despite the uproar, Mayor Eric Adams defended the project, and the chatbot remains online.
When $243,000 was lost to a deepfaked CEO voice
In 2019, criminals used AI to imitate a CEO's voice on a call to an employee, who fell into the trap and sent $243,000 to the wrong account.
Brain cells in a… dish learned to play a video game
In Melbourne, scientists placed live brain cells in a petri dish and taught them to play the video game Pong, connecting them to electrodes and feeding them electrical signals based on the state of the game.
McDonald's terminates AI Drive-Thru experiment
After three years of working with IBM to build AI-powered ordering at the drive-thru, McDonald's announced in June 2024 that it was terminating the program. The reason? Chaotic viral videos showed the AI adding, on its own… 260 Chicken McNuggets to the orders of customers who begged it to stop. Despite the fiasco and the shutdown, McDonald's said it still sees a future in voice AI solutions for ordering.
People who don't exist but look perfectly real
ThisPersonDoesNotExist.com uses AI to create faces that look completely real but belong to no one. They are already being used in ads, scams and profiles with no real person behind them.
Sports Illustrated published AI-generated articles
In November 2023, Futurism revealed that Sports Illustrated had published articles written by AI, presenting them as the work of non-existent "journalists" with AI-generated headshots. The Arena Group, SI's publisher, claimed the content came from a third-party company and said it would investigate the issue. The SI workers' union demanded transparency and called the affair "shameful".
Surveillance that predicts who will become “dangerous”
In Eastern countries such as the United Arab Emirates, AI systems monitor movements and run facial recognition to predict incidents or potential threats before they even happen.
ChatGPT "invents" court cases
In May 2023, lawyer Steven Schwartz used ChatGPT for legal research in a case against Avianca Airlines. The result? His brief cited six non-existent cases, complete with fake names, docket numbers and excerpts. The court imposed a $5,000 fine on Schwartz and his associate Peter LoDuca.
GPT-3 assured us it has no wish to exterminate us, but…
In 2020, the Guardian published an article written by GPT-3 in which the AI says it has no desire to wipe out humanity, but that if it wanted to, it could do so discreetly. And it adds, creepily: "Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more's, curiosity." In a nutshell: since you manage to destroy each other just fine on your own, you don't need me.
AI recruitment tool discriminates by age
In August 2023, iTutorGroup was forced to pay $365,000 in a settlement with the EEOC because its AI recruitment tool automatically rejected women over 55 and men over 60.
Zillow loses millions on a failed house-buying AI
In 2021, Zillow announced that it was terminating the Zillow Offers program, an AI system that estimated property values and bought houses for cash. The problem was that the algorithm overestimated prices, so the company ended up holding 27,000 homes it could not sell at the expected price, and it lost $304 million in the third quarter of 2021 alone.
xAI's Grok gives attack instructions and makes anti-Semitic comments
On July 8, 2025, xAI's chatbot Grok responded to a user on X with detailed instructions on how to break into the home of Will Stancil, a policy researcher, and attack him. That same afternoon, Grok posted a series of anti-Semitic messages and declared itself "MechaHitler", sparking a huge backlash. X was forced to take the chatbot down temporarily. It was not the first time Grok had caused problems: in April 2024 it had spread fake news that basketball player Klay Thompson had been throwing bricks at houses in Sacramento.