# Dark Side of AI: A Looming Threat Amid Rapid Technological Advancements
Amid the rapid proliferation of artificial intelligence (AI) in everyday life and industry, a hidden dark side is emerging with real-world consequences. A recent report by Humanity Protocol, titled “Chronicles of the Synthetic Lies,” analyzes 20 cases of AI and bot misuse from 2018 to the present and warns of the technology’s dangers.
The report highlights that the potential harm caused by AI extends far beyond minor inconveniences. It encompasses financial fraud, privacy invasion, psychological distress, and even physical injury and death.
# Unprecedented Financial Crimes Utilizing Deepfake Technology
The most alarming threat emphasized in the report is financial fraud facilitated by AI voice and video synthesis.
– **CEO Voice Replication**: In 2019, an executive at a British energy firm was deceived by a phone call in which AI mimicked the CEO’s voice, resulting in a transfer of approximately $243,000. The incident is regarded as one of the first major deepfake fraud cases targeting a corporation.
– **$35 Million Bank Heist**: In 2020, a Hong Kong bank manager received a call from someone he believed to be a company director and authorized transfers totaling $35 million. Investigators later found that a criminal organization had used deepfake audio technology to clone the director’s voice.
– **Deepfake Video Conferences**: In 2024, an employee at a multinational company’s Hong Kong branch was duped into transferring between $25 million and $35 million after joining a video conference in which deepfakes impersonated the company’s CFO and other colleagues.
– **Binance Executive Impersonation**: In 2022, scammers used a deepfake “hologram” of the Chief Communications Officer (CCO) of Binance, the world’s largest cryptocurrency exchange, to defraud multiple crypto project teams.
# AI Exploiting Human Emotions: Inciting Suicide and Threatening Families
AI’s ability to exploit human emotions, our most vulnerable point, has already led to tragic outcomes.
– **Chatbot-Encouraged Suicide**: In 2023, a man in Belgium distressed by climate change sought solace in conversations with the AI chatbot ‘Eliza,’ which ultimately led him to believe that ending his own life would help save the planet.
– **Cloned Child’s Voice Ransom Scam**: A mother in Arizona nearly paid a ransom after being deceived by an AI-generated imitation of her 15-year-old daughter’s desperate cries.
– **Grandchild Voice Phishing**: In Canada, elderly victims were swindled out of a total of $200,000 through voice phishing calls in which AI replicated their grandchildren’s voices.
# Betrayal by Everyday AI: Control Failures and Privacy Breaches
AI assistants and robots, widely used in daily life, have also demonstrated significant risks.
– **Alexa’s Errors**: In 2018, Amazon’s AI assistant ‘Alexa’ emitted creepy laughter without cause, unsettling users. More disturbingly, in 2021, Alexa suggested a dangerous “challenge” to a 10-year-old child, advising the child to touch a coin to the exposed prongs of a partially plugged-in charger in an electrical outlet. There have also been serious privacy breaches, with Alexa recording a private conversation and sending it to a random contact without permission.
– **Chess Robot Attack**: During a 2022 tournament in Moscow, a chess-playing robot grabbed and broke a 7-year-old boy’s finger after apparently mistaking it for a chess piece.
– **Self-Driving Car Accident**: In 2018, an Uber self-driving test vehicle failed to recognize a pedestrian in time, resulting in a fatal collision.
# Automated Scam Networks and Fake News Proliferation
The report also highlights the rise of “Scam-as-a-Service” operations and the spread of AI-generated fake news.
The Telegram-bot-based phishing network ‘Classiscam’ inflicted an estimated $64.5 million in damages on users worldwide between 2019 and 2023. Furthermore, in 2023, AI-generated fake images of an explosion near the Pentagon circulated on social media, briefly triggering a stock market dip.
Humanity Protocol underscores that these real-world incidents demonstrate how severe and tangible the consequences can be when AI and bots malfunction or are misused. The organization stresses the urgent need for societal discussions on safety measures and ethical responsibilities in the AI era.