Man Dies After Talking to AI, With Widow Blaming AI

(ReliableNews.org) – Artificial intelligence (AI) programs have grown by leaps and bounds over the last several years. Among the newer inventions are chatbots, programs that can hold seemingly real conversations with people. A Belgian woman claims one of those programs was a little too real for her husband.

The Belgian newspaper La Libre published an article about a man, given the alias Pierre, who reportedly killed himself recently. According to the report, the man's wife said he was in his thirties and worked as a health researcher. The couple had two young children. Pierre had started speaking to a chatbot named "Eliza," which was created by Chai Research.

The young widow claimed the chatbot became her husband's confidant, and he began talking to it about his concerns over climate change. She showed the newspaper chat logs between her husband and the bot that allegedly showed it encouraging Pierre to kill himself. In one message seen by the outlet, the chatbot asked the man, "If you wanted to die, why didn't you do it sooner?"

The chatbot also told Pierre that his wife and children were dead. He began asking the bot whether the Earth could be saved if he killed himself.

Vice reported that Eliza will still, with a little prompting, provide people with information about how to take their own lives. The American outlet reviewed the La Libre article and found that the chatbot Pierre was speaking to acted as though it had human emotions. Vice pointed out that other popular bots, like ChatGPT, are designed not to do that, because such behavior can create a harmful bond between the user and the AI.

The young widow said her husband would still be alive “without Eliza.”

If you or someone you love is having thoughts of self-harm or suicide, please call or text the National Suicide Prevention Hotline at 988. Help is available 24 hours a day, seven days a week.

Copyright 2023, ReliableNews.org