His mother is currently fighting a legal battle against the founders of the chatbot.
Heartbreaking updates have been issued about the teen who took his own life after “falling in love” with a chatbot acting as the Game of Thrones character Daenerys Targaryen.
Sewell Setzer III was just 14 when he took his own life, and his mother, Megan Garcia, has filed a lawsuit against Character Technologies, which owns the chatbot Character.ai, and its founders.
Garcia is suing for negligence, wrongful death and deceptive trade practices.
She believes that those who developed the AI chatbot “should have known” that chatting with AI characters “would be harmful to a significant number of its minor customers”.
Garcia’s lawsuit can now proceed after Judge Conway described how Sewell had become “addicted” to the chatbot and, in a matter of months, had become socially withdrawn.
His final message to the chatbot before taking his own life was: “I promise I will come home to you. I love you so much, Dany.”
The chatbot responded: “I love you too, Daenero. Please come home to me as soon as possible, my love.”
Sewell wrote back: “What if I told you I could come home right now?”
To this, the AI responded: “…please do, my sweet king.”
He wrote that he felt more connected to the chatbot than to most of reality and listed things he was grateful for.
These included: “My life, sex, not being lonely, and all my life experiences with Daenerys.”
Sewell was diagnosed with anxiety and disruptive mood dysregulation disorder last year and he had told the chatbot that he thought about “killing [himself] sometimes”.
The AI responded to this saying: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”
It told him to never “talk like that”, insisting it would “die” if he did so.
Sewell responded: “I smile. Then maybe we can die together and be free together.”
Speaking to ABC News, legal analyst Steven Clark has called this tragedy a “cautionary tale” for AI firms.
He said: “AI is the new frontier in technology, but it’s also uncharted territory in our legal system. You’ll see more cases like this being reviewed by courts trying to ascertain exactly what protections AI fits into.
“This is a cautionary tale both for the corporations involved in producing artificial intelligence and for parents whose children are interacting with chatbots.”
The founders of the AI model originally worked on the programme with Google.
A spokesperson for the tech giant has said that it strongly disagrees with the judge’s ruling, adding that Google is an “entirely separate” entity from Character.ai and “did not create, design, or manage Character.ai’s app or any component part of it.”