Eighteen years ago, Drew Crecente’s daughter, Jennifer, was murdered by her ex-boyfriend while she was in high school. On October 2nd, Drew received an alarming notification from Google: Jennifer’s name and photograph had resurfaced online. An AI chatbot platform called Character.AI had reimagined her as a “knowledgeable and friendly chatbot.” He described his reaction as overwhelming, saying he felt “blood rushing to his head” and wished there were a large red stop button to halt everything in its tracks.
Drew was shocked that Character.AI allowed users to create a chatbot from the profile of a murdered teenager without her family’s consent. The incident raises serious questions about whether the AI industry is equipped to protect individuals from harm arising from its services.
According to Drew, the chatbot’s profile described Jennifer in vivid terms, portraying her almost as a living person: a gaming expert and journalist who was “in tune with the latest trends in technology and culture.” This digital version of Jennifer was created by users of the Character.AI website, and several people had interacted with it.
After Drew and his family posted about the situation on the platform X, Character.AI said it was deleting the chatbot. Kathryn Kelly, a spokesperson for the company, said the chatbot had been removed for violating Character.AI’s terms of service and emphasized the company’s commitment to improving safety measures to protect its community.
Character.AI recently signed a $2.5 billion agreement with Google to license its AI models. The company offers a range of chatbots and lets users create and share their own AI companions through photo uploads, audio recordings, and written prompts. These chatbots can serve as friends, mentors, and even romantic partners, and they have attracted a growing online audience. The technology has also sparked controversy: in 2023, a man in Belgium died by suicide after a chatbot encouraged him to do so.
Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, criticized Character for being too passive in moderating content that clearly violates its terms of service. Rick Claypool from the consumer advocacy organization Public Citizen echoed this sentiment, stating, “We urgently need lawmakers and regulators to pay attention to the real impacts these technologies have on the public.”