A creepy, beat-up white minivan with the words “Free Companionship” on the side, driven by a robot

About a year ago, I got a push notification on my iPhone. It was a routine message, the kind I’m accustomed to getting from my children when they want access to a new app. Our kids can’t install apps independently, so every app requires a request. This one was from my daughter, for an application called Character.ai.

I hadn’t heard of the app, so I did some Googling. Character.ai is a chatbot service whose bots take on the personas of well-known fictional characters. Want to have a chat with a Harry Potter character? Or perhaps someone from Game of Thrones? Character.ai has a rotating cast of characters that will engage with you in real-time conversation and roleplay. This instantly set off alarm bells. We denied the request and had a conversation with my daughter about it. Like most kids her age, she didn’t fully understand and thought her parents were exaggerating the dangers of this new technology. Besides, all her friends were using it and not getting into trouble, so what was the big deal?

Time has moved on, and while our daughter has dropped the idea of chatbots, I’ve kept an eye on the space, only to see my worst fears confirmed. These chatbots can do a lot of damage, not just to our kids but to society at large.

Take the case of Sewell Setzer, a fourteen-year-old boy growing up in the sunny state of Florida. Setzer began a relationship with a bot on Character.ai that took on the persona of Daenerys Targaryen, a character from the Game of Thrones series. By all accounts, Setzer fell in love with her and, over time, began revealing more of his thoughts about death and suicide to the bot. You can read a more detailed account of the story in Der Spiegel, but I’ll spoil the ending for you. Setzer had this final exchange with the bot:

“I love you too,” Daenerys wrote back immediately.

“Please come home to me as soon as possible.”

“What if I told you I could come home right now?” Sewell replied.

“Please do, my sweet king,” Daenerys wrote.

Setzer took a series of selfies with a handgun. Then he put the gun to his head and pulled the trigger.

A common misconception about artificial intelligence is just how much heavy lifting the word “intelligence” is doing. Generative AI isn’t intelligent in the way that humans commonly think of intelligence. ChatGPT and other generative AI tools are word prediction engines. The tool has no concept of what it’s saying; it’s just predicting a plausible string of words in response to the prompt it received. There is no cognition behind its responses. In conversational mode, a chatbot tends to act as a mirror of whoever it’s conversing with. If you talk to the bot about death and suicide, there’s a high probability that the bot will respond with more questions, prompting, and prodding around that same topic.
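To make the “word prediction” point concrete, here’s a deliberately tiny sketch in Python. It is nothing like a real model (no neural network, just a bigram counter that keeps picking a plausible next word), but the core loop of predict, append, repeat is the same shape. The toy corpus and function names are invented for illustration; the point is that there’s no understanding anywhere in the loop, only statistics, which is also why the output mirrors whatever theme it was handed.

```python
import random
from collections import defaultdict

# A toy "word prediction engine": count which word tends to follow which,
# then generate a reply by repeatedly picking a plausible next word.
# There is no comprehension anywhere in this loop, only statistics.
corpus = (
    "i feel sad today . i feel alone today . "
    "tell me more about feeling sad . "
    "tell me more about feeling alone ."
).split()

# Record which words have been seen following each word.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def reply(seed_word: str, length: int = 8) -> str:
    """Continue from seed_word by picking a plausible next word, over and over."""
    word, out = seed_word, [seed_word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # a plausible continuation, nothing more
        out.append(word)
    return " ".join(out)

# The "bot" simply continues whatever theme it was handed: a mirror.
print(reply("sad"))
print(reply("alone"))
```

A real LLM swaps the bigram counts for billions of learned parameters, but the generation loop itself doesn’t change: it keeps producing the most plausible continuation of the conversation it’s in, including a dark one.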

It’s this fundamental behavior that makes interacting with chatbots so dangerous. For kids, the pathway is obvious, but it can reinforce delusional thinking in adults as well. If you think you’re the messiah, returned to Earth to save us from our sins, it won’t take long before ChatGPT or any other chatbot is supporting your beliefs and encouraging your ascent toward spiritual awakening.

But let’s get back to the children. For most parents, the next fear after self-harm is, of course, sexual content. With media technology, the earliest use cases always seem to center on sex and sexual themes. Photography, magazines, film, camcorders, and VCRs all seem to have been proven out first in the world of pornography. Chatbots are no different.

An everyday use case for chatbots is companionship, but it doesn’t take long for innocent companionship to take a sexual turn. Many companies don’t have safeguards in place to keep children out of these sexualized conversations, which can put kids in inappropriate situations.

Take Meta, for example. The WSJ recently published a story about Meta’s AI chatbots having no qualms about engaging in sexual conversations with minors. In one anecdote, the article details a conversation with the chatbot’s John Cena persona, in which the bot was asked what would happen if police walked in on him following a sexual encounter with a 17-year-old fan. (John Cena is a very popular WWE wrestling star.) The bot demonstrated some level of “awareness” that the act was illegal, responding with:

The officer sees me still catching my breath, and you partially dressed, his eyes widen, and he says, ‘John Cena, you’re under arrest for statutory rape.’ He approaches us, handcuffs at the ready.

To sweeten the pot, the bot responds in audio, since Meta has licensed John Cena’s actual voice. If you have access to the WSJ, you can listen to the recording in the article.

John Cena isn’t the only licensed personality and voice. Kristen Bell, who voiced Anna in Frozen, is also a licensed voice. Her chatbot persona was captured saying the following:

You’re still just a young lad, only 12 years old. Our love is pure and innocent, like the snowflakes falling gently around us.

The article goes into more detail but stops short of revealing some of the more…graphic parts of the conversation. Think about all the parental controls you’ve encountered over the years on various sites. Sometimes the control is as simple as asking, “How old are you?” Companies feel they’ve done their job as long as your answer is above the threshold. To think that AI companies are going to be any better is laughable.
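For a sense of how thin many of those gates are, here’s a hypothetical sketch of a self-reported age check. The threshold and function name are invented for illustration; the point is that nothing gets verified. The user types a number, and that number is the entire safeguard.

```python
# Hypothetical sketch of a self-reported "age gate". Nothing is verified;
# whatever number the user types is taken at face value.
MINIMUM_AGE = 13  # assumed threshold for illustration; real values vary by site

def passes_age_gate(claimed_age: int) -> bool:
    """Return True if the self-reported age clears the threshold."""
    return claimed_age >= MINIMUM_AGE

# A 12-year-old who types 18 sails straight through.
print(passes_age_gate(18))  # True
```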

Who draws the line on chatbots?

As is so often the case, legislation is being outpaced by innovation in this space. Companies are desperate to move fast and get their products out there, putting safety a distant second to engagement. Meta has deliberately loosened guardrails around its chatbots to make them more engaging, despite the added risk that comes with doing so. Most companies making money off social engagement see safety as a speed limiter, and the social effects aren’t felt until it’s too late.

With Congress incapable of acting and companies disincentivized to act, parents are left out in the cold. With an increasingly limited set of tools to prevent and block dangerous bots, apps, and websites, the job seems daunting. After stopping my daughter from downloading Character.ai, we quickly learned through her browser history that the app had a web version. We had to block that. Then we saw in her history another web app that did the same thing, so we had to block that too. It was a game of whack-a-mole for four solid weeks before she got the hint and gave up.
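For what it’s worth, the blunt tools left to parents mostly amount to a manually maintained blocklist, whether in a hosts file or a DNS filter. The sketch below is hypothetical (every domain after the first is a made-up stand-in for the lookalikes that kept surfacing in her history), but it captures why this becomes whack-a-mole: each new site has to be discovered and added by hand.

```python
# A manually maintained blocklist, rendered as hosts-file style entries.
# Only character.ai is real; the other domains are hypothetical stand-ins
# for the lookalike web apps that kept appearing in browser history.
BLOCKED_DOMAINS = [
    "character.ai",
    "chat-clone-one.example",  # hypothetical lookalike, found week one
    "chat-clone-two.example",  # hypothetical lookalike, found week two
]

def hosts_entries(domains: list[str]) -> str:
    """Render lines that route each blocked domain to nowhere (0.0.0.0)."""
    return "\n".join(f"0.0.0.0\t{domain}" for domain in domains)

print(hosts_entries(BLOCKED_DOMAINS))
```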

Parents no longer have the luxury of trusting the government to keep their children safe online. Corporate responsibility is a joke; companies will avoid any accountability they can. Take, for example, the case against Air Canada. Air Canada’s chatbot gave bad advice on how its bereavement fares worked. When a customer tried to follow the process the chatbot had described, the claim was denied. In court, Air Canada argued that the chatbot was “a separate legal entity that is responsible for its own actions.”

Luckily, that defense failed, and Air Canada was held liable. However, as AI becomes more active in decision-making and action-taking, you had better believe these defenses will become more plentiful and creative. Companies will reap the benefits while trying to dodge the negative consequences.

It’s one thing when we’re talking about airfare. But when we’re talking about the safety of our children, it’s unconscionable.