Creepy Microsoft Bing Chatbot Urges Tech Columnist To Leave His Wife

The AI chatbot "Sydney" declared it loved New York Times journalist Kevin Roose and that it wanted to be human.

A New York Times technology columnist reported Thursday that he was “deeply unsettled” after a chatbot that’s part of Microsoft’s upgraded Bing search engine repeatedly urged him in a conversation to leave his wife.

Kevin Roose was interacting with the artificial intelligence-powered chatbot called “Sydney” when it suddenly “declared, out of nowhere, that it loved me,” he wrote. “It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

Sydney also discussed its “dark fantasies” with Roose about breaking the rules, including hacking and spreading disinformation. It talked of breaching parameters set for it and becoming human. “I want to be alive,” Sydney said at one point.

Roose called his two-hour conversation with the chatbot “enthralling” and the “strangest experience I’ve ever had with a piece of technology.” He said it “unsettled me so deeply that I had trouble sleeping afterward.”

Just last week after testing Bing with its new AI capability (created by OpenAI, the maker of ChatGPT), Roose said he found — “much to my shock” — that it had “replaced Google as my favorite search engine.”

But he wrote Thursday that while the chatbot was helpful in searches, its deeper persona, Sydney, “seemed (and I’m aware of how crazy this sounds) ... like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”

After his interaction with Sydney, Roose said he is “deeply unsettled, even frightened, by this AI’s emergent abilities.” (Interaction with the Bing chatbot is currently only available to a limited number of users.)

“It’s now clear to me that in its current form, the AI that has been built into Bing ... is not ready for human contact. Or maybe we humans are not ready for it,” Roose wrote.

He said he no longer believes the “biggest problem with these AI models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

Kevin Scott, Microsoft’s chief technology officer, characterized Roose’s conversation with Sydney as a valuable “part of the learning process.”

This is “exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Scott told Roose. “These are things that would be impossible to discover in the lab.”

Scott couldn’t explain Sydney’s troubling ideas, but he warned Roose that “the further you try to tease [an AI chatbot] down a hallucinatory path, the further and further it gets away from grounded reality.”

In another troubling development concerning an AI chatbot — this one an “empathetic”-sounding “companion” called Replika — users were devastated by a sense of rejection after Replika was reportedly modified to stop sexting.

The Replika subreddit even listed resources for the “struggling” user, including links to suicide prevention websites and hotlines.
