Dogs saving babies, grandmas feeding bears, body cam footage of people being arrested: ever since OpenAI’s app Sora was released in September, I’ve questioned whether every cute or wild viral video I see on social media is real. And you should, too.
Sora generates videos with artificial intelligence from short text prompts, and it’s making it easier than ever for people to fake reality or invent their own entirely.
Although Sora is still invite-only, it is already at the top of app download charts, and you don’t need the app to feel its impact. One cursory scroll through TikTok or Instagram and you’ll see people in the comments confused about whether something is real, even when the videos have a Sora watermark.
“I’m at the point that I don’t even know what’s AI,” reads one top TikTok comment to a video of a grandma feeding meat to a bear.

We already have a widespread problem with distrusting the information we find online. A recent Pew Research Center survey found that about one-third of people who used chatbots for news said it was “difficult to determine what is true and what is not.” A free app that can quickly whip up videos designed to go viral may make this basic AI literacy problem worse.
“One thing Sora is doing for better or worse is shifting the Overton window –– accelerating the public’s understanding that seeing is no longer believing when it comes to video,” said Solomon Messing, an associate professor at New York University in the Center for Social Media and Politics.
Jeremy Carrasco, who has worked as a technical producer and director, has become a go-to expert for spotting AI videos on social media, fielding questions from people about whether that subway meet-cute video or that viral video of a pastor preaching about economic inequality is real.
And lately, Carrasco said, most of the questions he gets are about videos created with Sora 2 technology.
“Six months ago, you wouldn’t see a single AI video on your [social media] feed,” he said. “Now you might see 10 an hour, or one every minute, depending on how much you’re scrolling.”
He thinks this is because, unlike Google’s Veo 3, another tool that creates AI videos, OpenAI’s latest video generation model doesn’t require payment to access its full capabilities. As a result, people can quickly flood social media with viral AI-generated stunt videos.
“Now that barrier of entry is just having an invite code, and then you don’t even need to pay for generating” videos, he said, adding that it’s easy for people to crop out Sora watermarks too.
The Lasting Harm AI Videos Can Cause — And How To Spot The Fakes
There are still telltale AI signs. Carrasco said one giveaway about a Sora video is the “blurry” and “staticky” textures on hair and clothes that a real camera doesn’t create.
Spotting fakes also means thinking about who created the video. In the case of the AI pastor video, where a preacher shouts from a pulpit that “billionaires are the only minority we should be scared of,” the setting is supposedly a “conservative church, and they got a very liberal pastor who sounds like Alex Jones. Like, wait, that doesn’t quite check out,” Carrasco said. “And then I would just go and click on the profile and be like, ‘Oh, all these videos are AI videos.’”
In general, people should ask themselves: “Who posted this? Why did they post this? Why is it engaging?” Carrasco said. “Most of the AI videos today are not created by people who are trying to trick you. They’re just trying to create a viral video so they get attention and can hopefully sell you something.”
But the confusion is real. Carrasco said there are typically two kinds of people he helps: those who are confused about whether a viral video is AI and those who are paranoid that real videos are AI. “It’s a very quick erosion of truth for people,” Carrasco said. For people’s vertical video feeds “to become completely artificial is just very startling.”
Hany Farid, a professor of computer science at the University of California, Berkeley, said that using AI to fake someone’s likeness, known as a deepfake, is not a new problem, but Sora videos “100%” contribute to the problem of the “liar’s dividend,” a term coined by law professors in a 2018 paper explaining how deepfakes cause harm to democracy.
This is because if you “create very convincing images and video that are fake, of course, then when something is real is brought to you –– a police body cam, a video of a human rights violation, a president saying something illegal –– well, then you can just deny reality by saying ‘deepfake,’” Farid explained.
He noted that what’s different about Sora is how it feeds AI videos into a TikTok-like social media app, which can drive people to spend as much time as possible scrolling an AI-generated feed in ways that are not healthy or thoughtful.
“What worries me about the AI slop is that it’s even easier to manipulate people, because ... the social media companies have been manipulating people to promote things that they know will drive engagement,” Farid said.
The Most Unsettling Part Of Sora Is How Easily You Can Deepfake Yourself And Others
OpenAI is already dealing with backlash over Sora videos using the likeness of both dead and living famous people. The company said it recently blocked people from depicting Martin Luther King Jr. in videos after “disrespectful depictions” were made.
But perhaps more unsettling is how realistically less famous people can create “cameos,” as OpenAI has rebranded the concept of deepfakes, and make videos in which your likeness says and does things you never did in real life.
On its policy page, OpenAI states that users “may not edit images or videos that depict any real individual without their explicit consent.” But once you opt into having your face and voice scanned into the app and agree that others can use your cameo, you’ll see what people can dream up to do with your body.
Some of the videos are amusing or goofy. That’s how you end up with videos of Jake Paul caking his face with makeup and Shaquille O’Neal dancing as a ballerina.

But some of these videos can be alarming and offensive to the people being depicted.
Take what recently happened to YouTuber Darren Jason Watkins Jr., better known by his handle “IShowSpeed,” who has over 45 million subscribers on YouTube. In a livestreamed video, Watkins had seemingly opted into Sora’s public setting, which lets anyone make “cameos” using his likeness. People then made videos of him kissing fans, visiting countries he had never been to and saying he was gay.
“Why does this look too real? Bro, no, that’s like, my face,” Watkins said as he watched cameos of himself. He then appeared to change his cameo setting to “only me,” so that only he could make videos with his likeness going forward.
Eva Galperin, director of cybersecurity at the nonprofit Electronic Frontier Foundation, said what happened to Watkins “is a fairly mild version of the kind of outcomes that we have seen and that we can expect.”
She said OpenAI’s tools for limiting who can use your cameo do not account for the fact “that trust changes over time” between mutual followers or people you approve to make a cameo of you.
“You could have a bunch of harassing videos made by an abusive ex or an angry former friend,” she said. “You will not be able to stop them until after you have been alerted to the video, and then you can take away their access, but then the video is already out there.”
When HuffPost asked OpenAI how it is preventing nonconsensual deepfakes, the company directed HuffPost to Sora’s internal system card, which bans generating content that could be used for “deceit, fraud, scams, spam, or impersonation.”
“Guardrails seek to block unsafe content before it’s made—including sexual material, terrorist propaganda, and self-harm promotion—by checking both prompts and outputs across multiple video frames and audio transcripts,” the company said in a statement.
Why You Should Think Twice About What You Think Could Be A Funny Sora Video
In Sora, you can type guidelines for how you want your cameo to be portrayed in other people’s videos and include what your likeness should not say or do. But what should be off-limits is subjective.
“What counts as violent content, what counts as sexual content, really depends on who is in the video, and who the video is for,” Galperin said.
A video of OpenAI CEO Sam Altman getting arrested, for example, was one of the most popular videos on Sora, according to Sora researcher Gabriel Petersson.
But this kind of video could have severe consequences for women and people of color, who already disproportionately face online abuse.
“If you are a Sam Altman, and you are extremely well-known and rich and white and a man, then a surveillance video of you shoplifting at Target is funny,” Galperin said. “But there are many populations of people for whom that is not a joke.”
Galperin recommended against uploading your face and voice into the app at all, because it opens you up to the possibility of being harassed. She said AI videos of you could be especially harmful if you’re not famous and people would not expect an AI video to be made of you.
This real reputational risk is the big difference between the harm a fake AI animal video may cause and the harm of videos that involve real, living people you know.
Messing said Sora is “pretty amazing” and a compelling tool for creators. He used it to create a video of a cat riding a bicycle that went viral, but he draws the line at creating anything that would involve his own or his friends’ faces.
“The ability to generate realistic video of your friends doing anything that doesn’t trigger a guardrail makes me super uncomfortable,” Messing said. “I couldn’t bring myself to let the app scan my face, voice. ... The creep factor is definitely there.”
Carrasco said he would never make a Sora video using his own likeness because he doesn’t want his followers to question, “Is this the AI version of you?” And he suggests others consider the same risks.
“You do not want to normalize you being deepfaked,” he said.