Throughout 2024, OpenAI teased the public release of Sora, its new video generation model, capable of creating lifelike visuals from user prompts.
But due to concerns about the tool being used to create realistic disinformation during a critical U.S. election year, the company delayed its release until after the elections.
Now, a year later, critics warn that their fears about Sora’s reality distortion powers have come to pass, with the tool flooding the internet with false, fabricated or manipulated AI content, often carrying minimal or no labeling to indicate that the media is synthetic.
“The rushed release of Sora 2 exemplifies a consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails,” wrote J.B. Branch, who leads AI accountability work at nonprofit Public Citizen, in a Nov. 11 letter addressed to OpenAI CEO Sam Altman.
Branch added that releasing Sora 2 shows “reckless disregard” for product safety, the rights of public figures whose names or images could be deepfaked, and consumer protections against other abuses.
Public Citizen is pressing OpenAI to temporarily take the tool offline and work with outside experts to build better guardrails.
“We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines” around Sora, the group wrote.
AI image and video generators have been able to create deepfakes for years, but the technology was often plagued by a string of identifiable visual cues, such as people having more than five fingers or videos that appear overly polished or defy the laws of physics.
Over the past year, new tools like Sora have overcome many of those technical obstacles and can now deliver lifelike videos. Often the only indicator that a video may be fake is a small OpenAI watermark in the lower right corner. Cybersecurity experts say it is trivial in many cases for bad actors to remove or crop out that labeling before sharing videos on social media as if they were real.
Compounding matters, while OpenAI and other AI image and video generators have historically made efforts to prevent their tools from impersonating politicians, celebrities or copyrighted characters, Sora 2 initially launched with none of those guardrails in place. The first weeks after the release were filled with users sharing videos of Altman grilling Pikachu, a popular character from the Pokémon anime, and other fictional figures protected by copyright law.
Bala Kumar, chief product and technology officer at Jumio, said Sora 2 “lowers the barrier to deepfakes for everyone in the general public.”
“But what makes it accessible to everyday people makes it vulnerable to bad actors for misuse,” Kumar added. “While there’s a small watermark on these videos, fraudsters can easily remove it.”
In October, following objections from actor Bryan Cranston and the Screen Actors Guild-American Federation of Television Artists (SAG-AFTRA), OpenAI changed its policy to prevent Sora from generating videos of living celebrities or copyrighted characters.
However, that still allows people to create realistic and disruptive deepfakes without breaking OpenAI’s rules. For instance, the prohibition on public figures only extends to living people, meaning users can still generate videos of dead public figures.
This has led to videos that seem like harmless fun, such as rappers Tupac Shakur and The Notorious B.I.G. participating in a pro-wrestling-style feud, or singer Michael Jackson dancing at fast food restaurants and stealing chicken from customers.
But as the Washington Post has reported, Sora 2 has also been used to create racist videos of deceased public figures, like Martin Luther King Jr. stuttering and drooling, or John F. Kennedy joking about the assassination of right-wing personality Charlie Kirk. OpenAI called the videos of King Jr. “disrespectful” and pulled them offline after his relatives complained.
Beyond historical figures, Sora and other tools can easily be used to generate fake videos that tap into political issues of the moment for virality. One recent example was a series of videos depicting Americans angrily reacting to food prices at grocery stores, in their cars and in other locations.
The videos came while Congress and the White House were in a standoff over government funding, including the money needed for the Supplemental Nutrition Assistance Program (SNAP). The videos showed AI-generated people saying things like “I ain’t paying for none of this s–t” and “It is the taxpayer’s responsibility to take care of my kids!”
It’s not clear what model was used to generate the videos, though some briefly flash a recognizable Sora watermark. Media outlets like Fox News initially published stories that treated the clips as genuine, with headlines like “SNAP beneficiaries threaten to ransack stores over government shutdown.” Fox News later updated its story and headline to note that the videos were AI-generated, and the story appears to have since been removed from the outlet’s website.
Outside of politics, these tools have plenty of potential to upend the lives of ordinary Americans who don’t hold power or appear on television. The most popular use of deepfakes by far in the generative AI era has been for nonconsensual pornography targeting women.
Although Public Citizen’s letter doesn’t accuse people of using Sora 2 to generate pornography, it criticizes OpenAI for allowing “non-nude fetish content” to proliferate on Sora’s social media platform.
“There is a dangerous lack of moderation pertaining to underage individuals depicted in sexual contexts, making Sora 2 unsuitable for public use,” Public Citizen wrote.
OpenAI did not respond to CyberScoop’s requests for comment on the Public Citizen letter by press time.