Generative AI can now animate a scene in a matter of seconds. Sora, a text-to-video model OpenAI introduced on Thursday, can produce videos up to one minute long from a prompt typed into a text field. Although it is not yet available to the general public, the announcement set off a wave of online reactions.
Keen observers quickly floated ideas for what the technology could make possible, while others just as quickly voiced concern that its availability could accelerate the spread of digital disinformation and undermine human employment.
OpenAI's 'Sora' AI Model Crafts Videos from Text, Stirs Ethical Debates
OpenAI CEO Sam Altman generated several videos, including one of cyclists riding across water, a cooking video, and a pair of dogs podcasting on a mountain, in response to prompt ideas submitted by users on X.
"We are not making this model broadly available in our products soon," an OpenAI spokesperson added in an email that the organization is currently disclosing its research progress to obtain preliminary feedback from the AI community.
With its popular chatbot ChatGPT and its text-to-image generator DALL-E, the company is among the technology firms at the forefront of the generative AI boom that took off in 2022. In a blog post, OpenAI said Sora can accurately generate multiple characters and various types of motion.
OpenAI stated in a post, "We're teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction."
The company also acknowledged in the post that Sora can fail to capture the physics or spatial details of more complex scenes, producing illogical results (such as a person running the wrong way on a treadmill) or causing subjects to morph unnaturally or even vanish into thin air.
Still, many of OpenAI's demonstrations were strikingly convincing, the kind of footage an average internet user might struggle to tell apart from authentic video. Examples included drone shots of waves crashing against a rugged Big Sur coastline at sunset and a clip of a woman walking down a crowded, rain-soaked Tokyo street.
FTC Moves to Criminalize Deepfake Creation Amid Rising Ethical Concerns
The spread of deepfaked media depicting public figures, politicians, and celebrities online has raised serious concerns about the safety and ethical implications of a world in which anyone can produce convincing video of any subject they choose. Those concerns are especially acute amid tense global conflicts and a presidential election year, when opportunities for disinformation abound.
Also on Thursday, the Federal Trade Commission proposed rules that would prohibit AI-generated impersonations of individuals, expanding safeguards already in place against impersonation of government and businesses.
"The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals," According to an FTC news release. "Emerging technology — including AI-generated deepfakes — threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud."
OpenAI stated that it is developing tools to identify when Sora generates a video and that, should the model become publicly available in the future, it intends to embed metadata into such content that identifies the video's origin.
Additionally, the company said it is working with domain experts to assess Sora's potential for harm in areas such as bias, misinformation, and abusive content.
An OpenAI spokesperson told NBC News that the company will later release a system card detailing its safety evaluations as well as the model's risks and limitations.
"Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it," According to OpenAI's blog post. "That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."
Photo: Levart_Photographer/Unsplash

