Generative AI can now animate a scene in mere seconds. Sora, a new text-to-video model OpenAI introduced on Thursday, can produce videos up to one minute long from a user-entered text prompt. Although it is not yet available to the general public, the company's announcement sparked a flurry of online reactions.
Astute observers quickly brainstormed ways the cutting-edge technology could be used, while others just as quickly voiced concern that its accessibility could fuel the spread of digital disinformation and threaten human jobs.
OpenAI's 'Sora' AI Model Crafts Videos from Text, Stirs Ethical Debates
OpenAI CEO Sam Altman solicited prompt ideas on X and generated several videos in response, including aquatic cyclists, a cooking video, and a pair of dogs podcasting on a mountain.
"We are not making this model broadly available in our products soon," an OpenAI spokesperson said in an email, adding that the company is currently sharing its research progress to gather early feedback from the AI community.
With its popular chatbot ChatGPT and text-to-image generator DALL-E, the company is among several technology firms at the forefront of the generative AI boom that began in 2022. In a blog post, OpenAI said Sora can accurately generate various forms of motion and multiple characters.
OpenAI stated in a post, "We're teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction."
The company also acknowledged in the blog post that Sora can struggle to capture the physics or spatial details of a more complex scene, which may cause it to generate illogical results (such as a person running the wrong direction on a treadmill), unnaturally morph subjects, or even make them vanish into thin air.
Still, many of OpenAI's demonstrations featured strikingly convincing visuals that could make it difficult for average internet users to distinguish AI-generated videos from authentic footage. Examples included drone footage of waves crashing against a rugged Big Sur coastline at sunset, and a clip of a woman walking down a crowded, rain-soaked Tokyo street.
FTC Moves to Criminalize Deepfake Creation Amid Rising Ethical Concerns
The proliferation of deepfaked media featuring public figures, politicians, and celebrities on the internet raises significant concerns about the safety and ethical ramifications of a world in which anyone can produce high-quality videos depicting any subject they desire. These concerns are particularly acute amid tense global conflicts and presidential election years, when opportunities for disinformation abound.
On Thursday, the Federal Trade Commission proposed rules to criminalize AI-generated impersonations of individuals by extending safeguards already in place for government and business impersonation.
"The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals," the FTC said in a news release. "Emerging technology — including AI-generated deepfakes — threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud."
OpenAI stated that it is developing tools to identify when Sora generates a video and that, should the model become publicly available in the future, it intends to embed metadata into such content that identifies the video's origin.
Additionally, the organization stated that it is working with specialists to assess Sora's potential to incite damage through bias, misinformation, and abusive material.
An OpenAI spokesperson told NBC News that the company will later release a system card detailing its safety evaluations as well as the model's risks and limitations.
"Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it," OpenAI's blog post said. "That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."
Photo: Levart_Photographer/Unsplash

