Generative AI is now capable of animating a scene in mere seconds. Sora, a new text-to-video model introduced by OpenAI on Thursday, can produce videos up to one minute long from a text prompt. Although it is not yet available to the general public, the announcement set off a flurry of online reactions.
Some observers quickly began imagining what the technology could make possible, while others just as quickly voiced concern that its accessibility could accelerate the spread of digital disinformation and put human jobs at risk.
OpenAI's 'Sora' AI Model Crafts Videos from Text, Stirs Ethical Debates
In response to requests for prompt ideas on X, OpenAI CEO Sam Altman generated several videos, including aquatic cyclists, a cooking video, and a pair of dogs podcasting on a mountain.
"We are not making this model broadly available in our products soon," an OpenAI spokesperson added in an email that the organization is currently disclosing its research progress to obtain preliminary feedback from the AI community.
With its popular chatbot ChatGPT and text-to-image generator DALL-E, the company is among the technology firms at the forefront of the generative AI boom that began in 2022. In a blog post, it said Sora can accurately generate multiple characters and various forms of motion.
OpenAI stated in a post, "We're teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction."
The company acknowledged in the blog post that Sora can struggle with the physics or spatial details of a more complex scene, which may lead it to generate illogical results (such as a person running the wrong direction on a treadmill), morph subjects unnaturally, or make them vanish into thin air.
Still, many of OpenAI's demonstrations were strikingly convincing, which could make it hard for the average internet user to tell AI-generated videos from authentic footage. Examples included drone footage of waves crashing against a rugged Big Sur coastline at sunset and a clip of a woman strolling down a crowded, rain-soaked Tokyo street.
FTC Moves to Criminalize Deepfake Creation Amid Rising Ethical Concerns
The proliferation of deepfake media featuring public figures, politicians, and celebrities online raises serious questions about the safety and ethics of a world in which anyone can produce a high-quality video of any subject they choose. Those concerns are especially acute amid tense global conflicts and presidential election years, when opportunities for disinformation abound.
On Thursday, the Federal Trade Commission proposed rules to outlaw AI-generated impersonations of individuals, expanding safeguards already in place against the impersonation of governments and businesses.
"The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals," According to an FTC news release. "Emerging technology — including AI-generated deepfakes — threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud."
OpenAI said it is developing tools to detect when a video was generated by Sora and, should the model become publicly available, plans to embed metadata in such content identifying the video's origin.
The company also said it is working with experts to assess Sora's potential for harm in areas such as bias, misinformation, and abusive content.
An OpenAI spokesperson told NBC News that the company will later release a system card detailing its safety evaluations along with the model's risks and limitations.
"Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it," According to OpenAI's blog post. "That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."
Photo: Levart_Photographer/Unsplash

