
Sora

  • February 15, 2024
  • 5 minute read

Creating video from text

Sora is an AI model that can create realistic and imaginative scenes from text instructions.

All videos on this page were generated directly by Sora without modification.


We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.

Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Prompt: Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.

Today, Sora is becoming available to red teamers to assess critical areas for harms or risks. We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals.

We’re sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon.

Prompt: Historical footage of California during the gold rush.

Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.

Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it’s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds.

The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.

Prompt: Tour of an art gallery with many beautiful works of art in different styles.

The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.

The model may also confuse spatial details of a prompt, for example, mixing up left and right, and may struggle with precise descriptions of events that take place over time, like following a specific camera trajectory.

Prompt: Step-printing scene of a person running, cinematic film shot in 35mm.

Weakness: Sora sometimes creates physically implausible motion.

Safety

We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model.

We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora. We plan to include C2PA metadata in the future if we deploy the model in an OpenAI product.

In addition to developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which are applicable to Sora as well.

For example, once in an OpenAI product, our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others. We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.

We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.

Prompt: The camera directly faces colorful buildings in burano italy. An adorable dalmation looks through a window on a building on the ground floor. Many people are walking and cycling along the canal streets in front of the buildings.

Research techniques

Sora is a diffusion model, which generates a video by starting off with one that looks like static noise and gradually transforms it by removing the noise over many steps.
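To make the denoising idea concrete, here is a minimal, purely illustrative sketch of a diffusion-style sampling loop in Python. The `denoiser` model, the simplified update rule, and all shapes and step counts are assumptions for the example, not Sora’s actual implementation.

```python
# Illustrative sketch only -- a generic diffusion-style denoising loop,
# NOT Sora's actual sampler. `denoiser` is a hypothetical noise-prediction model.
import torch

def generate_video(denoiser, prompt_embedding, shape=(16, 3, 64, 64), num_steps=50):
    """Start from pure Gaussian noise and iteratively remove noise,
    conditioning each step on the text prompt embedding."""
    x = torch.randn(shape)  # (frames, channels, height, width) of static-like noise
    for step in reversed(range(num_steps)):
        t = torch.full((1,), step / num_steps)         # normalized timestep
        predicted_noise = denoiser(x, t, prompt_embedding)
        x = x - predicted_noise / num_steps            # crude Euler-style update
    return x  # the denoised tensor is then decoded into video frames
```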

Sora is capable of generating entire videos all at once or extending generated videos to make them longer. By giving the model foresight of many frames at a time, we’ve solved a challenging problem of making sure a subject stays the same even when it goes out of view temporarily.

Similar to GPT models, Sora uses a transformer architecture, unlocking superior scaling performance.

We represent videos and images as collections of smaller units of data called patches, each of which is akin to a token in GPT. By unifying how we represent data, we can train diffusion transformers on a wider range of visual data than was possible before, spanning different durations, resolutions and aspect ratios.
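As a rough illustration of what “patches” means here, the sketch below splits a video tensor into non-overlapping spacetime blocks and flattens each into a vector, analogous to a token. The patch sizes and tensor shapes are made-up values for the example, not Sora’s.

```python
# Illustrative sketch only: turning a video into flattened "spacetime patches",
# each one playing a role analogous to a token in a GPT-style transformer.
import torch

def patchify(video, patch_t=4, patch_h=16, patch_w=16):
    """video: tensor of shape (frames, channels, height, width).
    Returns a (num_patches, patch_dim) matrix, one row per spacetime patch."""
    f, c, h, w = video.shape
    # Carve the clip into non-overlapping blocks of patch_t frames x patch_h x patch_w pixels
    video = video.reshape(f // patch_t, patch_t, c,
                          h // patch_h, patch_h,
                          w // patch_w, patch_w)
    video = video.permute(0, 3, 5, 1, 2, 4, 6)  # put the block indices first
    return video.reshape(-1, patch_t * c * patch_h * patch_w)

# Example: a 16-frame, 3-channel, 64x64 clip yields (16/4)*(64/16)*(64/16) = 64 patches
tokens = patchify(torch.randn(16, 3, 64, 64))
print(tokens.shape)  # torch.Size([64, 3072])
```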

Sora builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully.

In addition to being able to generate a video solely from text instructions, the model is able to take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail. The model can also take an existing video and extend it or fill in missing frames. Learn more in our technical report.

Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI.


Research Leads
Bill Peebles & Tim Brooks

Systems Lead
Connor Holmes

Contributors

Clarence Wing Yin Ng
David Schnurr
Eric Luhman
Joe Taylor
Li Jing
Natalie Summers
Ricky Wang
Rohan Sahai
Ryan O’Rourke
Troy Luhman
Will DePue
Yufei Guo

Special Thanks
Bob McGrew, Brad Lightcap, Chad Nelson, David Medina, Gabriel Goh, Greg Brockman, Ian Sohl, Jamie Kiros, James Betker, Jason Kwon, Hannah Wong, Mark Chen, Michelle Fradin, Mira Murati, Nick Turley, Prafulla Dhariwal, Rowan Zellers, Sarah Yoo, Sandhini Agarwal, Sam Altman, Srinivas Narayanan & Wesam Manassra

Communications

Elie Georges
Justin Wang
Kendra Rimbach
Niko Felix
Thomas Degry
Veit Moeller

Legal

Che Chang
Fred von Lohmann
Gideon Myles
Tom Stasi

External Engagement
Alex Baker-Whitcomb, Allie Teague, Anna Makanju, Anna McKean, Becky Waite, Brittany Smith, Chan Park, Chris Lehane, David Duxin, David Robinson, James Hairston, Jonathan Lachman, Justin Oswald, Krithika Muthukumar, Lane Dilg, Leher Pathak, Ola Nowicka, Ryan Biddy, Sandro Gianella, Stephen Petersilge, Tom Rubin & Varun Shetty

Executive Producer
Aditya Ramesh

Built by OpenAI in San Francisco, California
Published February 15, MMXXIV

Dean Marc

Part of the more nomadic tribe of humanity, Dean believes a boat anchored ashore, while safe, is a tragedy, as this denies the boat its purpose. Dean normally works as a strategist, advisor, operator, mentor, coder, and janitor for several technology companies, open-source communities, and startups. Otherwise, he's on a hunt for some good bean or leaf to enjoy a good read on some newly (re)discovered city or walking roads less taken with his little one.

Related Topics
  • AI
  • Artificial Intelligence
  • Generative AI
  • Sora
  • Video