
On Friday, OpenAI unveiled Sora, its text-to-video AI model, and it's seriously next level! Sora works by letting you (the user) describe in text what you want to create; it then generates a video that brings your text prompt to life.

The ability to create AI-generated video has been around for some time, but once you see Sora's outputs, you'll understand why this is a big deal.

The videos showcased on OpenAI's website are ridiculously good. What stood out to me were the textures, reflections, and movements, which are remarkably lifelike.

If you were to see a brief five-second b-roll clip in passing, you'd be hard-pressed to tell it was generated by AI. Currently, Sora is in closed beta and can only create videos up to one minute in duration. That being said, this is the worst this technology will ever be; it's only going to get more refined as it develops.

Now, let's look at real-world applications. I see two primary ways we, as business owners, can benefit from Sora. Firstly, you can create bespoke b-roll clips for your videos without the need to film on location or spend additional time capturing footage. Simply describe what you need and put it into action.

Secondly, it offers the ability to rapidly prototype videos and test them in the market before committing resources and energy to production. There's also a third possibility, where the entire video is generated by AI tools like Sora or Synthesia.

However, I think this will be very obvious to viewers, similar to how we can tell ChatGPT-generated copywriting from that written by humans.

But there's another side to consider. Taking a step back, it becomes obvious that OpenAI now has a complete suite of tools capable of generating content from script to screen. Check it out:

  • ChatGPT: Generating scripts, social media posts, and thumbnail copy.
  • Sora: Generating parts of videos, or entire videos.
  • DALL-E: Creating thumbnails or in-video graphics.

Crazy right?

My Thoughts: Text-to-video is going to change the video landscape; there's no doubt about that. The opportunities to leverage tools like Sora are massive.

Sora offers the capability to generate b-roll or supplementary footage without the need for on-site filming. As this technology evolves, much like GPT-4 did, perhaps we'll be able to upload images or videos of our products and have Sora generate videos from them, enabling the creation of unique product showcases.

There might even be a future where, similar to Synthesia, we can create digital clones of ourselves and generate entire videos (a-roll of ourselves talking and b-roll showcasing our products) all within the Sora model. It's difficult to know exactly what the future holds at this point, but I'm incredibly excited to see it unfold and to experiment with it.

The video below is an example from OpenAI's website, created using the prompt: "Step-printing scene of a person running, cinematic film shot in 35mm."

Could you tell it was AI-generated?
