OpenAI recently unveiled its text-to-video model ‘Sora’. According to the company, Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt. The moment Sora was unveiled, it caught everyone’s attention with its capabilities.
The videos generated by Sora look so real that they have left people both amazed and worried. They seem almost like videos made by humans, with characters that look alive and move naturally.
Whether they depict natural scenes or fictional characters, Sora’s videos are richly detailed and strikingly lifelike. That realism has sparked widespread debate about whether such videos should be made at all, and what the consequences might be if they are.
While it’s natural to feel scared about what AI can do, it’s important to remember that this is just the beginning. AI tools like Sora are only going to get better and more advanced over time.
What we’re seeing now is just a small part of what’s possible. So, while it’s good to be cautious, it’s also important to think about how we can use this technology in positive ways and how we can make sure it’s used responsibly.
Campaign India talked to AI ad experts about the creative possibilities presented by Sora and the challenges it could pose from a copyright and quality-of-work perspective.
“All of the existing text-to-video and image-to-video models struggle with common things like framerate and people walking realistically. Sora looks like it’s solved all of that, but also added a level of realism that we’ve never seen before,” Jason Zada, founder of Secret Level, said.
“We’ve been preaching ethically-conscious AI, which includes using actors for performance capture with voice and image. Once we get to a point where we can prompt two generative AI actors to act in a scene, we will move into a dangerous area that threatens the entire filmmaking process,” Zada added.
“There are also a lot of questions — an entire world of questions regarding misinformation warfare. OpenAI can monitor for this, but if the technology is around, how long before it is replicated? How long before we have AI-generated videos that relay hatred and surgically sway elections?” Henry Daubrez, CEO and chief creative officer at Dogstudio/Dept, said.
Talking about copyright, Henry Cowling, chief innovation officer at MediaMonks, said: “When it comes to copyright, we have copyright-protected models to address these challenges. It’s true that how brands choose to engage with technology is an important expression of their values. But the genie is out of the bottle. There’s no going back to a world before LLMs because of copyright alone.”