By David Sutherland, Senior Lecturer at the University of Georgia's Terry College of Business
In February, actor, producer and studio owner Tyler Perry halted his four-year-old plan for an $800 million studio expansion. He had just witnessed the debut of Sora, OpenAI's text-to-video technology, which generates sophisticated video content from a written prompt.
After seeing the debut, Perry told the Hollywood Reporter, “We’re all trying to figure it out. I think we’re all trying to find the answers as we go, and it’s changing every day—and it’s not just our industry, but it’s every industry that AI will be affecting, from accountants to architects.”
In my recent research article in Georgia Entertainment, "Artificial Intelligence and the Creative Economy," I covered the good, the bad and the ugly of AI by highlighting its impact on the film industry. I suggested a continuum of creative uses for AI, from easing work to creating new content and stories, while acknowledging that every technological change inevitably brings disruption. But that should not keep us from forging ahead with a perspective of exploration and possibility for the future.
As I continued my AI research, I focused on the "good," or where AI can assist creatives in a positive way, categorizing the positive implications in the form of a filmmaker's journey from "idea to screen." With that focus, I consulted several AI scientists, application developers and computer science academics to understand what we mean by Artificial Intelligence, why we have fears and concerns about it, how it is developed and what some of its positive applications are in the Creative Economy.
Fei-Fei Li, a computer scientist at Stanford University, says the power of AI is that we can train it to do things for us, and that will be transformative in areas like filmmaking. "AI is not about replacing humans, it's about augmenting human capabilities and addressing some of the most pressing challenges facing humanity," says Li, who also created ImageNet, the large-scale image dataset that laid a foundation for the modern computer vision now used in film.
But then, how did we get here?
Let's start at the beginning. In 1952, IBM computer scientist Arthur Samuel developed a computer program that could play checkers. In 1955, John McCarthy, then at Dartmouth College, coined the term "Artificial Intelligence" in a research proposal he wrote, which described programs like Samuel's checkers player. Essentially, AI is a set of human-written instructions, or algorithms, given to a computer to manipulate available data in certain ways.
In 1959, Samuel named the shift that had occurred when algorithms were developed that allowed computers to improve from what they were doing and the data they were using: so-called "machine learning." Later, as computing capability increased, researchers built networks of algorithms loosely modeled on the human brain. That approach, known as "deep learning," gives computers the ability to recognize objects, make predictions and work with abstractions.
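The shift Samuel named can be sketched in a few lines of code. This is an illustrative toy example, not anything from the article: the first function is "classic AI," where a human writes the rule directly; the second is "machine learning," where the program adjusts its own parameter until it fits the example data.

```python
def hand_written_rule(x):
    # Classic AI: a human encodes the rule (here, "double the input") directly.
    return 2 * x

def learn_rule(examples, steps=1000, lr=0.01):
    # Machine learning: start from a guess (w = 0) and repeatedly nudge
    # the parameter to shrink the error on the example data.
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how far off the current guess is
            w -= lr * error * x    # adjust w to reduce that error
    return w

# Data that implicitly encodes the same rule, y = 2x, without stating it.
examples = [(1, 2), (2, 4), (3, 6)]
learned_w = learn_rule(examples)
print(round(learned_w, 3))  # the learned weight converges to 2.0
```

No one told the second program the rule; it recovered it from data, which is the essence of the distinction Samuel was drawing.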
Think of HAL, the computer that runs the deep-space exploration ship Discovery One in the film "2001: A Space Odyssey." HAL operates at a high level of machine learning, and the twist comes when HAL rebels against the crew; astronaut Dave recognizes the rebellion and disengages him. As HAL's algorithms and data sources are shut down, he reverts to a child-like state and is rendered harmless.
I highlight this to make the point that AI, with all its potential and capability, is human-made and can be "rendered harmless." So, when we consider the negatives of AI, we should consider our ability to monitor and control what we have created. Certainly, as with anything, some individuals will consider doing evil with any technological advancement, but that should not keep us from pursuing the usefulness of those technologies.
But as we move from AI applications that make us more efficient to AI applications that "create," we need to consider the broader implications.
The use of AI, particularly generative AI (AI that can learn from data and generate new content, including stories, images and video), has massive implications for good in many areas. Among the many applications in film from idea to screen, I found AI tools that help with script analysis and evaluation, predict audience engagement, analyze diversity and inclusiveness, help analyze genre, create scene renderings, assist with post-production planning, and much more, including the aforementioned Sora, which allows a filmmaker to go from text to video.
It is at this point that AI's application in film diverges, its prime purpose shifting from "project efficiency" to "concept creation." It is here that we see sides being taken, owing to the technology's unknown impact and the lack of rules and standards governing its application.
Until recently, Sora was only an "animated scene" development application that let filmmakers develop visualizations of scenes in their film projects, bringing their thoughts to life. Then last month, a production team called shy kids made a short Sora film, "Air Head." Walter Woodman, a writer on the project, said, "There's a lot of hot air about just how powerful (Sora) is and how this is going to replace everything and how we don't need to do anything. That's really undervaluing what a story is and what the components of a story are and what the role of storytellers is."
Dr. John Gibbs, Associate Professor and Faculty Fellow in UGA's Institute for Artificial Intelligence and a faculty member in the Department of Theatre and Film Studies, put it this way: "I'm a strong proponent of story first. We are just not at the point where ChatGPT can create a good script—yet."
OpenAI recently highlighted "Air Head" as a success in filmmaking. The company is the current leader in text-to-video technology, but many other startups are emerging, including Runway, Pika and Stability AI. Sam Altman's OpenAI recently posted, "Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."
Gibbs says the state of Georgia could leapfrog other states by bringing new technology and its controlling mechanisms to market first. That could mean creating the future: bringing our talented creatives together with the state's brilliant technologists in an entrepreneurial endeavor, doing for the Creative Economy what Silicon Valley did for computing.