A few months ago, I saw a tweet, now lost in the abyss of Twitter (X), envisioning a world in which you and your friend could go to a theater, each put on a headset, and individually input an idea for a movie you’d want to see. Within minutes, the film would be completely generated by A.I., ready for you to watch. You and your friend would leave the theater, each having watched a separate film perfectly catered to your own tastes and desires. This idea garnered plenty of attention: support and praise from fellow ‘tech bros’ and ‘A.I. artists,’ but also backlash and critique from a variety of people.
Of course, this custom-movie headset idea isn’t going to happen any time in the foreseeable future. However, the idea intrigued me: I wanted to try to produce a piece of video content using A.I. while intervening as little as possible.
First, I explored InVideo AI. I attempted to create a short video answering the following questions: How will A.I. manifest creatively in the future? How will humans react to this? What will the relationship between human creativity and A.I. be like? I selected their ‘TikTok Video’ option, as videos on TikTok vary widely in both form and content, and entered the above questions as the prompt. I did not select any options to specify the narrator’s voice or gender, but I did select the option to use fewer stock videos in the hope of yielding more A.I.-generated imagery. Lastly, I was asked to select the audience for the video. The options the program gave me were tech enthusiasts, artists, and futurists. Given that the tech enthusiast and artist options would likely yield more biased results (one heavily favoring technology over artistry, the other artistry over technology), I chose futurists. The result was a monotonous script about the power of A.I. to inspire humans and influence their works as humans influence A.I., paired with bland, stereotypical visuals of faceless, robotic heads and the word “A.I.” in various fonts.
Given that this was so far from my intention, I went another route to create a new video for my final product. I followed a YouTube tutorial by AI Andy on making an “AI MOVIE” using free A.I. resources. The video technically showed how to create a trailer, which aligned with the short nature of this project. Following the tutorial, I asked ChatGPT to write a script for the trailer. I also asked the program to determine the genders and names of the two characters in the script in order to narrow down the voices and images I would end up using. Next, I used Eleven Labs to create the voiceovers for the narrator, protagonist, and antagonist, drawing on ChatGPT’s responses about gender and the adjectives describing the characters and their voices to select voice options. I adjusted factors like ‘stability’ and ‘style exaggeration’ to yield somewhat realistic-sounding, coherent voiceovers. Afterwards, I entered the images listed in ChatGPT’s script into DreamStudio. I had to add some words related to the script but not in it (“artificial intelligence” instead of A.I., technology, etc.) to create workable prompts for the program. Additionally, in the style option, I selected “cinematic,” as when I had not selected anything, the images came out in vastly different styles. I used Pika Labs to animate the images, prompting it with either nothing, the language from ChatGPT, or ‘subtle expression,’ which the tutorial recommended for soft yet noticeable movements. Then, I used Suno to create a trailer soundtrack. I fiddled with the settings, using the prompts ‘orchestral’ and ‘triumphant,’ to get a song that sounded like a piece of music rather than choppy sounds stitched together. Finally, I edited these components together, stretching some clips and cutting down gaps in the audio to create a cohesive final product.
While all the material was technically made by A.I. programs, I assembled the parts myself and added subtitles that, rather than transcribing the audio, displayed the prompts I had used in the image and video generators. Lastly, I created a credit sequence to attribute each element to its ‘creator’ as accurately as possible.
I’ve attached both videos to this post, as they share plenty of interesting similarities and differences that I wanted to highlight. For my final project, the trailer, however, I pose the following questions: Who made this video? Who did the creative labor, the technical labor, and so on? If it were monetized, who would be paid, and who should be paid? What was my role in making the project? What about the roles of the A.I.s and their developers? What about the roles of the people who trained those A.I.s and acted as moderators?
Given that the prompts for the videos were about the roles of A.I. and humans in creativity in the future, these videos seem to propose possible answers themselves, but in a rather shallow, idealistic way that ignores the deeper issues raised by the questions above.