Electromagnetic Spiral film [ML for the Web]


Making use of Runway's AI Magic Tools to generate a short video art film.


Workflow

  1. I started by generating an image with Text to Image and a prompt that read “electromagnetic field”.

  2. Then I sent that image to the Image Variation tool, which I ran a few times before landing on an image that spoke to me.

  3. From there I ran Erase and Replace on one of the variations several times to create a few frames of the image transforming into a spiral, ending with a black spiral.

  4. I then ran Image Variation again to create a change in the sequence (a beam of blue light exploding outward).

  5. Then I ran Image to Image a few times to manipulate the color of the image with the prompt “blue light fill”.

  6. Lastly, I brought all of the images together in Frame Interpolation to create a short video (a rough code sketch of these steps follows the video below).

https://user-images.githubusercontent.com/49932341/227036037-dc29c8fb-ea5d-4578-ae5c-85aad98ef783.mp4
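For anyone curious what these GUI steps roughly correspond to, below is a minimal sketch of an equivalent pipeline using Hugging Face's diffusers library and Runway's publicly released Stable Diffusion weights. This is my approximation, not Runway's actual backend: I stand in for Image Variation with img2img at moderate strength and for Erase and Replace with inpainting, and the mask file `spiral_mask.png` is a hypothetical placeholder.

```python
# A sketch of the workflow above, approximated with open-source tools.
# Assumes a GPU plus the torch and diffusers packages; the model IDs are
# Runway's public Stable Diffusion releases, but the steps are my
# stand-ins for the Magic Tools, not their actual implementation.
import torch
from PIL import Image
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: Text to Image with the original prompt.
txt2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
base = txt2img("electromagnetic field").images[0]

# Step 2: Image Variation, approximated as img2img at moderate strength
# so the output stays close to the source image.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
variation = img2img(prompt="electromagnetic field", image=base, strength=0.6).images[0]

# Step 3: Erase and Replace, approximated as inpainting. The mask image
# (white where the spiral should appear) is a hypothetical placeholder.
inpaint = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting").to(device)
mask = Image.open("spiral_mask.png")
spiral = inpaint(prompt="black spiral", image=variation, mask_image=mask).images[0]

# Step 5: Image to Image to push the palette toward blue.
blue = img2img(prompt="blue light fill", image=spiral, strength=0.4).images[0]
blue.save("frame_blue.png")
```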

Describe the results of working with the tool, do they match your expectations?

I believe the result is a fairly simple video with a slight computer-graphics look and a strong AI-generated look. I expected Frame Interpolation to be more intelligent in creating an organic transition between frames, but it seems to be little more than a morph transition tool. The Text to Image generations met my expectations, producing images that brought me joy in relation to my unrealistic prompts.
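To illustrate what a plain morph transition amounts to, here is a minimal sketch of a linear cross-dissolve between two frames using Pillow; the filenames are hypothetical stand-ins for my exported frames. More sophisticated interpolators (optical-flow or learned methods such as FILM or RIFE) warp content between frames rather than just blending pixels, which is closer to the organic motion I was hoping for.

```python
# A naive morph transition: per-pixel linear blending between two frames.
# Filenames are hypothetical stand-ins for the exported Runway frames.
from PIL import Image

def crossfade(path_a: str, path_b: str, steps: int = 12) -> list[Image.Image]:
    """Return `steps` frames fading linearly from frame A to frame B."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    return [Image.blend(a, b, t / (steps - 1)) for t in range(steps)]

for i, frame in enumerate(crossfade("black_spiral.jpg", "blue_light_fill.jpg")):
    frame.save(f"interp_{i:03d}.png")
```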

Can you "break" the tool? In other words, use it in a way that it was not intended for, and what kinds of results do you get?

By inputting abstract concepts into the text-to-image tool rather than actual real-life reference points, I often found the AI confused, producing unreadable output. Prompts such as “fear”, “love”, and “security” all seemed to break the AI’s flow of generation.

Can you find any pro tips in terms of prompt engineering?

If you are looking to generate images of abstract concepts, include words that point to real-life reference points likely to have actual recorded imagery in the model's training data.
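As a concrete illustration of that tip, here is a small hypothetical helper that pads an abstract concept with photographable reference points before it reaches the model; the anchor phrases are illustrative examples, not tested prompts.

```python
# Hypothetical prompt-grounding helper: attach concrete, photographable
# reference points to an abstract concept before sending it to a
# text-to-image model. Anchor phrases are illustrative, not tested.
CONCRETE_ANCHORS = {
    "fear": "dark forest, long shadows, thick fog",
    "love": "warm sunset, intertwined hands",
    "security": "stone walls, heavy wooden door, soft lamplight",
}

def ground_prompt(concept: str) -> str:
    anchors = CONCRETE_ANCHORS.get(concept.lower())
    return f"{concept}, {anchors}" if anchors else concept

print(ground_prompt("fear"))  # -> "fear, dark forest, long shadows, thick fog"
```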

Compare and contrast working with Runway as a tool for machine learning as related to ml5.js, python, and any other tools explored this semester.

Working with AI Magic Tools in Runway felt like a much more passive experience than working with ml5.js for image generation/training or with Python for text generation earlier in the semester. The latter two allow much more control over the generation process and thus, in my opinion, produce more satisfying results.