LIONSGATE x RUNWAY x AI x EA | This Week in AI Art 🤔

Cut through the noise, stay informed — new stories every Sunday.

Interesting find of the week. Check out Sougwen Chung, a Chinese-Canadian artist pioneering human-machine collaboration in art. Her "Assembly Lines" project features robotic art assistants that sync with her brainwaves to create paintings together.

In this issue: Flux tips and workflows from the community, Lionsgate's first-of-its-kind deal with Runway, and EA's pivot to putting AI at the core of its business.

FLUX UPDATES

u/Broken-Arrow-D07 shares their experience of successfully generating an AI image on an AMD GPU, which remains challenging due to limited software support. They used ComfyUI with the FLUX.1 Dev model and XLabs' realism LoRA, running on Windows 11 with an RX 6600 XT (8GB VRAM). The workflow used a guidance level of 3.5, 30 steps, and the DPM2 SGM Uniform sampler. To work around AMD compatibility issues, they ran ZLUDA, an open-source translation layer that lets CUDA code run on AMD's ROCm platform. They also used a GGUF-quantized model (likely Q4_0) to manage VRAM limitations. The post includes generation times at various resolutions and step counts, with the final high-quality image taking around 800 seconds to generate at 0.2 MP resolution.
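For readers who think in scripts rather than node graphs, here is a rough sketch of those settings using the diffusers FluxPipeline. It assumes a standard CUDA-capable setup, does not reproduce the GGUF Q4_0 quantization or the ZLUDA layer, and the sampler choice has no exact diffusers counterpart, so treat it as an approximation of the post's parameters rather than the original workflow.

import torch
from diffusers import FluxPipeline

# Rough equivalent of the reported settings (guidance 3.5, 30 steps).
# The original post ran FLUX.1 Dev in ComfyUI via ZLUDA on an RX 6600 XT.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to fit limited VRAM

image = pipe(
    prompt="a photorealistic portrait, natural window lighting",  # example prompt, not the poster's
    guidance_scale=3.5,      # guidance level from the post
    num_inference_steps=30,  # step count from the post
    height=512,
    width=512,
).images[0]
image.save("flux_amd_test.png")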

u/DawgZter shared a discovery by a HuggingFace developer regarding "True CFG" (Classifier-Free Guidance) for Flux. This implementation supports negative prompting and allows for CFG values greater than 1 without image degradation. The technique, based on the PuLID team's work, involves a modified CFG calculation: noise_pred = neg_noise_pred + true_cfg * (noise_pred - neg_noise_pred). This approach enables the use of negative prompts and potentially improves image quality at higher CFG values. The discovery has sparked discussions about its potential applications, including the possibility of using ControlNet for negative conditioning.
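The reported formula is easy to restate in code. This is a few illustrative lines mirroring the blend above, not the HuggingFace developer's actual patch; in practice the model is run twice per denoising step, once with the positive prompt and once with the negative prompt, and the two predictions are combined like so:

import torch

def true_cfg(noise_pred_pos: torch.Tensor,
             noise_pred_neg: torch.Tensor,
             true_cfg_scale: float) -> torch.Tensor:
    # Blend the positive and negative noise predictions.
    # true_cfg_scale = 1.0 returns the positive prediction unchanged;
    # values > 1.0 push the result further away from the negative prompt.
    return noise_pred_neg + true_cfg_scale * (noise_pred_pos - noise_pred_neg)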

u/nomadoor presents a technique for generating consistent character frames using a combination of Flux and ControlNet. Flux allows for the creation of multiple consistent frames using only prompts, while ControlNet Tile is used to improve layout and pose consistency. The workflow involves generating a grid of images with Flux, then refining them using ControlNet guided by a pre-made pose grid. This method enhances the sprite sheet technique known since Stable Diffusion 1.5, allowing for more reliable pose specification and improved alignment. The poster suggests using Cascadeur, a motion creation software, to create the initial pose grid. This technique can be applied to various subjects, including people, animals, and landscapes.
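For anyone who wants to approximate the refinement stage outside ComfyUI, here is a hedged diffusers sketch: the ControlNet checkpoint name is a placeholder (swap in whichever Flux tile or pose ControlNet you use), and the pose grid is assumed to be a pre-rendered image such as one exported from Cascadeur.

import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Placeholder checkpoint id; substitute a real Flux tile/pose ControlNet.
controlnet = FluxControlNetModel.from_pretrained(
    "your-org/flux-controlnet-tile", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# A pre-made pose grid keeps the layout consistent across frames,
# while the prompt keeps the character description identical.
pose_grid = load_image("pose_grid_3x3.png")

sheet = pipe(
    prompt="character turnaround sheet of a red-haired knight, consistent outfit, nine views",
    control_image=pose_grid,
    controlnet_conditioning_scale=0.6,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
sheet.save("consistent_frames.png")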

u/tom83_be shares settings and insights for training LoRA and DoRA models using OneTrainer with the Flux.1 architecture. The post provides detailed configuration screenshots and explanations for various training parameters, including resolution, learning rate, epochs, and LoRA/DoRA-specific settings. Key technical details include using NF4 precision for low VRAM training, setting repeats to 1 and controlling iterations via epochs, and adjusting rank/alpha values. The author notes achieving faster training speeds compared to Kohya-based methods, with 3.5-3.7 s/it at 512 resolution on a 3060 GPU.
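If the LoRA-versus-DoRA distinction is new to you, the difference fits in a few lines of math. The sketch below is a generic PyTorch illustration of the two weight updates, not OneTrainer's implementation, and the alpha/rank scaling convention varies between trainers.

import torch

def lora_weight(W0, A, B, alpha, rank):
    # LoRA: frozen base weight plus a low-rank update B @ A, scaled by alpha / rank.
    return W0 + (alpha / rank) * (B @ A)

def dora_weight(W0, A, B, alpha, rank, magnitude):
    # DoRA: the same low-rank update, but the merged weight is normalized
    # column-wise and re-scaled by a learned per-column magnitude vector
    # (initialized to the column norms of W0 at the start of training).
    V = W0 + (alpha / rank) * (B @ A)
    V = V / V.norm(dim=0, keepdim=True)
    return magnitude * V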

u/wonderflex shared a clean and organized Flux pipeline workflow for image generation in ComfyUI. The workflow features a non-compact design for easy expansion and modification, including a LoRA stack loader and Ultimate SD Upscale. Key components include an Empty Latent Size Picker for resolution control, ClipTextEncodeFlux for text prompts, and a main pipeline with reroute nodes for efficient connections. The workflow demonstrates the best practice of generating lower-resolution images first and then upscaling, which helps avoid artifacts in larger generations. The author also provided instructions for integrating additional nodes like Dynamic Prompts, showcasing the workflow's flexibility.
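The "generate small, then upscale" pattern is worth internalizing even if you never open this particular workflow. Here's a loose diffusers sketch of the same idea, with a plain resize plus a light Flux img2img pass standing in for ComfyUI's Ultimate SD Upscale node; prompts and sizes are illustrative.

import torch
from diffusers import FluxPipeline, FluxImg2ImgPipeline

# Stage 1: generate at a modest resolution to avoid high-resolution artifacts.
base = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
base.enable_model_cpu_offload()
low_res = base(
    prompt="a misty forest at dawn",
    height=512, width=512,
    guidance_scale=3.5, num_inference_steps=28,
).images[0]

# Stage 2: upscale, then run a gentle img2img pass to restore fine detail.
upscaled = low_res.resize((1024, 1024))
refiner = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
refiner.enable_model_cpu_offload()
final = refiner(
    prompt="a misty forest at dawn",
    image=upscaled,
    strength=0.3,  # low strength keeps the composition and only adds detail
    guidance_scale=3.5, num_inference_steps=28,
).images[0]
final.save("upscaled.png")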

u/Main_Minimum_2390 shares a new workflow for seamless outpainting using Flux models in ComfyUI. The workflow employs a 5-step node group system that enables precise background and human feature outpainting, artifact and transition fixing, original detail restoration, and final image upscaling. This technique aims to produce high-resolution, professional-quality outputs with seamless extensions of existing images.
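Under the hood, most outpainting setups start the same way: paste the source image onto a larger canvas and mask the new region. A minimal sketch of that preparation is below, assuming a diffusers Flux inpainting pipeline; the poster's actual workflow chains additional ComfyUI node groups for artifact fixing, detail restoration, and upscaling on top of this.

import torch
from PIL import Image
from diffusers import FluxInpaintPipeline

# Build an extended canvas and a mask: white marks the area to be generated.
src = Image.open("portrait.png").convert("RGB")
pad = 256  # pixels to extend on the right side
canvas = Image.new("RGB", (src.width + pad, src.height), "black")
canvas.paste(src, (0, 0))
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (src.width, 0, canvas.width, src.height))

# Assumes the base FLUX.1 Dev model via diffusers' inpainting pipeline;
# a dedicated fill/outpaint checkpoint would typically give cleaner seams.
pipe = FluxInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
out = pipe(
    prompt="the same scene continuing naturally to the right",
    image=canvas,
    mask_image=mask,
    strength=1.0,  # fully repaint the masked extension
    guidance_scale=3.5,
    num_inference_steps=30,
).images[0]
out.save("outpainted.png")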

LIONSGATE x RUNWAY

Lionsgate, a major Hollywood studio, has taken a groundbreaking step by partnering with AI research firm Runway to develop exclusive artificial intelligence models based on its extensive film and television library. This first-of-its-kind deal for a major studio is expected to pave the way for similar agreements across the industry.

What exactly does this partnership entail, and how might it impact filmmaking?

The partnership aims to integrate AI into Lionsgate's filmmaking workflow, focusing on both pre-production and post-production stages. A key application could be generating advanced pre-visualizations and storyboards, potentially allowing executives to view rough drafts of movies before making greenlight decisions. As Lionsgate vice chair Michael Burns stated, the collaboration is about "augmenting, enhancing, and supplementing our current operations" through AI tools.

Importantly, the agreement stipulates that Runway cannot develop publicly accessible AI models using Lionsgate's content. This restriction ensures that Lionsgate maintains exclusive control over its intellectual property while still benefiting from AI-enhanced production tools.

Sounds good for Lionsgate, but what does Runway get out of this deal?

Beyond the obvious benefits like prestige, marketing, and an industry foothold, Runway's real windfall is far more technical and potentially game-changing. The key lies in the weights trained on Lionsgate's high-quality data. While Runway can't release these models publicly, it can leverage what it learns from this premium training data to build new, enhanced models for the public.

It's a clever workaround: Runway will likely use the knowledge embedded in these weights, combined with refined model architectures honed on Lionsgate's professional content, to create more advanced public-facing AI tools.

Anything else I should know?
  • Artistic Compensation and Rights: the partnership raises important questions about how artists will be compensated when their work is used to train AI models. UTA CEO Jeremy Zimmer expressed concern, saying, "If I’m an artist and I’ve made a Lionsgate movie, now suddenly that Lionsgate movie is going to be used to help build out an LLM for an AI company, am I going to be compensated for that?" This issue is part of ongoing industry-wide discussions about AI and creative rights.

  • Creative Control: it's worth noting that, according to current DGA and WGA contracts, directors and writers have the right to refuse the use of AI technology in their work. This safeguard aims to protect creative vision in the face of new technologies. Concept artist Reid Southen, who worked on the first The Hunger Games film, had a clear take on the matter: "This is the first step in trying to replace artists and filmmakers," he tweeted.

As the industry grapples with these rapid technological advancements, the Lionsgate-Runway partnership marks a significant moment at the intersection of artificial intelligence and traditional filmmaking, setting the stage for a new era in content creation.

ELECTRONIC ARTS (EA) x AI

Electronic Arts (EA) has unveiled a striking shift in its strategic focus. During its recent Investor Day presentation, EA leadership positioned artificial intelligence (AI) not just as a tool, but as "the very core of our business," according to CEO Andrew Wilson.

The company's AI strategy encompasses three main areas: efficiency, expansion, and transformation. EA claims to have over 100 active "novel AI projects" in development, ranging from practical applications to experimental ventures. In terms of efficiency, EA is already using AI to speed up content creation, such as generating stadiums and player likenesses for its College Football 25 game. For expansion, the company plans to enhance existing games with AI features, promising deeper character development and more authentic sports simulations.

Perhaps most intriguing is EA's vision for "transformation" – using AI to create entirely new gaming experiences. The company demonstrated several experimental projects, including an AI-powered platform for The Sims that allows players to find user-generated content through image-based searches, and a tool called "Script to Scene" that lets developers create game environments using simple text prompts. EA even showcased a prototype of an AI-driven virtual representation of soccer star Jude Bellingham, capable of answering questions in real-time.

While EA's enthusiasm for AI is evident, this strategic shift raises several questions. The environmental impact of increased AI usage, ethical concerns around using personal likenesses in AI models, and the potential impact on creative jobs in game development are all issues that warrant consideration. Moreover, there's a debate to be had about the distinction between AI-generated content and the intentional, creative work of human designers that has long been the hallmark of video game development.

EA's AI-at-the-core pivot could propel the company to the forefront of interactive entertainment or dilute its essence as a game creator. Only time will tell.
