ComfyUI Official Livestream Recap – June 4, 2025
Flux1-Context API Has Arrived!
Special guest from Black Forest Labs shows off amazing new editing powers
On June 4th, 2025, ComfyUI’s official livestream delivered some big news: the long-awaited Flux1-Context API node has finally dropped! The stream was hosted by Purz and joined by ComfyUI frontend dev Pablo, with Flux1-Context creator Dustin from Black Forest Labs (BFL) making a guest appearance to demo the powerful new feature and talk about its development.
🌟 The Big Reveal: Flux1-Context API Node is Here!
The biggest highlight this week? No question—it’s the release of the Flux1-Context model as an API node, developed by BFL. This isn’t just another text-to-image model. It’s a flow-matching model built for smart image generation and editing using both text and image inputs.
Here’s what makes it so special:
- Flow Matching: Enables high-level editing and generation that’s smart and controllable.
- Text + Image Input: Combine prompts and reference images naturally.
- In-Context Editing: Edit the same image across multiple steps.
- Precise Object Control: Tweak specific parts of an image accurately.
- Character Consistency: Keep visual traits consistent across steps.
- Style Transfer + Preservation: Maintain or apply different styles.
- Text in Images: Edit text inside images naturally.
- Composition Control: Manage framing, angles, poses, etc.
- Fast API: Generates results in just 4–5 seconds!
The open-source version isn’t out yet, but Purz and Dustin hinted that it’s “coming soon.” As part of the evolving Flux family, this model brings a new level of creative control to the table.
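Since the model takes both a text prompt and a reference image, a client request boils down to bundling the two into one body. Here is a minimal hypothetical sketch; the endpoint URL and field names are placeholders for illustration, not BFL's actual API, so consult the official API docs for the real interface:

```python
import base64

# Placeholder endpoint -- the real URL lives in BFL's API documentation.
API_URL = "https://api.example.com/v1/flux1-context"

def build_edit_request(prompt: str, image_bytes: bytes) -> dict:
    """Bundle a text instruction and a base64-encoded reference image
    into a JSON-serializable request body (field names are assumed)."""
    return {
        "prompt": prompt,
        "input_image": base64.b64encode(image_bytes).decode("ascii"),
    }

payload = build_edit_request("Put the hat on the bunny", b"<png bytes here>")
```

The `payload` dict would then be POSTed to the API with any HTTP client; the stream quoted round-trip times of roughly 4–5 seconds per result.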
🎙️ Guest Spotlight: Dustin from Black Forest Labs
Dustin, the developer behind Flux1-Context, joined the stream and dropped some fascinating behind-the-scenes info:
- Training Challenges: Building such a versatile model wasn’t easy—they had to train it to understand all kinds of editing tasks and instructions while leveraging knowledge from existing text-to-image models.
- Quality + Consistency: A lot of fine-tuning went into making sure it delivers high-quality results almost every time.
- Real Use Cases: Dustin shared that users are already doing cool stuff like restoring old family photos.
- High-Res in the Future?: High-res versions might come later, but no promises yet.
- Spatial + Lighting Awareness: The model learned how to understand space and lighting purely through training.
- ControlNet and LoRA: Once the open-source version is live, ControlNet-style features could be supported. Also, expect “Task LoRAs” for things like bounding-box guided edits.
🧪 Live Demo Time: Real-World Magic with Flux1-Context
One of the coolest parts of the stream? Watching Flux1-Context in action.
The new Image Stitch node was also revealed—it lets you combine multiple images into one input. Here’s how it went:
They generated three separate images using Flux Pro:
- A cute bunny with a transparent background
- A silk hat on a white background
- A pencil-drawing-style forest
Then they stitched them together using Image Stitch, and gave the stitched image to Flux1-Context with the prompt: “Put the hat on the bunny and place it in the forest.”
Result: a perfect, pencil-style image of a bunny wearing a hat in a forest.
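Conceptually, stitching is just laying the inputs side by side on one canvas so the model sees them as a single image. A standalone sketch of that idea using Pillow (this is an illustration of the concept, not the Image Stitch node's actual implementation):

```python
from PIL import Image

def stitch_horizontal(images):
    """Concatenate images left to right on a shared white canvas,
    mimicking what a horizontal image-stitch step produces."""
    height = max(img.height for img in images)
    total_width = sum(img.width for img in images)
    canvas = Image.new("RGB", (total_width, height), "white")
    x = 0
    for img in images:
        canvas.paste(img, (x, 0))
        x += img.width
    return canvas

# e.g. stitched = stitch_horizontal([bunny, hat, forest])
```

The stitched result then goes into the editing model as one input image, with the prompt describing how the pieces should combine.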
Then they said: “Make it look like a photo”—and Flux1-Context transformed it into a photorealistic version. Next: “Have the bunny reading a newspaper”—done. 🐰📰✨
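The pattern demonstrated here is multi-step in-context editing: each new instruction is applied to the previous result rather than starting from scratch, which is what keeps the bunny consistent across edits. As a sketch, where `apply_edit` is a hypothetical stand-in for one call to the editing model:

```python
def apply_edit(image: str, prompt: str) -> str:
    # Stand-in: a real client would send `image` plus `prompt` to the
    # editing API and return the new image it gets back.
    return f"({image} edited: {prompt!r})"

prompts = [
    "Put the hat on the bunny and place it in the forest",
    "Make it look like a photo",
    "Have the bunny reading a newspaper",
]

result = "stitched_input.png"
for prompt in prompts:
    result = apply_edit(result, prompt)  # feed each output back in
```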
Purz also showed off fun examples using his own face and his dog, generating realistic party scenes, facial expressions, and even embedded text. He noted that input image quality really affects output, so keep that in mind!
🔧 Other ComfyUI Ecosystem Updates
Aside from Flux1-Context, the livestream shared exciting community updates:
- ComfyUI Bounty Program: Help improve ComfyUI and earn rewards! Tasks, categories, and estimated bounties are listed, so it’s a great chance to contribute.
- Performance/Optimization: “TeaCache” leads with 136k+ downloads. Others include “Efficiency Nodes,” “Multi-GPU Optimization,” “Patches 2,” and “GGUF.”
- Model Tools: “ReActor” for face editing tops the list, followed by “IPAdapter,” “AnimateDiff,” “MMAudio” (audio from video), and 3D tools. “App Preset,” “Metadata Extraction,” “LoRA Manager,” and “CG Image Filter” are also featured.
- UI Themes: Customize the interface—new themes like Slate, Pink, and Atom One Dark are trending on Discord.
- Style Transfer Guide by Clown Shark Bat Wing: A detailed guide on style transfer, blur suppression, and using custom K Samplers.
💬 Community Q&A + What’s Next?
The stream wrapped with lots of viewer questions—like how API credits are counted. Dustin said he enjoys checking out user creations on X (Twitter) using tags like #context.
Purz also apologized for some audio interface issues and promised a better setup next time.
📅 Looking Ahead
Flux1-Context has just opened up a whole new dimension for ComfyUI workflows. While we wait for the open-source release, you can already explore its full power via API—and expect future updates to unlock deeper controls like CFG tweaks and step settings.
The next ComfyUI livestream is planned for Friday, possibly diving deeper into community projects and Clown Shark Bat Wing’s K Sampler work. Stay tuned—the creative AI frontier is just getting started!
This article is based on the official ComfyUI livestream aired on June 4, 2025.