CG Renders to AI workflow - concept
Embracing the Future of Visual Effects: Integrating ComfyUI and AI into the Creative Workflow
In today's fast-paced visual effects (VFX) world, AI has the potential to make work easier and more exciting, and ComfyUI is at the heart of this change.
ComfyUI and AI are still growing, and there's a lot more they could do, especially in animation. Using ComfyUI and AI in VFX isn't just about the latest tech. It's about making it easier for artists to bring their wildest ideas to life.
Here's a brief overview of my approach:
💡What is coming from Houdini?
Beauty render: no shaders, just a basic light setup
Simple RGB mask (Redshift’s Puzzle Matte): you can bring in as many masks as you want and separate them into channels in ComfyUI
Z-Depth: you will need to adjust the range in Nuke and export it as a PNG
Wireframe (toon shader): this pass is used by ControlNet to constrain and define the boundaries of each element within the scene.
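If you would rather do the Z-Depth range adjustment outside of Nuke, the remap can be sketched in a few lines of NumPy. This is a conceptual stand-in, not part of the project file; the near/far values, the synthetic gradient, and the "nearer is brighter" inversion are assumptions you should match to your own render.

```python
import numpy as np

def normalize_depth(depth, near=None, far=None):
    """Remap raw depth values into the 0-1 range a depth ControlNet expects.
    near/far default to the pass's own min/max (an assumption; use your
    scene's actual clipping range if you know it)."""
    d = np.asarray(depth, dtype=np.float32)
    near = d.min() if near is None else near
    far = d.max() if far is None else far
    return np.clip((d - near) / (far - near), 0.0, 1.0)

# Synthetic gradient standing in for a rendered Z pass
raw = np.linspace(2.0, 50.0, 256).reshape(16, 16)
# Invert so nearer pixels are brighter, a common depth-map convention
norm = 1.0 - normalize_depth(raw)
```

From here, `(norm * 255).astype(np.uint8)` can be saved out as the PNG you feed to the depth ControlNet.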
By using masks in conjunction with ControlNet, you can ensure that your creative intentions are accurately reflected in the results, leading to more refined and targeted visual effects.
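Inside ComfyUI the channel separation is done with image-channel nodes, but the idea behind the puzzle mask can be sketched in plain NumPy. The function name, threshold, and the tiny 2x2 test mask below are illustrative assumptions, not part of the project file.

```python
import numpy as np

def split_rgb_mask(mask_rgb, threshold=0.5):
    """Split an RGB puzzle mask into three binary per-element masks,
    one for each of the R, G, and B channels."""
    m = np.asarray(mask_rgb, dtype=np.float32)
    if m.max() > 1.0:  # 8-bit input -> normalize to 0-1
        m = m / 255.0
    return [(m[..., c] > threshold).astype(np.float32) for c in range(3)]

# Tiny 2x2 mask: red element top-left, green top-right, blue bottom-left
mask = np.zeros((2, 2, 3), dtype=np.uint8)
mask[0, 0, 0] = 255
mask[0, 1, 1] = 255
mask[1, 0, 2] = 255
r, g, b = split_rgb_mask(mask)
```

Each returned array can then drive a separate conditioning region, which is how one render pass yields independent control over every element in the scene.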
Download my ComfyUI project file here. 🎁
Enhancing With Krea.ai
When you open my file, you may encounter several missing nodes and errors, but there's no need to worry.
To address the missing nodes, simply open the Manager and select 'Install Missing Custom Nodes.' After all the necessary installations are complete, you'll need to restart ComfyUI. Once that's done, everything should be set up correctly.
💡List of the models I have used for this example:
Checkpoint:
ControlNet Models:
ControlNet_Depth_Zoe-SDXL-V1.0
IPAdapter Model:
Clip Vision:
IPAdapter’s Hugging Face page is here.
-Ardy