
ComfyUI Workflows on Civitai

If you don't need LoRA support, separate seeds, CLIP controls, or hires fix, you can just grab the basic v1 workflow.

rgthree's ComfyUI Nodes. Check out my other workflows.

The short version uses a special node from the Impact Pack. The main goal is to create short 5-panel stories in just one queue. Update the VLM Nodes from GitHub (https://github.com/gokayfem/ComfyUI_VLM_nodes) and download both from the link below.

My 2-stage (base + refiner) workflows for SDXL 1.0.

Put it in "\ComfyUI\ComfyUI\models\sams\"; download any SDXL Turbo model; (optional) install the Use Everywhere custom nodes; then download, open and run this workflow.

BLIP is not human. ControlNet, Upscaler. SD, SDXL and LoRA models are supported.

Reproducing this workflow in Automatic1111 requires a lot of manual steps, even using a third-party program to create the mask, so this method with Comfy should be very convenient. I will keep updating the workflow here. Install the ControlNet-aux custom nodes.

You can easily run this ComfyUI Face Detailer workflow in RunComfy, a cloud-based platform tailored specifically for ComfyUI.

This is my simplified workflow that I use with Tower13Studios' amazing embeddings and models.

With this Stable Video Diffusion Img2Video-based ComfyUI workflow, you can create an image from the desired prompt, negative prompt and checkpoint (and VAE), and a video will then automatically be created from that image.

Select the model and prompts; set your questions and answers; check the Extra Options and Auto Queue checkboxes in the ComfyUI floating menu; press Queue Prompt; after success, check the Auto Queue checkbox again.

To use ComfyUI-LaMA-Preprocessor, you follow an image-to-image workflow and add in the following nodes: Load ControlNet Model, Apply ControlNet, and lamaPreprocessor. When setting up the lamaPreprocessor node, you decide whether you want horizontal or vertical expansion and then set the number of pixels to expand by.

I used this as motivation to learn ComfyUI. Guide image composition to make sense. Locate your models folder. The workflow is attached to this post (top right corner) for download.

1/ Split frames from the video (using an editing program or a site like ezgif.com) and reduce them to the desired FPS. A default project folder with a default video is included; the original is 400+ frames, so limit the frames if you use the default on a lower-VRAM card.

This workflow is a one-click dataset generator. ComfyUi_NNLatentUpscale.

Current feature: new node: LLaVA -> LLM -> Audio.

This workflow is just something fun I put together while testing SDXL models and LoRAs that made some cool pictures, so I am sharing it here. It can be used with any SDXL checkpoint model. To use it, extract and place it in the comfyui/custom_nodes folder.

This was built off the base Vid2Vid workflow released by @Inner_Reflections_AI via the Civitai article. Users have the ability to assemble a workflow for image generation by linking nodes together.

In this article, I will demonstrate how I typically set up my environment and use my ComfyUI Compact workflow to generate images.

An img2img workflow to fill a picture with details. ComfyUI-Custom-Scripts. Load the provided workflow file into ComfyUI.

This simple workflow makes random chimeras.

The VAE is inside the ckpt; a version like this with CLIP built in is the most convenient: https://civitai.com
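Most of the notes above come down to "load the workflow JSON and press Queue Prompt." If you want to do the same thing programmatically, ComfyUI's local server also accepts workflows over HTTP. The sketch below is only an illustration, assuming a default local install listening on 127.0.0.1:8188 and a graph exported with "Save (API Format)"; the file name is a placeholder.

```python
# Minimal sketch: queue an API-format workflow against a local ComfyUI server.
import json
import urllib.request

def queue_workflow(path, server="http://127.0.0.1:8188"):
    with open(path, "r", encoding="utf-8") as f:
        graph = json.load(f)                      # node graph in API format
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(server + "/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)                    # includes the queued prompt_id

if __name__ == "__main__":
    print(queue_workflow("workflow_api.json"))
```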
Tile ControlNet + Detail Tweaker LoRA + Upscale = more details.

This is my first encounter with TURBO mode, so please bear with me. This workflow also contains two upscaler workflows.

This workflow revolutionizes how we present clothing online, offering a unique blend of technology and creativity. Here's a video showing off the workflow.

The time has come to collect all the small components and combine them into one.

Quantization is a technique first used with Large Language Models to reduce the size of a model, making it more memory-efficient and enabling it to run on a wider range of hardware.

Use whatever upscaler you have. It is not perfect and has some things I want to fix some day. ComfyUI-YoloWorld-EfficientSAM.

New version! Moondream LLM for prompt generation: https://github.com/kijai/ComfyUI-moondream. They will all appear on this model card as the uploads are completed. Credits.

Load an image to inpaint into (toImage version) or write prompts to generate it (toGen version).

SDXL workflow for ComfyUI: realistic skin texture portrait. This is the first update for my ComfyUI workflow (check the v1.0 page for comparison images).

This is a workflow to strip persons depicted in images out of their clothes.

It's almost identical to Face Transfer, but for expressions.

For more details, please visit the ComfyUI Face Detailer Workflow for Face Restore. This workflow uses the Impact Pack and the ReActor node.

ComfyUI Workflow | ControlNet Tile and 4x UltraSharp for hi-res fix.

2) Batch Upscaling Workflow: only use this if you intend to upscale many images at once.

(Bad hands in the original image are OK for this workflow.) Model content: workflow in JSON format.

SDXL conditioning can contain the image size! This workflow takes that into account, guiding generation to look like higher-resolution images. I have therefore used many time- and memory-saving extensions, like tiled (en/de)coders and kSamplers.

There is a node called "Quality prefix" near every model loader. Features: LLM prompting. If you have a file called extra_model_paths.yaml... SD1.5 models, all in one. ComfyUI-Impact-Pack.

This guide will help you install ComfyUI, a powerful and customizable user interface, along with several popular modules.

Both of my images have the flow embedded in them, so you can simply drag and drop an image into ComfyUI and it should open up the flow, but I've also included the JSON in a zip file.

SDXL 1.0 in ComfyUI, with separate prompts for the text encoders. Attention: the skin detailer with upscaler workflow is extremely hardware-intensive.

cg-use-everywhere. The whole point of the GridAny workflow is being able to easily modify it to your needs. COMFYUI basic workflow: download workflow. Comfyroll custom nodes.

So I decided to make a ComfyUI workflow to train my LoRAs, and here it is, with a short guide.

It generates a random image, detects the face, automatically detects the image size and creates a mask for inpainting, and finally inpaints the chosen face onto the generated image.

In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself.

Like prompting: less is more. Please read "SD3 Unbanned: Community Decision on Its Future" at Civitai.

Here's a ComfyUI workflow for Playground AI's Playground 2.5.
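To make the quantization note above concrete, here is a toy illustration of the underlying idea. It is not GGUF itself (Q8/Q4/Q2 use more elaborate block-wise schemes), but the memory-versus-precision trade-off is the same.

```python
# Conceptual illustration only: naive symmetric int8 quantization of a tensor.
import numpy as np

weights = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # one scale for the whole tensor
q = np.round(weights / scale).astype(np.int8)    # 1 byte per weight instead of 4
restored = q.astype(np.float32) * scale          # approximate reconstruction

print(f"fp32: {weights.nbytes / 2**20:.1f} MiB, int8: {q.nbytes / 2**20:.1f} MiB")
print("mean abs error:", float(np.abs(weights - restored).mean()))
```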
ComfyUI provides some of the most flexible upscaling options, with literally hundreds of workflows and nodes dedicated to image upscaling.

Every time you press Queue Prompt, a new species is added. Models used: AnimateLCM_sd15_t2v.

It starts with a photo of a model in an outfit. A ComfyUI workflow for the Stable Diffusion ecosystem inspired by Midjourney Tune.

For this study case, I will use DucHaiten-Pony-XL with no LoRAs. This is the list of custom nodes.

Install custom nodes. You can also search for GGUF Q4/Q3/Q2 models on CivitAI. Install the ComfyI2I custom nodes; download and open this workflow.

Jbog, known for his innovative animations, shares his workflow and techniques on the Civitai Twitch and YouTube channels.

Change your width-to-height ratio to match your original image, or use less padding, or use a smaller image. It makes your workflow more compact. The files are archived in an included zip file.

Rembg + colored diluted mask = sticker. rgthree-comfy.

A nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.

I implemented FreeU and corrected the upscaler by eliminating the face restore. Dynamic Prompts ComfyUI. Direction, speed and pauses are tunable.

The SD Prompt Reader node is based on ComfyUI Load Image With Metadata.

Showing an example of how to do a face swap using three techniques: ReActor (Roop) swaps the face in a low-res image, then Face Upscale upscales it.

I wanted to share a simple ComfyUI workflow I reproduced from my hours spent on A1111, with hires fix, LoRAs, double ADetailer for face and hands, a final upscaler, and a style filter selector.

Hello there and thanks for checking out this workflow! Purpose: this workflow was built to provide a simple and powerful tool for SD3, as it was recently unbanned on CivitAI and the community is making quick progress in correcting the base model's shortcomings.

Step 1: This is a simple workflow to run copaxTimelessxl_xplus1-Q8_0. Without a LoRA it takes ~450-500 seconds with 200 steps and no upscale (see the workflow screenshot).

This is pretty standard for ComfyUI, just with some QoL additions from custom nodes.

ControlNet YouTube tutorial / walkthrough: Motion Brush Workflow for ComfyUI by VK! Please follow the creator on Instagram if you enjoy the workflow.

To see the list of available workflows, just select or type the /workflows command.

The main model can use an SDXL checkpoint. 01/10/2023: added new demos and made updates to align with the CR Animation nodes release v1.0.

Here is my way of merging BASE models and applying LoRAs to them in a non-conflicting way using ComfyUI (grab the workflow itself in the attachment to this post).

ComfyUI installation guide for use with PixArt Sigma. NOT the HandRefiner model made specially for fixing hands.

This workflow is essentially a remake of @jboogx_creative's original version. This workflow is not for the faint of heart; if you're new to ComfyUI, we recommend selecting one of the simpler workflows above.
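The model-merging note above is done with ComfyUI nodes; purely as background, the sketch below shows what a weighted merge of two checkpoints means underneath. It is not the author's node setup, and the file names are placeholders.

```python
# Rough sketch: weighted average of two checkpoints' tensors (generic PyTorch).
import torch
from safetensors.torch import load_file, save_file

def merge_checkpoints(path_a, path_b, out_path, alpha=0.5):
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key, tensor_a in a.items():
        if key in b and b[key].shape == tensor_a.shape:
            merged[key] = (1.0 - alpha) * tensor_a + alpha * b[key]
        else:
            merged[key] = tensor_a          # keep A's tensor if B has no match
    save_file(merged, out_path)

merge_checkpoints("model_a.safetensors", "model_b.safetensors",
                  "merged.safetensors", alpha=0.3)
```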
Afterwards, the Switch Latent in module 8 will automatically switch to the first Latent. An upscaler that is close to a1111 up-scaling when values are between 0. Everything said there also applied here. gguf and model copaxTimelessxl_xplus1-Q4 on comfyUI. Welcome to V6 of my workflows. It is a simple workflow of Flux AI on ComfyUI. Available modes: Depth / Pose / Canny / Tile / Blur / Grayscale / Low quality Instructions: Update ComfyUI to the latest version. It is based on the SDXL 0. Browse comfyui Stable Diffusion models, checkpoints, hypernetworks, textual inversions, embeddings, Aesthetic Gradients, and LORAs Workflows in ComfyUI represent a set of steps the user wishes the system to perform in achieving a specific goal. In the locked state, you can pan and zoom the graph. Upscaling ComfyUI workflow. ComfyUI-Inpaint-CropAndStitch. 5 checkpoint, LoRAs, VAE according 01/10/2023 - Added new demos and made updates to align with CR Animation nodes release v1. Hey this is my first ComfyUI workflow hope you enjoy it! I've never shared a flow before so if it has problems please let me know. Everyone who is new to comfyUi starts from step one! Download Photomaker model and place it in " \ComfyUI\ComfyUI\models\photomaker\ "; Download ViT-B SAM model and place it in "\ComfyUI\ComfyUI\models\sams\ "; Download and open the workflow. Using Topaz Video AI to upscale all my videos. If for some reason you cannot install missing nodes with the Comfyui manager, Download SDXL OpticalPattern ControlNet model (both . In the example, it turns it into a horror movie poster. I only use one group at any given time anyway, in the others I disable the starting element Using the Workflow. (None of the images showcased for this model are Beta 2 - fixed save location for pose and line art. CR Animation Nodes is a comprehensive suite of animation nodes, by the Comfyroll Team. Lineart. The upload contains my setup for XY Input Prompt S/R where I list out a number of detail prompts that I am testing with and their weights. Note: This workflow includes a custom node for metadata. Tiled Diffusion. Included in this workflow is a custom Node for Aspect Ratios. Civitai. x-flux-comfyui. It generates random image, detects the face, automatically detect image size and creates mask for inpaint, finally inpainting chosen This is a simple workflow to generate symmetrical images. 5 for final work SD1. All Workflows were refactored. For this to work correctly you need those custom node install. Tips: Bypass node groups to disable functions you don't need. The contributors to helping me with various parts of this workflow and getting it to the point its at are the following talented artists (their Instagram handles) @lightnlense. Replace your image's background with the newly generated backgrounds and composite the primary subject/object onto your images. This process is used instead of directly using the realistic texture lora because it achieves better and more controllable effects. Keep objects in frame. watch the video and/or s Image to image workflows can get some details wrong, or mess up colors, especially when working with two different models and VAEs. It's enhanced with AnimateDiff and the IP-Adapter, enabling the creation of dynamic videos or GIFs that are customized based on your input images. --v2. Note that Auto Queue checkbox unchecks after the end. txt; Update. Fully supports SD1. For that, it chos This workflow takes an existing movie, and turns it into a movie of another genre. SD1. Tenofas FLUX workflow v. 
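The notes above quote A1111-style tags such as "<lora:add-detail-xl:1>" living inside the prompt text. As a small illustration of that syntax (not the parser any particular custom node actually uses), tags like these can be pulled out of a prompt before it reaches the text encoder:

```python
# Sketch: extract <lora:name:weight> tags from a prompt string.
import re

LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?>")

def extract_lora_tags(prompt):
    loras = [(name, float(weight or 1.0)) for name, weight in LORA_TAG.findall(prompt)]
    cleaned = LORA_TAG.sub("", prompt).strip()
    return cleaned, loras

text, loras = extract_lora_tags("a portrait, dramatic light <lora:add-detail-xl:1>")
print(text)    # "a portrait, dramatic light"
print(loras)   # [("add-detail-xl", 1.0)]
```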
Like, "cow-panda-opossum-walrus". Restart It is possible for this workflow to automatically detect QR and stop when it's readable! Unmute "Test QR to Stop" group; Check "Extra Options" and "Auto Queue" in ComfyUI menu. This part is my exploration on a debugging method that applies to both local debugging (running ComfyUI program on my PC) and remote debugging (running ComfyUI program on a remote server and debugging from my PC). All of which can be installed through the ComfyUI-Manager. Magnifake is a ComfyUI img2img workflow trying to enhance the realism of an image Modular workflow with upscaling, facedetailer, controlnet and LoRa Stack. I used to run ComfyUI on CPU only as I did not have an nVidia graphics card. PatternGeneration version. However, the models linked above are highly recommended. List of Templates. The Face Detailer can 5. It was created to improve the image quality of old photos with low pixel counts. 5 + SDXL Base+Refiner is for experiment only SD1. If for some reason you cannot install missing nodes with the Comfyui manager, here are the nodes used in this workflow: ComfyLiterals, Masquerade Nodes, Efficiency Nodes for ComfyUI, pfaeff-comfyui, MTB S D 3 . T2i workflow with TCD example (give TCD a try) Workflow Input: Original pose images. My complete ComfyUI workflow looks like this: You have several groups of nodes, that I would call Modules, with different colors that indicate different activities in the workflow. A1111 prompt style (weight normalization) Lora tag inside your prompt without using lora loader nodes. Output videos can be loaded into ControlNet applicators and stackers using Load Video nodes. This is also the reason why there are a lot of custom nodes in this workflow. 2024, changed the link to non deprecated version of the efficiency nodes. For this study case, I will use DucHaiten-Pony-XL with no it's essential to have an input reference image in Module 4, otherwise, the workflow won't function properly. SD1. yaml files), and put it into ComfyUI Workflows. Press "Queue Prompt". They can be as simple as loading a model , a ksampler, a positive and negative prompt , and saving or displaying the output, all the way to batch processes generating variable video output from files sourced from the Internet. Some of them have the prompt attached to them, and some include text like that: "<lora:add-detail-xl:1>" or COMFYUI basic workflow download workflow. Workflow Input: Original pose images A1111 Style Workflow for ComfyUI. (check v1. You will need to customize it to the needs of your specific dataset. Installing ComfyUI. 2. What this workflow does. 0. Depth. g. com/models/312519 Simple img2vid workflow: https://civit It's running custom image improvements created by Searge and if you're an advanced user, this will get you a starting workflow where you can achieve almost anything when it comes to still image generation. If you have problems with mtb Faceswap nodes, try this : (i don't do support) This post contains two ComfyUI workflows for utilizing motion LoRAs: -The workflow I used to train the motion lora -Inference workflow for generations For some workflow examples and see what ComfyUI can do you can check out: ComfyUI Examples. As this is very new things are bound to change/break. For this Styles Expans My attempt at a straightforward workflow centered on the following custom nodes: comfyui-inpaint-nodes. With this release, the previous boxing weight-themed workflows (e. 
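One of the notes in this section describes attaching a debugger to a running ComfyUI process, both locally and on a remote server. A hedged sketch of that idea using the debugpy package (pip install debugpy) is shown below; where exactly you hook it into ComfyUI's startup depends on your install.

```python
# Sketch: let VS Code attach to a running Python process on port 5678.
import debugpy

debugpy.listen(("0.0.0.0", 5678))   # also allows attaching from another machine
print("Waiting for debugger attach on port 5678...")
debugpy.wait_for_client()           # pause here until the IDE connects
debugpy.breakpoint()                # stop at this line once attached
```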
0 page for more images) This workflow automates the process of putting stickers on picture. 5 Demo Workflows. What's new in v4. Flux is a 12 billion parameter model and it's simply amazing!!! This workflow is still far from perfect, and I still have to tweak it several times Version : Alpha : A1 (01/05) A2 (02/05) A3 (04/05) -- (04/05 Simple ComfyUI workflow used for the example images for my model merge 3DPonyVision. These instructions assume you have ComfyUI installed and are familiar with how everything works, including installing missing custom nodes, which you may need to if you get errors when loading the workflow. ComfyUI serves as a node-based graphical user interface for Stable Diffusion. SDXL FLUX ULTIMATE Workflow. Therefore, in this workflow, the faces are detected and the eyes are subtracted, so only the skin is improved while keeping the beautiful SD3 eyes. I adapted the WF received from my friend Olga :) You have to dowload this model execution-inversion-demo-comfyui. You can easily run this ComfyUI Hi-Res Fix Workflow in ComfyUI Cloud, a platform tailored specifically for ComfyUI. How to use. The model includes 2 content below: Demo: some simple workflow for basic node, like load lora, TI, ControlNetetc. You can also find upscaler workflow there. Workflow Output: Pose example images ComfyUI-SUPIR. (optional) Download and use a good model for digital art, like Paint or A-Zovya RPG Artist Tools. Load your own wildcards into the Dynamic Prompting engine to make your own styles combinations. All of which can be installed through the ComfyUI workflow for the Union Controlnet Pro from InstantX / Shakker Labs. 2) This file goes into: ComfyUI_windows_portable\ComfyUI\models\clip_vision. com/models/628682/flux-1-checkpoint Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything. I've gathered some useful guides from scouring the oceans of the internet and put them together in one workflow for my use, and I'd like to share it with you all. Nodes. Instead, I've focused on a single workflow. By default, the workflow iterates through pre-downloaded models. It covers the following topics: This is a ComfyUI workflow to swap faces from an image. Works with bare ComfyUI (no custom nodes needed). Changed general advice. Aura-SR upscale — Download and open this workflow. If you already know the name of the workflow you want to use, you can copy and paste it directly. When updating, don't forget to include the submodules along with the main repository. Configure the input parameters according to your requirements. fixed batching and re-batching for SAM custom masks. Load this workflow. Change Log. All essential nodes and models are pre-set and ready for immediate use! Plus, you'll find plenty of other great ComfyUI Workflows on the RunComfy website. Introduction. This workflow perfectly works with 1660 Super 6Gb VRAM. Impact Pack. Fixed an issue with the SDXL Prompt Styler in my workflow. Versions. Character Interaction (Latent) (discontinued, workflows can be found in Legacy Workflows) First of all, if you want something that actually works well, check Character Interaction (OpenPose) or Region LoRA. This is an "all-in-one" workflow: https://civitai. That's all for the preparation, now ComfyUI Workflows. 
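Several notes in this section lean on Dynamic Prompts-style wildcards ("load your own wildcards to make your own style combinations"). As a conceptual sketch only, a wildcard token is just a placeholder replaced by a random line from a text file; the "__name__" convention and file layout below are assumptions for illustration.

```python
# Sketch: replace "__style__" etc. with random lines from wildcards/<name>.txt.
import random
from pathlib import Path

def expand_wildcards(template, wildcard_dir="wildcards", seed=None):
    rng = random.Random(seed)
    out = template
    for path in Path(wildcard_dir).glob("*.txt"):
        token = f"__{path.stem}__"
        options = [l.strip() for l in path.read_text(encoding="utf-8").splitlines() if l.strip()]
        while token in out and options:
            out = out.replace(token, rng.choice(options), 1)
    return out

print(expand_wildcards("a portrait of a __animal__, __style__ style", seed=42))
```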
All of which can be installed through the ComfyUI-Manager If you encounter any nodes showing up red (failing to load), you can install the corresponding custom node packs through the ' Install Missing Custom Nodes ' tab on the ComfyUI Manager as well. Installation and dependencies. Introduction to This is the workflow I put together for testing different configurations and prompts for models. Comparison of results. This workflow is a brief mimic of A1111 T2I workflow for new comfy users (former A1111 users) who miss options such as Hiresfix and ADetailer. Segmentation results can be manually corrected if automatic masking result leaves more to be desired. Please try SDXL Workflow Templates if you are new to ComfyUI or SDXL. I'm not sure why it wasn't included in the image details so I'm uploading it here separately. It is also compatible with CivitAI automatic metadata population. These two files must be placed in the folder I show you in the picture: ComfyUI_windows_portable\ComfyUI\models\ipadapter. It requires a few custom nodes, including ComfyUI Essentials and my own Flux Prompt Saver node. I try to keep it as intuitive as possible. Workflows in ComfyUI represent a set of steps the user wishes the system to perform in achieving a specific goal. Download Depth ControlNet (SD1. There’s still no word (as of 11/28) on official SVD suppor t ComfyUI-mxToolkit. Efficiency Nodes. Table of contents. Upscale. com ) and reduce to the FPS desired. → full size image here ←. This a workflow to fix hands. After entering this command into the Discord channel, you'll receive a drop down list of workflows currently available in the Salt AI workflow catalog. External Links. ComfyUI_UltimateSDUpscale. @machine. 1? This is a minor update to make the workflow and custom node extension compatible with the latest changes in ComfyUI. My attempt at a straightforward upscaling workflow utilizing SUPIR. Simply add a image (or single frame) and analyze the This is a workflow to generate hexagon grid of images. Answers may come in This workflow template is intended as a multi-purpose templates for use on a wide variety of projects. Run any - If the image was generated in ComfyUI and metadata is intact (some users / websites remove the metadata), you can just drag the image into your ComfyUI window. Flux. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. The code is based on nodes by LEv145. Installation. Install WAS Node Suite custom nodes; Install ControlNet Auxiliary Preprocessors custom nodes; Download ControlNet Lineart model (both . On an RTX 3090, it takes about 10-12 minutes to generate a single image. This ComfyUI workflow is used to test and pick which preprocessors/controlnets will work best for your images. At the end of this post you can find what files you need to run this workflow and the links for downloading them. Heres my spec. com/models/497255 And believe me, training on ComfyUI with these nodes is even easier than using Kohya trainer. Known Issues Abominable Spaghetti Workflow The unmatched prompt adherence of PixArt Sigma plus the perfect attention to detail of the SD 1. 3) This one goes into: ComfyUI_windows_portable\ComfyUI\models\loras. Actually there are many other beginners who don't know how to add LORA node and wire it, so I put it here to make it easier for you to get started and focus on your testing. I am using a base SDXL Zavychroma as my base model then using Juggernaut Lightning to stylize the image . was-node-suite-comfyui. It should be straightforward and simple. 
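The drag-and-drop loading mentioned above works because ComfyUI embeds the graph in the PNG's text chunks; when a site strips metadata, that is what gets lost. A small check for whether the metadata survived (the file name is a placeholder):

```python
# Sketch: look for ComfyUI's embedded workflow in a PNG's text chunks.
import json
from PIL import Image

def read_embedded_workflow(path):
    info = Image.open(path).info           # PNG tEXt/iTXt entries end up here
    for key in ("workflow", "prompt"):     # keys ComfyUI is known to use
        if key in info:
            return key, json.loads(info[key])
    return None, None

key, graph = read_embedded_workflow("example_output.png")
print("found" if graph else "no ComfyUI metadata (it may have been stripped)")
```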
Otherwise I suggest going to my HotshotXL workflows and adjusting as above as they work fine with this motion module (despite the lower resolution). Workflow Sequence: Controlnet -> txt2img -> facedetailer -> img2img -> facedetailer -> SD Ultimate Upscaling. If you look into color manipulations, you might also be interested in Rotate This is a simple comfyui workflow that lets you use the SDXL Base model and refiner model simultaneously. These nodes can ComfyUI_essentials. Usage. Install Impact pack custom nodes; Download Photomaker model and place it in " \ComfyUI\ComfyUI\models\photomaker\ "; Boto's SDXL ComfyUI Workflow. It allows you to create a separate background and foreground using basic masking. It will batch-create the images you specify in a list, name the files appropriately, sort them into folders, and even generate captions for you. It generates a full dataset with just one click. Set the number of cats. it will change the image into an animated video using Animate-Diff and ip adapter in ComfyUI. git pull --recurse-submodules. 3. ComfyUI_essentials. Basic txt2img with hiresfix + face detailer. How to load pixart-900m-1024-ft into ComfyUI? 1 - Install the "Extra Models For ComfyUI" package from Comfy Manager; 2 - Download diffusion_pytorch Ah, ComfyUI SDXL model merging for AI-generated art! That's exciting! Merging different Stable Diffusion models opens up a vast playground for creative exploration. safetensors and . 60 based on latent empty images : See : https://civitai. ComfyUI prompt control. I am a newbie who has been using ComfyUI for about 3 days now. Share, discover, & run ComfyUI workflows. If you encounter any nodes showing up red (failing to load), you can install the corresponding custom node packs in most cases through the ' Install Missing Custom Nodes ' tab on (Bad hands in original image is ok for this workflow) Model Content: Pose Creator V2 Workflow in json format. com! Whether you're an experienced user or new to the platform, these workflows offer 6 min read. You can easily run this ComfyUI AnimateDiff Workflow in ComfyUI Cloud, a platform tailored specifically for ComfyUI. Around 12Gb Vram is all you need on your graphic card, so you don't need a RTX 3090 or 4090 Gpu, but it may need 32Gb Ram (set "split_mode" on "true"). -----This is a workflow intended to replicate the BREAK feature from A1111/Forge, Adetailer, and Upscaling all in one go. This doesn't, I'm leaving it for archival purposes. With this workflow you can train LoRA's for FLUX on ComfyUI. Works VERY well!. In this workflow building series, Anyone else having trouble getting their ComfyUI workflow to upload to civit? I'm trying to upload a . Adjust your prompts and parameters as desired. 9 facedetailer workflow by FitCorder, but rearranged and spaced out more, with some additions such as Lora Loaders, VAE loader, 1:1 Download, unzip, and load the workflow into ComfyUI. 04. Initially, I considered using the Playground model for the Face Detailer as well, but after extensive testing, I decided to opt for an SD_1. 5 model with Face Detailer. This is my current SDXL 1. Can be complemented with ComfyUI Fooocus Inpaint Workflow for correcting any minor artifacts. Explore thousands of workflows created by the community. Try adding them to the prompt if you're getting consistently bad results. com/m Simple workflow to animate a still image with IP adapter. You might need to change the nodes in the workflows. Launch ComfyUI and start using the SuperPrompter node in your workflows! 
(Alternately you can just paste the github address into the comfy manager Git installation option) 📋 Usage: 1. This workflow was created with the initial intent of restoring family photos, but it is not at all limited to that use case. 1. Current Feature: While we're waiting for SDXL ControlNet Inpainting for ComfyUI, here's a decent alternative. How to modify. Feature of daily workflow: Output image selector: Basic output. Select model and prompt; Set Max Time (seconds by default) Check Extra Options and Auto Queue checkboxes in ComfyUI floating menu; Press Queue Prompt; When you want to start a new series of images, press New Cycle button in ComfyUI floating menu and check Auto Queue Just tossing up my SDXL workflow for ComfyUI (sorry if its a bit messy) How can I use SVD? ComfyUI is leading the pack when it comes to SVD image generation, with official S VD support! 25 frames of 1024×576 video uses < 10 GB VRAM to generate. Requirements: Efficiency Nodes. June 24, 2024 - Major rework - Updated all workflows to account for the new nodes. This workflow makes an animation of one picture switching to another. Eg. These workflow are intended to use SD1. All of which can be installed through the ComfyUI-Manager If you encounter any nodes showing up red (failing to load), you can install the corresponding custom node packs through the ' Install Missing Custom Nodes Update: v82-Cascade Anyone The Checkpoint update has arrived ! New Checkpoint Method was released. . ckpt http This ComfyUI Workflow takes a Flux Dev model image and gives the option to refine it with an SDXL model for even more realistic results or Flux if you want to wait a while! Version 4: Added Flux SD Ultimate Upscale This is pretty standard for ComfyUI, just includes some QoL stuff from custom nodes. CivitAI metadatas output. Output example-4 poses. I am fairly confident with ComfyUI but still learning so I am open to any suggestions if anything can be improved. Feel free to post your pictures! I would love to see your creations with my workflow! <333. For beginners on ComfyUi, start with Manager extension from here and install missing Custom nodes works fine ;) Newer Guide/Workflow Available https://civitai. yaml inside This is a small workflow guide on how to generate a dataset of images using ComfyUI. Download the model to models/controlnet. It works exactly the same, but though noodles. Install Custom Scripts custom nodes; Install Allor custom nodes; Install Cyclist custom nodes; Install WAS Node Suite custom Download and open this workflow. To achieve this, I used GPT to write a simple calculation node, you need to install it from my Github. x, SDXL , To show the workflow graph full screen. efficiency-nodes-comfyui. running this workflow (its not working fast but still Reverse workflow: Photo2Anime. I found that SD3 eyes look very good, but the skin textures do not. Quickly generate 16 images with SDXL Lightning in different styles. With this workflow for ComfyUi you can modify clothes on man and woman with different style. I use it to gen 16/9 4k photo fast and easy. The workflow is composed by 4 blocks: 1) Dataset; 2) Flux model loader and training settings; 3) Training progress validate; 4) End of training. " You're ready to run Flux on your I'm new in Comfyui, and share what I have done for Comfyui beginner like me. SDXL Workflow for ComfyUI with Multi This workflow creates movie poster parodies automatically. 
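The dataset-generation notes above mention batch-creating images, naming files, sorting them into folders and writing captions. A common convention (an assumption here, not a detail of any specific workflow) is one .txt caption sitting next to each image:

```python
# Sketch: organize (image, caption, class) triples into a training folder layout.
from pathlib import Path
import shutil

def organize_dataset(items, out_dir="dataset"):
    """items: list of (image_path, caption, class_name) tuples."""
    for i, (image_path, caption, class_name) in enumerate(items):
        folder = Path(out_dir) / class_name
        folder.mkdir(parents=True, exist_ok=True)
        stem = f"{class_name}_{i:04d}"
        shutil.copy(image_path, folder / f"{stem}{Path(image_path).suffix}")
        (folder / f"{stem}.txt").write_text(caption, encoding="utf-8")

organize_dataset([("render_001.png", "a knight in silver armor, studio light", "knight")])
```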
If you want to play with parameters, I advice you to take a look on the following from the Face Detailer as they are those that do the best for my generations : This is a minor update to make the workflow and custom node extension compatible with the latest changes in ComfyUI. How it works Generate stickers → Remove backg This is a simple workflow to automatically cut the main subject out of image and make a little colored border around it. [If you want the tutorial video I have uploaded the frames in a zip File] Using the Workflow. Need this lora and place it in the lora folder I just reworked the workflow and wrote a user-guide. https://civitai. My ComfyUI workflow that was used to create all example images with my model RedOlives: I see many beautiful and extremely detailed images in Civitai. Demo Prompts. Hand Fix (Leave a comment if you have trouble installing the custom nodes/dependencies, I'll do my best to assist you!) This simple workflow consists of two main steps: first, swapping the face from the source image to the input image (which tends to be blurry), and then restoring the face to make it clearer. It uses marigold depth detection on the original image and creates a new image using controlnet depth map and IP Adapter, with a little bit of help from either BLIP image captioning or your own prompt. This is a ComfyUI workflow base on LCM Latent Consistency Model for ComfyUI. Advanced controlnet: on the second and third workflow for more control over controlnet. If the pasted image is coming out weird, it could be that your (width or height) + padding is bigger than your source image. Disclaimer: this article was originally wrote to present the ComfyUI Compact workflow. Daily workflow: 1 text to image workflow at this moment. XY Grid - Demo Workflows. System Requirements (check v1. The template is intended for use by advanced users. If wished can consider doing an upscale pass as in my everything bagel workflow there. SDXL only. From subtle to absurd levels. Stable Diffusion 3 (SD3) 2B "Medium" model weights! Please note; there are many files associated with SD3. Select the correct mode from the This workflow is very good at transferring the style of image onto another image, while preserving the target image's large elements. Install Impact Pack custom nodes;. 50 and 0. com/models/539936 you must only have one toggle activated, for best use. - If the Let's approach workflow customization as a series of small, approachable problems, each with a small, approachable solution. Workflow in png file. TCD lora and Hyper-SD lora. We constructed our own workflow by referring to various workflows. ComfyUI-WD14-Tagger. Download and open this workflow. 2 Download ViT-H SAM model and place it in "\ComfyUI\ComfyUI\models\sams\ "; Download ControlNet Openpose model (both . Crisp and beautiful images with relatively short creation time, easy to use. Link model: https://civitai. Generate → Mirror latent → Generate → Mirror image (optional) Check out my other workflows It's a workflow to upscale image several times, gradually changing scale and parameters. Just put most suitable universal keywords for the model in positive (1st string) and negative (2nd string). This is a workflow that is intended for beginners as well as veterans. It seamlessly combines these components to achieve high-quality inpainting results while preserving image quality across successive iterations. Older versions are not better or worse, but they are long and expanded. 
Final Steps: Once everything is set up, enter your prompt in ComfyUI and hit "Queue Prompt. Install WAS Node Suite custom nodes; Instal ComfyMath custom nodes; Download and open this This is a workflow to change face expression. Install ComfyUI Manager and install all missing nodes and models needed for each custom nodes. Check Extra Option s and Auto Queue checkboxes in ComfyUI floating menu, press Queue Prompt. pth and . Background is transparent. Input image use MaskEditor and wait for output image at full resolution. Install Cyclist custom nodes; Install Impact Pack custom nodes (or any other wildcard support), and a wildcard for animals; Download and open this workflow. pshr. Install Masquerade custom nodes; Install VideoHelperSuite custom nodes; Download archive and open Rolling Split Masks workflow; Check "Extra Options" in ComfyUI menu and set 👀IntantID is available with SDXL model. Please note for my videos I also have did an upscale workflow but I have left it out of the base workflows to keep below 10GB VRAM. Install WAS Node Suite custom nodes; Download, open and run this workflow. com/kijai/ComfyUI-moondream This is a simple ComfyUI workflow for the awe This is pretty standard for ComfyUI, just includes some QoL stuff from custom nodes. If you like my model, please Basic LCM workflow used to create the videos from the Shatter Motion LoRA. ControlNet. Locate your ComfyUI install folder. 1 ComfyUI install guidance, workflow and example This guide is about how to setup ComfyUI on your Windows computer to run Flux. comfyui_controlnet_aux. Introducing ComfyUI Launcher! new. The workflow (JSON is in attachments): The workflow in general goes as such: Load your SD1. How sick is that! It was made by modifiyng Any Grid workflow. These files are Custom Workflows for ComfyUI. This workflow is what I use to save metadata to my images with ComfyUI. Everything you need to generate amazing images! Packed full of useful features that you can enable and disable on the Comfy Workflows. Pose Creator V2 Workflow in png file. It uses a few custom nodes, like a Groq LLM node, to come up with movie posters ideas based a list of user-defined genres. 👉. In the unlocked state, you can select, A popular modular interface for Stable Diffusion inference with a “ workflow ” style workspace. The workflow is designed to rebuild pose with "hand refiner" preprocesser, so the output file should be able to fix bad hand issue automatically in most cases. 5 + Workflow was made with possibility to tune with your favorite models in mind. The workflow then skillfully generates a new background and another person wearing the same, unchanged outfit from the original image. Download hand_yolo_8s model and put it in "\ComfyUI\models\ultralytics\bbox";. 3? This update added support for FreeU v2 in Before using this workflow, you should download these custom nodes and control net. Notes. 5) or Depth ControlNet (SDXL) model. Workflow for upscaling. This way, generation will automatically repeat itself until QR Code is readable. Vid2Vid Workflow - The basic Vid2Vid workflow similar to my other guide. OpenPose. Canvas Tab. No custom nodes required! If you want more control over a background and pose, look for OnOff workflow instead. yaml files), and put it into "\comfy\ComfyUI\models\controlnet". Workflows: SDXL Default workflow (A great starting point for using Description. Add the SuperPrompter node to your ComfyUI workflow. 
Too many will lead to a Workflows in ComfyUI represent a set of steps the user wishes the system to perform in achieving a specific goal. Your contribution is greatly appreciated and helps me to create more content. Introduction to Workflow is in the attachment json file in the top right. These workflows can be used as standalone utilities or as a bolt-on to existing workflows. It's a long and highly customizable ComfyUI windows portable | git repository. Its answers are not 100% correct. All of which can be installed through the ComfyUI-Manager If you encounter any nodes showing up red (failing to load), you can install the corresponding custom node packs through the ' Install Missing Custom Nodes First determine if you are running a local install or a portable version of ComfyUI. It includes the following Workflow of ComfyUI AnimateDiff - Text to Animation. So far it is incorporating some more advanced techniques, such as: multiple passes including tiled diffusion. ComfyUI-Manager. Please note that the content of external links are not You can downl oa d all the SD3 safetensors, Text Encoders, and example ComfyUI workflows from Civitai, here. VSCode. To toggle the lock state of the workflow graph. The problem is, it relies on zbar library, which is incredibly This workflow uses multiple custom nodes, it is recommended you install these using ComfyUI Manager. Buy Me A Coffee. GGUF Quantized Models & Example Workflows – READ ME! Both Forge and ComfyUI have support for Quantized models. Distinguished by its three-stage architecture (Stages A, B, C), it excels in efficient image compression and generation, surpassing other models in aesthetic quality and processing speed, while offering superior customization and cost-effectiveness. For this study case, I will use DucHaiten-Pony This is a very simple workflow to generate two images at once and concatenate them. As I mentioned in my previous article [ComfyUI] AnimateDiff Workflow with ControlNet and FaceDetailer about the ControlNets used, this time we will focus on the control of these three ControlNets. x, SD2. Method 1 - Attach VSCode to debug server. I have removed the workflow file while I try and figure out what I did wrong and fix it. If you want to generate images faster, please use the older workflow. inpainting on the spot (Take this with a grain of salt, but, This Workflow is made to create a video from any faces, without the need of a lora or an embedding, just from a single image. cd comfyui-prompt-reader-node pip install -r requirements. 5 model as it yielded the best results for faces, especially in terms of skin appearance. Simply select an image and run. Run the workflow to generate images. Disclaimer: Some of the color of the added background will still bleed into the final image. 3. ComfyUI is a super powerful node-based, modular, interface for Stable Diffusion. Now with Loras, ControlNet, Prompt Styling and a few more Goodies. https://huggingfa The Vid2Vid workflows are designed to work with the same frames downloaded in the first tutorial (re-uploaded here for your convenience). Provide a source picture and a face and the workflow will do the rest. The main model can use the SDXL checkpoint. 2. com/articles/2379 Using AnimateDiff makes things much simpler to do conversions with a fewer drawbac This ComfyUI workflow is designed for Stable Cascade inpainting tasks, leveraging the power of Lora, ControlNet, and ClipVision. , cruiserweight, lightweight, etc. ComfyUI_ExtraModels. All essential nodes and --v2. 
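The QR auto-stop trick described in these notes hinges on one question: is the code readable yet? Outside ComfyUI you can ask the same thing with the zbar bindings (pip install pyzbar, plus the system zbar library); this is an illustration, not the node's actual implementation.

```python
# Sketch: try to decode a QR code from an image file.
from PIL import Image
from pyzbar.pyzbar import decode

def readable_qr(path):
    return [r.data.decode("utf-8", errors="replace") for r in decode(Image.open(path))]

codes = readable_qr("qr_art.png")
if codes:
    print("readable:", codes)
else:
    print("not readable yet, keep queueing")
```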
For information where download the Stable Diffusion 3 models and where put the Prompt & ControlNet. But I still think the result turned out pretty well and wanted to share it with the community :) It's pretty self-explanatory. attached is a workflow for ComfyUI to convert an image into a video. yaml files), and put it into "\comfy\ComfyUI\models\controlnet "; Download QRPattern ControlNet Here's my compact ComfyUI workflow. For information where download the Stable Diffusion 3 models and where put the . Images used for examples: Note that image to RGB node is important to ensure that the alpha channel isn't passed into the rest of the workflow. I've redesigned it to suit my preferences and made a few minor adjustments. Merging 2 Images Upscaling with ComfyUI. This node requires you to set up a free account with groq, and to create your own API key token, and enter this in the \ComfyUI\custom_nodes\ComfyUI Introduction Here's my Scene Composer worklfow for ComfyUI . The veterans can skip the intro or the introduction and get started right away. SD Tune - Stable Diffusion Tune Workflow for ComfyUI. There might be a bug or issue with something or the workflows so please leave a comment if there is an issue with the workflow or a poor explanation. I used these Models and Loras:-epicrealism_pure_Evolution_V5 From Stable Video Diffusion's Img2Video, with this ComfyUI workflow you can create an image with the desired prompt, negative prompt and checkpoint(and vae) and then a video will automatically be created with that image. I hope it works now! Version 1. This workflow uses Dynamic Prompts to creatively generate varied prompts through a clever use of templates and wildcards. SDXL Default ComfyUI workflow. Read description below! Installation. Instantly replace your image's background. This workflow includes a Styles Expansion that adds over 70 new style prompts to the SDXL Prompt Styler style selector menu. Version 4 includes 4 different workflows based on your needs! Also if you want a tutorial teaching you how to do copying/pasting/blending, I've built this workflow with that in mind and facilitated the switch between SD15/SDXL models down to the literal virtual flick of a switch! — Custom Nodes used— ComfyUI-Allor. Check both if you want to make your own grid of unorthodox shape. Deepening Your ComfyUI Knowledge: To further enhance your understanding and skills in ComfyUI, exploring Jbog's workflow from Civitai is invaluable. CR Animation Nodes is a comprehensive suite of animation nodes, by the Comfyroll Model that uses dreamshaper and detailer for facial improvement. In archive, you'll find a version without Use Everywhere. I moved it as a model, since it's easier to update versions. A Civitai created sample The workflow highlights the strengths of SD3 and tries to compensate for its weaknesses. once you download the file drag and drop it into ComfyUI and it will populate the workflow. How to install. Upscale + Face Detailer For beginners, we recommend exploring popular model repositories: CivitAI open in new window - A vast collection of community-created models; HuggingFace open in new window - Home to numerous official and fine-tuned models; Download your chosen model checkpoint and place it in the models/checkpoints directory (create it if needed). How it works. Greetings! <3. 3 and SVD XT 1. It somewhat works. The XY grid nodes and templates were designed by the Comfyroll Team based on requirements provided by several users on the AI Revolution discord sever. 
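The "image to RGB" advice above is about flattening any alpha channel before the image enters the rest of the pipeline, so downstream nodes only ever see three channels. A Pillow sketch of the same idea (file names are placeholders):

```python
# Sketch: composite RGBA/LA/P images onto a solid background, then convert to RGB.
from PIL import Image

def to_rgb(path, background=(255, 255, 255)):
    img = Image.open(path)
    if img.mode in ("RGBA", "LA", "P"):
        img = img.convert("RGBA")
        base = Image.new("RGBA", img.size, background + (255,))
        img = Image.alpha_composite(base, img)
    return img.convert("RGB")

to_rgb("subject_with_alpha.png").save("subject_rgb.png")
```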
These resources are a goldmine for learning. ComfyUI-Background-Replacement.

The above animation was created using OpenPose and Line Art ControlNets with full-color input video.

Version 1.0 workflow. This is an inpaint workflow for Comfy that I did as an experiment. The usage description is inside the workflow. Example workflow.

You can download ComfyUI workflows for img2video and txt2video below, but keep in mind you'll need an updated ComfyUI, and you may also be missing custom nodes.

Dive into our curated collection of top ComfyUI workflows on CivitAI.

EZ way: just download this one and run it like another checkpoint ;) https://civitai.com

Includes a workflow based on InstantID for ComfyUI. Img2Img ComfyUI workflow.

ComfyUI image-to-image can be tricky and messy, so having a custom node that reads all the information from the image metadata created by ComfyUI or CPlus Save Image, and exposes it as outputs you can connect straight into your workflow, makes a big difference in ease, speed, and efficiency.

It's entirely possible to run the img2vid and img2vid-xt models on a GTX 1080 with 8GB of VRAM!

SD1.5 models and LoRAs to generate images at 8k-16k quickly. It will fill your grid with images one by one and automatically stops when done. SD1.5 + SDXL Base already shows good results. Clip Skip, RNG and ENSD options.