ComfyUI nodes: examples and tips — Tutorial | Guide

ComfyUI LayerDivider is a set of custom nodes that generates layered PSD files inside ComfyUI, based on the original implementation.

I've been using A1111 for almost a year. I am looking for a way to run a single node without running "the entire thing," so to speak — I would like to see the raw output that a node passed to the target node. I haven't seen a tutorial on this yet.

If you are unfamiliar with BREAK, it is part of Automatic1111.

It grabs all the keywords and tags and the sample prompts, lists the main triggers by count, and downloads sample images from Civitai.

Two nodes are selectors for style and effect, each with its own weight control. But standard A1111 inpainting works mostly the same as the ComfyUI example you provided.

You just tell it directly what to do, and it gives you the output you want. Here's an example using the nodes through the A8R8 interface with CN scribble.

PSA: if you've used the ComfyUI_LLMVISION node from u/AppleBotzz, you've been hacked.

I've been using ComfyUI as my go-to for about a month, and it's so much better than A1111. I see that ComfyUI is a better way to create, especially for SDXL.

If the node is there and not completely missing, try rebuilding it: right-click the node and click "recreate node". It worked fine for me — the new nodes were in the menu when I restarted. Note that I am not responsible if one of these breaks your workflows.

ComfyUI-Keyframed: ComfyUI nodes to facilitate parameter/prompt keyframing, with nodes for defining and manipulating parameter curves.

Hold left CTRL, drag to select multiple nodes, and combine them into one node. Then right-click this "new" node and select "Save as component" in the pop-up context menu.

When I dragged the photo into ComfyUI, there were two nodes called "PrimitiveNode" in the bottom left (under the "Text Prompts" group). But if I go to Add Node -> utils -> Primitive, it adds a completely different node, even though that node is itself called "PrimitiveNode". Same thing for the "CLIP Text Encode" node.

I've added the Structured Output node to VLM Nodes — update the VLM Nodes from GitHub, and you can find the node there. Now you can obtain your answers reliably: you can extract entities and numbers, classify prompts with given classes, and generate one specific prompt. You can add additional descriptions to fields and choose the attributes you want it to return.

If so, you can follow the high-res example from the GitHub. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well.

For feeding wildcards, a simple way is a multiline text field, or a txt file read from the wildcards directory in your node folder. Then there are many ways to feed each wildcard — the sketch below shows the basic idea.
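To make the wildcard idea concrete, here is a minimal sketch of how a substitution pass could work. The directory layout and the __token__ syntax follow the common wildcard convention; they are assumptions for illustration, not any specific node pack's implementation.

```python
import random
import re
from pathlib import Path

# Minimal wildcard sketch: replace __name__ tokens in a prompt with a random
# line from wildcards/name.txt. Paths and token syntax follow the common
# wildcard convention; adjust to whatever your node pack actually expects.
WILDCARD_DIR = Path("wildcards")

def expand_wildcards(prompt: str, rng: random.Random) -> str:
    def pick(match: re.Match) -> str:
        lines = (WILDCARD_DIR / f"{match.group(1)}.txt").read_text().splitlines()
        options = [line.strip() for line in lines if line.strip()]
        return rng.choice(options)
    return re.sub(r"__(\w+)__", pick, prompt)

# Assuming wildcards/artist.txt and wildcards/style.txt exist:
print(expand_wildcards("a portrait of __artist__, __style__", random.Random(42)))
```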
I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, the refiner, and model merging would be appreciated.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

The first example is the panda with a red scarf, with less prompt bleeding of the red color thanks to conditioning concat. The third example is the anthropomorphic dragon-panda, with conditioning average.

Deforum-like animation using the ComfyUI MTB nodes. This is the example animation I do with Comfy: https://… New tutorial: how to rent up to 1-8x 4090 GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.).

This condenses entire workflows into a single node, saving a ton of space on the canvas. If you want to try it, you can nest nodes together in ComfyUI (use the NestedNodeBuilder custom node).

Thanks again for your great suggestion. See the high-res fix example, particularly the second-pass version.

ComfyUI question: does anyone know how to use ControlNet (one or multiple) with the Efficient Loader & ControlNet Stacker node? A picture example of a workflow would help a lot. I have LoRA working, but I just don't know how to do ControlNet with this, and I just don't get how these nodes function.

Are there any ComfyUI nodes (i.e. extensions) that you know of that have a button on them? I was thinking about making my extension compatible with ComfyUI, but I am at a loss when it comes to placing a button on a node. This is a question for any node developer out there.

The reason why you typically don't want a final interface for workflows is that many users will eventually want to apply LUTs and other post-processing filters.

Let's say that I want to transmit the output of a Math node that does a calculation. Is it possible to do that in ComfyUI?

Soon, there will also be examples showing what can be achieved with advanced workflows.

Constant noise for a whole batch doesn't exist in base Comfy yet (there's a PR about it), so I made a simple node to generate the noise instead; it can then be used as the latent input to the advanced/custom sampler nodes with "add_noise" off. You can find the node here — the sketch below shows the core idea.
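A minimal sketch of what such a node's core logic might look like. The function name and signature are hypothetical; the LATENT dict layout ({"samples": tensor of shape [B, C, H, W]}) follows ComfyUI conventions, but this is illustrative, not the actual node's code.

```python
import torch

# Draw ONE noise tensor and tile it across the batch dimension, so every
# image in the batch starts from identical noise. Feed the result to a
# custom sampler whose add_noise option is turned off.
def constant_batch_noise(latent: dict, seed: int) -> dict:
    samples = latent["samples"]                              # [B, C, H, W]
    gen = torch.Generator().manual_seed(seed)
    noise = torch.randn(samples.shape[1:], generator=gen)    # one [C, H, W] draw
    batch_noise = noise.expand(samples.shape[0], -1, -1, -1).clone()
    return {"samples": batch_noise}
```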
Hey r/comfyui, I just published a new video going over the recent updates as ComfyUI reaches the end of the year. The video covers the new SD 2.1 Turbo model, front-end improvements like group nodes, undo/redo, and rerouting primitives, plus experimental features.

The documentation is remarkably sparse and offers very little in the way of explaining how to use it.

Use the WAS suite Number Counter node — it's the shiz. Primitive nodes aren't fit for purpose; they need to be remade, as they are buggy anyway.

You need all the files to use the model — about 16 GB in total for InternLM. The easiest way is to just git clone the Hugging Face repo, but if you do that, make sure you delete the large blobs in the .git folder afterwards; otherwise you will be saving two copies of every file and wasting disk space. And remember, SDXL does not play well with SD 1.5, so that may give you a lot of your errors.

I only started making nodes today! I made three main things, all with workflow examples present, including a node to provide regular and scaled resolutions to other nodes, with a switch between SD 1.5 and SDXL. I made it because previously I had to attach a bunch of type conversions, operations, and switches together to get the same result.

AnyNode does what you ask it to do. It uses an LLM (OpenAI API or a local LLM) to generate code that creates any node you can think of, as long as the solution can be written with code. You type what you want its function to be in your ComfyUI workflow, and the node itself (or rather, the LLM inside of it) writes the Python code that runs the process. The way AnyNode works is that the node is the workflow: as long as you use the same prompt and the LLM gets to the same conclusion, that's the whole workflow. There's a tutorial video showing how to use it — I show a couple of use cases and go over general usage, including an example of using AnyNode in an image-to-image workflow.

In ComfyUI, go into Settings and enable the dev mode options. That will give you a Save (API Format) option on the main menu. Save your workflow using this format, which is different from the normal JSON workflows.

The Checkpoint selector node can sometimes be a pain, as it's not a string, but some custom nodes want a string.

The new update to Efficiency added a bunch of new nodes for XY plotting, and you can add inputs on the fly. The sampler also now has a new option for seeds, which is a nice feature. Here are some sample workflows with XY plots for different use cases that can be explored. There are also Efficiency custom nodes that come pre-combined with several related things in one node, such as both prompts plus the resolution and model choice in one, etc.

Copy-paste from my wish list post:
- A node hub: a node that accepts any input (including inputs of the same type) from any node, in any order, able to: …
- Mirrored nodes: if you change anything in the node or its mirror, the other linked node reflects the changes.
- Python: a node that allows you to execute Python code written inside ComfyUI.

The Python node, in this instance, is effectively used as a gate. The workflow takes a couple of prompt nodes, pipes them through a couple more, concatenates them, tests using Python, and ultimately adds to the prompt if the condition is met.

I created CRM custom nodes for ComfyUI. (This post is addressed to ComfyUI users — unless you're interested too, of course ^^) Hey guys! The other day on the ComfyUI subreddit, I published my LoRA Captioning custom nodes, very useful for creating captions directly from ComfyUI.

ComfyUI nodes for inpainting/outpainting using the new LCM model (workflow included), with the original DreamShaper model. Only the LCM Sampler extension is needed, as shown in this video. I tested it with the DDIM sampler and it works. One user's error report with these nodes looks like this (excerpts):

    File "D:\ComfyUI\nodes.py", in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\Super SD 2.0\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LCM\nodes.py", line 62, in sample
        result = self.pipe(
                 ^^^^^
    File "…\nodes.py", line 1286, in sample
        return common_ksampler(…)

I just re-ran it and it still works — only default nodes are used, still wired up the same.

I'm not sure that custom script allows you to select a new checkpoint, but what it is doing can be done manually with more nodes.

Hi all — sorry if this seems obvious or has been posted before, but I'm wondering if there's any way to get some basic info nodes. For example, one that shows the image metadata like PNG Info in A1111, or better still, one that shows the LoRA info so I can see what the trigger words and training data were.

But I highly suggest learning the nodes. Anyway, I am a nooby and this is how I approach Comfy. I found it extremely difficult to wrap my head around initially, but after a few days of going through example nodes and the ComfyUI source I started being productive — a minimal node skeleton is sketched below.
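For anyone starting out the same way: the basic contract for a ComfyUI custom node is small. The INPUT_TYPES / RETURN_TYPES / FUNCTION / NODE_CLASS_MAPPINGS structure below is ComfyUI's standard one; the node itself (a trivial string repeater) is just an illustration.

```python
# Minimal ComfyUI custom node skeleton.
class ExampleNode:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the widgets/sockets the node exposes in the UI.
        return {
            "required": {
                "text": ("STRING", {"default": "", "multiline": True}),
                "count": ("INT", {"default": 1, "min": 1, "max": 100}),
            }
        }

    RETURN_TYPES = ("STRING",)   # one output socket of type STRING
    FUNCTION = "run"             # method ComfyUI calls when the node executes
    CATEGORY = "examples"        # where it appears in the Add Node menu

    def run(self, text, count):
        # Outputs are always returned as a tuple, matching RETURN_TYPES.
        return (", ".join([text] * count),)

# ComfyUI discovers nodes through these mappings in your package's __init__.py.
NODE_CLASS_MAPPINGS = {"ExampleNode": ExampleNode}
NODE_DISPLAY_NAME_MAPPINGS = {"ExampleNode": "Example Node"}
```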
I did a plot of all the samplers and schedulers as a test at 50 steps. Do you have any example images to show what difference the samplers can make?

You can connect the input and output on the node to any input or output on any other node. This changes everything for me.

Install Missing Nodes can't always find the missing node in the package list. It lacks a vital feature on the nodes list: which custom node package contains a particular node? That's a drop-dead feature, IMHO. The nodes list is great, but it is not useful for finding a custom node unless that node's name contains text related to its package name. Also, the Nodes Library is pretty neat and clear.

I don't know why you don't want to use the Manager: if you install nodes with the Manager, a new folder is created in the custom_nodes folder, and if something is messed up after an installation, you sort the folders by modification date and remove the last one you installed. Read the nodes' installation information on GitHub. Try Civitai, too.

I like all of my models individually, but you can get some really awesome styles out of experimenting with it and trying out mixes. For example, I like to mix Excelsior with Arthemy Comics, or Sketchstyle, etc.

A set of nodes has been included to set specific latents to frames, instead of just the first latent. Essentially it provides a ComfyUI … (truncated). Or, at least, kinda. My mind's busted. A sketch of the frame-assignment idea follows this section.

The most interesting innovation is the new Custom Lists node. When you launch ComfyUI, the node builds itself based on the TXT files contained in the custom-lists subfolder.

Yeah, go for it — check it first though, I don't think I wrecked it :P I replaced one node and moved one to a different group, and replaced the primitive string node so I could reroute the connection with a WAS string node. And the movement animation node moved into the movement group, because all the connections were there anyway. I think that's all I changed.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop the sample image into ComfyUI.
Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

I've been trying to do something similar to your workflow and ran into the same kinds of problems.

Here's a very interesting node 👍 However, I have three small criticisms to make: you need to run the workflow once to get the node number for which you want information, and then a second time to get the information (or two more times if you make a mistake). …

I am at the point where I need to filter out images based on a tag list.
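A sketch of the "set specific latents to frames" idea mentioned above: overwrite selected frame indices in a batch of latents with a chosen latent. The helper name and signature are hypothetical; only the ComfyUI-style LATENT dict layout is assumed.

```python
import torch

# Overwrite selected frames in a batch of latents with a specific latent,
# instead of only affecting the first one. Illustrative, not the node's code.
def set_latents_at_frames(batch: dict, replacement: dict, frame_indices: list[int]) -> dict:
    samples = batch["samples"].clone()     # [B, C, H, W]
    rep = replacement["samples"][0]        # use the first latent as the source
    for i in frame_indices:
        samples[i] = rep                   # assign it to each requested frame
    return {"samples": samples}
```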
Having a computer science background, I feel that the potential for ComfyUI is huge if some basic branching and looping components are added, to unleash the creativity of developers. Node-RED (an event-driven, node-based programming language) has this functionality, so it could definitely work in a node-based environment such as ComfyUI. However, the other day I accidentally discovered comfyui-job-iterator (ali1234/comfyui-job-iterator: a for loop for ComfyUI, on GitHub). I ended up building a custom node that is very custom for the exact workflow I was trying to make, but it isn't good for general use.

I love downloading new nodes and trying them out.

This is great! For quite a while, I kept wishing for a "hub" node.

What are your favorite custom nodes (or node packs), and what do you use them for?

So you want to make a custom node? You looked it up online and found very sparse or intimidating resources? I love ComfyUI, but it has to be said: despite being several months old, its documentation surrounding custom nodes is godawful. Like a lot of you, we've struggled with inconsistent (or nonexistent) documentation, so we built a workflow to generate docs for 1600+ nodes. We wrote about why and linked to the docs in our blog, but this is really just the first step for us.

Seems relevant here: I wrote a module to streamline the creation of custom nodes in ComfyUI. I was getting frustrated by the amount of overhead involved in wrapping simple Python functions to expose them as new ComfyUI nodes, so I decided to make a new decorator type to remove all the hassle from it. Just write a regular Python function, annotate the signature fully, then slap a @ComfyFunc decorator on it. The @ComfyFunc decorator inspects your function's annotations to compose the appropriate node definition for ComfyUI, eliminating all the boilerplate and redundant information. That will get you up and running with all the ComfyUI-Annotation example nodes installed, and you can start editing from there.
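A sketch of what that annotation-based style looks like, based on the description above. The import path, type names, and decorator options are assumptions drawn from the comment, not verified against the actual package; treat them as illustrative.

```python
# A plain Python function whose fully annotated signature is turned into a
# node by the decorator. Import path and helper types are assumed, not
# confirmed against the real ComfyUI-Annotations package.
from comfy_annotations import ComfyFunc, ImageTensor, NumberInput

@ComfyFunc(category="examples")
def brighten(image: ImageTensor,
             factor: float = NumberInput(1.2, 0.0, 4.0)) -> ImageTensor:
    """Multiply an image by a constant factor, clamped back into [0, 1]."""
    return (image * factor).clamp(0.0, 1.0)
```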
Don't know about other problems, although the first time I used SUPIR it told me my ComfyUI was too old and I had to update, but that didn't cause problems for me last week.

*Note: I'm not exactly sure which custom node is causing the issue, but I was able to resolve the problem after disabling these custom nodes: comfy_clip_blip_node, ComfyUI-paint-by-example, ComfyUI_TiledKSampler, image-resize-comfyui, ltx_interpolation, masquerade-nodes-comfyui, sd-dynamic-thresholding, yk-node-suite-comfyui. (There may be additional nodes not included in this list.) They conflict with the UE nodes (Anything Everywhere): white areas appear, causing the UI to break when zooming in or out. If you suspect that the Workspace Manager custom node suite is the culprit, try disabling it via the ComfyUI Manager, restart ComfyUI, reload the browser, and see if it makes a difference. If you still experience the same issue after disabling these nodes, let me know, and I'll share any additional nodes I disabled.

Thanks a lot for this amazing node! I've been wanting it for a while, to compare various versions of one image. Sorry if I seemed greedy, but for upscale image comparing, I think the best tool is from Upscale.media, which can zoom in and move around simultaneously, making it easy to check the details of big images. For example, you can do side-by-side comparisons of workflows — one with only the base model and one with base + LoRA — and see the difference.

Nodes are not always better. For many tasks, yes, but nodes can also make things way more complicated; for example, try creating some shader effects using a node-based shader editor — some things are such that a few lines of code become a sprawling graph.

It's installable through the ComfyUI Manager and lets you have a song or other audio files drive the strengths on your prompt scheduling. It uses the amplitude of the frequency band and normalizes it to strengths that you can add to the Fizz nodes. Here's a basic example of using a single frequency band range to drive one prompt (workflow included).

Yes, the current SDXL version is worse, but it is a step forward, and even in its current state it performs quite well.

Re: face & hand refiners — the reason why I insist on using SD 1.5 checkpoints is that they are the only ones compatible with the ControlNet Tile that I use. Although it can handle the recently released ControlNet Tiled, I choose not to use it in this workflow.

I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function. The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example, or only accepting 2 inputs instead of infinite ones).

Short version: you screenshot a Reddit announcement and have a Reddit account — did you post this question (about the safetensors file) in response to it? Colab example (for anyone following this that needs it), in case you didn't find it.

I'm looking for a way to be more organized with naming — for example, appending the name of a source video to the final video. Honestly, it wouldn't be a bad idea to have an A1111-like node workflow for easier onboarding.

So when I saw the recent Generative Powers of Ten video on r/StableDiffusion, I was pretty sure the nodes to do it already exist in ComfyUI.

Back to tag filtering: I should be able to skip the image if some tags are, or are not, in a tag list. For example: is the tag "2girl" in the list --> do not save; is the tag "looking at viewer" in the list --> save. I am thinking of the scenario where you have generated, say, a thousand images with a … (truncated). A sketch of the rule follows.
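The save/skip rule described above, sketched as a plain predicate. Function and parameter names are hypothetical; the two example tags come from the comment itself.

```python
# Skip an image if any blacklisted tag is present; keep it only if all
# required tags are present. Names are illustrative.
def should_save(image_tags: set[str], required: set[str], blocked: set[str]) -> bool:
    if image_tags & blocked:          # e.g. "2girl" in the blocklist -> do not save
        return False
    return required <= image_tags     # e.g. "looking at viewer" must be present

print(should_save({"1girl", "looking at viewer"},
                  required={"looking at viewer"},
                  blocked={"2girl"}))   # True -> save this one
```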
For example: swapping out one loader for another loader.

The path is …\Data\Packages\ComfyUI\custom_nodes\was-node-suite-comfyui. Then find the example workflows. If a box is red, then it's missing; ComfyUI Manager will identify what is missing and download it for you.

I made Steerable Motion, a node for driving videos with batches of images. A ComfyUI node can convert multiple photos into a coherent video — even unrelated images — and also provide a sample workflow.

SD 1.5 BrushNet is the best inpainting model at the moment. GitHub repo and ComfyUI node by kijai (only SD 1.5 for the moment).

Totally newbie in node development and I'm hitting a wall — my research didn't yield much, so I might ask here before I start creating my custom nodes. Just reading the custom node repos' code shows the authors have a lot of knowledge of how ComfyUI works and how to interface with it, but I am a bit lost (in the large amount of code in ComfyUI's repo and the large number of custom node repos) as to how to get started.

You can, with Impact or Inspire nodes (image list), if you have the VRAM.

For now, only text generation inside ComfyUI, with LLaMA models like vicuna-13b-4bit-128g. In the image is a workflow (untested) to enhance prompts using text generation. The goal is to build a node-based Automated Text Generation AGI. This extension should ultimately combine the powers of, for example, AutoGPT, babyAGI, and Jarvis.

Trying to make a node that selects terms for a prompt (similar to Preset Text, but with different terms per node). I have two string lists in my node.

For example, with the "quality of life" nodes there is one that enables you to choose which pictures from the batch you want to process further. And if you use the cg-use-everywhere nodes, you do it all the time.

Find the Node Copy button in the Generation Data section; after you click it, you should be able to paste it into a ComfyUI window using Ctrl+V. I'm a basic user for now, but I want the deep dive. I'm currently exploring new ideas for creating innovative nodes for ComfyUI.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion — so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to. Check the examples inside the code: there is one using a regular POST request and one using websockets. The POST variant is sketched below.
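A minimal version of the POST-request approach, in the spirit of the basic API example script shipped with ComfyUI. It assumes a default local server on port 8188 and a workflow saved via "Save (API Format)"; the filename is illustrative.

```python
import json
import urllib.request

# Queue an API-format workflow over ComfyUI's HTTP API.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))   # response includes a prompt_id you can poll
```

The websocket variant does the same thing but listens for progress and executed events instead of polling.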
You can right-click a node in ComfyUI and break out any input into different nodes; we use multi-purpose nodes for certain things because they are more flexible and can be cross-linked into multiple nodes.

Here are my findings on FreeU. The neutral value for all FreeU options (b1, b2, s1, s2) is 1. Not unexpected, but as these are not the default values in the node, I mention it here.
- b1: responsible for the larger areas of the image
- b2: responsible for the smaller areas of the image
- s1: responsible for the details in b2
- s2: responsible for the details in b1
So s1 belongs with b2, and s2 with b1.

FWIW, I was using it with the PatchModelAddDownscale node to generate with RV 5.1 and LCM, for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

I made a tiled sampling node for ComfyUI that I just wanted to briefly show off. This specific image is the result of repeated upscaling from 512 -> 1024 -> 2048 -> 3072 -> 4096, using a denoise strength of 0.4 and tiles of 768x768.

With some nervous trepidation, I release my first node for ComfyUI: an implementation of the DemoFusion iterative mixing sampling process. After each step the first latent is downscaled and composited into the second, which is downscaled and composited with the third, and so on.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them.

Fernicles SDTools V3 — ComfyUI nodes. First off, it's a good idea to get the custom nodes off git, specifically the WAS Suite, Derfuu's nodes, and Davemane's nodes. I also recommend getting the Efficiency nodes for ComfyUI and the Quality of Life Suite; both have amazing options for automating, prepping, and manipulating your prompt/settings.

For the record, you can multi-select nodes for update in the custom nodes manager (if you want to update only a selection of nodes, for example, and not all of them at once). It's a little counter-intuitive, as the "select all" checkbox is disabled by default.

I put an example image/workflow in the most recent commit that uses a couple of the main ones, and the nodes are named sensibly, so if you have the extension installed you should be able to just skim through the menu and search for the ones that aren't as straightforward.

I want to apply a generated character (for example, a Midjourney image) to a face mocap — for this I know there are tools like ControlNet — but all of this for a video. So I need a way to take a video (a face performance) and analyze it with ControlNet. It would require many specific image manipulation nodes.

Been playing around with ComfyUI and got really frustrated with trying to remember what base model a LoRA uses and its trigger words, so I wrote a custom node that shows a LoRA's trigger words, examples, and what base model it uses. Seems like a tool that someone could make a really useful node with — a node that could inject the trigger words into a prompt for a LoRA, show a view of sample images, or all kinds of things.
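One way such a node can get that data without any web requests is to read the metadata header that kohya-style trainers embed in the .safetensors file. The header format below is the standard safetensors layout; the specific ss_* keys are a trainer convention and may be absent, so treat them as assumptions.

```python
import json
import struct

# Read the JSON header of a .safetensors file. Kohya-style LoRA trainers
# store training info under "__metadata__" (keys like ss_base_model_version
# or ss_tag_frequency); those keys are a convention, not guaranteed.
def read_safetensors_metadata(path: str) -> dict:
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]   # little-endian uint64
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

meta = read_safetensors_metadata("my_lora.safetensors")   # hypothetical file
print(meta.get("ss_base_model_version"))
tags = json.loads(meta.get("ss_tag_frequency", "{}"))     # tag counts per dataset folder
```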
The options I can't find anywhere now are how to enable Auto Queue and how to clear the full queue. Are these options hidden?

I messed with the Conditioning Combine nodes but wasn't having much luck, unfortunately.

A few new nodes and functionality for rgthree-comfy went in recently. Fast Groups Muter & Fast Groups Bypasser: like their "Fast Muter" and "Fast Bypasser" counterparts, but collecting groups automatically in your workflow. Filter and sort by their properties (right-click on the node and select "Node Help" for more info). Is there any real breakdown of how to use the rgthree context and switching nodes? I can't find any decent examples or explanations of how this works, or the best ways to implement it. Any advice would be appreciated. https://www.reddit.com/r/comfyui/s/JQVkyMTM5w

Plus a quick run-through of an example ControlNet workflow. If you find it confusing, please post here for help or create an Issue in GitHub.

Unless someone did a node with this option, you can't.

Updated node set for composing prompts. Two nodes are used to manage the strings: in the input fields you can type the portions of the prompt, and with the sliders you can easily set the relative weights.

In general, renaming slots can make your workflow much easier to understand, just as a good programmer names their variables carefully to maximize code readability. As you get comfortable with ComfyUI, you can experiment and try editing a workflow.

If you've ever been looking for a specific type of node that doesn't exist yet, or if there's a particular functionality you've been missing in your projects, I'd love to hear about it!

This workflow by Antzu is a good example of prompt scheduling. I have installed all missing nodes with ComfyUI Manager and been to this page, but there is very … (truncated).

All the shining dots are connected to the inputs plugged into the UE nodes, called Anything Everywhere and Prompts Everywhere. For example, the Checkpoint Loader is plugged into every Sampler in that workflow already — without all the noodles! In the top-left corner: THE LOADER. This is where you put all the nodes that load anything.

On muting unused branches: disable all nodes; identify the useful nodes that were executed; iterate through all useful nodes, walking backwards through the graph and enabling all the parent nodes; any node that is part of a branch that is not useful stays disabled. Maybe the problem is figuring out whether a node is useful? It could be more than just the nodes that output an image. The walk itself is sketched below.
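The backwards walk described above, sketched as a plain graph traversal. The data layout (node id -> list of upstream node ids) is a simplification for illustration, not ComfyUI's real graph structure.

```python
# Start from the nodes whose output you care about, walk backwards through
# the links, and everything never reached belongs to an unused branch.
def useful_nodes(parents: dict[str, list[str]], outputs: list[str]) -> set[str]:
    keep, stack = set(), list(outputs)
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(parents.get(node, []))   # visit every parent
    return keep

graph = {"save": ["vae"], "vae": ["sampler"], "sampler": ["ckpt"],
         "preview": ["old_sampler"]}
keep = useful_nodes(graph, ["save"])
disabled = set(graph) - keep          # {"preview"} -> mute these
print(sorted(keep), sorted(disabled))
```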
Like they said, though — A1111 will be better if you don't understand how to use the nodes in Comfy.

Sample prompt from the Stable Audio V2.0 web site: "Soulful Boom Bap Hip Hop instrumental, Solemn effected Piano, SP-1200, low-key swing drums, sine wave bass, Characterful, Peaceful, Interesting, well-arranged composition, 90 BPM." So far the drum beats are good, drum+bass too — it also responds to BPM in the prompt, and you can listen to the music inside ComfyUI. One install error I hit with the sampler:

    …\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-StableAudioSampler
    module for custom nodes: No module named 'stable_audio_tools'

Batch on the latent node offers more options when working with custom nodes, because it is still part of the same workflow.

Since LoRAs are a patch on the model weights, they can also be merged into the model. You can also subtract model weights and add them, as in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI. An example workflow can be found here, and the arithmetic is sketched below.
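What the add-difference formula does at the tensor level, as a hedged sketch over raw state dicts. In ComfyUI itself you would wire ModelMergeSubtract into ModelMergeAdd instead; this just shows the arithmetic.

```python
import torch

# merged = other + (inpaint - base) * scale, applied key by key.
def add_difference(other: dict, inpaint: dict, base: dict,
                   scale: float = 1.0) -> dict:
    merged = {}
    for key, tensor in other.items():
        if key in inpaint and key in base:
            merged[key] = tensor + (inpaint[key] - base[key]) * scale
        else:
            merged[key] = tensor.clone()   # keys missing from a donor pass through
    return merged
```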
It aims to be a high-abstraction node: it bundles together a bunch of capabilities that could in theory be separated, in the hope that people will use the combined capability as a building block and that it simplifies a lot of potentially complex settings.

Sometimes the devs update and change the nodes' display dictionaries, and the workflows can't display them properly anymore.

IPAdapter with the use of attention masks is a nice example of the kind of tutorial that I'm looking for. This tutorial does a good job of breaking it down.

Custom nodes/extensions: ComfyUI is extensible, and many people have written some great custom nodes for it.

A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways. Start with simple workflows. It doesn't have all the features, and for those I do occasionally have to switch back, but the node-style editor in Comfy is so much clearer, and being able to save and swap layouts is amazing.

extra_model_paths.yaml.example — oh hey, wait, is this a post about the Style Loader node on ComfyUI being stupid and not finding my styles?

LLaVA -> LLM -> AudioLDM-2: example workflow in the examples folder on GitHub.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. To create this workflow I wrote a Python script to wire up all the nodes — the sketch below shows the shape of that approach.
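A tiny sketch of script-generated wiring, emitting ComfyUI's API-format JSON. The node-id-to-{"class_type", "inputs"} mapping and the ["source_id", output_index] link convention follow the API format; the two-node graph itself is illustrative (a real graph would add a sampler, VAE decode, and so on).

```python
import json

# Generate a minimal API-format workflow from code.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},     # hypothetical file
    "2": {"class_type": "CLIPTextEncode",
          # ["1", 1] links to node 1's second output (CLIP).
          "inputs": {"clip": ["1", 1], "text": "a panda with a red scarf"}},
}

with open("generated_workflow.json", "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)
```

A script like this makes it easy to emit dozens of parameter variations of the same graph and queue them against the API endpoint shown earlier.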