ComfyUI workflow PNG examples (Reddit)

The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them. The workflow is kept very simple for this test: Load Image, Upscale, Save Image.

It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. But Reddit will strip it away.

You can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8).

OP probably thinks that ComfyUI has the workflow included with the PNG, and it does.

Example workflows:
- Merge two images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point for using ControlNet
- Inpainting workflow: a great starting point

I think it was 3DS Max. Click this and paste into Comfy.

Here I just use: "futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background". I'm not using "breathtaking", "professional", "award winning", etc., because that's already handled by "sai-enhance". I totally agree.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

AnimateDiff in ComfyUI is an amazing way to generate AI videos. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Flux Schnell is a distilled 4-step model.
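That `(phrase:weight)` emphasis syntax is simple enough to parse mechanically. As a rough illustration only (this is a toy regex of mine, not ComfyUI's actual prompt tokenizer, which also handles nested parentheses and escapes), a sketch:

```python
import re

def parse_emphasis(prompt):
    """Toy parser for '(text:weight)' emphasis syntax.

    Returns a list of (text, weight) pairs; text outside any
    parentheses gets the default weight 1.0. Nesting and escaped
    parentheses are deliberately ignored in this sketch.
    """
    pairs = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        before = prompt[pos:m.start()].strip(" ,")
        if before:
            pairs.append((before, 1.0))
        pairs.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        pairs.append((tail, 1.0))
    return pairs
```

So `parse_emphasis("a cat, (good code:1.2)")` yields the plain text at weight 1.0 and the emphasized phrase at 1.2.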
Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

Those images have to contain a workflow, so one you've generated yourself, for example. I can't load workflows from the example images using a second computer.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. I can load ComfyUI through 192.168.1:8188.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. I generated the images from ComfyUI.

This workflow can use LoRAs and ControlNets, and enables negative prompting with KSampler, dynamic thresholding, inpainting, and more.

I conducted an experiment on a single image using SDXL 1.0.

The 1.0 version of the SDXL model already has that VAE embedded in it.

Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked.

I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata.

This repo contains examples of what is achievable with ComfyUI.

Any ideas on this? Give it a folder of images of outfits (with, for example, outfit1.png).

Plus there are a ton of extensions which provide plenty of ease-of-use cases. And the documentation uses highly technical language, with no examples, which makes it worse.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched. This makes it potentially very convenient to share workflows with others.

Apparently the dev uploaded some version with trimmed data. But generally speaking, workflows seen on GitHub can also be used. Hope you like some of them :)
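The outfit-folder idea above is easy to drive from a small batch script. A minimal sketch, assuming the layout described in the thread (outfit1.png next to outfit1.txt holding its prompt; the helper name is mine):

```python
from pathlib import Path

def collect_outfit_prompts(folder):
    """Pair each outfit image with its same-named .txt prompt file.

    Returns {image filename: prompt text} for every PNG in the folder
    that has a matching .txt next to it; images without one are skipped.
    """
    pairs = {}
    for img in sorted(Path(folder).glob("*.png")):
        txt = img.with_suffix(".txt")
        if txt.exists():
            pairs[img.name] = txt.read_text(encoding="utf-8").strip()
    return pairs
```

Each (image, prompt) pair can then be queued through the workflow one at a time.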
If I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

Generate one character at a time and remove the background with the Rembg Background Removal Node for ComfyUI.

ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own.

ComfyUI could have workflow screenshots, like the example repo has, to demonstrate possible usage and also the variety of extensions.

Just wanted to share that I have updated the comfy_api_simplified package, and now it can be used to send images, run workflows and receive images from the running ComfyUI server.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

- If the image was generated in ComfyUI and the metadata is intact (some users/websites remove the metadata), you can just drag the image into your ComfyUI window.

Share, discover, and run thousands of ComfyUI workflows.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

The PNG files produced by ComfyUI contain all the workflow info. Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Where can one get such things? It would be nice to use ready-made, elaborate workflows! For example, ones that might do tile upscale like we're used to in AUTOMATIC1111, to produce huge images.
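Since the workflow rides along as PNG text metadata, you can also pull it out yourself when drag-and-drop fails. A minimal sketch, assuming only that ComfyUI saves the graph under a "workflow" tEXt chunk (which is how its saved PNGs normally look); images whose metadata was stripped on upload simply yield None:

```python
import json
import struct

def load_workflow_from_png(path):
    """Extract the workflow JSON that ComfyUI embeds in its PNGs.

    ComfyUI writes the node graph into a PNG tEXt chunk under the
    keyword 'workflow'. Returns the parsed dict, or None when the
    chunk is missing (e.g. the metadata was stripped on upload).
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            keyword, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            if keyword == b"workflow":
                return json.loads(text.decode("latin-1"))
        pos += 8 + length + 4  # chunk header + payload + CRC
    return None
```

Handy for checking whether a downloaded image still carries its workflow before blaming ComfyUI.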
You can then load or drag the following image in ComfyUI to get the workflow. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the Preliminary, Base and Refiner setups.

But mine do include workflows, for the most part, in the video description.

There is the workflow .png in the file list at the top, and then you should click Download Raw File, but alas, in this case the workflow does not load.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Increasing the sample count leads to more stable and consistent results. I found it very helpful.

I'll do you one better, and send you a PNG you can directly load into Comfy. But for a base to start at, it'll work.

So I added reverse image search that queries a workflow catalog to find workflows that produce similar-looking results.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. Hopefully this will be useful to you.

You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

Here is the workflow for ComfyUI, updated to a folder on Google Drive with both JSON and PNG of some of my workflows (example by @midjourney_man - img2vid). No refiner.

Unfortunately, Reddit strips the workflow info from uploaded PNG files.

EDIT: For example, this workflow shows the use of the other prompt windows.

Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, and re-rendering with a second model, etc.
If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

The sample prompt as a test shows a really great result.

In this guide I will try to help you get started with this, and give you some starting workflows to work with. This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion.

PS: If someone has access to Magnific AI, please can you upscale and post the result for 256x384 (5 JPG quality) and 256x384 (0 JPG quality)?

Hello, I'm wondering if the ability to read workflows embedded in images is connected to the workspace configuration.

Breakdown of workflow content. And you need to drag them into an empty spot, not a Load Image node or something.

From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and the position in the composition.

Welcome to the unofficial ComfyUI subreddit.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL. https://youtu.be/ppE1W0-LJas - the tutorial.

If you asked about how to put it into the PNG, then you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well.

I tried to find either of those two examples, but I have so many damn images I couldn't find them.

I have a client who has asked me to produce a ComfyUI workflow as a backend for a front-end mobile app (which someone else is developing in React). He wants a basic faceswap workflow.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Most workflows you see on GitHub can also be downloaded.
I have a workflow with this kind of loop, where the latest generated image is loaded, encoded to latent space, sampled with 0.5 noise, decoded, then saved. Each time I do a step, I can see the color being somehow changed, and the quality and color coherence of the newly generated pictures are hard to maintain.

Hey guys, I always love seeing a cool image online and trying to reproduce it, but finding the original method or workflow is troublesome, since Google's image search just shows similar-looking images.

I'm facing a problem where, whenever I attempt to drag PNG/JPG files that include workflows into ComfyUI, be it examples...

The API workflows are not the same format as an image workflow: you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button you've probably used before.

Ignore the prompts and setup. I think the perfect place for them is the Wiki on GitHub.

Give it a folder of outfit images, with outfit1.txt containing a prompt describing the outfit in outfit1.png. Give it a folder of OpenPose poses to iterate over. Create a list of emotion expressions: a text file with multiple lines in the format "emotionName|prompt for emotion" will be used.

As a programmer, the workflow logic should be relatively easy to understand, but the function of each node cannot be inferred by simply looking at its name.

Hi everyone, I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed by the number of ways to do upscaling.

Example: Starting workflow.

Please share your tips, tricks, and workflows for using this software to create your AI art.

I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get the results from the user's text and image input.

Need help with FaceDetailer in ComfyUI? Join the discussion and find solutions from other users in r/StableDiffusion.

Instead, I created a simplified 2048x2048 workflow.

I can load workflows from the example images through localhost:8188; this seems to work fine.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

u/wolowhatever: we set 5 as the default, but it really depends on the image and image style tbh. I tend to find that most images work well around Freedom of 3.
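That "emotionName|prompt for emotion" text file is trivial to load in a batch script. A minimal sketch (the helper name and dict output are my choices; only the one-line-per-emotion format comes from the description):

```python
def load_emotions(text):
    """Parse lines of the form 'emotionName|prompt for emotion'.

    Returns {emotion name: prompt}, skipping blank lines; any extra
    '|' characters are treated as part of the prompt itself.
    """
    emotions = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        name, _, prompt = line.partition("|")
        emotions[name.strip()] = prompt.strip()
    return emotions
```

You can then loop over the dict and substitute each prompt into the expression slot of the workflow.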
A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

But when I try to load a flow through one of the example images, it just does nothing.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

So OP, please upload the PNG to civitai.com or https://imgur.com and then post a link back here if you are willing to share it.

I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI.
Otherwise, please change the flair to "Workflow not included".

First of all, sorry if this has been covered before; I did search and nothing came back. But the workflow is dead simple: model: dreamshaper_7; positive prompt: "sexy ginger heroine in leather armor, anime"; negative prompt: "ugly"; sampler: euler; steps: 20; cfg: 8; seed: 674367638536724. That's it.

I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace.

Hi Antique_Juggernaut_7, this could help me massively.

ComfyUI's inpainting and masking ain't perfect.

If you can't see that button, you need to check the "enable dev mode options" setting.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Ending workflow.

I'm trying to do the same as hires fix, with a model and weight below 0.5, from 512x512 to 2048x2048.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and text2img.

Hello there. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

You can construct an image generation workflow by chaining different blocks (called nodes) together.

It took me hours to get one I'm more or less happy with: I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), and use "only masked area", where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Using just the base model in AUTOMATIC with no VAE produces this same result.

If you have any of those generated images in the original PNG, you can just drop them into ComfyUI and the workflow will load.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.
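For reference, a dead-simple graph like the one described above (dreamshaper_7, euler, 20 steps, cfg 8, fixed seed) maps to a small JSON object in ComfyUI's API (prompt) format: keys are node ids, each entry names a node class and wires inputs to other nodes by [node id, output index]. This is a hand-written sketch; the node ids, the checkpoint filename, and the 512x512 size are my assumptions, not from the post:

```python
# Sketch of a minimal txt2img graph in ComfyUI API format.
# ["1", 0] means "output 0 of node 1"; ids and ckpt name illustrative.
simple_workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_7.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "sexy ginger heroine in leather armor, anime",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "ugly", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 674367638536724, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

Seeing the whole graph as one dict is also why seeds and prompts survive inside saved images: it is all just data.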
I noticed that ComfyUI is only able to load workflows saved with the "Save" button, not the "Save API Format" button.

Just started with ComfyUI and really love the drag-and-drop workflow feature.

No attempts to fix JPG artifacts, etc. Just my two cents.

For example, I just glance at my workflows, pick the one that I want, drag and drop it into ComfyUI, and I'm ready to go.

And my workflow itself, for something like SDXL with Refiner upscaled to 4k x 4k, is super simple.

Dragging a generated PNG onto the webpage, or loading one, will give you the full workflow, including the seeds that were used to create it.

My ComfyUI workflow was created to solve that. About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it puts in the wrong positive prompt.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server.

Aug 2, 2024 · You can then load or drag the following image in ComfyUI to get the workflow. This image contains the workflow: https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_schnell_example.png

For your all-in-one workflow, use the Generate tab.

Remove 3/4 stick figures in the pose image.

A1111 has great categories, like Features and Extensions, that simply show what the repo can do, what addons are out there, and all that stuff.

I used SDXL 1.0 and ComfyUI to explore how doubling the sample count affects results, especially at higher sample counts, seeing where the image changes relative to the sampling steps.
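A file saved with "Save (API Format)" is meant for programmatic use rather than for loading back into the UI: you queue it with a single POST to the /prompt endpoint of a running ComfyUI server. A sketch using only the standard library (the payload shape {"prompt": ...} follows ComfyUI's API; the helper names and the default address are mine):

```python
import json
import urllib.request

def build_prompt_payload(workflow, client_id=None):
    """Wrap an API-format workflow dict the way /prompt expects it."""
    body = {"prompt": workflow}
    if client_id is not None:
        body["client_id"] = client_id
    return json.dumps(body).encode("utf-8")

def queue_workflow(workflow, server="127.0.0.1:8188"):
    """POST the workflow to a running ComfyUI server.

    The JSON response includes a prompt_id, which you can use to poll
    the server's /history endpoint for the finished images.
    """
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This is the same idea wrappers like comfy_api_simplified build on: the UI's Save API Format output plus one HTTP call.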
=== How to prompt this workflow ===

Main Prompt
-----------
The subject of the image, in natural language.
Example: a cat with a hat in a grass field

Secondary Prompt
----------------
A list of keywords derived from the main prompt, with references to artists at the end.
Example: cat, hat, grass field, style of [artist name] and [artist name]

Style and References
--------------------