
Drawing masks in ComfyUI: notes from Reddit

So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings). It covers txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting! That said, I found the documentation for ComfyUI to be quite poor when I was learning it; it needs a better quick start to get people rolling. ComfyUI is not supposed to reproduce A1111 behaviour, and a somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make, but there's also a lot of inpainting stuff you can do with ComfyUI that you can't do with Automatic1111. From inpainting to face replacement, the usage of masks is prevalent in SD, yet there is no mask node as a common denominator node.

Create a black and white image that will be the mask. It depends how you made the mask in the first place: my mask images are plain .png files, and if you want several masks in one image, you can draw over a transparent background in a .png file, and then R, G, B and Alpha can all mask different areas. (Edit: rembg fails on closed shapes, so it's not ideal for producing masks this way.) You can apply the masks one after the other.

In ComfyUI, the easiest way to apply a mask for inpainting is: use the "Load Checkpoint" node to load a model, use the "Load Image" node to load a source image to modify, and use the "Load Image (as Mask)" node to load the grayscale mask image, specifying "channel" as "red". Alternatively, use a "Mask from Color" node and set it to your first frame color, in this example 255 0 0, which sets the red frame as the mask.

For regional prompting, I don't know if there is a node for it (yet?) in ComfyUI, but I imagine that under the hood it would take each colored region, make a mask of each color, then use attention coupling on each mask with the associated regional prompt. Invoke AI has a super comfortable and easy-to-use regional prompter that's based on simply drawing; I was wondering if there's such a thing in ComfyUI, even as an external node. I think the latter, combined with Area Composition and ControlNet, will do what you want. You can also do it with Masquerade nodes.

For hand-drawn drafts the method is very simple: you still need to use the ControlNet model, but now you import your hand-drawn draft. You can use your preferred drawing software, like Procreate on an iPad, and then import the doodled image into ComfyUI. Feed this over to a "Bounded Image Crop with Mask" node, using the sketch image as the source with zero padding; this takes the sketch image and crops it down to just the drawing in the first box.

Currently there are many extensions (custom nodes) available for background removal in ComfyUI, such as Easy-Use, mixlab, WAS Node Suite, Inspyrenet-Rembg, and others. On fixing hands, I wanted to share my approach of generating multiple hand-fix options and then choosing the best; you can also select non-face bbox models and FaceDetailer will detail hands etc.

On the video side, this workflow generates an image with SD1.5, then uses Grounding DINO to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops. Seems very hit and miss, though; most of what I'm getting looks like 2D camera pans.

With Set Latent Noise Mask, it is trying to turn that blue/white sky into a spaceship, and the mask alone may not be enough; a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, inpainting models are not as good, since they want to use what already exists in the image more than a normal model does. Layer copy & paste the resulting PNG on top of the original in your go-to image editing software; finally, the story text image output from module 9 was pasted on the right side of the image.
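For readers who want to see the color-keying idea outside the node graph, here is a minimal Pillow/NumPy sketch of what a "Mask from Color" style node does conceptually: every pixel matching the chosen key color becomes white in the mask. The filename and the exact-match rule are assumptions for illustration, not taken from any of the posts above.

```python
# Rough illustration of a "Mask from Color" operation: every pixel that matches
# the key color (here pure red, 255 0 0) becomes white in the mask, everything
# else black. "frames.png" is a placeholder filename.
from PIL import Image
import numpy as np

key_color = (255, 0, 0)                       # the frame color to key on

rgb = np.array(Image.open("frames.png").convert("RGB"))
matches = np.all(rgb == key_color, axis=-1)   # HxW boolean map of exact matches
mask = (matches * 255).astype(np.uint8)       # white where the color matched

Image.fromarray(mask, mode="L").save("mask_from_color.png")
```

In a real workflow the node version is preferable, since it stays inside the graph and updates when the frame color changes; the script is only meant to make the operation concrete.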
Release: AP Workflow 7.0 for ComfyUI - now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (with inpainting/outpainting masks), a new Watermarker, support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masks. Release: AP Workflow 8.0 for ComfyUI - now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher quality mask inpainting with the Fooocus inpaint model.

Yeah, there are tools that do this; I can't check them right now, but I can later if you remind me. The Krita plugin is great, but the nodal soup part isn't there, so I can't change some things.

Overall, I've had great success using this node to do a simple inpainting workflow. Alternatively, you can create an alpha mask in any photo editing software. If you do a search for detailer, you will find both SEGS detailer and mask detailer. Inpainting is pretty buggy when drawing masks in A1111. Outline Mask: unfortunately it doesn't work well, because apparently you can't just inpaint a mask; by default you also end up painting the area around it, so the subject still loses detail. IPAdapter: if you have to regenerate the subject or the background from scratch, it invariably loses too much likeness. Still experimenting with it though. "SEGS" is the format that Impact Pack uses to bundle masks with additional information.

Hello, ComfyUI community! I'm seeking advice on improving the background removal process in images.

I make my masks 512x512, but the size isn't important. Imagine you have a 1000px image with a circular mask that's about 300px. If each subject gets its own color, taking just the red channel from the mask gives you just the red man, and not the background. Try drawing the masks over a black background, though, not a white background.

Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to use the same resolution in Photoshop as in ComfyUI.

comfyui_facetools: these custom nodes provide rotation-aware face extraction, paste back, and various face-related masking options. The node pack was created as a dependency-free library before the ComfyUI Manager made installing dependencies easy for end users. Unless you specifically need a library without dependencies, I recommend using Impact Pack instead; it's a more feature-rich and well-maintained alternative for dealing with faces.

Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think. Everyone always asks about inpainting at full resolution: ComfyUI by default inpaints at the same resolution as the base image, since it does full-frame generation using masks. An alternative is Impact Pack's detailer node, which can do upscaled inpainting to give you more resolution, but this can easily end up giving you more detail than the rest of the image.
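The "take just the red channel" trick can also be reproduced outside ComfyUI. The sketch below is a rough, assumed equivalent of picking "red" as the channel in "Load Image (as Mask)": it splits an RGBA PNG into bands and binarizes one of them. The filename and the 127 threshold are placeholders, not values from the original comments.

```python
# Minimal sketch of the "one PNG, several masks" idea: draw each region in a
# different primary color over a black background, then read one channel back
# as a standalone mask.
from PIL import Image

img = Image.open("regions.png").convert("RGBA")
r, g, b, a = img.split()                      # one grayscale band per channel

# Binarize so anti-aliased edges don't leave gray fringes in the mask.
red_mask = r.point(lambda v: 255 if v > 127 else 0)
red_mask.save("mask_red.png")                 # white = area assigned to "red"
```

Drawing over black rather than white matters here: on a white background every channel is already at 255, so no single channel can isolate its own region.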
Mask detailer allows you to simply draw where you want it to apply the detailing; I suppose that does work for quick and dirty masks. If something is off, I can redraw the masks as needed, one by one or only one. So far (Bitwise mask + mask) only handles 2 masks, and I use auto-detect, so the masks can run from 5 to 10.

Step One: image loading and mask drawing. Import the image at the Load Image node. Tip: most of the image nodes integrate a mask editor, so you can right-click on any image and select Open in Mask Editor. Use the mask tool to draw on specific areas, then use it as input to subsequent nodes for redrawing. If you're using the built-in mask editor, just use a small brush and put dots outside the area you already masked; they don't have to literally be single pixels, just small. One annoyance: you can't use soft brushes. Step Two: building the ComfyUI partial redrawing workflow. Basically, if you are doing manual inpainting, make sure that the sampler producing your inpainting image is set to fixed; that way it does inpainting on the same image you use for masking. This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting. Below is the effect image generated by the AI after I imported a simple bedroom line drawing. For these workflows we use mostly DreamShaper Inpainting.

It's hard to tell what you think is wrong. I am working on a piece which requires a mask that reveals a texture. To blend the image and scroll naturally, I created a Border Mask on top. The flow is in shambles right now, so I'll just share this screengrab. It runs at about 2.75s/it on a 4070 with the 14 frame model.

Is there a way to draw inside ComfyUI? Are there any nodes for sketching/drawing directly in ComfyUI? Of course you can always take things into an external program like Photoshop, but I want to try drawing simple shapes for ControlNet or painting simple edits before putting things into inpaint. Any way to paint a mask inside Comfy, or is there no choice but to use an external image editor? What else is out there for drawing/painting a latent to be fed into ComfyUI other than the Photoshop one(s)? It's not that slow, but I was wondering if there was a more direct "latent with fog background" to Latent Mask node somewhere. It's not released yet, but I just finished 80% of the features. When the Krita plugin happened, I switched to that.

As for the rest, if memory serves, the mask segm custom node has a couple of extra install steps which are easy to follow, and if you load the workflow and see red nodes, just go to the ComfyUI Manager in the side menu, click "Install Missing Nodes", then restart and you should be good to go. Is this more or less accurate? While ComfyUI obviously has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine with me. I kind of fake it by loading any image, drawing a mask on it, converting the mask to an image, and sending that image to ControlNet.
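Merging several drawn masks with chained "Bitwise mask + mask" style nodes amounts to taking a pixel-wise union. As a hedged illustration (the file names and mask count are made up, and all masks are assumed to share one resolution), the same operation looks like this in Pillow/NumPy:

```python
# Sketch of combining several drawn masks into one: take the pixel-wise maximum
# so any area covered by at least one mask ends up in the combined mask.
# Assumes every mask file has the same width and height.
from PIL import Image
import numpy as np

paths = ["mask_1.png", "mask_2.png", "mask_3.png", "mask_4.png"]
masks = [np.array(Image.open(p).convert("L")) for p in paths]

combined = masks[0]
for m in masks[1:]:
    combined = np.maximum(combined, m)        # union of the white regions

Image.fromarray(combined).save("combined_mask.png")
```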
And I never know what ControlNet model to use. It took me hours to get a result I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want, so I use mask2image, blur the image, then image2mask), use 'only masked area' so it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and so on.

TLDR: THE LAB EVOLVED is an intuitive, all-in-one workflow; it includes literally everything possible with AI image generation. There are many detailer nodes, not just FaceDetailer. I'm not sure exactly what SEGS stores, but I always draw a mask, send it to MaskToSEGS where I can set the crop factor to determine the region used for context, then on to SEGS Detailer.

A simple way to prepare a soft mask: [Load Image] -> [resize to match the image being generated] -> [image-to-mask] -> [gaussian blur mask] to soften the edges. Then use [invert mask] to make a mask that is the exact opposite, and [solid mask] to make a pure white mask.

I learned about MeshGraphormer from this YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL. Basically, though, you'd be using a mask: you right-click on the loaded image and draw the mask, then there is a node to snip the region out and stitch it back in … pretty sure the node was called something like "stitch".

In addition to whole-image inpainting and mask-only inpainting, I also have other workflows. I was wondering if there is any way to create a mask in depth in ComfyUI. I have this working; however, to mask the upper layers after the initial sampling, I VAE-decode them and use rembg, then convert that to a latent mask.

I use the "Load Image" node and "Open in MaskEditor" to draw my masks, but the mask editor sucks. I need to combine 4-5 masks into 1 big mask for inpainting. Would you please show how I can do this?

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion. What is the rationale behind the drawing of the mask? I don't want to break my drawing/painting workflow by editing CSV files and calculating rectangle areas. Look into Area Composition (comes with ComfyUI by default), GLIGEN (an alternative approach to area composition), and IPAdapter (a custom node on GitHub, available for manual or ComfyUI Manager installation).

This workflow, combined with Photoshop, is very useful for drawing specific details (tattoos, a special haircut, clothes patterns, …). Load the upscaled image into the workflow, use ComfyShop to draw a mask, and inpaint. Here I add one of my PNGs so you can see the whole workflow; here I come up against two problems, and the first issue is the biggest for me. Turns out drawing "^"-shaped masks seems to work a bit better than rectangles (especially for smaller masks) because it implies the leg positioning. Uh, your seed is set to random on the first sampler.
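The mask2image -> blur -> image2mask feathering trick and the [gaussian blur mask] / [invert mask] / [solid mask] chain above boil down to a blur, an inversion, and a constant image. Here is a small Pillow sketch of that idea; the target resolution, blur radius, and file names are assumptions, not values from the original comments.

```python
# Sketch of feathering a hand-drawn mask and deriving its inverse, mirroring
# the [gaussian blur mask], [invert mask] and [solid mask] nodes.
from PIL import Image, ImageFilter, ImageOps

mask = Image.open("drawn_mask.png").convert("L")

# Resize to match the image being generated, then soften the hard brush edge.
mask = mask.resize((1024, 1024))
feathered = mask.filter(ImageFilter.GaussianBlur(radius=12))

inverted = ImageOps.invert(feathered)               # the exact opposite mask
solid_white = Image.new("L", feathered.size, 255)   # a pure white "solid mask"

feathered.save("mask_feathered.png")
inverted.save("mask_inverted.png")
```

A larger blur radius gives a wider, softer transition zone, which is what keeps the inpainted region from showing a hard seam against the untouched pixels.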
Regional prompting makes that rather simple, all in one image, with multiple hand-drawn masks all in the app (my most complicated involved 8 hand-drawn masks). Sure, I can paint a mask with an outside app, but why would I bother when it's built into Automatic1111? How can I draw regional prompts like InvokeAI's regional prompting (control layers), which lets you draw the regions rather than typing in numbers? Title says it all.

If you spent more than a few days in ComfyUI, you will recognize that there is nothing here that cannot be done with the already available nodes; I believe it does mostly the same things as OP's node. The workflow that was replaced: when Canvas_Tab came out, it was awesome. Is there a "drawing" node for ComfyUI that would be a bit more user-friendly, with the ability to zoom in on the parts you are drawing, colors, etc.?

Does anyone else notice that you cannot mask the very bottom of the image with the right-click masking option? And I'm not talking about the mouse not being able to reach it; you can paint all the way down or along the sides. Even if you set the size of the masking circle to max and go over the edge closely enough that it appears fully masked, once you actually save it to the node the very bottom still isn't masked.

One thing about human faces is that they are all unique. For example, the ADetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks. You can see how easily and effectively the size/placement of the subject can be controlled simply by drawing a new mask. For the specific workflow, please download the workflow file attached to this article and run it. After completing all the integrations, I output via AnythingAnywhere.

I want to create a mask which follows the contours of the subject (a lady in my case). I want to be able to use Canny and Ultimate SD Upscale while inpainting, AND I want to be able to increase the batch size. As I can't draw the second mask on the result of the first character image (the goal is to do it all in one workflow), I draw it on the original picture and send this mask only into the new VAE Encode (for Inpainting). (Edit 2: Actually, now I understand what it's doing.) It doesn't replace the image (although that might seem to be what it's doing visually); it saves a separate channel with that mask, so you get two outputs, image and mask, from that one node. But one thing I've noticed is that the image outside of the mask isn't identical to the input; does anyone know why? I would have guessed that only the area inside of the mask would be modified. A transparent PNG in the original size, containing only the newly inpainted part, will be generated.
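That last point, a transparent PNG in the original size containing only the newly inpainted part, can be approximated by using the inpainting mask as the alpha channel of the result, so the cutout can be layered over the original in an external editor. A minimal Pillow sketch, assuming hypothetical file names:

```python
# Sketch: keep only the inpainted region by turning the inpainting mask into
# the alpha channel of the result; unmasked pixels become fully transparent.
from PIL import Image

result = Image.open("inpaint_result.png").convert("RGB")
mask = Image.open("inpaint_mask.png").convert("L").resize(result.size)

cutout = result.convert("RGBA")
cutout.putalpha(mask)                          # white mask area stays opaque
cutout.save("inpainted_part_only.png")
```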
