Best SDXL upscaler (collected Reddit discussion)


Hello friends, could someone guide me on efficiently upscaling a 1024x1024 DALL-E-generated image (or any resolution) on a Mac M1 Pro? I'm quite new to this and have been using the "Extras" tab in Automatic1111 to upload and upscale images without entering a prompt. I've been seeing news lately about ControlNet's tile model for upscaling, and I also see the Automatic1111 Ultimate SD Upscale extension, but I've just been using Hires Fix and am totally unfamiliar with the first two.

I was wondering what the best way to upscale SDXL images is now. With 1.5 we had ControlNet and tiling and so on, which last I checked wasn't viable with SDXL. What methods are people using to create 4K+ resolutions?

TL;DR on settings: for latent upscalers you need at least 0.5 denoising, and in my experience the best results are closer to 0.7; for non-latent (image) upscalers you will get the best results under 0.4, roughly 0.25-0.35. Latent upscalers are pure latent-data expanders and don't do pixel-level interpolation the way image upscalers do; the noise you're seeing from the latent upscaler comes from giving it the same role in the workflow as an image upscaler. The two tools do different things under the hood and are not interchangeable 1-to-1. A latent upscale is also often worse than a normal image upscaler because it adds much more detail, and with that the possibility of defects in the picture (so don't go crazy with the final resolution), and because it only works at high denoising it is not good for a fine upscale of a picture whose details you want to preserve.

Personally, I won't suggest using an arbitrary initial resolution. It's a long topic in itself, but the point is that you should stick to the recommended SDXL training resolutions (taken from the SDXL paper), which total approximately 1M pixels. For your case the target is 1920x1080, so the recommended initial latent is 1344x768, then upscale it by about 1.429x. SDXL uses positional encoding and SD 1.5 does not; they are completely different in this respect, so knowledge gained from other resources (for example, that any resolution around 1024 is good enough for SDXL) doesn't carry over.
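To make that sizing rule concrete, here is a small Python sketch (my own arithmetic, not code from any of the comments): it picks the closest-aspect bucket from the commonly cited ~1-megapixel SDXL training resolutions and computes the upscale factor needed to reach a target size.

```python
# Pick an SDXL-friendly initial resolution (~1 megapixel, from the commonly
# cited training buckets) for a given final target, then compute the upscale
# factor needed to reach it. Pure arithmetic; adjust the bucket list as needed.

SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def pick_initial_resolution(target_w: int, target_h: int) -> tuple[int, int, float]:
    """Return (width, height, upscale_factor) for the closest-aspect bucket."""
    target_ratio = target_w / target_h
    best = min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target_ratio))
    # Scale so the upscaled image covers the target in both dimensions.
    factor = max(target_w / best[0], target_h / best[1])
    return best[0], best[1], factor

if __name__ == "__main__":
    w, h, f = pick_initial_resolution(1920, 1080)
    print(f"start at {w}x{h} ({w * h / 1e6:.2f} MP), then upscale by about {f:.3f}x")
    # -> start at 1344x768 (1.03 MP), then upscale by about 1.429x
```

For the 1920x1080 example above this reproduces the numbers quoted in the thread: 1344x768 as the initial latent and an upscale factor of roughly 1.429x.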
Change the model from the SDXL base to the refiner and process the raw picture in img2img using the Ultimate SD Upscale extension with the following settings: 235745af8d, VAE: sdxl_vae.safetensors, Denoising strength: 0.35, Ultimate SD Upscale with padding 512, blur 24, and a tile size of 768 or 1024. Then add ControlNet tile and canny and play with the settings of both. Set a low denoise (~0.3) and a sampler without "a" (non-ancestral) if you don't want big changes from the original; this applies to the Hires Fix advice above as well.

In ComfyUI there's a custom node that basically acts as Ultimate SD Upscale (there are custom nodes for pretty much everything, including ADetailer), and I think you'd just pipe the latent image into the sampler node that receives the SDXL model. You can also drag the output images back to the input, so you can quickly do a 4x by doing a 2x run and then dragging the output to the input and running again. Your results show up in your output directory; that's a top tip I didn't realise until reading the wiki properly.

SD 1.5 is good at adding detail while retaining coherence with upscales in img2img, but it can often get confused by concepts generated in SDXL. I send the latent to an SD 1.5 workflow because my favorite checkpoint, AnalogMadness, and most random LoRAs on Civitai are SD 1.5. Which upscaler do you use to upscale your latent before passing it to the second ksampler? I wouldn't have to use an SD 1.5 checkpoint for that second ksampler, right? What benefits do you see in using SD 1.5 there instead of the same original SDXL checkpoint? Also, would you happen to have a clean workflow that demonstrates this idea?

The general approach is to use an upscaler first and then use SD to add details. Even the best upscaler model, while considerably faster than rendering the image anew, will only sharpen detail that is already present in the source image. Give an upscaler model an image of a person with super smooth skin and it will output a higher-resolution picture of smooth skin, but give that image to a ksampler at a low denoise and it will actually add new texture.
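As a rough sketch of that "pixel upscale first, then low-denoise pass" idea, here is what it might look like with the Hugging Face diffusers library. This is my own illustration, not anyone's posted workflow: a plain Lanczos resize stands in for a proper ESRGAN/NMKD upscale model, and the 0.3 strength is just the low-denoise range discussed above.

```python
# Sketch: upscale the pixels first, then let SDXL img2img re-add texture at a
# low strength so the composition is preserved. Lanczos stands in for an
# ESRGAN-style model; swap in a real upscaler for better base detail.
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

src = Image.open("input_1024.png").convert("RGB")
big = src.resize((src.width * 2, src.height * 2), Image.LANCZOS)  # stage 1: pixel upscale

# Stage 2: low strength (~0.25-0.35) keeps the original content but adds detail;
# 0.5+ is the territory of a latent upscale pass instead. Note that a 2048x2048
# img2img pass through SDXL needs a fair amount of VRAM.
result = pipe(
    prompt="high resolution photo, detailed skin texture",
    image=big,
    strength=0.3,
    num_inference_steps=30,
).images[0]
result.save("upscaled_2x_refined.png")
```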
It's hard to suggest feedback without knowing your workflow; image style is a big factor that will determine the best upscaler and workflow. The upscaler itself is also where you'll get some disagreement. Here's a link to The List; I tried lots of them, but I wasn't looking for anime-specific results and haven't really tried upscaling many anime pics yet, and most I had never tried before. I'm sure this has been done to death, but here is a comparison of the different upscalers on some wants-to-be-photorealistic content: the left side is my "control group" (ESRGAN upscaler, denoise 0.3, no added noise or other changes) and the right side uses the Siax upscaler with the settings above.

I've tested many of the upscalers mentioned here using an XYZ plot and find that 8x_NMKD-Faces_160000_G works best for skin and faces, while 4x-UltraSharp works best for most everything else. 4x-UltraSharp is a decent general-purpose upscaler, though it sometimes produces artifacts for me. SwinIR_4x shows stable, average results in all tests. I mostly go for realism/people with my gens, and for that I really like 4x_NMKD-Siax_200k: it handles skin texture quite well but does some weird things with hair if the upscale factor is too large. The 4X-NMKD-Superscale-SP_178000_G model has always been my favorite for upscaling SD 1.5 or SDXL images. In SD 1.5, using one of the ESRGAN models usually gives a better result in Hires Fix, but in SDXL I find the ESRGAN models tend to oversharpen in places. The right upscaler will always depend on the model and style of image you are generating.

On style: SDXL is good at different styles of anime (some of which aren't necessarily well represented in the 1.5 era) but is less good at the traditional "modern 2K" anime look, for whatever reason. If you're using SDXL, you can try experimenting to see which LoRAs achieve a similar effect. I have since updated the workflow, because 4x_NMKD-Siax_200k is good for detailed art and photorealistic images (for photos too) but the result is not good enough for art or anime.

Here's a sample I made while experimenting with Hires Fix. Step 1, text to image; the prompt varies a bit from picture to picture, but here is the first one: "high resolution photo of a transparent porcelain android man with glowing backlit panels, closeup on face, anatomical plants, dark swedish ..."

Keep in mind how tiled upscalers behave at seams: the upscaler can blend seams, but it can't account for the different ways things are changed between tiles. For example, if a seam divides a face, one tile may move an eye slightly up while another moves the other eye slightly down; but if both eyes are on the same wide tile, the changes to both eyes will tend to be consistent.
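To make the tiling behaviour concrete, here is a small sketch of how an image gets split into overlapping tiles for a tiled pass. It is my own illustration rather than code from Ultimate SD Upscale (whose tile size and padding settings play a similar role): the overlap gives the blender something to feather across, but each tile is still denoised independently, which is exactly why features split across a seam can drift apart.

```python
# Compute overlapping tile boxes (left, top, right, bottom) for a tiled pass.
# Each tile overlaps its neighbours so seams can be feathered when recombining;
# this mirrors the role of tile size / padding in tiled upscalers, though the
# exact semantics in those tools differ.
def tile_boxes(width: int, height: int, tile: int = 768, overlap: int = 128):
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Make sure the last row/column of tiles reaches the right/bottom edges.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, min(x + tile, width), min(y + tile, height)) for y in ys for x in xs]

if __name__ == "__main__":
    for box in tile_boxes(2048, 2048, tile=1024, overlap=256):
        print(box)  # 9 overlapping 1024x1024 tiles covering the 2048x2048 image
```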
If you have a good GPU, this is what I recommend for under 4K: generate at base SDXL size with extras like character models or ControlNets, then do face / hand / manual area inpainting with differential diffusion, then UltraSharp 4x, then an unsampler, then a second ksampler with a mixture of inpaint and tile ControlNet. If you have a very small face or multiple small faces in the image, you can also get better results by fixing faces after the upscaler; it takes a few seconds more, but gives much better results (v2.0 faces fix QUALITY).

I tried the old method with ControlNet and Ultimate SD Upscale with 4x-UltraSharp, but it returned errors like "mat1 and mat2 shapes cannot be multiplied" (usually a sign that an SD 1.5 component such as a ControlNet is being mixed with an SDXL checkpoint). Both also give me warnings like "C:\Users\shyay\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:357: UserWarning: 1Torch was not compiled with flash attention." Note that LDSR 2x scaling is implemented as downsampling to half resolution and then running the 4x model. And in this post the OP is using the leaked SDXL 0.9 model to act as an upscaler.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL with OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder): I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use. It includes SDXL 1.0 Base and SDXL 1.0 Refiner, automatic calculation of the steps required for both the Base and the Refiner models, quick selection of image width and height based on the SDXL training set, an XY plot, and ControlNet with the XL OpenPose model.

I also created my first ComfyUI workflow and decided to share it since I found it quite helpful: a 4x upscaler with a variable prompter (SD 1.5 & SDXL/Turbo), plus a normal SDXL workflow without the refiner (I think it has a better flow of prompting than SD 1.5, and it also has IPAdapter and ControlNet if needed). One reply I got: you have a bunch of custom things in there that aren't necessary to demonstrate "TurboSDXL + 1 Step Hires Fix Upscaler", and it wastes time trying to find what actually matters.

A few other options: SUPIR (Scaling-UP Image Restoration), based on LoRA and the Stable Diffusion XL (SDXL) framework and released by the XPixel group, helps you upscale your image in no time; the model is trained on 20 million high-resolution images. In relation to the previous point, I recommend using Clarity Upscaler combined with tools like Upscayl, as this achieves much better results. There is also an experimental LCM workflow, "The Ravens", for Würstchen v3 (aka Stable Cascade), up and ready for download if you want to explore an architecture that sets itself apart from SDXL.

For speed, I tried it with SDXL base and SDXL Turbo; caveat: I still prefer SDXL Lightning. Use the 8-step LoRA at 0.8 strength with the DPM++ 2M SDE SGMUniform sampler at 8 steps and a CFG of 1.
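For completeness, here is roughly what that 8-step setup looks like in diffusers. This is a sketch under assumptions: the comment doesn't say which LoRA it means, so ByteDance's SDXL-Lightning 8-step LoRA is assumed here, and diffusers has no sampler literally named "DPM++ 2M SDE SGMUniform", so the SDE DPM-Solver++ scheduler stands in as a rough equivalent.

```python
# Rough diffusers equivalent of "8-step LoRA at 0.8 strength, DPM++ 2M SDE,
# 8 steps, CFG 1". Assumes the ByteDance SDXL-Lightning 8-step LoRA; swap in
# whichever speed LoRA you actually use.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.load_lora_weights(
    "ByteDance/SDXL-Lightning", weight_name="sdxl_lightning_8step_lora.safetensors"
)
pipe.fuse_lora(lora_scale=0.8)  # "0.8 strength"

# Approximation of the DPM++ 2M SDE sampler; no exact SGMUniform toggle here.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)

image = pipe(
    "high resolution photo of a porcelain android, closeup on face",
    num_inference_steps=8,
    guidance_scale=1.0,  # CFG 1, as these distilled LoRAs expect
).images[0]
image.save("fast_8step.png")
```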