Character LoRAs - Want more character variety? Then check This List!
👻 Ghost Signatures! Want to know how to easily remove them? - Check Here!
Description

ZoinksNoob is pretty decent at show-accurate style, but there were still plenty of issues to fix. Getting two characters to kiss was as problematic as ever. Still, I think it turned out well.
Created using the ZoinksNoob model available on HuggingFace
If you want to support me:
BuyMeACoffee

Tags: safe, ai composition, generator:zoinksnoob, prompter:tyto4tme4l, rainbow dash, twilight sparkle, pegasus, pony, unicorn, g4, bed, bed sheet, bedroom, curtains, duo, duo female, eyes closed, female, full body, horn, indoors, kiss on the lips, kissing, legs in air, lesbian, lying down, lying on bed, lying on top of someone, mare, on back, on bed, pillow, ship:twidash, shipping, show accurate, side view, unicorn twilight, window

Comments

Hmm, I see. I'll have to try Forge out, since I've never used it. I've been using ComfyUI through Inference because it felt the most straightforward/friendly. I tried Stable Diffusion WebUI too, but that was giving me horrible results in general for some reason.
Thanks for the answer!
tyto4tme4l

Something of an artist
@Montaraz13
Yes, it matters a lot. I use Forge WebUI (also via Stability Matrix), I know inpainting is more tricky in ComfyUI. I use options similar to the ones in the screenshot below. Important things to note:
  1. Masked content - what should be used under the mask; I always leave it at "original" to start from the original image
  2. Inpaint area - "Whole picture" for major elements that depend on their surroundings (limbs, patterns, etc.), "Only masked" for small, detailed, independent elements like eyes, cutie marks, jewelry, etc.
  3. Denoising strength - how much you want to change the original masked content; I usually set it to values in the range of 0.2-0.4, depending on the situation
  4. Sampling method, resolution, CFG Scale - leave them the same as in the original image
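The settings above can be sketched conceptually. The snippet below is a toy illustration, not Forge's or WebUI's actual code (all function and variable names are made up): it shows how "Only masked" amounts to cropping a padded box around the masked pixels before generating, and how denoising strength limits how far masked pixels drift from the original while pixels outside the mask stay untouched ("Masked content: original").

```python
def masked_crop_box(mask, padding):
    """Bounding box around all masked pixels, expanded by `padding`
    and clamped to the image, as (left, top, right, bottom).
    Roughly what "Only masked" does before generating at full resolution."""
    h = len(mask)
    w = len(mask[0])
    ys = [y for y in range(h) for x in range(w) if mask[y][x]]
    xs = [x for y in range(h) for x in range(w) if mask[y][x]]
    return (
        max(0, min(xs) - padding),
        max(0, min(ys) - padding),
        min(w, max(xs) + 1 + padding),
        min(h, max(ys) + 1 + padding),
    )

def blend_inpaint(original, generated, mask, strength):
    """Masked pixels move toward `generated` by `strength` (0..1);
    unmasked pixels keep their original value."""
    out = []
    for y, row in enumerate(original):
        out.append([
            (1 - strength) * o + strength * g if mask[y][x] else o
            for x, (o, g) in enumerate(zip(row, generated[y]))
        ])
    return out
```

With a low strength around 0.2-0.4, as suggested above, the masked area stays close to the original content, which is why low values work well for small fixes and high values can disconnect the inpainted region from its surroundings.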
Yeah, inpainting is the big one I used before in NovelAI. I had mixed results with local AI though. Often it will just fill the selected zone with blurred colors or not connect things at all.
There is no extra addon or step you do besides painting the zone you want remade, right? (Also, I'm doing it on the Inference/Stability Matrix UI for ComfyUI, not sure if that matters.)
tyto4tme4l

Something of an artist
@Montaraz13
I use inpainting a lot, quite often combined with crude manual editing in GIMP. At the end of the creation process, I also manually clean up any imperfections such as artifacts and discolored inpainting areas.
Inpainting is very powerful and easy to use: you basically tell the model to fix the selected area, and most of the time it works great.
That is really nice. I love show accurate when it is pulled off well. Do you fix things yourself in Photoshop and such, or do you use some kind of process with the AI itself? I've heard about several tools for fixing errors but have yet to figure them out.