Character LoRAs - Want more character variety? Then check this list!
👻 Ghost signatures! Want to know how to easily remove them? Check here!
Description

ZoinksNoob is pretty decent at show-accurate style, but there were still plenty of issues to fix. Getting two characters to kiss was as problematic as ever. Still, I think it turned out well.
Created using the ZoinksNoob model available on HuggingFace
If you want to support me:
BuyMeACoffee

Tags: safe, ai composition, generator:zoinksnoob, prompter:tyto4tme4l, rainbow dash, twilight sparkle, pegasus, pony, unicorn, g4, bed, bedroom, bedsheets, curtains, duo, duo female, eyes closed, female, full body, horn, indoors, kiss on the lips, kissing, legs in air, lesbian, lying down, lying on bed, lying on top of someone, mare, on back, on bed, pillow, ship:twidash, shipping, show accurate, side view, unicorn twilight, window

Comments


Hmm, I see. I'll have to try Forge out since I've never seen it. I've been using ComfyUI through Inference because it felt the most straightforward/friendly. I tried Stable Diffusion WebUI too, but that was giving me horrible results in general for some reason.
Thanks for the answer!
tyto4tme4l

Something of an artist
@Montaraz13
Yes, it matters a lot. I use Forge WebUI (also via Stability Matrix); I know inpainting is trickier in ComfyUI. I use options similar to the ones in the screenshot below. Important things to note:
  1. Masked content - what should be used under the mask; I always leave it at "original" to use the original image
  2. Inpaint area - "Whole picture" for major elements dependent on other things around them (limbs, patterns, etc.), "Only masked" for small, detailed, independent elements like eyes, cutie marks, jewelry, etc.
  3. Denoising strength - how much you want to change the original masked content; I usually set it to values in the range of 0.2-0.4, depending on the situation
  4. Sampling method, resolution, CFG Scale - leave them the same as in the original image
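For anyone scripting this outside a WebUI: the "Only masked" mode in point 2 roughly amounts to cropping a padded bounding box around the mask, inpainting just that crop, and pasting the result back so small elements get processed at higher effective resolution. A minimal numpy sketch of that crop-and-paste bookkeeping, where `inpaint_fn` is a hypothetical stand-in for whatever model call you use:

```python
import numpy as np

def masked_bbox(mask: np.ndarray, padding: int) -> tuple:
    """Padded bounding box (top, bottom, left, right) of the nonzero mask region."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    top = max(ys.min() - padding, 0)
    bottom = min(ys.max() + 1 + padding, h)
    left = max(xs.min() - padding, 0)
    right = min(xs.max() + 1 + padding, w)
    return top, bottom, left, right

def inpaint_only_masked(image, mask, inpaint_fn, padding=32):
    """Crop the padded mask region, run inpaint_fn on the crop, paste it back."""
    t, b, l, r = masked_bbox(mask, padding)
    crop = image[t:b, l:r].copy()
    result = inpaint_fn(crop, mask[t:b, l:r])  # a real UI would also upscale here
    out = image.copy()
    # Composite: generated pixels only where the mask is set, original elsewhere
    region_mask = mask[t:b, l:r].astype(bool)
    out[t:b, l:r][region_mask] = result[region_mask]
    return out
```

The padding matters: it gives the model some surrounding context so the regenerated patch blends with its neighborhood instead of being filled in isolation.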
Yeah, inpainting is the big one I used before in NovelAI. I had mixed results with local AI though. Often it will just fill the selected zone with blurred colors or not connect things at all.
There's no extra addon or step you do besides painting the zone you want remade, right? (Also I'm doing it on the Inference/Stability Matrix UI for ComfyUI, not sure if that matters)
tyto4tme4l

Something of an artist
@Montaraz13
I use inpainting a lot, quite often combined with crude manual editing in GIMP. At the end of the creation process, I also manually clean up any imperfections such as artifacts and discolored inpainting areas.
Inpainting is very powerful and easy to use; you basically tell the model to fix the selected area, and most of the time it works great.
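Conceptually, with "Masked content: original" the tool regenerates pixels only inside the painted area and composites them over the untouched original, which is why the rest of the image stays pixel-identical. A toy numpy illustration of that final composite step (the `generated` array is a stand-in for actual model output):

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Keep original pixels outside the mask; take generated pixels inside it."""
    mask = mask.astype(bool)
    out = original.copy()
    out[mask] = generated[mask]
    return out
```

Denoising strength then only governs how far the pixels inside the mask may drift from the source before this composite happens.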
That is really nice. I love show-accurate style when it is pulled off well. Do you fix things yourself in Photoshop and such, or do you use some kind of process with the AI itself? I've heard about several tools for fixing errors but have yet to figure them out.