Character LoRAs - Want more character variety? Then check This List!
👻 Ghost Signatures! Want to know how to easily remove them? - Check Here!
Description

ZoinksNoob is pretty decent at show-accurate style, but there were still plenty of issues to fix. Having two characters kiss was as problematic as ever. Still, I think it turned out well.
Created using the ZoinksNoob model available on HuggingFace
If you want to support me:
BuyMeACoffee

Tags: safe, ai composition, generator:zoinksnoob, prompter:tyto4tme4l, rainbow dash, twilight sparkle, pegasus, pony, unicorn, g4, bed, bed sheet, bedroom, curtains, duo, duo female, eyes closed, female, full body, horn, indoors, kiss on the lips, kissing, legs in air, lesbian, lying down, lying on bed, lying on top of someone, mare, on back, on bed, pillow, ship:twidash, shipping, show accurate, side view, unicorn twilight, window

Comments


Hmm, I see. I'll have to try Forge out since I've never seen it. I've been using ComfyUI through Inference because it felt the most straightforward/friendly. I tried the Stable Diffusion WebUI too, but that was giving me horrible results in general for some reason.
Thanks for the answer!
tyto4tme4l

Something of an artist
@Montaraz13
Yes, it matters a lot. I use Forge WebUI (also via Stability Matrix), I know inpainting is more tricky in ComfyUI. I use options similar to the ones in the screenshot below. Important things to note:
  1. Masked content - what should be used under the mask; I always leave it at "original" to use the original image
  2. Inpaint area - "Whole picture" for major elements that depend on things around them (limbs, patterns, etc.), "Only masked" for small, detailed, independent elements like eyes, cutie marks, jewelry, etc.
  3. Denoising strength - how much you want to change the original masked content; I usually set it to values in the range 0.2-0.4, depending on the situation
  4. Sampling method, resolution, CFG Scale - leave them the same as in the original image
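The "Only masked" option in item 2 is essentially a crop-and-upscale trick: the UI takes a padded box around the painted mask, inpaints just that region at full resolution, then pastes it back. A rough sketch of that box computation (a hypothetical helper for illustration, not Forge's actual code):

```python
def only_masked_box(mask, padding=32):
    """Padded bounding box (left, top, right, bottom) around the
    painted mask -- roughly the region an "Only masked" inpaint
    pass crops out and processes on its own.

    `mask` is a list of rows of 0/1 values; `padding` plays the
    role of the "Only masked padding, pixels" slider.
    """
    height, width = len(mask), len(mask[0])
    rows = [y for y, row in enumerate(mask) if any(row)]
    cols = [x for x in range(width) if any(row[x] for row in mask)]
    if not rows:
        raise ValueError("mask is empty")
    return (
        max(0, min(cols) - padding),           # left
        max(0, min(rows) - padding),           # top
        min(width, max(cols) + 1 + padding),   # right
        min(height, max(rows) + 1 + padding),  # bottom
    )

# A 64x64 canvas with a 10x10 painted square at x=10, y=20:
mask = [[1 if 10 <= x < 20 and 20 <= y < 30 else 0
         for x in range(64)] for y in range(64)]
print(only_masked_box(mask, padding=8))  # (2, 12, 28, 38)
```

With "Whole picture" no crop happens, so the model sees all the surrounding context but a small masked area gets few effective pixels; the crop-and-upscale of "Only masked" is why it shines on tiny details like eyes and jewelry.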
Yeah, inpainting is the big one I used before in NovelAI. I had mixed results with local AI though. Often it will just fill the selected zone with blurred colors or not connect things at all.
There is no extra addon or step besides painting the zone you want remade, right? (Also, I'm doing it in the Inference/Stability Matrix UI for ComfyUI, not sure if that matters)
tyto4tme4l

Something of an artist
@Montaraz13
I use inpainting a lot, quite often combined with crude manual editing in GIMP. At the end of the creation process, I also manually clean up any imperfections such as artifacts and discolored inpainting areas.
Inpainting is very powerful and easy to use: you basically tell the model to fix the selected area, and most of the time it works great.
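The last step of an inpaint pass is just a composite: keep the original pixels outside the mask and take the model's output inside it. A minimal sketch, assuming a hard 0/1 mask and single-channel pixel rows (real UIs feather the mask edge and blend instead of switching abruptly):

```python
def paste_back(original, inpainted, mask):
    """Keep original pixels where mask == 0, take the freshly
    inpainted pixels where mask == 1. All three arguments are
    lists of rows of equal size."""
    return [
        [new if m else old
         for old, new, m in zip(old_row, new_row, mask_row)]
        for old_row, new_row, mask_row in zip(original, inpainted, mask)
    ]

original  = [[1, 2], [3, 4]]
inpainted = [[9, 9], [9, 9]]
mask      = [[0, 1], [1, 0]]
print(paste_back(original, inpainted, mask))  # [[1, 9], [9, 4]]
```

The blurry fills and disconnected seams mentioned above usually come from the generation step (too little context or too low denoising strength), not from this paste-back, which is why the "Inpaint area" and "Denoising strength" settings matter so much.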
That is really nice. I love show accurate when it is pulled off well. Do you fix things yourself in Photoshop and such, or do you use some kind of process with the AI itself? I've heard about several tools for fixing errors but have yet to figure them out.