Character LoRAs - Want more character variety? Then check This List!
👻 Ghost Signatures! Want to know how to easily remove them? - Check Here!
Description

ZoinksNoob is pretty decent at show-accurate style, but there were still plenty of issues to fix. Getting two characters to kiss was as problematic as ever. Still, I think it turned out well.
Created using the ZoinksNoob model available on HuggingFace
If you want to support me:
BuyMeACoffee

Tags: safe, ai composition, generator:zoinksnoob, prompter:tyto4tme4l, rainbow dash, twilight sparkle, pegasus, pony, unicorn, g4, bed, bed sheet, bedroom, curtains, duo, duo female, eyes closed, female, full body, horn, indoors, kiss on the lips, kissing, legs in air, lesbian, lying down, lying on bed, lying on top of someone, mare, on back, on bed, pillow, ship:twidash, shipping, show accurate, side view, unicorn twilight, window

Comments



Hmm, I see. I'll have to try Forge out, since I've never used it. I've been using ComfyUI through Inference because it felt the most straightforward/friendly. I tried Stable Diffusion WebUI too, but for some reason it was giving me horrible results in general.
Thanks for the answer!
tyto4tme4l

Something of an artist
@Montaraz13
Yes, it matters a lot. I use Forge WebUI (also via Stability Matrix), I know inpainting is more tricky in ComfyUI. I use options similar to the ones in the screenshot below. Important things to note:
  1. Masked content - what should be used under the mask; I always leave it at “original” to use the original image
  2. Inpaint area - “Whole picture” for major elements that depend on their surroundings (limbs, patterns, etc.), “Only masked” for small, detailed, independent elements like eyes, cutie marks, jewelry, etc.
  3. Denoising strength - how much you want to change the original masked content; I usually set it to values in the range of 0.2-0.4, depending on the situation
  4. Sampling method, resolution, CFG Scale - leave them the same as in the original image
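To make the “Whole picture” vs. “Only masked” distinction concrete: in “Only masked” mode the UI processes just a crop around the painted mask (expanded by the “Only masked padding” setting) at full resolution, which is why it works so well for small details. A minimal sketch of that crop computation, using Pillow (the function name and padding default here are illustrative, not the WebUI's actual code):

```python
from PIL import Image

def only_masked_region(mask, padding=32):
    """Return the crop box an 'Only masked' inpaint would process.

    mask: PIL 'L' image where white (255) marks the area to inpaint.
    padding: extra context pixels around the mask, clamped to the image.
    """
    bbox = mask.getbbox()  # bounding box of all nonzero (painted) pixels
    if bbox is None:
        raise ValueError("mask is empty - nothing was painted")
    left, top, right, bottom = bbox
    return (
        max(0, left - padding),
        max(0, top - padding),
        min(mask.width, right + padding),
        min(mask.height, bottom + padding),
    )

# Example: a 512x512 mask with a small painted square near the centre.
mask = Image.new("L", (512, 512), 0)
mask.paste(255, (200, 200, 240, 240))

print(only_masked_region(mask, padding=32))  # (168, 168, 272, 272)
```

The crop would then be inpainted at the chosen resolution and pasted back, which is also why “Whole picture” is safer for limbs and patterns: those need context from outside any small crop.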
Yeah, inpainting is the big one I used before in NovelAI. I had mixed results with local AI, though. Often it will just fill the selected zone with blurred colors or not connect things at all.
There's no extra addon or step besides painting the zone you want remade, right? (Also, I'm doing it in the Inference/Stability Matrix UI for ComfyUI, not sure if that matters.)
tyto4tme4l

Something of an artist
@Montaraz13
I use inpainting a lot, quite often combined with crude manual editing in GIMP. At the end of the creation process, I also manually clean up any imperfections such as artifacts and discolored inpainting areas.
Inpainting is very powerful and easy to use: you basically tell the model to fix the selected area, and most of the time it works great.
That is really nice. I love show-accurate style when it is pulled off well. Do you fix things yourself in Photoshop and such, or do you use some kind of process with the AI itself? I've heard about several tools for fixing errors but have yet to figure them out.