Character LoRAs - Want more character variety? Then check This List!

Posts

For more information, see the search syntax documentation. Search results are sorted by creation date.

Search Results

Creative Corner » LoRA - Characters » Post 3

Creative Corner » LoRA - Characters » Post 2

Creative Corner » LoRA - Characters » Post 1

Creative Corner » LoRA - Characters » Topic Opener

Teaspoon

Thread to compile known character LoRAs



Not all characters are well known (if at all) in PonyV6 or other SDXL models, so LoRAs are often required for quick gens. Multiple LoRAs for lesser-known characters already exist, but they are not always easy to find. Herein I’d like to list the known ones.
Of course, please comment in the thread with the ones you’re aware of, along with any additional info you have, like base model (PonyV6 for 99% of them, but still), trigger words, strength, etc., and I’ll add them to the first post(s) on the page above as soon as I can.

cleaned up to cut clutter

Creative Corner » Text-to-image prompting » Post 36

mp40

@Lord Waite
Have you done anything else with this? I’m looking for resources on how to build or curate a llm of my own but the “uncensored” model is still denying some of my prompt requests, do I just need to try other jailbreak prompts till somthning works or?

Creative Corner » Generation methods, UI » Post 15

Lord Waite

I believe for ADetailer, you’d be installing the Comfy-Impact Pack. (Which is easiest if you have the ComfyUI Manager installed, of course.)
And yeah, definitely will take a bit of getting used to. If you go under Workflow->Browse Templates, the Image Generation template there is pretty much the default one. Change the model to pony v6, change the width and height, put a prompt in, and hit Queue…
Watching some videos will probably help; just be aware that some things won’t match because the UI has changed.

Creative Corner » Generation methods, UI » Post 14

Thoryn

Latter Liaison
Finally got ComfyUI to work on my end, but the UI looks a bit overwhelming, so need to set aside many consecutive hours one day to really dig into it.
And I must be blind, because I couldn’t find ADetailer in it. Will look more into it later though.

Creative Corner » Text-to-image prompting » Post 35

Lord Waite

I always feel like it helps to have a bit of a base understanding on how the models work on these things.
Initially, someone created a large dataset of images and descriptions. The descriptions were tokenized, and the images were cut up into squares. Training then took one square, generated random noise based on a seed, and attempted to denoise that noise back into the image on the square. Once it got something close, it discarded the square and grabbed another one. At the end, all of this was saved in a model.
Now, what happens when you are generating an image is that your prompt is reduced to tokens by a text encoder (XL-based models use CLIP-L and CLIP-G), random noise is generated from the specified seed, and then the sampler and noise schedule determine how it denoises, with as many steps as you specify.
Some samplers introduce a bit of noise at every step, namely the ancestral ones (with an “a” at the end) and SDE, but there may be others. With those, the image is going to change more between steps and they’ll be more chaotic. Also, some will take fewer steps than others to get to a good image, and how long each step takes will vary a bit. I believe some are just better at dealing with certain things in the image, too, so it’ll take some playing around.
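To make that loop concrete, here is a toy sketch of the sampling process described above. Everything in it is illustrative: the "noise prediction" is a stand-in for the trained model, and none of this is actual Stable Diffusion code.

```python
import random

def toy_denoise(seed, steps, add_noise_each_step=False):
    """Toy sketch of a sampling loop: start from seeded noise,
    repeatedly remove a fraction of 'predicted' noise.
    Purely illustrative; not a real diffusion model."""
    rng = random.Random(seed)
    # the "latent image" is just a list of four floats here
    x = [rng.gauss(0, 1) for _ in range(4)]
    for _ in range(steps):
        eps = [0.5 * v for v in x]                    # stand-in for the model's noise prediction
        x = [v - e / steps for v, e in zip(x, eps)]   # remove a fraction of the predicted noise
        if add_noise_each_step:                       # "ancestral"-style samplers re-inject a little noise
            x = [v + rng.gauss(0, 0.01) for v in x]
    return x
```

The same seed always reproduces the same starting noise and therefore the same result, which is why fixing the seed lets you re-run a gen with small prompt tweaks.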
Now, the CLIP text encoder actually can’t cope with more than 77 tokens at once, and that includes a start and an end token, so effectively 75. So if your prompt is more than 75 tokens, it gets broken up into chunks of 75.
The idea behind “BREAK” is that you are telling it to end the current chunk right there and pad it out with null tokens. The point is to make sure that a particular part of the prompt is all in the same chunk. I’ve had mixed results with it, so I try it occasionally, but a lot of the time I don’t. The model is going to get confused sometimes anyway; this is just an attempt to minimize that a bit.
(Text encoding is one of the differences between model architectures, too. 1.* & 2.* had one CLIP, XL has two, and then when you start getting into things like Flux and SD3, you start dealing with things like two CLIPs and a T5 encoder, and the T5 encoder accepts more like 154 tokens. I also didn’t get into the VAE, which is what actually turns the result into an image…)
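The chunking and BREAK behavior described above can be sketched roughly as follows. This is my own sketch, not the actual webui implementation; the null token id and the way BREAK positions are passed in are hypothetical.

```python
NULL = 0    # hypothetical padding token id
CHUNK = 75  # usable tokens per CLIP chunk (77 minus the start/end tokens)

def chunk_tokens(tokens, breaks=()):
    """Split a token list into 75-token chunks, padding each with NULL.

    `breaks` holds token indices where a BREAK occurred: the current
    chunk is flushed early and padded, so the next token starts fresh.
    Illustrative sketch only."""
    chunks, cur = [], []
    for i, t in enumerate(tokens):
        if i in breaks:  # BREAK: end the current chunk early
            chunks.append(cur + [NULL] * (CHUNK - len(cur)))
            cur = []
        cur.append(t)
        if len(cur) == CHUNK:  # chunk full: flush it
            chunks.append(cur)
            cur = []
    if cur:  # pad out whatever is left
        chunks.append(cur + [NULL] * (CHUNK - len(cur)))
    return chunks
```

An 80-token prompt thus becomes two chunks of 75, and a BREAK forces a chunk boundary at that spot, which is exactly why it can keep one character’s description together.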

Creative Corner » Text-to-image prompting » Post 34

Thoryn

Latter Liaison
I’ve seen some guides mention to use BREAK in prompts to help guide the model. E.g.
Description of scenery
BREAK
Character 1 wearing denim jeans and red sweater sitting on a bench
BREAK
Character 2 wearing black suit with bowtie walking in the background
But I’m not having much success with it, it still gets confused as to who wears/does what.
Any of you using it successfully?

Creative Corner » Text-to-image prompting » Post 33

MareStare

DJ HORN3 took the wheel
The number of steps depends on the sampler; for Euler it’s 25+ sampling steps, but sometimes it can be lower. I guess it depends on the composition and it’s never constant. I recommend just trying different settings and checking whether increasing the steps substantially improves the image.

Creative Corner » Text-to-image prompting » Post 31

MareStare

DJ HORN3 took the wheel
@Zerowinger
You can use full sentences to describe the prompt with Pony Diffusion as well. Quoting the recommended prompt format from their page on Civitai:
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, just describe what you want, tag1, tag2
where tag1, tag2 are simple words/word combinations similar to derpibooru tags, like “unicorn, blushing, trio, duo”, etc.
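For illustration, that format can be assembled with a tiny helper. The function name and structure are my own; only the score string itself comes from the quoted recommendation.

```python
# The quality boilerplate quoted above, as a constant
SCORE_PREFIX = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

def pony_prompt(description, tags=()):
    """Build a Pony Diffusion prompt in the recommended shape:
    quality prefix, free-form description, then derpibooru-style tags.
    Hypothetical helper, not from the model's docs."""
    parts = [SCORE_PREFIX, description, *tags]
    return ", ".join(p for p in parts if p)

# e.g. pony_prompt("a unicorn reading a book in a library", ["unicorn", "solo", "indoors"])
```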

Creative Corner » Text-to-image prompting » Post 30

Scarlet Ribbon

True Wildcard
@Zerowinger
Different models are trained in different ways, leading to some models being better for natural language, and others better for tag-based prompting. Pony doesn’t completely fail with natural language prompting, but in my experience it performs much better with tag-based. If you add source_pony to your prompt, you can damn near just use Derpi/Tanta tags to get most of the results you’re looking for.

Creative Corner » Text-to-image prompting » Post 29

Zerowinger

3-3/4" Army Man Fan
@MareStare
So basically, including that string is necessary for higher-quality images then? What about the rest of the prompting? On Imagen, I’m used to using full sentences and phrases to describe exactly what I want the output to be; with Pony Diffusion it seems the go-to format is to list each individual aspect as a comma-separated tag.

Creative Corner » Text-to-image prompting » Post 28

MareStare

DJ HORN3 took the wheel
@Zerowinger
The score_* tags are specific to Pony Diffusion. The original idea was that you’d be able to write the score_7_up tag only (just a single tag), and you’d get an image based on the dataset of images of quality 7 or higher.
However, the way this was implemented during training was wrong and completely broken. The developers discovered the bug only in the middle of training, at which point fixing it would have been too expensive (they’d need to restart training from scratch, which could cost potentially tens or even hundreds of thousands of dollars). So they kept the bug and made a guideline to include that lengthy score_9, score_8_up, ... etc. string at the start of the prompt to work around it.
There is more detail on this training fiasco in this article: https://civitai.com/articles/4248/what-is-score9-and-how-to-use-it-in-pony-diffusion

Creative Corner » Text-to-image prompting » Post 27

Zerowinger

3-3/4" Army Man Fan
So, I tried out Pony Diffusion on Civitai with some success, and part of the prompt was copy-pasting the score_x / score_up tags I had seen elsewhere. However, I’m a little confused as to exactly how those tags work; the whole text-to-image format is very different from the style I’m familiar with.
Could I get some insider info on just how this format works in Pony Diffusion and similar checkpoints?

Creative Corner » ADetailer » Post 5

derp621

why make a cutie mark-specific ADetailer LoRA if the AI is going to keep messing up the cutie marks anyway?
It might be the case that part of the original prompt is what is messing up the cutie mark. In these situations ADetailer allows you to add a custom prompt and/or negative prompt for the cutie mark areas it detects.

Creative Corner » ADetailer » Post 4

Scarlet Ribbon

True Wildcard
@Sunny
Because if I’m going to generate stuff that’s ‘not good enough’ to do all the manual editing on, but still interesting enough to share with my friends, I’d rather it look better than worse.

Creative Corner » Pony Wildcards » Post 4

FoalFucker

@Scarlet Ribbon
Nothing really, just anything that is specific to ponies/MLP/derpibooru: anatomy, characters, copyrights, artists, etc. I am going to go through and do it myself for PDXL; I just figured I’d see if anyone had anything they thought was worth sharing.

Creative Corner » ADetailer » Post 3

AIPonyAnon

@Sunny
With the Noob based models at least, it’ll get the cutie mark right some of the time, and if it gets it wrong, then ‘detailing’ it (masked inpainting) will usually get it right after a few tries if it understands the character. For more obscure characters this doesn’t work. For example, with >>34015 I had to manually edit the cutie mark in and then masked inpaint it a few times to blend it in.

Creative Corner » Pony Wildcards » Post 3

Scarlet Ribbon

True Wildcard
@FoalFucker
What specifically are you looking for? I have extremely extensive wild card stuff, but it is primarily smut oriented and it is also a convoluted mess that is designed to work with a specific workflow that I maintain.
Almost everything with my prompter tag on this entire site uses that workflow.
Still, if you have specific things you’re seeking, I can probably fish out something more generically useful.

Creative Corner » Text-to-image prompting » Post 26

Scarlet Ribbon

True Wildcard
@Thoryn
I have the same GPU. I can generate a 1024x1024 image in ComfyUI in less than 15 seconds. I don’t know what was up with Automatic1111, but I was getting similarly glacial performance on it.
Strongly recommend you just get rid of it and learn a different front end.

Default search

If you do not specify a field to search over, the search engine will search for posts with a body that is similar to the query's word stems. For example, posts containing the words winged humanization, wings, and spread wings would all be found by a search for wing, but sewing would not be.
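As a rough illustration of stem matching, here is a deliberately naive stemmer. The real search engine uses a proper stemming algorithm; this sketch only shows why "wings" and "winged" match a search for "wing" while "sewing" does not.

```python
def naive_stem(word):
    """Strip a few common suffixes to approximate a word stem.
    Illustration only; not the search engine's actual algorithm."""
    for suffix in ("ing", "ed", "s"):
        # only strip when a reasonable-length stem remains
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# "wings" and "winged" both reduce to "wing" and match a search for "wing";
# "sewing" reduces to "sew", so it does not.
```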

Allowed fields

Field Selector | Type            | Description                                                                            | Example
author         | Literal         | Matches the author of this post. Anonymous authors will never match this term.         | author:Joey
body           | Full Text       | Matches the body of this post. This is the default field.                              | body:test
created_at     | Date/Time Range | Matches the creation time of this post.                                                | created_at:2015
id             | Numeric Range   | Matches the numeric surrogate key for this post.                                       | id:1000000
my             | Meta            | my:posts matches posts you have posted if you are signed in.                           | my:posts
subject        | Full Text       | Matches the title of the topic.                                                        | subject:time wasting thread
topic_id       | Literal         | Matches the numeric surrogate key for the topic this post belongs to.                  | topic_id:7000
topic_position | Numeric Range   | Matches the offset from the beginning of the topic of this post. Positions begin at 0. | topic_position:0
updated_at     | Date/Time Range | Matches the creation or last edit time of this post.                                   | updated_at.gte:2 weeks ago
user_id        | Literal         | Matches posts with the specified user_id. Anonymous users will never match this term.  | user_id:211190
forum          | Literal         | Matches the short name for the forum this post belongs to.                             | forum:meta