Description

Image imported from Derpibooru.
Original: https://derpibooru.org/images/3027306
Metadata at the time of import:
Favorites: 54
Upvotes: 77
Downvotes: 12
Score: 65
Comments: 9
Uploader: Lord Waite
Original description below this line

Fluttershy’s kindly agreed to demonstrate a few different models for us at different sampling steps here.
I felt this might be useful for people using stable diffusion (with the automatic1111 webui) to generate pictures, and people who aren’t interested in that side of things can still enjoy the pictures.
The model used was pony v2: https://huggingface.co/AstraliteHeart/pony-diffusion-v2 . VAE was on automatic. Not sure if that will affect things.
The prompt was:
Fluttershy, ear fluff, explicit, looking back at viewer
beautiful, adorable, cute, show accurate, anatomically correct
sharp focus, intricate detail, absurdres, highres, 8x
The negative prompt was:
3d, anthro, human
out of frame, cropped, blurry, lowres, worst quality, low quality
username, watermark, signature, jpeg artifacts, text, error
bad anatomy, bad proportions, gross proportions, poorly drawn face
CFG scale: 11, Seed: 1099699696, Size: 512x512
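For anyone who'd rather drive this from a script than click through the UI, here's a minimal sketch of reproducing a single cell of the chart through the webui's API. It assumes the webui was launched with the --api flag and uses the /sdapi/v1/txt2img endpoint; field names can differ a bit between webui versions, so treat it as a starting point rather than the exact method used here.

```python
# Minimal sketch: reproduce one image with the settings above via the
# automatic1111 webui API (assumes the webui was started with --api).
import base64
import requests

payload = {
    "prompt": ("Fluttershy, ear fluff, explicit, looking back at viewer, "
               "beautiful, adorable, cute, show accurate, anatomically correct, "
               "sharp focus, intricate detail, absurdres, highres, 8x"),
    "negative_prompt": ("3d, anthro, human, out of frame, cropped, blurry, lowres, "
                        "worst quality, low quality, username, watermark, signature, "
                        "jpeg artifacts, text, error, bad anatomy, bad proportions, "
                        "gross proportions, poorly drawn face"),
    "cfg_scale": 11,
    "seed": 1099699696,
    "width": 512,
    "height": 512,
    "sampler_name": "DDIM",  # one of the samplers from the chart
    "steps": 30,             # one of the step counts from the chart
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns images as base64-encoded strings in the "images" list.
with open("fluttershy_ddim_30.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```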
Also, the chart was generated with the very useful X/Y plot script.
X was set to sampler: Euler a,DPM++ SDE,DPM Fast,DPM2 a Karras,DDIM
Y was set to steps: 30-120 (+30)
It’s a great way to get an idea of how setting changes will affect things, and as you can see from the chart, sometimes things you wouldn’t expect to change do.
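Under the hood, that X/Y plot run is essentially the sweep below: hold everything else fixed and vary the sampler and step count. This is just a hand-rolled sketch against the same API as above (again assuming --api is enabled); the actual script also assembles the labelled grid image for you, which this skips.

```python
# Rough equivalent of the X/Y plot run: same settings each time, only the
# sampler (X) and step count (Y) change. Assumes the webui is running with --api.
import base64
import requests

SAMPLERS = ["Euler a", "DPM++ SDE", "DPM Fast", "DPM2 a Karras", "DDIM"]
STEP_COUNTS = range(30, 121, 30)  # 30-120 (+30)

base = {
    "prompt": "Fluttershy, ear fluff, ...",      # full prompt from above goes here
    "negative_prompt": "3d, anthro, human, ...",  # full negative prompt from above
    "cfg_scale": 11,
    "seed": 1099699696,  # fixed seed, so only the sampler/steps change per cell
    "width": 512,
    "height": 512,
}

for sampler in SAMPLERS:
    for steps in STEP_COUNTS:
        payload = dict(base, sampler_name=sampler, steps=steps)
        resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
        resp.raise_for_status()
        filename = f"{sampler.replace(' ', '_')}_{steps}.png"
        with open(filename, "wb") as f:
            f.write(base64.b64decode(resp.json()["images"][0]))
```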
The CFG scale is another useful one to mess with, and “Prompt S/R” is particularly useful: you list a series of phrases separated by commas, and it looks for the first one in the list in your prompt and substitutes each phrase in the list in turn. (For example, if I had listed all of the Mane 6 there, each of them would have taken a turn modeling.)
Anyways, hope that helps a few people…

Comments


Background Pony #AF7D
@Background Pony #AF7D
It has issues with minor characters and more niche content/fetishes. Ideally, it would just be plug and play: you could use more natural language (instead of tags) for prompts and it would have a really good understanding of MLP characters and content. We’re not quite there yet, so we have to make do as newer models are trained to address the shortcomings of previous ones.
If you have the GPU and VRAM for it (~8GB), you can train embeddings/Textual Inversion or use the Dreambooth extension to give the AI a better idea of a character (or characters). You can also merge models to improve results or to have it understand certain concepts/fetishes better. The tools are there; they just require more work to get settled in place.
Fair, though it is where the discussions are happening at the moment.

Imported from Derpibooru - Posted by EpsilonWolf
Background Pony #AF7D
@Background Pony #AF7D
When you are on txt2img/img2img/etc, there’s a script pulldown at the bottom. Choose X/Y plot, and it will give you fields where you can choose the X type & value and Y type & value. I chose “Sampler” for X, and typed in the names of the samplers separated by commas for the values. For Y, I chose “Steps”, and typed in 30-120 (+30), which tells it to go from 30 to 120 in 30-step intervals. Doing this automatically generates all of them and creates the chart.
The other options in X/Y plot basically work the same way. Another one worthy of note is “Prompt S/R”: you type a list of phrases, and it substitutes each in turn for the first one in the prompt. (For this prompt, a fun one to do would be: Fluttershy,Rainbow Dash,Rarity,Twilight Sparkle,Pinkie Pie,Applejack …)
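If it helps to picture it, the substitution itself is nothing fancier than this little sketch (plain Python, just to show what Prompt S/R does with that list):

```python
# What Prompt S/R does with that list: the first entry is the phrase it looks
# for in your prompt, and each entry (including the first) is swapped in, in turn.
prompt = "Fluttershy, ear fluff, explicit, looking back at viewer"
phrases = ["Fluttershy", "Rainbow Dash", "Rarity",
           "Twilight Sparkle", "Pinkie Pie", "Applejack"]

search = phrases[0]  # Prompt S/R searches for the first entry in the prompt
for replacement in phrases:
    print(prompt.replace(search, replacement))
# Fluttershy, ear fluff, explicit, looking back at viewer
# Rainbow Dash, ear fluff, explicit, looking back at viewer
# ...and so on, one prompt per character
```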

Imported from Derpibooru - Posted by Lord Waite
Background Pony #AF7D
@Background Pony #AF7D
Yeah, I’ve got the basics down and am getting a feel for things. I do end up feeling like there ought to be a way to tell whether it actually understands the words and phrases you’re typing in, beyond just experimenting with taking them out. It also feels like, while the pony v2 model knows the Mane 6 and the CMC, it doesn’t know a lot of characters beyond that (say, Spike, Silver Spoon, Diamond Tiara, Gilda, Gabby, etc…).
I’m also not much for discord, which is why I was hoping for an actual forum. Ah well, I’ll keep it in mind.
@Background Pony #AF7D
Yeah, you can get away with a lot more low-end than people might think, though upgrading to something more modern improves those speeds a lot. But then, speed is one reason things like DDIM and DPM Fast can be useful…

Imported from Derpibooru - Posted by Lord Waite
Background Pony #AF7D
@Background Pony #AF7D
There are already two pony models that are free to use with SD: Cookie’s and Pony Diffusion (a.k.a. Purplesmart.ai). Purplesmart is being actively developed and is working on new models that they will release publicly. I wish NAI and others were this way, but that’s their business model.
@Background Pony #AF7D
The Voldy guide is the go-to for setting things up. It’s pretty straightforward and tells you everything you need to know for getting it set up. If it’s not stated in the install instructions, it’s not worth worrying about.
The Purplesmart community is pretty strong in having a lot of resources to help people along, either with generating images or setting up a local install. Might want to check there if you need help or advice. AFAIK, it’s the only centralized area for talking about this kind of stuff in the MLP community.

Imported from Derpibooru - Posted by EpsilonWolf
Background Pony #AF7D
@Background Pony #AF7D
You don’t even need a decent card; even a GTX1650 (which is what I’m using) or a GTX1050 is good enough for this stuff. Okay, you’ll wait 4-15 minutes per image (depending on SD settings), but it’s still very much fully usable and completely free.
I just wish people would do what is done here - post seeds, or at least the tags and model used.

Imported from Derpibooru - Posted by dummy1234
Background Pony #AF7D
@Background Pony #AF7D
I’ve definitely noticed quite a lot of NovelAI around here, and you can get pretty good results with just straight Stable Diffusion for free, as long as you have a decent nvidia graphics card.
It’d probably help if there was more discussion going on around here about how to set it up and get good results, though. I’m pretty good at picking these things up quickly, but things like good phrases to use in prompts for ponies specifically can be hard to find. (Though typing Derpibooru tags in helps…)

Imported from Derpibooru - Posted by Lord Waite
Background Pony #AF7D
We need more people to start using the open models instead of relying on paid solutions like NovelAI.

Imported from Derpibooru - Posted by dummy1234
Background Pony #AF7D
One interesting thing on this chart was how consistent DDIM is. Trying it at lower values, the first full Fluttershy picture is at 6 steps, and while 8-10 have issues, they are cute and would be pretty good with a little editing. It might be a good sampler to use for quick generation, since most samplers take longer to produce good output.

Imported from Derpibooru - Posted by Lord Waite