Posts
Search Results
Creative Corner » Samplers » Topic Opener
Hey, so what’s the deal with Stable Diffusion samplers? I’ve used NotebookLM to get a summary of their differences and aims, but does anyone have any practical insight from using them?
Here’s a NotebookLM podcast that breaks down the differences in depth: https://notebooklm.google.com/notebook/0676cdde-a432-4322-94d9-99f2e99e2603/audio
And an abridged briefing doc in layman’s terms:
Euler: "Basic Steps" or "Simple Subtraction" - This sampler works by simply removing noise in each step. It's like taking away a layer of blur to reveal the image. It is also one of the fastest and most straightforward samplers.
Euler a: "Creative Variation" or "Adding Noise" – This one is like the "Basic Steps" sampler, but with a bit of random noise added back in at each step. This makes the results more unpredictable, like a creative filter that produces a slightly different image each time [2, 3]. This also means that the image will keep changing as more steps are added [4, 5].
DPM2: "Smart Prediction" - This sampler uses a smarter way to predict how to remove the noise, making it more accurate [6].
DPM2 Karras: "Smart Prediction, Better Colors" – Like the previous one but with improved color quality.
DPM++ 2M: "Advanced Smart Prediction" – An improved version of "Smart Prediction" that uses some extra information from previous steps to make better predictions [6, 7].
DPM++ 2M Karras: "Advanced Smart Prediction, Better Colors" – Like the previous one but with improved color quality [6, 7].
DPM++ SDE: "Detailed Stochastic Prediction" – This sampler uses a more complex math model to understand noise, resulting in detailed images [7]. It is stochastic, meaning that it introduces a degree of randomness, which can lead to varied results [8].
DPM++ SDE Karras: "Optimized Detailed Stochastic Prediction" - This is a version of the previous sampler that is optimized for better performance and image quality [7].
DPM fast: "Fast DPM" - This sampler is designed to be fast, but may require more steps and is not generally recommended [9-11].
DPM adaptive: "Self-Adjusting Detail" - Instead of using steps, this sampler adjusts itself based on a setting that changes the image's contrast and saturation [4, 12].
Heun: "Two-Step Correction" - This sampler works by predicting the image, checking the prediction, and then combining both for a better result [13]. It uses a weighted average of two noise estimates [14].
LMS: "Artistic Style" or "Painterly" – This sampler uses information from previous steps to create an image with a more artistic or painterly style [15]. It can struggle with generating detailed characters or animals [16].
LMS Karras: "Artistic Style, Better Colors" – Like the previous one but with improved color quality [16].
DDIM: "Photorealistic Detail" or "Smooth Solver" – This sampler uses a special method to generate images which are often photorealistic and highly detailed. It was widely used, but is now considered outdated by some [17, 18].
PLMS: "Quick Estimator" – This sampler quickly estimates the noise and removes it, but it is not generally recommended because it is slower and produces worse results [9, 19].
LCM: "Fast Refiner" or "Single Step Image" – This sampler can produce good images very quickly, in as little as one step. It uses a special technique to refine the image in its latent space [20].
Restart: "Noise Reset" or "Iterative Correction" - This sampler is like restarting the image generation by adding a lot of noise, and then starting again with the denoising process. It does this several times [21, 22].
UniPC: "Smart Combination" or "Flexible Solver" – This sampler is designed to combine information in a way that it can be used with many different models. It can also change its level of accuracy to work faster [23].
These names aim to be more intuitive, focusing on the core action or result of each sampler. For example, instead of “DPM++ 2M,” you get “Advanced Smart Prediction,” which better describes what the sampler does, without needing to know the underlying math. The “Karras” variants are noted for their improved color quality. The ancestral samplers have “variation” or “adding noise” in their names. The goal is to provide a better, more intuitive understanding of how each sampler functions.
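To make the Euler vs. Euler a distinction above concrete, here’s a minimal sketch in the spirit of k-diffusion’s samplers (not the actual code from any UI). `denoise`, `x`, and `sigmas` are placeholders for the model’s denoiser, the noisy latent, and the noise schedule (largest sigma to smallest):

```
import torch

def sample_euler(denoise, x, sigmas):
    # Deterministic: each step just moves toward the model's prediction.
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        denoised = denoise(x, sigma)
        d = (x - denoised) / sigma          # current estimate of the noise direction
        x = x + d * (sigma_next - sigma)    # plain Euler step
    return x

def sample_euler_ancestral(denoise, x, sigmas):
    # Stochastic ("ancestral"): fresh noise is injected after each step, so the
    # image keeps shifting as you add steps - the "creative variation" above.
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        denoised = denoise(x, sigma)
        # Split the step into a "go down" part and a "re-noise" part.
        sigma_up = min(sigma_next,
                       (sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2) ** 0.5)
        sigma_down = (sigma_next**2 - sigma_up**2) ** 0.5
        d = (x - denoised) / sigma
        x = x + d * (sigma_down - sigma)
        if sigma_next > 0:
            x = x + torch.randn_like(x) * sigma_up   # the extra randomness
    return x
```

The only real difference is the re-noising at the end of each ancestral step, which is why Euler a keeps drifting as steps are added while plain Euler settles down.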
Creative Corner » LoRA - Characters » Post 11
@Background Pony #D6A2
Here’s a discussion that talks about LoRAs and how to create them!
Creative Corner » Text-to-image prompting » Post 42
@mp40
No problem.
The nice thing about ollama is that it runs as a service, so you install that and install whatever models you want through it, then you can use any program that talks to ollama to interact with the models. (And there are ComfyUI nodes that can talk to it.)
Ollama itself is over here, and it lists all the various models you can install with it (though bear in mind the size: 1b, 3b, or 8b is fine. Don’t download 70b models…):
https://ollama.com/
You can technically talk to the models directly with ollama, but that’s chatting through a command line, so you really do want another program to use with it as an interface.
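(Side note: the “service” part means ollama also exposes a local REST API on port 11434, which is what programs like Open WebUI and those ComfyUI nodes use under the hood. A rough sketch in Python, assuming the `requests` package and that you’ve already pulled a model such as llama3.1:)

```
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",                 # any model you've pulled with ollama
        "prompt": "Describe a pegasus flying over a sunny meadow.",
        "stream": False,                     # return one JSON blob instead of a stream
    },
)
print(resp.json()["response"])               # the generated text
```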
I personally am using Open WebUI with it:
https://docs.openwebui.com/
https://github.com/open-webui/open-webui
When installing it with docker, you can choose to install a version that has ollama as well, but I did them separately.
Open WebUI gives you a nice web interface where you can chat with any of the models you install, and even has a way to set it up to talk to ComfyUI, so you can send text from a chat directly to comfyui to generate an image using it as a prompt. It’s fairly fun to play with.
Creative Corner » Text-to-image prompting » Post 41
Thanks for the info! I was using a much simpler setup – I had Mistral Small installed via Pinokio and was just trying jailbreak prompts, even though I thought Mistral Small was uncensored.
Creative Corner » Text-to-image prompting » Post 40
@mp40
Oh, also, one thing worth mentioning is that I think the longer the system prompt is, the more likely it is for the system prompt to start going out of the context window. I’ve noticed that since the instructions to uncensor it are at the beginning, it tends to start becoming censored again if you put too much in the system prompt.
Creative Corner » Text-to-image prompting » Post 39
@mp40
No problem. It’s one of these spots where I really need to play more with it, and there might be better ways to do some of it, but that’s what was getting me results.
I remember one oddity was that autocomplete on what I was typing kept giving “I can’t talk about this topic” type lines, but the actual response was uncensored.
There could easily be better models, too. I just remember trying two or three and this was the one that was giving decent results.
Creative Corner » Text-to-image prompting » Post 38
@Lord Waite
Thanks!
Creative Corner » LoRA - Characters » Post 10
@Adusak90
I would hope so!
Added links to the static pages to the OP:
https://tantabus.ai/pages/lorachar
https://tantabus.ai/pages/lorachar-minor
https://tantabus.ai/pages/lorachar-bg
There are a lot of characters; I still have to go through several dozen that I’m already aware of. And I need to generate some 60 preview images, since the default previews for many tend to be anthro / borderline NSFW…
But hopefully in a week or two this’ll make a nice useful list of character LoRAs.
Creative Corner » LoRA - Characters » Post 9
If this is going to eventually bloom into more background pony art, then as an enjoyer of these mares I have to say you’re doing god’s work right here. I applaud and cheer for your noble mission.
Creative Corner » Text-to-image prompting » Post 37
@mp40
I haven’t done more with it, but with ollama, the key was making a custom “Modelfile” file, and creating a model from that Modelfile.
What you can do is copy the modelfile of an existing model and modify it.
So, first:
ollama pull rolandroland/llama3.1-uncensored
That installs the model you are going to base it off of. Then, if you run:
ollama show rolandroland/llama3.1-uncensored --modelfile
it’ll print that model’s Modelfile to the console, so just copy it to a file named Modelfile.
Then change the FROM section to say:
FROM rolandroland/llama3.1-uncensored:latest
and add a section at the bottom that says:
SYSTEM """<your prompt here>"""
Then just write a prompt for how the AI is going to act there. You basically want to describe to it what its purpose is, let it know that it’s uncensored and can describe sexual acts and such, tell it not to add in disclaimers, tell it the exact format that a prompt should be in and the type of words it should use, and give it a few examples of real prompts.
(I’d give one here, but looking at it, I really want to clean it up and improve it. I was explicitly telling it to add the line of score tags, then a source and rating tag, then a description, then several paragraphs of danbooru tags.)
Then run:
ollama create <new model name> --file Modelfile
Keep the Modelfile and try using the model you generated. If you want to tweak it, do:
ollama rm <model>
then change the Modelfile and rerun the create command.
That’s basically how to do it, in any case. The key is going to be playing with the prompt until something sticks, and basing it off the right model; I remember trying a different model or two and not having as much luck…
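(One hedged aside: if you end up scripting against ollama anyway, the system prompt doesn’t strictly have to be baked into a Modelfile – the /api/chat endpoint accepts one per request. A rough sketch, with the prompt left as a placeholder just like above:)

```
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "rolandroland/llama3.1-uncensored",  # or your custom created model
        "messages": [
            {"role": "system", "content": "<your prompt here>"},   # placeholder
            {"role": "user", "content": "twilight sparkle reading in a library"},
        ],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])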
Creative Corner » LoRA - Characters » Post 8
Will be adding G5 and OCs to that page… tomorrow maybe.
Please do list more if you know of them!
Creative Corner » LoRA - Characters » Post 7
@tyto4tme4l
I find this one gives weird aspect ratio screwups on occasion, which is mildly annoying.
Creative Corner » LoRA - Characters » Post 6
@Background Pony #D6A2
“Low Rank Adaptation”
Very short version: it’s a model mod that adds/modifies information in the main model, so you can prompt stuff the main model didn’t have.
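Slightly longer version, as a sketch rather than a definition: a LoRA ships a pair of small low-rank matrices for each weight it touches, and applying it adds their product on top of the base weight. The shapes and numbers below are made up purely for illustration:

```
import torch

W = torch.randn(768, 768)          # an existing weight matrix in the base model
A = torch.randn(8, 768) * 0.01     # "rank 8" adapter matrices, tiny compared to W
B = torch.randn(768, 8) * 0.01
strength = 0.8                      # the LoRA weight slider in your UI

W_patched = W + strength * (B @ A)  # roughly what applying the LoRA does at load time
```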
Creative Corner » LoRA - Characters » Post 5
Velvet Remedy (Fallout: Equestria)

Base Model: PonyV6
Trigger words: velvet remedy, pony, fallout equestria
Creative Corner » LoRA - Characters » Post 4
Background Pony #D6A2
what is a lora?
Creative Corner » LoRA - Characters » Post 3
(reserved… there’s a lot)
Creative Corner » LoRA - Characters » Post 2
(reserved)
Creative Corner » LoRA - Characters » Post 1
(reserved)
Creative Corner » LoRA - Characters » Topic Opener
Thread to compile known character LoRAs
Not all characters are well known (or known at all) in PonyV6 or other SDXL models, so LoRAs are often required for quick gens. Multiple LoRAs for lesser-known characters already exist, but are not always easy to find. Herein I’d like to list known ones.
Of course, please comment in the thread with the ones you’re aware of, and any additional info you might have for them, like base model (PonyV6 for 99% of them, but still), trigger words, strength, etc., and I’ll add them to the first post(s) and the page above as soon as I can.
Creative Corner » Text-to-image prompting » Post 36
@Lord Waite
Have you done anything else with this? I’m looking for resources on how to build or curate an LLM of my own, but the “uncensored” model is still denying some of my prompt requests. Do I just need to try other jailbreak prompts till something works, or?
Creative Corner » Generation methods, UI » Post 15
I believe for ADetailer, you’d be installing the ComfyUI Impact Pack. (Which is easiest if you have the ComfyUI Manager installed, of course.)
And yeah, definitely will take a bit of getting used to. If you go under Workflow->Browse Templates, the Image Generation template there is pretty much the default one. Change the model to pony v6, change the width and height, put a prompt in, and hit Queue…
Watching some videos will probably help; just be aware some things won’t match because the UI has changed.
Creative Corner » Generation methods, UI » Post 14
Finally got ComfyUI to work on my end, but the UI looks a bit overwhelming, so need to set aside many consecutive hours one day to really dig into it.
And I must be blind, because I couldn’t find ADetailer in it. Will look more into it later though.
Creative Corner » Text-to-image prompting » Post 35
I always feel like it helps to have a bit of a base understanding on how the models work on these things.
Initially, someone created a large dataset of images and descriptions. The descriptions were tokenized, and the images cut up into squares. Training then took one square, generated random noise based on a seed, and attempted to denoise that noise into the image on the square. Once it got something close, it discarded the square and grabbed another one. At the end, all of this was saved in a model.
Now, what happens when you are generating an image is that your prompt is reduced to tokens by a text encoder (XL-based models use CLIP-L and CLIP-G), random noise is generated from the specified seed, and then the sampler and noise schedule are how it denoises, with as many steps as you specify.
Some samplers introduce a bit of noise at every step, namely the ancestral ones (with an a at the end) and SDE, but there may be others. With those, the image is going to change more between steps and they’ll be more chaotic. Also, some will take fewer steps than others to get to a good image, and how long each step takes will vary a bit. I believe some are just better at dealing with certain things in the image, too, so it’ll take some playing around.
Now, the clip text encoder actually can’t cope with anything more than 77 tokens at once, and that includes a start and end token, so effectively 75. So if your prompt is more than 75 tokens, it gets broken up into chunks of 75.
The idea behind “BREAK” is that you are telling it to end the current chunk right there and just pad it out with null tokens at the end. The point is just that you’re making sure that particular part of the prompt is all in the same chunk. I’ve had mixed results with it, so I try doing it that way occasionally, but also don’t a lot of the time. The encoder is going to get confused sometimes anyway; this is just an attempt to minimize it a bit.
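To make the chunking concrete, here’s a rough sketch using the Hugging Face CLIP tokenizer. It illustrates the idea only; the model name, padding token, and BREAK handling are simplifications rather than exactly what A1111 or ComfyUI do:

```
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
CHUNK = 75  # 77 total minus the start and end tokens

def chunk_prompt(prompt: str):
    chunks = []
    for part in prompt.split("BREAK"):
        ids = tokenizer(part.strip(), add_special_tokens=False)["input_ids"]
        # Each BREAK-separated part starts a fresh chunk; long parts spill over.
        for i in range(0, max(len(ids), 1), CHUNK):
            piece = ids[i:i + CHUNK]
            # Pad short chunks so every chunk the text encoder sees is the same length.
            piece = piece + [tokenizer.pad_token_id] * (CHUNK - len(piece))
            chunks.append(piece)
    return chunks

print(len(chunk_prompt("score_9, score_8_up BREAK a pegasus flying over a field")))  # 2 chunks
```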
(Text encoding is one of the differences between model architectures, too. 1.* & 2.* had one clip, XL has two, then when you start getting into things like flux and 3, you start dealing with things like two clips and a t5 encoder, and the t5 encoder accepts more like 154 tokens. I also didn’t get into the vae, which is actually what turns the results into an image…)
Showing results 151 - 175 of 328 total
Default search
If you do not specify a field to search over, the search engine will search for posts with a body that is similar to the query's word stems. For example, posts containing the words winged humanization, wings, and spread wings would all be found by a search for wing, but sewing would not be.
Allowed fields
| Field Selector | Type | Description | Example |
|---|---|---|---|
| author | Literal | Matches the author of this post. Anonymous authors will never match this term. | author:Joey |
| body | Full Text | Matches the body of this post. This is the default field. | body:test |
| created_at | Date/Time Range | Matches the creation time of this post. | created_at:2015 |
| id | Numeric Range | Matches the numeric surrogate key for this post. | id:1000000 |
| my | Meta | my:posts matches posts you have posted if you are signed in. | my:posts |
| subject | Full Text | Matches the title of the topic. | subject:time wasting thread |
| topic_id | Literal | Matches the numeric surrogate key for the topic this post belongs to. | topic_id:7000 |
| topic_position | Numeric Range | Matches the offset from the beginning of the topic of this post. Positions begin at 0. | topic_position:0 |
| updated_at | Date/Time Range | Matches the creation or last edit time of this post. | updated_at.gte:2 weeks ago |
| user_id | Literal | Matches posts with the specified user_id. Anonymous users will never match this term. | user_id:211190 |
| forum | Literal | Matches the short name for the forum this post belongs to. | forum:meta |