
Posts

For more information, see the search syntax documentation. Search results are sorted by creation date.

Search Results

Creative Corner » Diary of a prompter » Topic Opener

truekry

Wizzard

Foreword

Hello dear readers,
you may ask, what is this? The answer would be “a summary of my personal experiences playing with AI to generate images, so far”. I want to share my experiences, the discoveries I’ve made and the little tricks I came across. My main goal is to get people new to the topic up to speed, and maybe some old veterans can learn a thing or two as well.
That said, English is only my second language. If you see a mistake or a passage of text that is just weird, please tell me and I will correct it. You are also invited to share your own experiences and so on.

The Content

Chapter 1 - What you can see

Let’s start with a simple, but powerful one: If you get an idea for an image, just stop and think a moment before you prompt. Why? Let’s say you want Fluttershy on a bed. Easy! pony, feral pony, my little pony, close up, fluttershy, indoors, bedroom, on bed, lying on side. Boom! Done. And now you have a wardrobe between the bed and the wall where there realistically isn’t enough space to fit one. Bummer. But why? The problem here is bedroom, believe it or not. While training a model, pictures of bedrooms are tagged as such and the AI learns the concept of them. What do most bedroom pictures have in them? A nightstand, wardrobes, maybe some pictures on the wall, etc. You get the idea. Nice of the AI to add them automatically, but if we want a picture of just the bed with a pony on it, as in only part of a room, the AI will still try to add these things. We now have two options to avoid this. We could leave bedroom out and add things like wooden wall, window with curtains, pictures to the prompt, OR we put the stuff we don’t want into the negative prompt: wardrobe, nightstand with a high enough weight.
This is one example, of course. Let’s say you want to prompt a character only visible from behind. Something like solo, pony, feral pony, bat pony, rear view, grey fur, spread bat wings, yellow eyes, blue mane. This will most likely result in a bat pony seen at an angle from behind, or a pony looking over their shoulder. This time, yellow eyes is the culprit. The AI will always try to put what you prompt into the picture. If you want the pony to look away from the viewer, you can add looking away with a high weight, but it is way easier to just leave the eyes out.
And this is what I mean. Only put into the prompt what you wish to be in the picture, and mind the “implications”. Any xy-room and so on always comes with implications, for example. So stop a moment and think about what should truly be visible in your scenario. And now the curveball and my favorite example: castle bedroom
If you have been paying attention, you know what could happen here. We get a luxurious room with stone walls, a big window, and outside of it? A castle! Go figure. The AI will generate what you give it, and castle was definitely in there. luxurious bedroom would be a good alternative to use here, for example. A trick that sometimes seems to work, depending on the model, is using “-” or “_”, like castle_bedroom. It’s no cure-all, but it helps. Same principle, different example: Let’s say you want Sweetie Belle without her cutie mark. You put cutie mark in the negative, but it still doesn’t work? The term blank flank is an option, but you probably guessed it: cutiemark, cutie_mark, cutie-mark. To give the curveball a curveball, different models used different tags in training. An “Applejack” or “Apple Jack” kind of scenario. Good luck finding that out.
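To make the “describe only what you want, negate the rest” idea concrete, here is a minimal sketch using the diffusers library with an SDXL-class model. The checkpoint id is a placeholder, not a real model name:

```python
# Minimal txt2img sketch: prompt only what should be visible,
# push unwanted "room implications" into the negative prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "some/pony-sdxl-checkpoint",  # placeholder checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="pony, feral pony, my little pony, close up, fluttershy, "
           "indoors, on bed, lying on side, wooden wall, window with curtains",
    negative_prompt="wardrobe, nightstand",  # the "bedroom implications" we don't want
).images[0]
image.save("fluttershy_bed.png")
```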

Chapter 2 - The Power of Paint

In chapter 1 we talked about prompting, but sometimes a prompt can only get you so far. AI can’t learn new concepts without being retrained, so we as end users have to make do until a new version is out that knows the concept. Like a character from a recent anime. Sure, LoRAs as little DLCs tide us over until PonyV8 - Electric Boogaloo comes out and can generate the whole universe at once, but using too many of them is like Skyrim modding: it gets unstable really fast. So normally you want as few LoRAs as possible. Inpainting is a really powerful tool that not many people seem to use, so here’s a quick example to make my point:
The picture you see on the left? I turned something like that into the right one. You still don’t need any artistic skills, just a general idea of what you want. I used a simple “technique” called blobbing. And it is as dumb as it sounds. Just open Paint (or any other digital drawing application), set it to the resolution you want and drop in simple blobs until they “loosely” resemble your idea. Then write the matching prompt and denoise your “sketch” at 0.75 to around 0.9 (so it’s nearly gone). It will then act as “guidance” for the AI.
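In diffusers terms, blobbing is just img2img with a high strength. A minimal sketch, again with a placeholder checkpoint id:

```python
# img2img "blobbing" sketch: the blob drawing only loosely guides the result.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "some/pony-sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")

blob_sketch = load_image("blobs.png")  # your Paint blobs
image = pipe(
    prompt="pony, feral pony, pegasus, flying over a field",
    image=blob_sketch,
    strength=0.85,  # the "denoise" 0.75-0.9 range: blobs are nearly repainted
).images[0]
image.save("guided.png")
```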
I used the same technique here, except I drew into the generated image, then masked the area and let it redraw only that part. At one point, this was a one-pony picture. (1 cookie if you can tell which one was the original.)

Interlude

Something you need to be aware of in general is prompt “bleed”. Let’s say you want a picture of Twilight Sparkle taking a sun bath. Most likely the result will be a dusk picture. The problem is Twilight. As mentioned in chapter 1, the AI will take what is there: it takes the “twilight” as a cue for a dusk setting, like the castle example. It’s a general issue most models have. Fern, from the popular anime, is a good second example. If I generate her, I also get a lot of fern, the plant. (Frieren and Fern are actually German words and mean “freezing” and “distance” respectively. The Japanese love to use German words in their fantasy writing.)
This can be countered by adding the time of day we don’t want to the negative prompt, or fern plant in the other case. Just another thing to look out for in general.

Chapter 3 - Fantasy is Dead

So far, I talked about having a general idea, or having a picture in mind. What if you don’t? What if you just have the basic need of “need cute pone pictures”? This chapter is for you when you don’t get why grown men get excited over some sticks on the internet. Jokes aside, sometimes we just have a vague inkling or are simply bored. That’s what the cfg scale is for. Most models give users a range like 3-7, but what does this actually do? It’s basically a scale for how “obedient” the AI is to your prompt. The higher the scale, the less it does on its own. Think of the bleed thing I talked about, just on purpose. A low scale also means parts of your prompt are more likely to be ignored. I bet you have generated a pony before, said yellow mane, blue eyes, and got the exact opposite. That could be a too low cfg. I personally consider 3 the lowest, regardless of model. The highest I ever used was 11, back in the SD1.5 days. For SDXL (that includes Pony, Illustrious, Noob, etc.) it is around 7 to 8. While it can go up to 20, you will never be happy with those results, trust me bro.
So now that we’ve established that we are unimaginative bellends, can we still make pictures? Yes, that’s what AI is for, after all. You want a very simple prompt for this, like solo, pony, feral pony, unicorn, twilight sparkle, lying in tall grass and a cfg of 3 (don’t forget the usual quality words). And hit run. That’s it. This is the monkey-with-a-typewriter approach to generation: generate a random number of pictures, and eventually one of them will be Hamlet.
(two example images)
For the second one, I left the grass part out. But as you can see, we still get pictures from absolutely minimal input and let the computer do most of the work. (Or maybe even more than it already does.)
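A minimal sketch of this approach with diffusers (placeholder checkpoint id), just looping over seeds at a low guidance scale:

```python
# "Monkey with a typewriter": many seeds, low cfg, minimal prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "some/pony-sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")

prompt = ("masterpiece, best quality, solo, pony, feral pony, unicorn, "
          "twilight sparkle, lying in tall grass")

for seed in range(16):
    image = pipe(
        prompt=prompt,
        guidance_scale=3.0,  # low cfg: the model improvises more on its own
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"idea_{seed:02d}.png")
```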
I personally only use this approach if I am absolutely out of ideas. Sometimes I strike gold, sometimes something in the results gives me an idea and I prompt with more purpose (4-6 cfg). But this is what most AI haters think making “good AI pictures” looks like. The downside: here, they are somewhat right. These pictures will most likely be bland and simple. But what if we could spice this up? What if… LoRA? And here comes the fun part: throwing LoRAs for special effects at this often gives very interesting results.

Chapter 4 - Fusion

Let’s go from simple to complicated. Or to be more precise, detailed. Detailed backgrounds are difficult in the sense that AI has no concept of depth, space and consistency. That’s why you get two beds in a bedroom, or five hundred lamps, and so on. The AI doesn’t remember what it already put in, it just guesses what should be there. And its biggest enemy? Characters. They disrupt the generation: it starts anew between the legs, or the right side looks different from the left because the character splits the picture in half. That’s why most backgrounds “suck” in generated images. But there is a way around it, and all you need is a free image editing tool (Gimp, Krita, Photopea, etc.) and an extra 10-20 minutes of your time.
And now hold onto your horses, because the trick is: we generate the background and the character separately. I know, mind blown, take a minute to come back to reality, I will wait. But jokes aside, it’s not as hard as it sounds. We need 3 prompts for this little stunt. One for the character, one for the background, and one that is the combination of the two. Then we get to generating. For the character, just look for a fitting angle and the pose you want. We ignore the background here. (Also lighting and so on.)
Once we have what we want, we generate the background. Prompts like no humans, scenery, wide shot are our friends here. This is where you set the mood and tone: night time, day time, stuff like that. AI is good at generating just a background, since there is no interruption by unshapely creatures.
Now comes the human part, aka you. Once we have both pictures we want to “marry”, we open the character in our editing tool of choice and use the lasso tool. Just cut her out like Sunset did Twilight in the Equestria Girls movie. It doesn’t need to be pixel perfect or anything. Then open the background and slap that bad boy in. Play a little with size, angle, lighting and such if you want to (and know how), then save the image; your part is done.
Remember chapter 2? Well, we do that now, just with a really low denoise this time, around 0.3 to 0.4, and our combined third prompt. Inpainting will clean up our sloppy mess and make it fit together. And if not on the first try, do it again, at 0.2 to 0.3 this time. And then we have a picture with a detailed background (that makes sense) and a character or two in it. Or TL;DR: photobashing is still a thing.
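A sketch of the whole “marriage” in code, assuming diffusers and Pillow (paths and checkpoint id are placeholders):

```python
# Photobash sketch: paste a lasso-cut character onto a generated background,
# then blend the seams with a low-strength img2img pass.
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

background = Image.open("background.png").convert("RGB")
character = Image.open("character_cutout.png").convert("RGBA")  # transparent outside the cut
background.paste(character, (420, 300), mask=character)  # rough placement is fine
background.save("combined.png")

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "some/pony-sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="the combined third prompt: character + background + mood",
    image=background,
    strength=0.35,  # low denoise: keep the composition, fix lighting and seams
).images[0]
image.save("married.png")
```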

Chapter 5 - Let it be Light

Lighting can have a huge impact on any given scene, regardless of whether it is film or picture. This is also true for AI generated images. There are various forms of lighting, but I will keep it to the most used ones. I learned about this stuff when I started to get into AI, and it can help make a picture better. But what do I mean by “lighting”? Imagine you want to take a picture of your friends at an outing. If they are facing the sun, they are properly lit, but will squint their eyes to shield them. If they face away, chances are there isn’t enough light for a good picture, especially with a smartphone camera. So you stand them at an angle to the sun, so they don’t look directly at it, but still get enough light for the picture. And now remember: we can control the sun in this case. And we do it with the right prompts:
natural lighting
Uses sunlight or ambient light from the environment.
dramatic lighting
Creates strong contrast between light and shadow.
cinematic lighting
Mimics film lighting with controlled shadows and highlights.
soft lighting
Diffused and even light with gentle shadows.
hard lighting
Sharp, strong light that creates crisp shadows.
rim lighting
Creates a border of light around edges or persons.
volumetric lighting
Also known as light rays or god rays.
Thanks to ChatGPT for the short descriptions of the different techniques.
There are more, of course, but these are the most common ones. If you want an outdoor scene, go for natural. If you have an action scene, dramatic or cinematic, and so on. The right light makes a big difference and the AI knows these terms.
You can go further into detail. Let’s say we have a pony walking through a forest on a warm summer day. Our prompt could look like this: solo, pony, feral pony, full body, unicorn, rarity, walking towards viewer, outdoors, forest, tall grass, natural lighting, volumetric lighting, dappled sunlight, thick leaf canopy

Chapter 6 - Designing a prompt

We have come so far, learned some basic techniques and even some “pro haxxor moves”. Now it’s time to talk about the absolute basics. There are people making money with “prompt design”, and I have never heard anything sillier. It’s not a science, just basic logic and a little bit of knowledge about how the AI works. Here is a basic graphic of how it works in our case. (Source @Witty-Designer7316)
The first info we need is the base model we are using. Let’s say our model is named “Perfect Pony XL”. The name gives it away, but the description on civitai also states it is based on ponyv6. So it should take the same “quality words” as ponyv6. And after a quick check of the sample images: yes, it does. So now we can put our prompt together:
The most important rule is: the more important something is, the closer to the beginning of the prompt it should be. That’s why quality words come first. So, depending on the model, our first words should look something like this:
score_9, score_8_up, score_7_up, masterpiece, best quality, amazing quality
The next thing should be what we want. Let’s say we want Fluttershy with spread wings standing in a lake.
So something like: solo, pony, feral pony, full body, pegasus, Fluttershy, spread wings, partly submerged, wet fur
Now the background: outdoors, forest, lake, tall grass, flowers
And lastly, additional things like lighting: natural lighting, dappled sunlight, dusk
This gives us a final prompt:
score_9, score_8_up, score_7_up, masterpiece, best quality, amazing quality, solo, pony, feral pony, full body, pegasus, Fluttershy, spread wings, partly submerged, wet fur, outdoors, forest, lake, tall grass, flowers, natural lighting, dappled sunlight, dusk
But we are not done. Now we come to weights! They make it possible to mark parts of the prompt as more important than others, to control how much of what we want. The program I use takes “+” and “-” as weights, so for me it would look something like this:
score_9, score_8_up, score_7_up, masterpiece, best quality, amazing quality, solo, pony, feral pony, (full body)-, pegasus, Fluttershy, spread wings, partly submerged, (wet fur)+, outdoors, forest, lake, tall grass, (flowers)-, natural lighting, (dappled sunlight, dusk)++
Most other programs use a syntax like this:
score_9, score_8_up, score_7_up, masterpiece, best quality, amazing quality, solo, pony, feral pony, (full body:0.8), pegasus, Fluttershy, spread wings, partly submerged, (wet fur:1.2), outdoors, forest, lake, tall grass, (flowers:0.8), natural lighting, (dappled sunlight:1.5), (dusk:1.5)
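If you ever need to translate between the two styles: each “+” multiplies the weight by roughly 1.1 and each “-” by 0.9, so “++” lands at about 1.2 (this matches how Invoke treats it, as mentioned in my Liquid Rainbow thread). A small hypothetical helper to illustrate the mapping:

```python
# Hypothetical converter from "(term)++" suffix weights to "(term:1.21)"
# numeric weights, assuming "+" = x1.1 and "-" = x0.9 per sign.
import re

def convert_weights(prompt: str) -> str:
    def repl(match: re.Match) -> str:
        term, signs = match.group(1), match.group(2)
        weight = 1.1 ** signs.count("+") * 0.9 ** signs.count("-")
        return f"({term}:{weight:.2f})"
    return re.sub(r"\(([^()]+)\)([+-]+)", repl, prompt)

print(convert_weights("(full body)-, pegasus, (wet fur)+, (dappled sunlight, dusk)++"))
# (full body:0.90), pegasus, (wet fur:1.10), (dappled sunlight, dusk:1.21)
```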
BUT WAIT, THERE IS MORE!
This is of course only 50% of what we need. We still need to tell the AI what we don’t want. So first we check the base model page again and get the default negative prompt quality words: score_4,score_3,score_2,score_1
Then the model site of our merge Perfect Pony XL: ugly, worst quality, bad quality, jpeg artifacts
And normally that would be enough, but I will add a few personal favorites of mine today:
long body, realistic, monochrome, greyscale, artist name, signature, watermark
Giving us the final negative prompt:
score_4,score_3,score_2,score_1, ugly, worst quality, bad quality, jpeg artifacts, long body, realistic, monochrome, greyscale, artist name, signature, watermark
We could add weights here too, but let’s try it without them first (I used tpony here, another ponyv6-based model):
(result image)
Okay, the general idea is there, but not quite right. And now the hard truth: ponyv6 is a little outdated, as is the tpony model I used here. So I swapped the quality words to fit a popular Illustrious model and got this:
(result image)
Better, but still off. I want Fluttershy to look at the viewer, so we add looking at viewer and try again. Or better: you try. Happy prompting!

Interlude

We already talked about photo editing software. Another thing you can do that doesn’t require learning or skill is making small edits with the “band-aid tool”. It is a tool that looks around a marked area and tries to guess the content of said area. I made a video of it in action a little while ago, where I use it to remove ghost signatures:
That said, it can do far more than just that. I find it way easier to remove smaller mistakes the same way. Knowing your tools and the options they provide is a valuable skill on its own, so play around!

To be continued


Creative Corner » Do you think there's a need for embeddings that fix ponies? » Topic Opener

CappyAdams

Proud SwarmUI addict
I’ve been thinking quite a bit about making an embedding that fixes stuff like extra/missing legs, bad anatomy, etc., but I can’t tell if there’s a need for that. Should I train an embedding for that, just in case? Please give me some feedback on the idea! :D

General Discussion » Chat Thread » Topic Opener

General Discussion » Locking in fur color and shade » Topic Opener

Richard4Ponies

Changeling Ai enthusiast
Can anybody help me with this? I’m trying to get AI to be consistent about a shade of fur like the image below, but I can’t seem to make it stick… it’s regularly either ignoring the color I told it to use and using the canon fur color, or misinterpreting dark fur as black clothes instead.
Darker

Creative Corner » Image to image » Topic Opener

I’m new to this, but what sort of prompt do you need to change the style of a preexisting image - for example, taking a paper-and-pencil drawing and turning it into something that looks digital without changing anything in the image? Is this possible to begin with?

Creative Corner » My LoRAs » Topic Opener

Bendy and Boney

dressed in baloney
Just felt like making a thread about them.
I’ve been taking a break recently, but I have made a lot of MLP character LoRAs (over 160, if I remember correctly).
Maybe post stuff you made with my LoRAs or suggest new ones for me to make? I dunno.

Creative Corner » Show your AI video gens here! (SFW and NSFW are welcome!) » Topic Opener

CappyAdams

Proud SwarmUI addict
I’m curious to see what people come up with, also with what AI can do with ponies in videos! :3
feel free to show off your video gens! (not forcing you to show tho! XD)

General Discussion » Pony Porn [NSFW] » Topic Opener

Site and Policy » A section for commissions? » Topic Opener

There could be a market of people who would rather pay other people to make AI-generated pony-related images for them. People who don’t want to spend the time to learn the intricate ways of coaxing the best images out of a generator, or don’t want to put their money directly into a website that does image generations. Pay someone who’s already familiar with how to manipulate the software.

Site and Policy » Upload limits? » Topic Opener

The DB upload limits on AI (2/day as I recall) were straight-up moral panic, since people could just as easily spam-upload their entire scanned sketchbook or 50 different angles of their 3D scene. The specific targeting of AI was misguided, but the idea of upload limits wasn’t without merit.
I’d like to propose a less restrictive (and fairer) version for Tantabus. It would help people fight the temptation to upload every generation from a session, and save viewers from seeing a wall of similar images when they first load the site. It shouldn’t feel punitive or be something that most people would run up against in ordinary usage, though. Maybe 10/day?

Creative Corner » Liquid Rainbow Effect » Topic Opener

truekry

Wizzard

Liquid Rainbow

Foreword
So, around a week ago I started playing around with Illustrious. And coming from PonyV6, I quickly learned that it can be a lot more “creative”. Yes, a PonyV6 model knows ponies better, but Illustrious, in my opinion, knows more “concepts”. As I do with new toys, I played around and found the effect mentioned in the title:
Resources
We need 3 things:
  • An Illustrious model. I worked with this one, but I tested it with this one, too. Both work; for others you will have to try yourself.
  • The “Velvet’s Mythic Fantasy Styles” LoRA in version 2 for Illustrious. (This LoRA is for a detailed anime style. Any LoRA adding detail could work; I like this one.)
  • The “LeIsT0 | Shiiro’s Styles” LoRA. (This one helps make the colors “pop”.)
The Prompt
The prompt for the Twilight picture looked something like this:
QUALITY WORDS, 1pony, my little pony, pony, feral pony, unicorn, Twilight Sparkle, casting magic at viewer, shiny fur, limited palette, (glow, liquid glowing magic, glowing liquid purple magic)++, intense expression, dramatic lighting, strong depth of field, wind-blown hair, gritty realism, cinematic shot, dutch angle, (simple background, black background)++, dust particles, dynamic pose, dynamic composition, foreshortening, blurry edges, MORE QUALITY WORDS
Some may be confused by the “++” signs. I work with Invoke, which takes “-” and “+” signs as weights. A “++” is around 1.2. So if you use a different UI, you will have to experiment a little. The important parts are simple background, black background for the matte black background to play with. If I lowered those weights, the LoRAs added stuff into it.
The effect itself is achieved with limited palette, dramatic lighting, glow, liquid glowing magic, glowing liquid purple magic. Limited palette makes the AI use as few colors as possible, which is then overridden by the “purple magic” part with its higher weight, muting every color besides the one mentioned. If you replace “purple” with “rainbow”, you get the colors of the rainbow. Easy.
Now the LoRA weights. I used both with good results at 0.8. The trigger words are not in the prompt, since Invoke has its own way to add LoRAs that doesn’t need them. If you add a LoRA, it gets activated, that’s it. You can find the trigger words on the civitai page.
I used Euler A at 25 steps with a CFG around 7 for all the pictures here.
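For reference, here is roughly the same setup sketched with diffusers instead of Invoke. All ids and paths are placeholders, and note that plain diffusers does not parse weight syntax in prompts (you would need something like compel for that):

```python
# Sketch: Illustrious checkpoint + two style LoRAs at 0.8, Euler A, 25 steps, CFG 7.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "some/illustrious-checkpoint", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler A

pipe.load_lora_weights("path/to/mythic_fantasy_v2.safetensors", adapter_name="fantasy")
pipe.load_lora_weights("path/to/shiiro_styles.safetensors", adapter_name="shiiro")
pipe.set_adapters(["fantasy", "shiiro"], adapter_weights=[0.8, 0.8])

image = pipe(
    prompt="1pony, my little pony, pony, feral pony, unicorn, Twilight Sparkle, "
           "casting magic at viewer, glowing liquid purple magic, limited palette, "
           "dramatic lighting, simple background, black background",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("liquid_magic.png")
```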
That should be it, really. Have fun!
Important
The more detail you put into the character description in the prompt, the more detail gets added, and with that: color. If you want good “shade play”, be as brief as possible. (That’s why it works better with known characters, since you usually only have to add their name.)
More Examples
A few more raw examples:

Side Note
If you use the Fantasy LoRA at 0.8-1.0 and the “LeIsT0” LoRA at 1, with ‘limited palette’ and no rainbow liquid, you get this comic-looking style:

Site and Policy » Badges? » Topic Opener

Site and Policy » Site Development Notification and Feedback Thread » Topic Opener

Admin

Administrator
This will mostly be used to highlight features specific to Tantabus, or ones that could be relevant to all users. For most updates and feedback you should check the thread on derpi, as that’s the primary thread for Philomena development.
Now, with that said…


[March 19 2025]

- New search and tag suggestions

  • Go to Settings and Metadata (Fancy tags) and Local (search auto-completion) to enable
  • This will autosuggest tags as you type so long as they exist on an image
  • This will show if a suggestion is aliased
  • For searches, you can set to also keep search history

- CivitAI post scraper

  • The upload “Fetch” function now supports CivitAI /posts/# URLs
  • This will grab the images under a given CivitAI post, and you can select which to use
  • Direct image URLs aren’t supported (API limitation), but every image is part of a Post, even if it’s just one
  • Fetch still uses “artist:” instead of something more appropriate… we’ll work on that

Creative Corner » Ghost Signature Removal » Topic Opener

truekry

Wizzard
Ghost Signatures are those text artifacts that sometimes pop up in the corners of AI generated images. They are unsightly, and they count as a generation error on Tantabus. But there is a very easy solution to rid your picture of them before uploading, and all you need is your trusty browser:
This quick edit can be performed in a few minutes, most of the time not even one. So help us fight the good fight and get rid of these nasty errors.
If it’s something weird
And it don’t look good
Who ya gonna call?

General Discussion » What are you listening to? » Topic Opener

Creative Corner » NSFW Animations » Topic Opener

Mercurial

Misty = wifey <3
Has anyone been able to get a decent nsfw output with a video model like Hunyuan or the new Wan 2.1?
I tried out Wan2GP i2v with a pony image, but it ended up being terrible with basically none of the expected motion.
And each generation takes like an entire hour for me so I’m not sure it’s even worth trying at this point.
Anyone else had better luck?

General Discussion » Post a random image from your Favorited images [A.I. Edition] » Topic Opener

Creative Corner » Post Your NSFW AI Art... here! » Topic Opener

Creative Corner » Image Preference Sorter » Topic Opener

Teaspoon

poni
So I was looking for a simple pair-comparison sorting tool, and all the ones I could find were ad-bloated or just not good.

Why? Idk. Often I gen a bunch of stuff with minor differences, or do several inpaintings in one go, or tinker with styles, and end up mired in indecision. You can use it for whatever: deciding who’s best pony or which meal you want, I guess.

Creative Corner » Best way to generate X Y plots? » Topic Opener

Thoryn

Latter Liaison
I saw a couple of X/Y plots ages ago, and wonder what the best way to generate them is, automatically adding the images as well as the configuration that changed between them in the plot.
(Things like model, sampler, scheduler, steps, prompt, CFG, size/aspect ratio etc.)

Site and Policy » The tag change post limit problem » Topic Opener

Adusak90

Background pony enjoyer
I’ve been editing the tags on pictures for some time, and it has become a real chore to have to wait for the next day when the backlog of pictures grows beyond my ability to edit them (even if I only edit the feral ones). Is there any way to remove that limit for trusted tag editors?
I think 500+ valid tag changes (even allowing 10% wrong as the benefit of the doubt) should be enough to trust someone with unlimited tag edits.

Creative Corner » Images with character in mirror » Topic Opener

Thoryn

Latter Liaison
I’ve been trying to make images where a character is also seen in a mirror, but don’t get any results that are remotely close to decent.
Is it possible with just text to image prompting, or do I have to use sketching and image to image, external image tools or other tricks?
For the record currently using Pony V6.

Creative Corner » Samplers » Topic Opener

Hey, so what’s the deal with Stable Diffusion samplers? I’ve used NotebookLM to get a summary of their differences and aims, but does anyone have any practical insight from using them?
Here’s a NotebookLM podcast where it breaks down the differences in depth: https://notebooklm.google.com/notebook/0676cdde-a432-4322-94d9-99f2e99e2603/audio
And an abridged briefing doc in layman’s terms:
Euler: "Basic Steps" or "Simple Subtraction" - This sampler works by simply removing noise in each step. It's like taking away a layer of blur to reveal the image. It is also one of the fastest and most straightforward samplers 

Euler a: "Creative Variation" or "Adding Noise" – This one is like the "Basic Steps" sampler, but with a bit of random noise added back in at each step. This makes the results more unpredictable, like a creative filter that produces a slightly different image each time [2, 3]. This also means that the image will keep changing as more steps are added [4, 5].

DPM2: "Smart Prediction" - This sampler uses a smarter way to predict how to remove the noise, making it more accurate [6].

DPM2 Karras: "Smart Prediction, Better Colors" – Like the previous one but with improved color quality.

DPM++ 2M: "Advanced Smart Prediction" – An improved version of "Smart Prediction" that uses some extra information from previous steps to make better predictions [6, 7].

DPM++ 2M Karras: "Advanced Smart Prediction, Better Colors" – Like the previous one but with improved color quality [6, 7].

DPM++ SDE: "Detailed Stochastic Prediction" – This sampler uses a more complex math model to understand noise, resulting in detailed images [7]. It is stochastic, meaning that it introduces a degree of randomness, which can lead to varied results [8].

DPM++ SDE Karras: "Optimized Detailed Stochastic Prediction" - This is a version of the previous sampler that is optimized for better performance and image quality [7].

DPM fast: "Fast DPM" - This sampler is designed to be fast, but may require more steps and is not generally recommended [9-11].

DPM adaptive: "Self-Adjusting Detail" - Instead of using steps, this sampler adjusts itself based on a setting that changes the image's contrast and saturation [4, 12].

Heun: "Two-Step Correction" - This sampler works by predicting the image, checking the prediction, and then combining both for a better result [13]. It uses a weighted average of two noise estimates [14].

LMS: "Artistic Style" or "Painterly" – This sampler uses information from previous steps to create an image with a more artistic or painterly style [15]. It can struggle with generating detailed characters or animals [16].

LMS Karras: "Artistic Style, Better Colors" – Like the previous one but with improved color quality [16].

DDIM: "Photorealistic Detail" or "Smooth Solver" – This sampler uses a special method to generate images which are often photorealistic and highly detailed. It was widely used, but is now considered outdated by some [17, 18].

PLMS: "Quick Estimator" – This sampler quickly estimates the noise and removes it, but it is not generally recommended because it is slower and produces worse results [9, 19].

LCM: "Fast Refiner" or "Single Step Image" – This sampler can produce good images very quickly, in as little as one step. It uses a special technique to refine the image in its latent space [20].

Restart: "Noise Reset" or "Iterative Correction" - This sampler is like restarting the image generation by adding a lot of noise, and then starting again with the denoising process. It does this several times [21, 22].

UniPC: "Smart Combination" or "Flexible Solver" – This sampler is designed to combine information in a way that it can be used with many different models. It can also change its level of accuracy to work faster [23].
These names aim to be more intuitive, focusing on the core action or result of each sampler. For example, instead of “DPM++ 2M,” you get “Advanced Smart Prediction,” which better describes what the sampler does, without needing to know the underlying math. The “Karras” variants are noted for their improved color quality. The ancestral samplers have “variation” or “adding noise” in their names. The goal is to provide a better, more intuitive understanding of how each sampler functions.
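If you want to experiment with these yourself, most UIs have a sampler dropdown; in the diffusers library the samplers are “schedulers” that you can swap on an existing pipeline. A small sketch (checkpoint id is a placeholder):

```python
# Swapping samplers ("schedulers") on a diffusers pipeline.
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "some/sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")

# "Euler a" (the "creative variation" sampler above):
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Or "DPM++ 2M Karras" ("advanced smart prediction, better colors"):
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("a pony in a meadow", num_inference_steps=25).images[0]
image.save("sampler_test.png")
```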

Site and Policy » 'Hide' Button Isn't Working for Me » Topic Opener

BigBuggyBastage

Go fsck yourself
If I click the Hide button for an image, I expect it to be hidden (unless I click the Show Hidden link at the top of any page) upon refreshing. But it is not doing so for me. The images remain visible.
I’m using the ‘Everything’ filter, but I’m using the exact same filter on several other ’boorus and not having this problem.
Any help would be appreciated… and I’m sure it’s something silly I’ve done wrong.
p.s. This is occurring across browsers (chromium & firefox) and OSes (Arch Linux & Win10), with and without extensions.
EDIT: As of now (~ 11 PM UTC 2025-01-12), the Hide functionality IS WORKING NORMALLY. I’m unsure why – literally nothing has changed on my end – though I’m thankful it’s “fixed,” even if temporarily. I’ll keep an eye on it, and report back any change in this.

Site and Policy » We need quality standards » Topic Opener

truekry

Wizzard
Hello and welcome to my little nitpick corner. I know the site is young and all, but we need a quality standard. And I’m not talking about AI mistakes. AI is not in its infancy anymore, and it is really easy to generate generally good-looking ponies these days. (Especially since they don’t have hands.)
You can’t tell me these aren’t gen-dumps. Just by following any guide/video and using a free online generator you get better-looking results. I picked these examples at random from the last 5-6 pages, but there are so many more.
We need a bare minimum beyond “please only whole ponies, not disfigured garbage”. To quote rule #9:
We strive to create a community that showcases the best of what AI has to offer, from raw generations to artworks that used AI as part of their creation process.

Default search

If you do not specify a field to search over, the search engine will search for posts with a body that is similar to the query's word stems. For example, posts containing the words winged humanization, wings, and spread wings would all be found by a search for wing, but sewing would not be.

Allowed fields

  • author (Literal): Matches the author of this post. Anonymous authors will never match this term. Example: author:Joey
  • body (Full Text): Matches the body of this post. This is the default field. Example: body:test
  • created_at (Date/Time Range): Matches the creation time of this post. Example: created_at:2015
  • id (Numeric Range): Matches the numeric surrogate key for this post. Example: id:1000000
  • my (Meta): my:posts matches posts you have posted if you are signed in. Example: my:posts
  • subject (Full Text): Matches the title of the topic. Example: subject:time wasting thread
  • topic_id (Literal): Matches the numeric surrogate key for the topic this post belongs to. Example: topic_id:7000
  • topic_position (Numeric Range): Matches the offset from the beginning of the topic of this post. Positions begin at 0. Example: topic_position:0
  • updated_at (Date/Time Range): Matches the creation or last edit time of this post. Example: updated_at.gte:2 weeks ago
  • user_id (Literal): Matches posts with the specified user_id. Anonymous users will never match this term. Example: user_id:211190
  • forum (Literal): Matches the short name for the forum this post belongs to. Example: forum:meta