[Tutorial] Replicating a generation from Stable Diffusion WebUI metadata

derp621

I got a comment on one of my images recently asking what AI I use, which I assume means which model. I figure this is as good a time as any to explain how you can regenerate most of the images I post here from what’s in the description.
Let’s use >>59493 as an example:
simple background, evil, (looking at you), smirk, unicorn, hat, outfit, flawless, sparklemoonilll <lora:Flawless_Sparklemoon_tamers12345:1>, countershading, high quality, detailed, solo, portrait, close up
Negative prompt: lips
Steps: 20, Sampler: Euler a, Schedule type: Automatic, CFG scale: 4, Seed: 4016617097, Size: 1080x1080, Model hash: 9d73bac23a, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 25.3.0, Lora hashes: "Flawless_Sparklemoon_tamers12345: 1032bc97f5da", Downcast alphas_cumprod: True, Version: v1.10.1
This is the same metadata Stable Diffusion WebUI can be configured to save to a txt file alongside each image. The first two lines are the positive and negative prompts and should be self-explanatory. Below that is everything you need to know to regenerate the image.
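If you want to script any of this, here’s a rough Python sketch of how you could split that metadata into the prompt, the negative prompt, and the key/value settings on the last line. WebUI’s own parser handles more edge cases; the regex below just knows enough to keep quoted values (like the Lora hashes field) in one piece. The filename is a placeholder:

```python
import re

def parse_infotext(text: str) -> dict:
    """Rough parser for WebUI generation metadata: prompt lines,
    an optional "Negative prompt:" line, then a settings line."""
    lines = text.strip().split("\n")
    settings_line = lines[-1]
    negative, prompt_lines = "", []
    for line in lines[:-1]:
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        else:
            prompt_lines.append(line)

    # Split the settings line on "key: value" pairs, keeping quoted
    # values (which may contain colons and commas) intact.
    pairs = re.findall(r'\s*([^:,]+):\s*("[^"]*"|[^,]*)', settings_line)
    settings = {k.strip(): v.strip().strip('"') for k, v in pairs}
    return {"prompt": "\n".join(prompt_lines),
            "negative_prompt": negative, **settings}

# e.g. the txt file WebUI saves next to the image:
meta = parse_infotext(open("00001.txt", encoding="utf-8").read())
print(meta["Seed"], meta["Model hash"])  # 4016617097 9d73bac23a
```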
When using Stable Diffusion WebUI yourself, you can paste the entire output above verbatim into the prompt field, then press the ↙️ button to automatically set almost everything up. If there’s a setting WebUI doesn’t know how to apply, it will appear in the “Override Settings” section.
The only thing the ↙️ button can’t set up for you is the loaded checkpoint, which you might not even have yet, so take note of the model and Lora hashes, in this case:
Model hash: 9d73bac23a,
Lora hashes: "Flawless_Sparklemoon_tamers12345: 1032bc97f5da",
Hashes are much better than model names or links for finding specific models. While links and tags can be convenient in the short term, they are subject to changes and errors over time. Hashes, on the other hand, are effectively a cryptographic guarantee that you have the identical model that was used in the original generation.
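For the curious: as far as I know, recent WebUI versions derive a checkpoint’s “Model hash” from the first 10 hex digits of the file’s full SHA-256, which you can check yourself with a few lines of Python. The 12-character Lora hashes use a variant that skips the safetensors header before hashing, so the sketch below only covers the checkpoint case:

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Full SHA-256 of a (potentially huge) model file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder filename; WebUI shows the first 10 hex digits.
print(sha256_file("model.safetensors")[:10])  # e.g. 9d73bac23a
```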
Since a lot of the models used here come from CivitAI, that’s a good place to start searching: CivitAI lets you search for models by hash, so simply query the hash itself in the search bar.
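If you prefer scripting it, CivitAI also exposes a public lookup endpoint that accepts these hashes directly. A quick sketch (the endpoint exists in their REST API, but I’m quoting the response fields from memory, so check the live JSON if they’ve moved):

```python
import requests

def civitai_lookup(model_hash: str) -> None:
    """Look up a model version on CivitAI by hash (the short
    hashes from WebUI metadata work here)."""
    url = f"https://civitai.com/api/v1/model-versions/by-hash/{model_hash}"
    resp = requests.get(url, timeout=30)
    if resp.status_code == 404:
        print(f"No CivitAI model matches {model_hash}")
        return
    resp.raise_for_status()
    data = resp.json()
    print(data["model"]["name"], "-", data["name"])
    for f in data.get("files", []):
        print(" ", f["name"], f.get("downloadUrl", ""))

civitai_lookup("9d73bac23a")    # the checkpoint
civitai_lookup("1032bc97f5da")  # the Lora
```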
If you can’t find the model on CivitAI, try other sites like Hugging Face, or even a generic web search for the hash. If you can’t find it anywhere, that means one of the following:
  1. The model was private and never uploaded to the Internet
  2. The model once existed but has been nuked from the Internet
  3. The model was corrupted or modified by the generator in some way but was still able to generate an image
One last thing: even if you have everything needed to replicate the image perfectly, differences between AI stacks (PyTorch version, CUDA vs ROCm, etc.) might still lead to slightly different results, though they should be similar enough to be recognisable as the same image. And obviously, any manual editing or inpainting done after generation can’t be replicated; for example, >>59493 was uploaded with a transparent background.
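As an aside, if you’d rather replicate a generation outside WebUI entirely, here’s a rough sketch of the settings above using Hugging Face diffusers. The filenames are placeholders for whatever files match the hashes, “Euler a” maps to the Euler ancestral scheduler, and WebUI-only features (the (…) attention weighting, the <lora:…> tag syntax, ADetailer’s face pass) have no direct equivalent here, so expect an approximation at best:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Placeholder filenames; use the files whose hashes match the metadata.
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(  # "Euler a"
    pipe.scheduler.config
)
pipe.load_lora_weights("Flawless_Sparklemoon_tamers12345.safetensors")

image = pipe(
    # Prompt minus the <lora:...> tag, which diffusers loads separately.
    prompt="simple background, evil, (looking at you), smirk, unicorn, "
           "hat, outfit, flawless, sparklemoonilll, countershading, "
           "high quality, detailed, solo, portrait, close up",
    negative_prompt="lips",
    num_inference_steps=20,   # Steps: 20
    guidance_scale=4.0,       # CFG scale: 4
    width=1080, height=1080,  # Size: 1080x1080
    clip_skip=2,              # Clip skip: 2
    generator=torch.Generator("cuda").manual_seed(4016617097),  # Seed
).images[0]
image.save("replica.png")
```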