Natural language has a strong impact on this model as of V7. It won’t understand everything, but it can get you places. When you can’t squeeze any more out of it, use e621 tags to refine your prompt (or just use tags alone; either approach works).
Start your prompt with a short natural-language description of what you want, then pad it with e621 tags to refine specific concepts. Because this is a v-prediction model, prompt interpretation tends to be more literal.
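For example, a combined prompt might look like this (purely illustrative; swap in your own subject and tags):

```
a digital painting of an anthro red fox sitting in a sunlit forest,
anthro, fox, red fur, forest, sunlight, detailed background, solo
```

The opening sentence carries the overall scene, while the trailing e621 tags pin down the specific concepts.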
Many flavor words and artists from SD 1.5 work again.
PolyFur is trained on MiniGPT-4 captions, so try being really flowery with your prompts and even use full sentences.
If an established character isn’t coming out accurately, try increasing the weight of their token and adding a few of the implied tags that describe their appearance. Keep in mind that characters that aren’t very popular, or that have few images in FluffyRock’s dataset, typically won’t fare well without a LoRA.
Avoid weighting camera angle keywords too strongly, especially close-up.
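If you’re using an A1111-style frontend, token strength is typically adjusted with parenthesis weighting. A sketch of what that might look like (the character name is a placeholder):

```
(character name:1.2), anthro, canine, blue eyes, fluffy tail
```

Values around 1.1–1.3 are a common starting point; the same syntax is why heavily weighting something like (close-up:1.5) can distort the composition.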
Resolutions between 576 and 1088 pixels should work reasonably well, as that is the range FluffyRock was trained on.