
To enable fp16 (which can cause numerical instabilities with the vanilla attention module on the v2.1 model), run your script with `ATTN_PRECISION=fp16 python`.
By default, the attention operation of the model is evaluated at full precision when xformers is not installed.
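As a minimal sketch of how such an environment flag might be consumed inside the attention module (the variable name `ATTN_PRECISION` matches the text above; the `attention_precision` helper is hypothetical, not part of the repository's API):

```python
import os

def attention_precision() -> str:
    # Hypothetical helper: read the ATTN_PRECISION environment variable,
    # falling back to full precision (fp32) when it is unset.
    return os.environ.get("ATTN_PRECISION", "fp32")

# With ATTN_PRECISION unset, attention runs at full precision.
os.environ.pop("ATTN_PRECISION", None)
print(attention_precision())  # fp32

# Launching as `ATTN_PRECISION=fp16 python <your_script>` sets the variable
# for the process; here we set it in-process for illustration.
os.environ["ATTN_PRECISION"] = "fp16"
print(attention_precision())  # fp16
```

The attention code would then branch on this value, e.g. casting queries and keys to half precision only when `fp16` is requested.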
New stable diffusion model (Stable Diffusion 2.1-v, HuggingFace) at 768x768 resolution and (Stable Diffusion 2.1-base, HuggingFace) at 512x512 resolution, both with the same number of parameters and architecture as 2.0, fine-tuned from 2.0 with a less restrictive NSFW filtering of the LAION-5B dataset. The following list provides an overview of all currently available models.
This repository contains Stable Diffusion models trained from scratch and will be continuously updated with new checkpoints.