HOWTO: LoRA models with Stable Diffusion
LoRA, short for Low-Rank Adaptation, is a technique used to fine-tune large AI models (like language or vision models) efficiently and with fewer resources.
Instead of updating all the parameters in a massive pre-trained model—which is expensive and memory-intensive—LoRA freezes the original model and adds small, trainable layers (called low-rank matrices) to specific parts of the model (like attention layers). These additions learn the task-specific changes, allowing the core model to remain unchanged.
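The idea can be sketched in a few lines of NumPy. This is a toy illustration only, not actual Stable Diffusion code; the dimensions and rank are made-up values chosen to show the parameter savings:

```python
import numpy as np

# Toy low-rank adaptation: the pretrained weight matrix W stays frozen;
# only the two small factors A and B would be trained.
d, r = 1024, 8                      # model dim and LoRA rank (r << d)
W = np.random.randn(d, d)           # frozen pretrained weights
A = np.random.randn(r, d) * 0.01    # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection (init to 0)

# Effective weights at inference time: original plus low-rank update.
W_eff = W + B @ A

full_params = W.size                # parameters a full fine-tune updates
lora_params = A.size + B.size       # parameters LoRA actually trains
print(full_params, lora_params)     # 1048576 vs 16384 (~64x fewer)
```

Because B starts at zero, the adapted model is initially identical to the base model, and training only ever touches the 16,384 LoRA parameters instead of the full million.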
Using LoRA models with Stable Diffusion is a super popular way to customize the style, character, or theme of your image generations without retraining the whole model.
Prerequisites:
Run a Stable Diffusion job, replacing the secret value, preset, and any other configuration as needed
Download the model using the SDnext interface:
Go to Models -> HuggingFace
Generate the first image:
Prompt:
Ghibli style futuristic stormtrooper with glossy white armor and a sleek helmet, standing heroically on a lush alien planet, vibrant flowers blooming around, soft sunlight illuminating the scene, a gentle breeze rustling the leaves
Let's find a trained model on Civitai; we need to filter models by our Stable Diffusion version.
Alternatively, you can search HuggingFace (tags: LoRA, Stable Diffusion, 3.5).
I will use the "studio-ghibli-style-lora" LoRA model.
Now we need to copy our model into the /Lora directory of our storage volume.
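The copy step can be scripted as well. Here is a minimal Python sketch; the file name and directory layout are assumptions for illustration (on the real volume the target would be your storage volume's /Lora directory), and the empty stand-in file takes the place of the actual downloaded weights:

```python
import shutil
from pathlib import Path

# Hypothetical relative paths for illustration; substitute your real
# storage volume's Lora directory and the file you downloaded.
lora_dir = Path("models/Lora")
downloaded = Path("studio-ghibli-style-lora.safetensors")
downloaded.write_bytes(b"")   # stand-in for the real downloaded weights

lora_dir.mkdir(parents=True, exist_ok=True)      # create /Lora if missing
shutil.copy2(downloaded, lora_dir / downloaded.name)
print((lora_dir / downloaded.name).exists())     # True
```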
After copying the model and hitting the refresh button on the Lora tab, we should be able to see our downloaded model.
Click the Networks -> Lora tab, hit refresh, and click your LoRA model; that adds the LoRA to your prompt, and you will be able to generate images using it.
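In A1111-style UIs (including SDnext), clicking the model card typically appends an activation tag to the prompt text. It usually looks like the following, where the trailing number is the LoRA weight (the 0.8 here is an assumed example value; 1.0 is the default):

```
Ghibli style futuristic stormtrooper with glossy white armor and a sleek helmet <lora:studio-ghibli-style-lora:0.8>
```

Lowering the weight blends less of the LoRA's style into the generation; raising it makes the style more dominant.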
For example, we generated a Ghibli-style stormtrooper.