Instructions to use nitrosocke/elden-ring-diffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use nitrosocke/elden-ring-diffusion with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "nitrosocke/elden-ring-diffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
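The Diffusers snippet above uses a generic prompt, but this checkpoint is a style fine-tune: the model card indicates the trained token is "elden ring style", which should appear in the prompt to activate the style. A minimal sketch of building such a prompt (the `styled_prompt` helper is illustrative, not part of any library):

```python
def styled_prompt(subject: str, token: str = "elden ring style") -> str:
    """Prepend the fine-tuned style token to a subject description."""
    return f"{token}, {subject}"

prompt = styled_prompt("portrait of a knight, dramatic lighting, highly detailed")
print(prompt)
# elden ring style, portrait of a knight, dramatic lighting, highly detailed
```

Pass the resulting string to `pipe(prompt)` exactly as in the snippet above.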
Adding `safetensors` variant of this model
#13 opened over 1 year ago by SFconvertbot
Just curious what version of stable diffusion this was trained on
#11 opened about 3 years ago by rayregula
A beautiful girl is dancing, and her clothes have been transformed into Indian garb, complete with feathers, mascots, and stunning makeup. Wide shot, You can't see the face, black suit, red room, low light key, bitcoin, python code at background, AI in cyber body, dark blue, neon, dollars. success, luck, time, lucky, future, luxary dark, High Quality, hyperrealistic, photography, cinematic, volumetric lighting, octane render, arnold render, 3D, Super detailed, Megapixel Cinematic Lighting, Anti-Aliasing, FKAA, TXAA, RTX, SSAO, Post Processing, Post-Production, Tone Mapping, Wide shot CGI, VFX, SFX, Full color, Volumetric lighting, HDR, Realistic, 8k Maybe some reflective surfaces in the room to create interesting shadows. Or perhaps some ambient light from computer monitors or other electronic devices.
#10 opened over 3 years ago by ISPA
Update README.md
#9 opened over 3 years ago by ISPA
class prompt
#4 opened over 3 years ago by hiero
images used for training
#1 opened over 3 years ago by Jerold