7B AWQ
Collection
These models are selected for their compatibility with small GPUs with 12 GB of memory.
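A rough back-of-the-envelope check shows why 4-bit AWQ models fit on such cards: a 7B-parameter model at 4 bits per weight needs about 3.5 GB for the weights alone, leaving room for the KV cache and activations (the numbers below are illustrative, not a precise memory profile):
# Rough weight-memory estimate for a 4-bit 7B model (illustrative only)
params = 7e9            # 7B parameters
bits_per_weight = 4     # AWQ 4-bit quantization
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~3.5 GB, well under 12 GB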
How to use solidrust/Darewin-7B-AWQ with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="solidrust/Darewin-7B-AWQ")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("solidrust/Darewin-7B-AWQ")
model = AutoModelForCausalLM.from_pretrained("solidrust/Darewin-7B-AWQ")
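A minimal sketch of calling the pipeline once it is loaded (the prompt and generation settings are illustrative):
# Generate a short completion with the pipeline (illustrative settings)
output = pipe("Once upon a time,", max_new_tokens=64, do_sample=True, temperature=0.5)
print(output[0]["generated_text"])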
How to use solidrust/Darewin-7B-AWQ with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "solidrust/Darewin-7B-AWQ"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "solidrust/Darewin-7B-AWQ",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
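Because vLLM exposes an OpenAI-compatible API, the same server can also be called from Python with the openai client (a small sketch; the api_key value is a placeholder, which vLLM accepts unless you configure authentication):
# Query the local vLLM server via the OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.completions.create(
    model="solidrust/Darewin-7B-AWQ",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)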
How to use solidrust/Darewin-7B-AWQ with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "solidrust/Darewin-7B-AWQ" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "solidrust/Darewin-7B-AWQ",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "solidrust/Darewin-7B-AWQ" \
--host 0.0.0.0 \
--port 30000
How to use solidrust/Darewin-7B-AWQ with Docker Model Runner:
docker model run hf.co/solidrust/Darewin-7B-AWQ
Darewin-7B is a merge of several models created with LazyMergekit; see the base model, mlabonne/Darewin-7B, for the full merge recipe.

How to use solidrust/Darewin-7B-AWQ with AutoAWQ:
# Install AutoAWQ and its kernels:
pip install --upgrade autoawq autoawq-kernels
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Darewin-7B-AWQ"
system_message = "You are Darewin, incarnated as a powerful AI."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.
It is supported by Transformers, vLLM, SGLang, and AutoAWQ, as shown in the examples above.
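For context, this is roughly how an AWQ repo like this one is produced with AutoAWQ (a minimal sketch; the output path and quant_config values are illustrative assumptions, not the exact settings used for this model):
# Quantize a full-precision model to 4-bit AWQ (illustrative settings)
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mlabonne/Darewin-7B"    # source (full-precision) model
quant_path = "Darewin-7B-AWQ"         # output directory (hypothetical)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)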
Prompt template (ChatML):
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
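If the tokenizer ships a chat template (most ChatML models do), the same formatting can be produced with transformers' apply_chat_template instead of building the string by hand (a sketch under that assumption):
# Build a ChatML prompt from structured messages
# (assumes the tokenizer bundles a ChatML chat template)
messages = [
    {"role": "system", "content": "You are Darewin, incarnated as a powerful AI."},
    {"role": "user", "content": "Where are you?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)  # should match the template above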
Base model: mlabonne/Darewin-7B