How to use Locutusque/lr-experiment1-7B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Locutusque/lr-experiment1-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Locutusque/lr-experiment1-7B")
model = AutoModelForCausalLM.from_pretrained("Locutusque/lr-experiment1-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use Locutusque/lr-experiment1-7B with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Locutusque/lr-experiment1-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Locutusque/lr-experiment1-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
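The same OpenAI-compatible endpoint can also be called from Python with nothing but the standard library. This is a sketch, not part of the original card: it assumes the vLLM server above is running on `localhost:8000`, and the `build_chat_request` and `chat` helper names are illustrative.

```python
import json
import urllib.request

def build_chat_request(model: str, user_content: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def chat(base_url: str, payload: dict) -> dict:
    """POST the payload to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_chat_request(
        "Locutusque/lr-experiment1-7B", "What is the capital of France?"
    )
    # Requires the vLLM server from above to be running:
    # reply = chat("http://localhost:8000", payload)
    # print(reply["choices"][0]["message"]["content"])
    print(json.dumps(payload, indent=2))
```

The payload shape is the same one the curl example sends, so the snippet works unchanged against the SGLang server below by swapping the port to 30000.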
How to use Locutusque/lr-experiment1-7B with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Locutusque/lr-experiment1-7B" \
  --host 0.0.0.0 \
  --port 30000

# Or start the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Locutusque/lr-experiment1-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Locutusque/lr-experiment1-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

How to use Locutusque/lr-experiment1-7B with Docker Model Runner:
docker model run hf.co/Locutusque/lr-experiment1-7B
The lr-experiment model series is a research project I'm conducting to determine the best learning rate for fine-tuning Mistral. This model uses a learning rate of 2e-5 with a cosine scheduler and no warmup steps.
I used Locutusque/Hercules-2.0-Mistral-7B as the base model and further fine-tuned it on CollectiveCognition/chats-data-2023-09-22 using QLoRA for 3 epochs. I will be keeping track of evaluation results and comparing them against upcoming models in the series.
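For reference, a cosine schedule with no warmup starts at the peak learning rate and decays it to (near) zero over training. This minimal pure-Python sketch is not the training code used here; it just illustrates the shape of the schedule with the 2e-5 peak stated above.

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float = 2e-5) -> float:
    """Cosine-decay learning rate with no warmup: starts at peak_lr
    on step 0 and decays to 0 by total_steps."""
    progress = step / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0, 1000))     # → 2e-05 (peak at the first step)
print(cosine_lr(1000, 1000))  # → 0.0 (fully decayed)
```

With warmup steps, the rate would instead ramp linearly from 0 to the peak before the cosine decay begins; setting warmup to zero skips that ramp entirely.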
| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| agieval_nous | N/A | none | None | acc | 0.3645 | ± | 0.0093 |
| | | none | None | acc_norm | 0.3468 | ± | 0.0092 |
| - agieval_aqua_rat | 1 | none | None | acc | 0.2283 | ± | 0.0264 |
| | | none | None | acc_norm | 0.2283 | ± | 0.0264 |
| - agieval_logiqa_en | 1 | none | None | acc | 0.2965 | ± | 0.0179 |
| | | none | None | acc_norm | 0.3303 | ± | 0.0184 |
| - agieval_lsat_ar | 1 | none | None | acc | 0.2217 | ± | 0.0275 |
| | | none | None | acc_norm | 0.1783 | ± | 0.0253 |
| - agieval_lsat_lr | 1 | none | None | acc | 0.4039 | ± | 0.0217 |
| | | none | None | acc_norm | 0.3686 | ± | 0.0214 |
| - agieval_lsat_rc | 1 | none | None | acc | 0.4870 | ± | 0.0305 |
| | | none | None | acc_norm | 0.4424 | ± | 0.0303 |
| - agieval_sat_en | 1 | none | None | acc | 0.6408 | ± | 0.0335 |
| | | none | None | acc_norm | 0.5971 | ± | 0.0343 |
| - agieval_sat_en_without_passage | 1 | none | None | acc | 0.3932 | ± | 0.0341 |
| | | none | None | acc_norm | 0.3835 | ± | 0.0340 |
| - agieval_sat_math | 1 | none | None | acc | 0.3455 | ± | 0.0321 |
| | | none | None | acc_norm | 0.2727 | ± | 0.0301 |
| Groups | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| agieval_nous | N/A | none | None | acc | 0.3645 | ± | 0.0093 |
| | | none | None | acc_norm | 0.3468 | ± | 0.0092 |