# SciThinker-30B

SciThinker-30B is a language model fine-tuned for scientific ideation. Given a seed paper (title and abstract), it proposes a follow-up research idea with high potential academic impact.

This model accompanies the paper *AI Can Learn Scientific Taste*.

The snippet below loads the model with Hugging Face Transformers, generates a follow-up research idea, and separates the model's reasoning trace from its final answer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenMOSS-Team/SciThinker-30B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. You first think about the reasoning process in your mind and then provide the user with the answer."},
    {"role": "user", "content": "You are a knowledgeable and insightful AI researcher. You have come across a new research paper with the following title and abstract:\n\nTitle: ...\nAbstract: ...\n\nBased on the core ideas, methods, or findings of this work, engage in heuristic thinking and propose a follow-up research idea. You need not confine yourself to the specific scenario or task of the original paper. You may consider shortcomings of the original method, propose improvements, apply its ideas to other tasks or AI domains, or even introduce entirely new problems and approaches. Aim to formulate an idea with high academic value and potential impact.\n\nIn your response, solely present your proposed title and abstract. Think independently and there is no need to imitate the format of the provided paper's title and abstract, nor to intentionally cite it. You must ensure that no specific numerical results are included in the abstract. You must ensure the abstract is of a moderate length, avoiding excessive length, as if you were writing it for a typical academic paper.\n\nOutput format (strict, no extra text):\nTitle: <your proposed paper title>\nAbstract: <your proposed abstract>"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20
)
# Keep only the newly generated tokens (drop the prompt).
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# 151668 is the token id of </think>, which closes the reasoning trace.
# Everything before it is the model's thinking; everything after is the answer.
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    # No </think> token found: treat the whole output as the answer.
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
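Because the prompt enforces a strict `Title: ... / Abstract: ...` output format, the final answer can be split into structured fields. A minimal sketch (the `parse_idea` helper name is illustrative, not part of the model's API):

```python
import re

def parse_idea(text: str) -> dict:
    """Split the model's strict 'Title: ... Abstract: ...' output into fields."""
    match = re.search(
        r"Title:\s*(?P<title>.+?)\s*Abstract:\s*(?P<abstract>.+)",
        text,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("Output did not follow the expected format")
    return {
        "title": match.group("title").strip(),
        "abstract": match.group("abstract").strip(),
    }

sample = "Title: Example Idea\nAbstract: A short abstract."
print(parse_idea(sample))
# → {'title': 'Example Idea', 'abstract': 'A short abstract.'}
```

Raising on a malformed response makes it easy to retry generation when the model occasionally deviates from the requested format.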
Model size: 31B parameters · Tensor type: BF16 · Format: Safetensors
