This File

This file is a suggested template for the information to include in a new model's model card, so that documentation stays organized and third parties can easily understand and use the model.

Model Name

Required (2–3 sentences): Describe what the model does and the kind of data or setting it was built for. Be explicit about whether the model is expected to work with educator or student transcript data. Use plain language and avoid jargon so someone unfamiliar with the project can understand it. Include domain-specific details that matter for evaluation (for example, whether tutoring is in-person or via chat, the student age group, the subject area, and any other relevant context).

This model [does X] on [domain Y] data. It was developed by [team] as part of [project context if relevant] using data from [where/how it’s used—e.g., chat-based or in-person tutoring sessions] with [target users—e.g., adult learners, middle school students] in [specific subject/setting—e.g., algebra tutoring].


Training Details

Datasets

Required. List every dataset used. For proprietary data, fill in the table below; do not leave rows blank.

| Dataset | Split | Size | Source | Notes |
|---|---|---|---|---|
| dataset-name | train | 50k | public | |
| Internal dataset v2 | train | 12k | Shared by partner | filtered for PII |

Hyperparameters

Required. Fill in the table. Do not omit the optimizer betas/epsilon; these matter for reproducibility.

| Parameter | Value |
|---|---|
| Learning rate | 2e-5 |
| Batch size | 32 |
| Optimizer | AdamW (β₁=0.9, β₂=0.999, ε=1e-8) |
| Epochs / Steps | 3 epochs |
| Warmup | 500 steps (linear) |
| Weight decay | 0.01 |
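As a hedged illustration, the table above could be captured as keyword arguments in the style of `transformers.TrainingArguments` (the exact argument names are an assumption; verify them against the `transformers` version you install):

```python
# Sketch: the hyperparameter table as a config dict. Key names follow
# transformers.TrainingArguments conventions (assumed, not verified
# against a specific transformers release).
training_kwargs = {
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 32,
    "num_train_epochs": 3,
    "lr_scheduler_type": "linear",
    "warmup_steps": 500,
    "weight_decay": 0.01,
    "adam_beta1": 0.9,     # β₁
    "adam_beta2": 0.999,   # β₂
    "adam_epsilon": 1e-8,  # ε
}
```

Keeping the full dict in the card (rather than prose) makes it easy to diff against the actual training script.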

Evaluation

Results

Required. One row per evaluation setting.

  • Always specify macro vs. micro for F1.
  • Always name the exact split (test, dev, held-out-2024, etc.); never just "test set".
  • Include at least the macro F1. For multiclass models, including F1 for each classification tag is also suggested.
  • If you ran multiple seeds, report mean ± std. Example: 87.3 ± 0.4

| Model | Dataset | Split | Metric | Score |
|---|---|---|---|---|
| This model | dataset | test | F1 (macro) | 87.3 |
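For the multi-seed case, the mean ± std summary can be computed with the standard library alone (the seed scores below are placeholders, not real results):

```python
from statistics import mean, stdev

# Placeholder macro-F1 scores from three training seeds.
seed_scores = [87.0, 87.3, 87.6]
summary = f"{mean(seed_scores):.1f} ± {stdev(seed_scores):.1f}"
print(summary)  # → 87.3 ± 0.3
```

Note that `stdev` is the sample standard deviation (n−1 denominator); state which convention you used if it matters for comparison.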

Limitations

Suggested. Describe the model's limitations in terms of either performance or generalizability.

  • This model is trained primarily on chat-style text, so it may not generalize well to audio transcripts without additional adaptation.
  • It is tuned for elementary school tutoring; performance may degrade for other age groups or advanced subject matter.
  • Model performance is poor for [describe failure mode].

How to Use

Required. Provide a minimal working code snippet. If the model is not on HF Hub for inference, link to the GitHub repo instead.

Message Structure

Specify the needed message structure:

For example, from the Tutor CoPilot models: Tutor CoPilot models are trained on a message structure of 10 context messages followed by one target message, with the special tokens [PRETEXT] and [TEXT] used to demarcate the context and target messages. Messages are formatted as {speaker}: {message}, where the speaker is either tutor or student and the message is a lowercased, anonymized version of the original text. Names are anonymized with Edu-ConvoKit, replacing student and tutor names with [student] and [tutor], respectively. Models are trained on text with the structure

"[PRETEXT] {context} [TEXT] {target}"
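A minimal sketch of building that training string, assuming context messages are joined with single spaces and are already lowercased and anonymized (the function name and the join separator are assumptions for illustration, not the released preprocessing code):

```python
def format_example(context_msgs, target_msg):
    """Build a "[PRETEXT] {context} [TEXT] {target}" training string.

    context_msgs: list of (speaker, message) pairs, speaker in {"tutor", "student"};
    target_msg: a single (speaker, message) pair.
    Messages are assumed to be lowercased and name-anonymized already.
    """
    context = " ".join(f"{speaker}: {message}" for speaker, message in context_msgs)
    target = f"{target_msg[0]}: {target_msg[1]}"
    return f"[PRETEXT] {context} [TEXT] {target}"

example = format_example(
    [("tutor", "what is 3 x 4?"), ("student", "12")],
    ("tutor", "great job, [student]!"),
)
# example == "[PRETEXT] tutor: what is 3 x 4? student: 12 [TEXT] tutor: great job, [student]!"
```

Documenting the structure this concretely lets downstream users reproduce the exact input format at inference time.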

Running instructions

Specify how the model needs to be run:

```python
from transformers import pipeline

# Replace "your-org/model-name" with the model's Hugging Face Hub ID.
classifier = pipeline("text-classification", model="your-org/model-name")
result = classifier("Your input text here.")
# result is a list of dicts, e.g. [{"label": "...", "score": 0.97}]
```

Code and Maintainers

Include a link to the GitHub repository if the files for training are in a different repository.

Repository: https://github.com/scale-nssa/your-repo
Maintainers / Contributors: {{Team/project name}} (lead: {{Researcher Name}})


Bias and Fairness

Suggested. If demographic attributes are available, report performance disaggregated by group and note any disparities.

  • Evaluated groups: [e.g., age, gender, native language, region]
  • Metric(s): [e.g., F1 (macro), EM, accuracy]
  • Findings: [brief summary of gaps or parity]

| Demographic | Split | Metric | Score |
|---|---|---|---|
| Group A | test | F1 (macro) | 84.2 |
| Group B | test | F1 (macro) | 79.5 |
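One way to produce disaggregated rows like those above, sketched in plain Python (in practice `sklearn.metrics.f1_score` with `average="macro"` would be the usual choice; this standalone version just makes the computation explicit):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro F1 over the observed label set, computed from scratch."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

def disaggregate(records):
    """records: iterable of (group, y_true, y_pred); returns macro F1 per group."""
    by_group = defaultdict(lambda: ([], []))
    for group, t, p in records:
        by_group[group][0].append(t)
        by_group[group][1].append(p)
    return {g: macro_f1(t, p) for g, (t, p) in by_group.items()}
```

Report the per-group scores in the table and summarize any gap in the Findings bullet.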

License

Suggested; required for production. State the license and link to it.

This model is released under Apache 2.0.


Citation

Optional. Include if this model accompanies a paper or should be cited in academic work.

@misc{yourmodelyear,
  author = {Author Name},
  title  = {Model Name},
  year   = {year},
  url    = {https://huggingface.co/StanfordSCALE/model-name}
}