# SSB Screening Model (RTX6000x2)

## Model Summary
This model is a lightweight MLP classifier trained on NPZ-encoded inorganic crystal structure features for solid-state battery (SSB) screening proxies. It is intended to prioritize candidate structures, not to replace DFT or experimental validation.
- Architecture: MLP (input_dim=144, hidden_dims=[512, 256, 128], dropout varied by sweep)
- Output: 3-class classification proxy for screening tasks
- Training Regime: supervised training on curated NPZ dataset with class-weighted loss
- Best checkpoint: `checkpoint_epoch45.pt` (lowest observed val_loss in logs)
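The architecture bullets above can be sketched in PyTorch. This is a hypothetical reconstruction from the stated dimensions, not the repository's actual module; the class name `ScreeningMLP` and the ReLU/dropout ordering are assumptions.

```python
import torch
import torch.nn as nn

class ScreeningMLP(nn.Module):
    """Hypothetical reconstruction of the screening classifier:
    144-dim input -> [512, 256, 128] hidden layers -> 3 logits."""

    def __init__(self, input_dim=144, hidden_dims=(512, 256, 128),
                 num_classes=3, dropout=0.2):
        super().__init__()
        layers = []
        prev = input_dim
        for h in hidden_dims:
            # Linear -> ReLU -> Dropout per hidden layer (ordering assumed)
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.Dropout(dropout)]
            prev = h
        layers.append(nn.Linear(prev, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

A checkpoint such as `checkpoint_epoch45.pt` would then be restored with `model.load_state_dict(torch.load(path))`, assuming the saved object is a plain state dict.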
## Intended Use
- Primary: ranking/prioritization of SSB electrolyte candidates
- Not intended: absolute property prediction or experimental ground truth replacement
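For the ranking use case, one minimal approach is to score each candidate by the softmax probability of a favorable class and sort descending. The helper below is a sketch; treating class index 2 as "favorable" is an assumption, not part of the model card.

```python
import torch

def rank_candidates(model, features, target_class=2):
    """Hypothetical ranking helper: score candidates by the softmax
    probability of an (assumed) favorable class, highest first."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(features), dim=1)  # (N, 3)
    scores = probs[:, target_class]
    order = torch.argsort(scores, descending=True)
    return order, scores
```

The returned `order` gives candidate indices for prioritization; the raw `scores` are screening proxies only and should not be read as calibrated physical probabilities.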
## Training Data
- Dataset: `ssb_npz_v1` (curated NPZ features)
- Split: 80/10/10 (train/val/test)
- Features: composition + lattice + derived scalar statistics (144-dim)
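A loading and splitting sketch, assuming the NPZ archives store a feature matrix under key `"X"` and labels under `"y"` (the key names and the seeded-permutation split are assumptions, not documented in the card):

```python
import numpy as np

def load_split(path):
    """Load one NPZ file; assumes keys "X" (N, 144) and "y" (N,)."""
    data = np.load(path)
    X, y = data["X"], data["y"]
    assert X.shape[1] == 144, "expected 144-dim feature vectors"
    return X.astype(np.float32), y.astype(np.int64)

def split_indices(n, seed=0):
    """Deterministic 80/10/10 train/val/test index split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```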
## Evaluation
Metrics from the latest run summary:
- Val loss: 0.2857
- Val accuracy: 0.8119
- Holdout accuracy: 0.8096
- F1: 0.8060
- Precision: 0.7672
- Recall: 0.8694
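The metrics above could be reproduced on the holdout set with scikit-learn along the following lines. Macro averaging over the 3 classes is an assumption; the run summary does not state the averaging mode.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred):
    """Sketch of the metric computation behind the run summary
    (macro averaging over 3 classes is assumed)."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```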
## Limitations
- The model is a proxy classifier; it does not predict ground-truth physical properties.
- Performance is tied to the training distribution of `ssb_npz_v1`.
- Chemical regimes underrepresented in the training set may be poorly ranked.
## Training Configuration (abridged)
- Optimizer: AdamW
- LR: sweep (best ~3e-4)
- Weight decay: sweep (0.005–0.02)
- Scheduler: cosine
- Batch size: sweep (128–512)
- Epochs: sweep (20–60)
- Gradient accumulation: sweep (1–4)
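A minimal sketch of the optimizer/scheduler setup implied by the sweep's best region. The concrete values (lr=3e-4, weight_decay=0.01, 45 epochs) are one plausible point in the reported ranges, not the exact best-run config.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(144, 3)  # placeholder for the MLP
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
scheduler = CosineAnnealingLR(optimizer, T_max=45)  # anneal over ~45 epochs

for epoch in range(45):
    # ... forward/backward with class-weighted CrossEntropyLoss,
    # optimizer.step() every k accumulation steps ...
    scheduler.step()
```

With cosine annealing the learning rate decays smoothly toward zero by `T_max`, which matches the best checkpoint landing at epoch 45.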
## Citation
If you use this model, please cite the dataset and training pipeline from the Nexa_compute repository.