arxiv:2602.15084

TokaMind: A Multi-Modal Transformer Foundation Model for Tokamak Plasma Dynamics

Published on Feb 16
Abstract

We present TokaMind, an open-source foundation model framework for fusion plasma modeling, based on a Multi-Modal Transformer (MMT) and trained on heterogeneous tokamak diagnostics from the publicly available MAST dataset. TokaMind supports multiple data modalities (time-series, 2D profiles, and videos) with different sampling rates, robust missing-signal handling, and efficient task adaptation via selectively loading and freezing four model components. To represent multi-modal signals, we use a training-free Discrete Cosine Transform embedding (DCT3D) and provide a clean interface for alternative embeddings (e.g., Variational Autoencoders, VAEs). We evaluate TokaMind on TokaMark, the recently introduced MAST benchmark, comparing training and embedding strategies. Our results show that fine-tuned TokaMind outperforms the benchmark baseline on all but one task, and that, for several tasks, lightweight fine-tuning yields better performance than training the same architecture from scratch under a matched epoch budget. These findings highlight the benefits of multi-modal pretraining for tokamak plasma dynamics and provide a practical, extensible foundation for future fusion modeling tasks. Training code and model weights will be made publicly available.
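The abstract's key representational idea is a training-free embedding: instead of learning a tokenizer, each signal is projected onto low-frequency Discrete Cosine Transform coefficients, which works for 1D time-series, 2D profiles, and 3D video clips alike. The sketch below illustrates the general idea for the 3D case using `scipy.fft.dctn`; the function name, coefficient counts, and interface are illustrative assumptions, not the paper's actual DCT3D implementation.

```python
# Illustrative sketch of a training-free DCT-based embedding for a 3D modality.
# NOTE: dct3d_embed and its parameters are hypothetical; this is not the
# TokaMind DCT3D code, only the general low-frequency-DCT idea.
import numpy as np
from scipy.fft import dctn

def dct3d_embed(x: np.ndarray, n_coeffs=(8, 8, 8)) -> np.ndarray:
    """Embed a 3D array (e.g. a time x height x width video clip) as a
    fixed-length vector of its lowest-frequency 3D DCT-II coefficients."""
    coeffs = dctn(x, norm="ortho")          # full 3D DCT-II of the clip
    kt, kh, kw = n_coeffs
    return coeffs[:kt, :kh, :kw].ravel()    # keep low-frequency block, flatten

# Toy "video" diagnostic: 32 frames of 64x64 pixels.
clip = np.random.rand(32, 64, 64)
token = dct3d_embed(clip)
print(token.shape)  # (512,) -- fixed length regardless of clip resolution
```

Because the embedding is a fixed linear transform, signals with different sampling rates or resolutions all map to same-length tokens with no training, which is what makes swapping in a learned alternative (such as a VAE encoder) behind the same interface straightforward.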
