AtlasPatch: Whole-Slide Image Tissue Segmentation

Tissue segmentation model for whole-slide image (WSI) thumbnails, built on Segment Anything 2 (SAM2) Hiera-Tiny with only the normalization layers finetuned. The model takes a power-based WSI thumbnail at 1.25× magnification (resized to 1024×1024) and predicts a binary tissue mask. Training used segmented thumbnails. AtlasPatch codebase (WSI preprocessing & tooling): https://github.com/AtlasAnalyticsLab/AtlasPatch

Quickstart

Install dependencies:

# Install AtlasPatch
pip install atlas-patch

# Install SAM2 (required for tissue segmentation)
pip install git+https://github.com/facebookresearch/sam2.git

Recommended: use the same components we ship in AtlasPatch. The segmentation service will (a) load your WSI with the registered backend, (b) build a 1.25× power thumbnail, (c) resize it to 1024×1024, (d) run SAM2 with a full-frame box, and (e) return a mask aligned to the thumbnail.

import numpy as np
import torch
from pathlib import Path
from PIL import Image
from importlib.resources import files

from atlas_patch.core.config import SegmentationConfig
from atlas_patch.services.segmentation import SAM2SegmentationService
from atlas_patch.core.wsi import WSIFactory

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1) Config: packaged SAM2 Hiera-T config; leave checkpoint_path=None to auto-download from HF.
cfg_path = Path(files("atlas_patch.configs") / "sam2.1_hiera_t.yaml")
seg_cfg = SegmentationConfig(
    checkpoint_path=None,          # downloads Atlas-Patch/model.pth from Hugging Face
    config_path=cfg_path,
    device=str(device),
    batch_size=1,
    thumbnail_power=1.25,
    thumbnail_max=1024,
    mask_threshold=0.0,
)
segmenter = SAM2SegmentationService(seg_cfg)

# 2) Load a WSI and segment the thumbnail.
wsi = WSIFactory.load("slide.svs")  # backend auto-detected (e.g., openslide)
mask = segmenter.segment_thumbnail(wsi)  # mask.data matches the thumbnail size

# 3) Save the mask.
mask_img = Image.fromarray((mask.data > 0).astype(np.uint8) * 255)
mask_img.save("thumbnail_mask.png")
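A common next step (not part of AtlasPatch's API; a plain-NumPy sketch with a hypothetical helper) is to use the thumbnail mask to decide which patch locations contain enough tissue: divide the mask into grid cells corresponding to patches and keep cells whose tissue fraction exceeds a threshold.

```python
import numpy as np

def tissue_patch_coords(mask, cell=32, min_fraction=0.5):
    """Return (row, col) of grid cells whose tissue fraction >= min_fraction.

    mask: 2D binary array (the thumbnail-sized tissue mask).
    cell: cell size in mask pixels; each cell maps to a patch at full resolution.
    """
    h, w = mask.shape
    coords = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            # Fraction of tissue pixels inside this cell.
            if mask[r:r + cell, c:c + cell].mean() >= min_fraction:
                coords.append((r, c))
    return coords

# Toy example: tissue occupies the left half of a 64x64 mask.
toy = np.zeros((64, 64), dtype=np.uint8)
toy[:, :32] = 1
print(tissue_patch_coords(toy, cell=32))  # -> [(0, 0), (32, 0)]
```

Scaling the kept `(row, col)` cells back to level-0 slide coordinates is then a matter of multiplying by the thumbnail-to-slide downsample factor.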

Preparing the Thumbnail

AtlasPatch generates thumbnails at 1.25× objective power (power-based downsampling) and then resizes them to 1024×1024 px:

from atlas_patch.core.wsi import WSIFactory

wsi = WSIFactory.load("slide.svs")
thumb = wsi.get_thumbnail_at_power(power=1.25, interpolation="optimise")
thumb.thumbnail((1024, 1024))  # in-place downscale to fit within 1024×1024 (aspect ratio preserved)
thumb.save("thumbnail.png")
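The power-based downsample factor is simply the ratio of the slide's scanned objective power to the target power (e.g. 40× → 1.25× is a factor of 32). A sketch of the arithmetic, with a hypothetical helper and assumed inputs rather than AtlasPatch's internal implementation:

```python
def thumbnail_size(level0_dims, scan_power, target_power=1.25, max_side=1024):
    """Compute the thumbnail size for a power-based downsample.

    level0_dims: (width, height) of the slide at level 0.
    scan_power:  objective power the slide was scanned at (e.g. 40.0).
    """
    # Power-based downsample: e.g. 40x -> 1.25x gives a factor of 32.
    factor = scan_power / target_power
    w, h = (round(d / factor) for d in level0_dims)
    # Final resize caps the longer side at max_side, preserving aspect ratio.
    scale = min(1.0, max_side / max(w, h))
    return round(w * scale), round(h * scale)

print(thumbnail_size((100_000, 60_000), scan_power=40.0))  # -> (1024, 614)
```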

License and Commercial Use

This model is released under CC-BY-NC-SA-4.0, which strictly disallows commercial use of the model weights or any derivative works. Commercialization includes selling the model, offering it as a paid service, using it inside commercial products, or distributing modified versions for commercial gain. Non-commercial research, experimentation, educational use, and use by academic or non-profit organizations are permitted under the license terms. If you need commercial rights, please contact the authors to obtain a separate commercial license. See the LICENSE file in this repository for full terms.

Citation

If you use this model, please cite our paper:

@article{atlaspatch2025,
  title   = {AtlasPatch: An Efficient and Scalable Tool for Whole Slide Image Preprocessing in Computational Pathology},
  author  = {Alagha, Ahmed and Leclerc, Christopher and Kotp, Yousef and Metwally, Omar and Moras, Calvin and Rentopoulos, Peter and Rostami, Ghodsiyeh and Nguyen, Bich Ngoc and Baig, Jumanah and Khellaf, Abdelhakim and Trinh, Vincent Quoc-Huy and Mizouni, Rabeb and Otrok, Hadi and Bentahar, Jamal and Hosseini, Mahdi S.},
  journal = {arXiv preprint arXiv:2602.03998},
  year    = {2025}
}