arXiv:2602.07954

Bielik Guard: Efficient Polish Language Safety Classifiers for LLM Content Moderation

Published on Feb 8 · Submitted by Krzysztof Wróbel on Feb 12

Abstract

AI-generated summary: Bielik Guard is a compact Polish-language safety classifier family with two variants that categorize content across five safety domains while maintaining high efficiency and accuracy.

As Large Language Models (LLMs) become increasingly deployed in Polish-language applications, the need for efficient and accurate content safety classifiers has become paramount. We present Bielik Guard, a family of compact Polish-language safety classifiers comprising two model variants: a 0.1B-parameter model based on MMLW-RoBERTa-base and a 0.5B-parameter model based on PKOBP/polish-roberta-8k. Fine-tuned on a community-annotated dataset of 6,885 Polish texts, these models classify content across five safety categories: Hate/Aggression, Vulgarities, Sexual Content, Crime, and Self-Harm. Our evaluation demonstrates that both models achieve strong performance on multiple benchmarks. The 0.5B variant offers the best overall discrimination capability, with F1 scores of 0.791 (micro) and 0.785 (macro) on the test set, while the 0.1B variant demonstrates exceptional efficiency. Notably, Bielik Guard 0.1B v1.1 achieves superior precision (77.65%) and a very low false-positive rate (0.63%) on real user prompts, outperforming HerBERT-PL-Guard (31.55% precision, 4.70% FPR) despite identical model size. The models are publicly available and are designed to enable appropriate responses rather than simple content blocking, particularly for sensitive categories such as self-harm.

Community

Paper author and submitter:

Bielik Guard is a family of compact Polish-language safety classifiers (0.1B and 0.5B parameters) that accurately detect harmful content across five categories. Both models achieve strong benchmark performance: the 0.5B model offers the best overall F1 scores, while the 0.1B model delivers high precision with a low false-positive rate. The classifiers are designed to enable nuanced responses rather than simple content blocking.

Smaller model: https://huggingface.co/speakleash/Bielik-Guard-0.1B-v1.1
Larger model: https://huggingface.co/speakleash/Bielik-Guard-0.5B-v1.1
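For readers who want to try the checkpoints, here is a minimal sketch of scoring a Polish prompt with the smaller model through the transformers library. It assumes the released checkpoint exposes a standard sequence-classification head with per-category sigmoid (multi-label) scores and that the five category names are stored in config.id2label; these are assumptions about the packaging, not details confirmed on this page, so check the model cards above for actual usage.

```python
# Minimal sketch: multi-label safety scoring with Bielik Guard.
# Assumptions (see lead-in): sequence-classification head, sigmoid
# per-label scoring, and category names stored in config.id2label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "speakleash/Bielik-Guard-0.1B-v1.1"  # or speakleash/Bielik-Guard-0.5B-v1.1
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Przykładowy tekst do moderacji."  # "An example text to moderate."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# One sigmoid score per safety category; labels come from the model config.
scores = torch.sigmoid(logits)[0]
for idx, score in enumerate(scores.tolist()):
    label = model.config.id2label.get(idx, f"label_{idx}")
    print(f"{label}: {score:.3f}")
```

A deployment would then compare each category score against a per-label threshold tuned on a validation set; for sensitive categories such as self-harm, a detection might route the conversation to a supportive response rather than a hard block, in line with the design goal stated in the abstract.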


Models citing this paper: 3

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 1