# Claw-Eval

End-to-end transparent benchmark for AI agents acting in the real world.

Paper | Leaderboard | Code
## Dataset Structure

### Splits

| Split | Examples | Description |
|---|---|---|
| `general` | 161 | Core agent tasks across 24 categories (communication, finance, ops, productivity, etc.) |
| `multimodal` | 101 | Multimodal agentic tasks requiring perception and creation (webpage generation, video QA, document extraction, etc.) |
| `multi_turn` | 38 | Multi-turn conversational tasks where the agent interacts with a simulated user persona to clarify needs and provide advice |
### Fields

| Field | Type | Description |
|---|---|---|
| `task_id` | string | Unique task identifier |
| `query` | string | Task instruction / description |
| `fixture` | list[string] | Fixture files required for the task (available in `data/fixtures.tar.gz`) |
| `language` | string | Task language (`en` or `zh`) |
| `category` | string | Task domain |
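A note on the `fixture` field: the archive `data/fixtures.tar.gz` referenced above can be fetched with `huggingface_hub` before running tasks. A minimal sketch, assuming the repo is publicly readable (`fetch_fixtures` is a hypothetical helper name, not part of the dataset):

```python
import tarfile
from huggingface_hub import hf_hub_download

def fetch_fixtures(dest: str = "fixtures") -> str:
    """Download data/fixtures.tar.gz from the dataset repo and unpack it."""
    archive = hf_hub_download(
        repo_id="claw-eval/Claw-Eval",
        filename="data/fixtures.tar.gz",
        repo_type="dataset",
    )
    # Paths listed in a task's `fixture` field are relative to this directory
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    return dest
```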
## Usage

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("claw-eval/Claw-Eval")

# Load a specific split
general = load_dataset("claw-eval/Claw-Eval", split="general")
multimodal = load_dataset("claw-eval/Claw-Eval", split="multimodal")
multi_turn = load_dataset("claw-eval/Claw-Eval", split="multi_turn")

# Inspect a sample
print(general[0])
```
## Acknowledgements
Our test cases are built on the work of the community. We draw from and adapt tasks contributed by OpenClaw, PinchBench, OfficeQA, OneMillion-Bench, Finance Agent, and Terminal-Bench 2.0.
## Citation

If you use Claw-Eval in your research, please cite:

```bibtex
@article{ye2026claw,
  title={Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents},
  author={Ye, Bowen and Li, Rang and Yang, Qibin and Liu, Yuanxin and Yao, Linli and Lv, Hanglong and Xie, Zhihui and An, Chenxin and Li, Lei and Kong, Lingpeng and others},
  journal={arXiv preprint arXiv:2604.06132},
  year={2026}
}
```
## Core Contributors

Bowen Ye (PKU), Rang Li (PKU), Qibin Yang (PKU), Zhihui Xie (HKU), Yuanxin Liu (PKU), Linli Yao (PKU), Hanglong Lyu (PKU), Lei Li (HKU, project lead)

Advisors: Tong Yang (PKU), Zhifang Sui (PKU), Lingpeng Kong (HKU), Qi Liu (HKU)
## Contribution

We welcome contributions of any kind. Let us know if you have any suggestions!
## License
This dataset is released under the MIT License.