$OneMillion-Bench: How Far Are Language Agents from Human Experts?
Abstract
A new benchmark evaluates language models on complex, real-world professional tasks requiring multi-step reasoning, evidence resolution, and domain-specific decision-making across multiple industries.
As language models (LMs) evolve from chat assistants into long-horizon agents capable of multi-step reasoning and tool use, existing benchmarks remain largely confined to structured or exam-style tasks that fall short of real-world professional demands. To this end, we introduce $OneMillion-Bench, a benchmark of 400 expert-curated tasks spanning Law, Finance, Industry, Healthcare, and Natural Science, built to evaluate agents on economically consequential scenarios. Unlike prior work, the benchmark requires retrieving authoritative sources, resolving conflicting evidence, applying domain-specific rules, and making decisions under real-world constraints, where correctness depends as much on the reasoning process as on the final answer. We adopt a rubric-based evaluation protocol that scores factual accuracy, logical coherence, practical feasibility, and professional compliance, and we focus on expert-level problems to ensure meaningful differentiation across agents. Together, $OneMillion-Bench provides a unified testbed for assessing agentic reliability, professional depth, and practical readiness in domain-intensive scenarios.
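For illustration, here is a minimal sketch of how a rubric-based protocol like this could aggregate the four scored dimensions. The dimension names come from the abstract, but the weights, the 0-10 scale, and the `RubricScore` structure are assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass

# The four rubric dimensions named in the abstract; the weights and the
# 0-10 scale below are illustrative assumptions, not the paper's values.
WEIGHTS = {
    "factual_accuracy": 0.40,
    "logical_coherence": 0.25,
    "practical_feasibility": 0.20,
    "professional_compliance": 0.15,
}

@dataclass
class RubricScore:
    """Per-dimension grades assigned by an expert (or judge model), 0-10."""
    factual_accuracy: float
    logical_coherence: float
    practical_feasibility: float
    professional_compliance: float

    def aggregate(self) -> float:
        """Weighted average across dimensions, normalized to [0, 1]."""
        total = sum(WEIGHTS[d] * getattr(self, d) for d in WEIGHTS)
        return total / 10.0

# Example: a response that is factually strong but procedurally sloppy,
# so professional compliance drags the overall score down.
score = RubricScore(8.0, 7.0, 6.0, 4.0)
print(f"aggregate rubric score: {score.aggregate():.2f}")  # 0.68
```

Weighting factual accuracy highest reflects the abstract's emphasis that correctness of the reasoning process matters as much as the final answer; any real deployment would calibrate these weights per domain.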
Community
Just read through the $OneMillion-Bench report and honestly it reframes how we should be thinking about agentic evaluation.
The core insight is simple but sharp: instead of asking "did the model get the right answer?", ask "how much would a senior professional charge to do this work?" They built 400 tasks across Law, Finance, Healthcare, Natural Science, and Industry — curated by actual domain experts over 2,000+ hours — and priced each task based on real market wages. Total value: over $1M. Hence the name.
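To make the pricing idea concrete, here's roughly what wage-based valuation looks like in code. The hours and hourly rates below are my own made-up placeholders, not figures from the report; only the scheme itself (task price = expert hours × market hourly wage) comes from the paper.

```python
# Hypothetical task pricing in the spirit of the report: each task is
# valued at (expert hours required) x (market hourly wage for that role).
# All numbers below are placeholders, not the benchmark's actual data.
tasks = [
    {"domain": "Law",        "hours": 12, "hourly_wage": 450},
    {"domain": "Finance",    "hours": 8,  "hourly_wage": 400},
    {"domain": "Healthcare", "hours": 10, "hourly_wage": 350},
]

def task_price(task: dict) -> int:
    return task["hours"] * task["hourly_wage"]

total = sum(task_price(t) for t in tasks)
print(f"portfolio value for {len(tasks)} sample tasks: ${total:,.0f}")
# Prices on this order of magnitude, summed over 400 tasks, would
# plausibly clear the $1M total the benchmark is named for.
```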
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- AgentIF-OneDay: A Task-level Instruction-Following Benchmark for General AI Agents in Daily Scenarios (2026)
- AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts (2026)
- JADE: Expert-Grounded Dynamic Evaluation for Open-Ended Professional Tasks (2026)
- PLawBench: A Rubric-Based Benchmark for Evaluating LLMs in Real-World Legal Practice (2026)
- DR-Arena: an Automated Evaluation Framework for Deep Research Agents (2026)
- ReThinker: Scientific Reasoning by Rethinking with Guided Reflection and Confidence Control (2026)
- SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks (2026)