
Leaderboards

SEAL LLM Leaderboards evaluate frontier LLM capabilities, providing insight into models through robust datasets and precise evaluation criteria that benchmark the latest AI advancements.

MCP Atlas: Evaluating real-world tool use through the Model Context Protocol (MCP)
SWE-Bench Pro (Public Dataset): Evaluating long-horizon software engineering tasks in public open-source repositories
SWE-Bench Pro (Private Dataset): Evaluating long-horizon software engineering tasks in commercial-grade private repositories
SciPredict: Forecasting scientific experiment outcomes
Humanity's Last Exam: Challenging LLMs at the frontier of human knowledge
Humanity's Last Exam (Text Only): Challenging LLMs at the frontier of human knowledge
AudioMultiChallenge: Evaluating spoken dialogue systems in multi-turn interactions
Professional Reasoning Benchmark - Finance: Evaluating professional reasoning in finance
Professional Reasoning Benchmark - Legal: Evaluating professional reasoning in legal practice
Remote Labor Index (RLI): Evaluating AI agents' ability to perform real-world, economically valuable remote work
PropensityBench: Simulating real-world pressure to choose between safe and harmful behavior
VisualToolBench (VTB): Evaluating how LLMs dynamically interact with and reason about visual information
MultiNRC: Multilingual Native Reasoning Evaluation Benchmark for LLMs
MultiChallenge: Assessing models across diverse, interdisciplinary challenges
Fortress: Frontier Risk Evaluation for National Security and Public Safety
MASK: Evaluating model honesty when pressured to lie
EnigmaEval: Evaluating model performance on complex, multi-step reasoning tasks
VISTA: Vision-Language Understanding benchmark for multimodal models
TutorBench: Evaluating model performance on common tutoring tasks for high school and AP-level subjects

Frontier AI Evaluations

We conduct high-complexity evaluations to expose model failures, prevent benchmark saturation, and push model capabilities, while continuously evaluating the latest frontier models.

Scaling with Human Expertise

Humans design complex evaluations and define precise criteria to assess models, while LLMs scale the evaluation process, ensuring efficiency and alignment with human judgment.

Robust Datasets

Our leaderboards are built on carefully curated evaluation sets, combining private datasets to prevent overfitting and open-source datasets for broad benchmarking and comparability.

Run evaluations on frontier AI capabilities

If you'd like to add your model to this leaderboard or a future version, please contact leaderboards@scale.com. To ensure leaderboard integrity, we require that models can only be featured the FIRST TIME an organization encounters the prompts.