Acing this new AI exam — which its creators say is the toughest in the world — might point to the first signs of AGI | Live Science

Researchers at the Center for AI Safety and Scale AI have published “Humanity’s Last Exam” — a test designed to measure how close today’s most powerful artificial intelligence (AI) models are to meeting or exceeding human-level knowledge across several domains.

The test was launched in January 2025, but scientists outlined the framework and their thinking behind its design for the first time in a new study published Jan. 28 in the journal Nature. It contains a corpus of 2,500 questions across more than 100 subjects, with input from more than 1,000 subject-matter experts from 500 institutions across 50 countries.

The exam consists of multiple-choice and short-answer questions, each of which has a known solution that is “unambiguous and easily verifiable but cannot be quickly answered by internet retrieval.”

A test for AGI is closer to being solved — but it may be flawed | TechCrunch

A well-known test for artificial general intelligence (AGI) is closer to being solved. But the test's creators say this points to flaws in the test's design, rather than a bona fide research breakthrough.

In 2019, Francois Chollet, a leading figure in the AI world, introduced the ARC-AGI benchmark, short for "Abstraction and Reasoning Corpus for Artificial General Intelligence." Designed to evaluate whether an AI system can efficiently acquire new skills outside the data it was trained on, ARC-AGI, Chollet claims, remains the only AI test to measure progress toward general intelligence (although others have been proposed).

Until this year, the best-performing AI could solve just under a third of the tasks in ARC-AGI. Chollet blamed the industry's focus on large language models (LLMs), which he believes aren't capable of actual "reasoning."
