AI models outperform human candidates in Japanese university admissions

O.D.
English Section / 22 January



Advanced artificial intelligence models have achieved spectacular results after being tested on subjects from Japan's 2026 university entrance exam. According to the Xinhua news agency, citing the financial daily Nikkei, the model developed by OpenAI took first place in the ranking, obtaining maximum marks in nine subjects. The experiment was carried out jointly by Nikkei and the Japanese artificial intelligence startup LifePrompt. The results come in a global context of rapidly improving AI performance, which is raising ever more serious questions about the future of education, academic assessment and the role of standardized tests in a world dominated by increasingly sophisticated algorithms.

The ultimate test: 15 subjects, near-perfect scores

The latest models developed by AI giants including OpenAI and Google were put to the test on 15 core subjects in the Japanese university entrance exam, which was held on January 17-18. The best performer was OpenAI's latest-generation model, GPT-5.2 Thinking, which achieved an overall score of 96.9 out of 100, with top marks in nine subjects. Google's Gemini 3.0 Pro came in second with a score of 91.4. By comparison, Japan's entrance exam covers a total of 21 subjects across seven disciplines, and the estimated average score of human candidates in the 15 most popular subjects was just 58.1, according to data cited by Nikkei.

A spectacular leap in a single year

This is not the first time that OpenAI models have been evaluated on such academic tests, but the progress made in a short period of time is considered remarkable. In 2024 the company's models obtained an average of 66 points; in 2025 this rose to 91; and in 2026 they approached perfection. This rapid evolution illustrates the accelerated pace of artificial intelligence development and its growing ability to solve complex problems comparable to those found in national educational assessments.

Strengths and limitations: exact sciences dominate

The analysis carried out by Nikkei and LifePrompt shows that AI models excel especially in mathematics, physics, chemistry and biology, disciplines that involve logical reasoning, calculation and clearly structured problem-solving. On the other hand, performance was weaker in Japanese and geography. For example, while the models were able to correctly interpret standardized geometric shapes and graphs, they lost points on questions that involved analyzing world maps. This indicates persistent limitations in recognizing and interpreting irregular and complex graphical information.

The results raise major questions about the future of traditional testing systems. If an AI model can significantly outperform average human candidates on a national entrance exam, experts warn, a profound rethinking of how skills are assessed, as well as of the role of teachers and the educational process itself, will be necessary. In Japan, one of the countries with the most competitive and rigorous university admissions systems, these results are seen not only as a technological success but also as a wake-up call for future education policies.

