NEW LAB PARTNER

May 12, 2025

TIME Magazine

Why AI models could help prevent—or cause—the next pandemic

BY ANDREW R. CHOW

A NEW STUDY CLAIMS THAT AI MODELS like ChatGPT and Claude now outperform Ph.D.-level virologists in problem-solving in wet labs, where scientists analyze chemicals and biological material. This discovery is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But nonexperts could also weaponize the models to create catastrophic bioweapons.

The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic-prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test that measured the ability to troubleshoot complex lab procedures and protocols.

While Ph.D.-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI's o3 reached 43.8% accuracy. Google's Gemini 2.5 Pro scored 37.6%.

Seth Donoughe, a research scientist at SecureBio and a co-author of the paper, says the results make him a "little nervous," because for the first time in history, virtually anyone has access to a nonjudgmental AI virology expert that could walk them through the complex lab processes needed to create bioweapons.
