NEW LAB PARTNER
TIME Magazine
May 12, 2025
Why AI models could help prevent—or cause—the next pandemic
A NEW STUDY CLAIMS THAT AI MODELS like ChatGPT and Claude now outperform Ph.D.-level virologists at problem-solving in wet labs, where scientists analyze chemicals and biological material. This finding is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But nonexperts could also weaponize the models to create catastrophic bioweapons.
The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic-prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test that measured the ability to troubleshoot complex lab procedures and protocols.
While Ph.D.-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI's o3 reached 43.8% accuracy. Google's Gemini 2.5 Pro scored 37.6%.
Seth Donoughe, a research scientist at SecureBio and a co-author of the paper, says the results make him a "little nervous," because for the first time in history, virtually anyone has access to a nonjudgmental AI virology expert that could walk them through complex lab processes to create bioweapons.
