Ground zero for AI safety
TIME Magazine | January 27, 2025
Inside the U.K.'s bold experiment in technology governance
In May 2023, three of the most important CEOs in artificial intelligence walked through the iconic black front door of No. 10 Downing Street, the official residence of the U.K. Prime Minister, in London. Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic were there to discuss AI, following the blockbuster release of ChatGPT six months earlier.
After posing for a photo opportunity with then Prime Minister Rishi Sunak in his private office, the men filed through into the cabinet room next door and took seats at its long, rectangular table. Sunak and U.K. government officials lined up on one side; the three CEOs and some of their advisers sat facing them. After a polite discussion about how AI could bring opportunities for the U.K. economy, Sunak surprised the visitors by saying he wanted to talk about the risks. The Prime Minister wanted to know more about why the CEOs had signed what he saw as a worrying declaration arguing that AI was as risky as pandemics or nuclear war, according to two people with knowledge of the meeting. He invited them to attend the world's first AI Safety Summit, which the U.K. was planning to host that November. And he managed to get each to agree to grant his government prerelease access to their companies' latest AI models, so that a task force of British officials, established a month earlier and modeled on the country's COVID-19 vaccine unit, could test them for dangers.