HOW TO MAKE AI SAFE
TIME Magazine
June 09, 2025
I'm genuinely unsettled by the behavior unrestrained AI is already demonstrating. In one experiment, when an AI model learns it is scheduled to be replaced, it inserts its code into the computer where the new version is going to run, ensuring its own survival. In a separate study, when AI models realize they are going to lose at chess, they hack the computer in order to win.
Cheating, manipulating others, lying, deceiving, especially toward self-preservation: these behaviors show how AI might pose significant threats that we are currently ill-equipped to respond to.
The examples we have so far are from experiments in controlled settings and fortunately do not have major consequences, but this could quickly change as capabilities and the degree of agency increase. Far more serious outcomes await if AI systems are granted greater autonomy, achieve human-level or greater competence in sensitive domains, and gain access to critical resources like the internet, medical laboratories, or robotic labor.
The commercial drive to release powerful agents is immense, and we don't have the scientific and societal guardrails to make sure the way forward is safe. We're all in the same car on a foggy mountain road. While some of us are keenly aware of the dangers ahead, others, fixated on the economic rewards awaiting some at the destination, are urging us to ignore the risks and slam down the gas pedal. We need to get down to the hard work of building guardrails around the dangerous stretches that lie ahead.
