Several prominent politicians and businesspeople have joined the chorus of voices demanding that global authorities do something about the climate catastrophe and the existential threats posed by artificial intelligence.
An open letter calling for action against the growing threats of climate change, pandemics, nuclear weapons, and uncontrolled artificial intelligence was signed by Virgin Group founder Richard Branson, former UN Secretary-General Ban Ki-moon, and Charles Oppenheimer, grandson of American physicist J. Robert Oppenheimer.
The message urges world leaders to take a long-term approach, showing “determination to resolve intractable problems, not just manage them, wisdom to make decisions based on scientific evidence and reason, and humility to listen to all those affected.”
The letter warns that the state of our planet is critical and that the world’s population faces a number of existential threats, yet leaders are not responding with the necessary wisdom and urgency. According to a spokesperson, it was published on Thursday and shared with governments around the world.
The effects of these threats are already apparent: a fast-changing climate, a pandemic that killed millions and cost trillions, and conflicts in which the use of nuclear weapons has been openly discussed. They could get worse, and some of them threaten the survival of all life on Earth.
The signatories emphasized the need for immediate global cooperation in a variety of areas, such as providing funding for the fossil fuel transition, ratifying a fair pandemic treaty, reviving nuclear weapons talks, and establishing the global governance framework necessary to harness the power of artificial intelligence for positive impact.
The letter was posted on Thursday by The Elders, a non-governmental organization (NGO) founded by Branson and former South African President Nelson Mandela to promote peace and address human rights issues worldwide.
The message is further supported by the Future of Life Institute, a charity founded by Jaan Tallinn, co-founder of Skype, and MIT cosmologist Max Tegmark. The institute’s mission is to guide AI and other disruptive technologies toward positive uses and away from potential negative ones.
Technology is not intrinsically “evil,” Tegmark and his organization argue, but it is a “tool” that could cause serious problems if it fell into the hands of people with malicious intentions.
Learning from mistakes has always been the traditional way of steering new technologies toward positive uses, Tegmark told CNBC in an interview. We first came up with fire, and then we came up with the fire extinguisher. “We learned from our mistakes and invented the car, then we invented the seatbelt, traffic lights, and speed limits.”
‘Engineering for Safety’
But when the power of a technology crosses a certain threshold, Tegmark said, the learning-from-mistakes approach “becomes terrible.” As a self-proclaimed nerd, he thinks of it as safety engineering.
Before sending humans to the moon, we meticulously considered every possible outcome of loading them into explosive fuel tanks and launching them into an environment where no one could help them. In the end, that is why everything worked out beautifully.
“That wasn’t ‘doomers,’” he added. It was a matter of safety engineering. We will need the same kind of safety engineering for nuclear weapons, synthetic biology, and ever-more-powerful AI in the future.
The letter was released in the lead-up to the Munich Security Conference, where diplomats, military chiefs, and government officials will meet to discuss global security amid growing crises, including the conflicts between Israel and Hamas and between Russia and Ukraine. Tegmark plans to attend the gathering to promote the letter’s message.
Last year, the Future of Life Institute published an open letter, signed by prominent figures including Tesla’s Elon Musk and Apple co-founder Steve Wozniak, that urged artificial intelligence labs such as OpenAI to pause development of models more capable than GPT-4, then the most advanced model from the company led by Sam Altman.
The technologists called for the pause in AI development to prevent a “loss of control” over society, which they warned could lead to mass elimination of jobs and computers outwitting humans.