Dr. Abdullah bin Musa Al Tayer
In classical Arabic, we say of a person who has brought trouble upon himself: "Your own hands tied the knot, and your own mouth blew it up," and in the dialect of some of our Gulf neighbors, "The bread you baked is the bread you eat." Man's offenses against himself are many, and his offenses against others of his kind, and on a larger scale, are nothing unusual, for he is an ignorant oppressor. But when he threatens to harm the entire human race, that is an outrage which must be curbed by force. So say those who know, and I am only repeating them.
Before amateurs and artificial intelligence enthusiasts seize on my words and rush in with the clubs of justification and ignorance, having read no further than the title, I invite Arab specialists to write objectively on this matter that worries the West in general and Americans in particular. What worries them is no balm for what is in our own chests, for we are all in humanity's one boat, and our destiny is one.
In late March 2023, more than 1,000 technology leaders, researchers, and other experts working in or concerned with artificial intelligence (the number of signatories has since grown to more than 27,000) signed an open letter warning that AI technologies pose "profound risks to society and humanity." The group, which included Elon Musk and Yoshua Bengio, urged the labs to halt development of their most powerful systems for six months so that the risks behind the technology could be better understood. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," they wrote in their appeal, noting that scientists are still very limited in their ability to understand the errors that very powerful AI systems can make. "We need to be very careful," said one of the letter's principal signatories, University of Montreal artificial intelligence professor Yoshua Bengio, who has spent the past four decades developing the technology that drives systems like GPT-4.
In an article for the New York Times, Cade Metz, a writer specializing in artificial intelligence, classified the risks of AI across three time scales. In the short term, he argues, the danger is disinformation: it is difficult to verify the accuracy of the information these systems generate when answering questions. In the medium term, the technology will affect employment, and a large number of people will lose their jobs to a tool that works faster, and perhaps for free or at low cost; yet the challenges here are matched by real opportunities, as the same technology creates new, high-quality jobs. On the long-term horizon, which is the most dangerous, Metz writes that the risk lies in losing control of these technologies: "some of the people who signed the letter also believe artificial intelligence could slip outside our control or destroy humanity. But many experts say that's wildly overblown."
Eliezer Yudkowsky, a writer specializing in artificial intelligence whose views appear on university syllabuses for students in the field, wrote an article in the famous magazine "Time" welcoming the idea of the proposed pause but calling it insufficient, and demanding radical measures to halt the experiments permanently and by force. He grounds his hardline stance by saying that if someone builds a too-powerful artificial intelligence under present conditions, "I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
Yudkowsky calls for a halt to large new training runs, indefinitely, worldwide, and without exceptions for governments or militaries, excepting only AI trained to solve problems "in biology and biotechnology, not trained on text from the internet." He demanded the signing of a multilateral international agreement to prevent banned AI activities from shifting elsewhere, the tracking of all GPUs sold, and, "if intelligence says that a country outside the agreement" has deviated from it, readiness "to destroy a rogue datacenter by airstrike."
The writer believes that the nuclear states party to the Nuclear Non-Proliferation Treaty should be prepared to cooperate, even by nuclear force, to reduce the risks of large AI training runs.
The opinion of a specialist carries weight, but in the end it remains an opinion; it is for scientists to determine the scale of these fears and the mechanisms for addressing them. Yet when such an opinion is joined by an appeal signed by some 27,000 scientists, specialists, and interested observers, there is genuine cause for concern about the repercussions of artificial intelligence, whose applications are being celebrated so obsessively.
Copyright © 2023 The Eastern Herald.