The billionaire and acclaimed tech visionary Elon Musk has just issued another dire warning about the potential of artificial intelligence (AI). In a Twitter message (where else, one might ask, these days) a few days ago he warned that the technology poses a greater risk to world peace than the current conflict with North Korea. His tweet fits seamlessly into a cascade of AI warnings by the entrepreneur, who only a month earlier had described AI as the “biggest risk we face as a civilization.”
Now, Elon Musk occupies himself with all sorts of future technologies, and we must assume that he knows what he is talking about. After all, he invests a lot of money in them. In addition, he has extensive access to first-hand information on the subject through the OpenAI project, which he co-founded and which is intended to foster the “safe” development and application of AI. We should thus take him and his statements seriously, even though they are diametrically opposed to those of most of his colleagues in Silicon Valley, who seem to see in AI something like a modern holy grail of technological achievement for the benefit of humanity (which upon closer inspection, however, turns out to be a golden instrument for the value and profit creation of their companies).
Musk is joined by numerous other prominent voices who have expressed similar warnings. Although Stephen Hawking agrees that AI can be of great benefit, he warns that it could also destroy mankind. Two years ago Bill Gates said that within just a few decades AI will be strong enough to be a concern. And already in 1987 Claude Shannon, one of the pioneers of the information age, expressed a concern many people share today: “I can visualize a time in the future when we will be to robots as dogs are to humans.” Finally, the technology philosopher Gray Scott believes: “There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”
What most people do not even have on their radar: AI is already part of our everyday life. The number of programs within Google’s software suite that use AI has grown from 100 in 2012 to more than 3,000 only four years later. In Tokyo, during the next Olympic Games, AI-controlled self-driving cars are planned to transport athletes, tourists and locals through the metropolis. Today computers with AI software optimize energy consumption in industrial plants, create cancer diagnoses, write journalistic texts at the touch of a button, advise bank customers on their optimal investment strategy, and support lawyers dealing with complex legal cases. They read letters and e-mails from consumers and recognize the sender’s degree of annoyance, and computers in call centers chat on the phone like human employees. IBM’s AI system “Watson” can do even more: it created the trailer for the horror movie “Morgan”, for which it analyzed hundreds of trailers of other horror films and thrillers for composition, cut, content, music, voices and emotions. Another AI system called “Benjamin” wrote the entire screenplay for the science-fiction short film “Sunspring”.
AI has also long since moved into the field of medicine. For example, AI pattern-recognition algorithms have become as good at diagnosing skin cancer as dermatologists (both achieved an accuracy of 91% in one study). With the corresponding app on our smartphone, we could examine our skin for dangerous melanomas ourselves instead of going to the dermatologist every year. And machines are no longer only helping physicians and genetic engineers deal with gigantic data sets; they have also become surgical operators. The surgical robot “Smart Tissue Autonomous Robot” (STAR) already surpasses human surgeons in precision during procedures on pigs.
And whoever still thinks that AI cannot be creative should listen to its music or look at its art. AI algorithms are already composing entire concert pieces: the American composer David Cope from the University of California in Santa Cruz developed the software EMI (Experiments in Musical Intelligence), which writes entire symphonies and operas. It has become particularly well known for music in the style of Bach’s compositions. In a kind of musical Turing test, listeners were not able to distinguish its pieces from real Bach works. And at the Delft University of Technology, within the project “The Next Rembrandt”, an AI was trained to paint a picture in the style of Rembrandt. The result was impressive: art experts were not able to distinguish the AI’s painting from a genuine work of the master. But the creativity of AI goes even further: on IBM’s website ibmchefwatson.com one can create cooking recipes that, with the help of AI algorithms, are optimized for the user’s personal taste and tailored to his individual needs. Will we soon all cook as well as Michelin star chefs with the help of artificial intelligence? “Cognitive cooking” could be the next major culinary trend.
AI possesses the potential for a breathtaking and equally uncanny technological development. This can also be attributed to the structure of its software: unlike the early chess computers, today’s AI is based on genuinely learning – one is tempted to say: living – neural networks, whose design is modeled on the architecture of the human brain. These are then trained by imitating the learning and thinking processes of their human models, making them better and better over time. Referring to the many layers of such neural networks, scientists speak of “deep learning”.
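To make the idea concrete, here is a minimal sketch of how such a layered network learns: a tiny two-layer neural network, written in plain NumPy, that teaches itself the XOR function by repeatedly adjusting its connection weights via gradient descent. The layer sizes, learning rate and iteration count are illustrative choices for this toy example, not details taken from any of the systems mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: a classic task a single layer cannot solve,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights of a 2 -> 8 -> 1 network, randomly initialized
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: compute the network's current answers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error,
    # propagated from the output layer back to the input layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every weight a little in the direction that reduces the error
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
print(predictions)
```

Stacking many more such layers — and feeding them millions of examples instead of four — is, in essence, what turns this toy into the “deep learning” systems described above.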
Professionals in the (very multifaceted) Asian game of Go were recently shocked at how fast the computer “AlphaGo” improved its skills. By studying millions of games and constantly playing against itself, the learning algorithm developed tactics against which the human world champion was powerless; it even opened up entirely new strategic paradigms previously unknown to the game. Go experts even attribute to the machine a playing personality of its own, apparently acquired in the course of its learning phase. The world’s number one Go player said in May 2017: “Last year, I think the way AlphaGo played was pretty close to human beings, but today I think he plays like the God of Go.”
In view of these developments, as harmless as each may sound, we should listen carefully when the AI experts themselves issue warnings and demand regulation. Leading AI researchers have recently appealed to the public to start engaging with their research. It is, they claim, “imperative that society now begins to think about how it draws the maximum benefit from AI research”. Machine learning has made “dramatic progress” in the past two decades, which is why AI is one of those technologies that will change our lives drastically. The AI pioneer Stuart Russell draws the drastic image of us humans sitting in a car that is driving towards a cliff, hoping that it runs out of gas before we plunge into the abyss. Like Elon Musk, Russell claims that AI can be as dangerous to humanity as nuclear weapons.
It may seem astonishing that representatives of companies with a strong commercial interest in the future development of AI (such as Musk and Gates), or scientists whose careers depend heavily on the growth of this field (like Russell), articulate such drastic warnings and literally beg for governmental interference and a regulatory framework. These warnings are rooted in a deep concern, among entrepreneurs as well as researchers, that policy makers overlook dramatic technological developments, do not take them seriously enough, or simply do not understand them. Technologies such as AI are very complex; they produce results that even experts may no longer fully understand. The developments in the field of AI point to a more general issue worth reflecting upon: scientific and technological progress has developed such rapid and complex dynamics that it exceeds not only the mental but increasingly also the ethical radar of most people, as well as the imagination of the vast majority of political and social decision-makers.
We should therefore welcome it, and regard it as an act of great social responsibility on the part of the AI protagonists, when they address the public in this form. And it sometimes does require some dramatics – and possibly exaggeration – of the kind Elon Musk likes to convey, in order to shift all our attention towards technological advances that are shaping our entire lives and societies. And it is certain that AI and numerous other technologies will do just that, sooner rather than later. If we do not listen now, what else will it take to develop an awareness of the unfolding drama of today’s scientific and technological progress?