Future key technologies and the experts' concerns – On the evolutionary dynamics of scientific-technological progress in the case of AI

When scientists feel inclined to make a public appeal, we are well advised to listen carefully. Only a few months ago, the two renowned science journals "Science" and "Nature" printed two such statements, each by leading bio-scientists calling for a legislative framework around the latest technological possibilities to manipulate the DNA in human embryonic cells. Now, last week, "Science" printed another call for the regulation of scientific endeavors, this time written by leading researchers in the field of "Artificial Intelligence" ("AI"). Hardly anything illustrates the effects and consequences of today's amazing acceleration in the dynamics of technological progress better than the fact that within such a short time we saw three dramatic appeals for defining a framework for the associated societal impacts, made by those who understand the respective matter best. And what will likely become possible with AI in the future barely falls short of the drama unfolding around the possible alteration of the human germ line.

Let us take a look at some background: computer scientists generally distinguish two types of AI, to which they assign the attributes "strong" and "weak". Strong AI is a form of intelligence that performs acts of creative thinking similar to human beings, can solve problems accordingly, and is ultimately characterized by some form of consciousness. Weak AI, in contrast, is first and foremost about solving concrete problems. Its goal is not the imitation of higher intelligence but the simulation of intelligent behavior by means of mathematics and computer science. This includes problems whose solution generally requires a form of rational intelligence, such as recognizing patterns or language, playing chess, finding web pages on the Internet, proving mathematical propositions (by pure trial and error), learning and automating simple mechanical processes, or, most recently, extracting specific information from large, unordered data sets. While strong AI has, despite years of research, to date failed to deliver on its basic promises, weak AI has made significant progress in recent years and decades. It is the latter on which the most recent discussion on AI centers.
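To make this distinction a bit more tangible, here is a minimal, purely illustrative sketch of weak AI in the above sense; the data, the labels and the choice of a simple nearest-neighbour rule are my own assumptions and not taken from the research discussed here. A few lines of plain mathematics suffice to "recognize a pattern" without any understanding or consciousness behind them (Python):

    import math

    # Hypothetical training examples: (feature vector, label) pairs.
    examples = [
        ((1.0, 1.2), "cat"),
        ((0.9, 0.8), "cat"),
        ((5.1, 4.9), "dog"),
        ((4.8, 5.3), "dog"),
    ]

    def classify(point):
        """Label a new point by copying the label of its nearest known example."""
        _, label = min((math.dist(point, x), lbl) for x, lbl in examples)
        return label

    print(classify((1.1, 1.0)))  # -> cat
    print(classify((5.0, 5.0)))  # -> dog

The program merely copies the label of whichever stored example lies closest to the new point; the intelligent-looking behavior is produced by nothing more than a distance measurement.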

AI research possesses a peculiar dynamic, in which phases of euphoric optimism alternate with periods of profound disillusionment about stagnation in the field. Already in the 1950s, computer pioneers and researchers articulated almost limitless expectations with respect to the abilities of computers, only to recognize two decades later that the structure of sequential information processing in conventional computers (the so-called "von Neumann" computer architecture) is completely unsuitable for the development of AI. In the 1980s and 1990s, significant advances in understanding the dynamics of massively parallel information processing in our brain on the one hand, and the possibilities of modeling the structures of our thinking organ by so-called "neural networks" on the other, triggered a new wave of enthusiasm (which in 1993 led the computer scientist Vernor Vinge to his famous forecast: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."). In 1997 AI received particular attention when IBM's computer "Deep Blue" managed to beat the then world chess champion Garry Kasparov (today even the best human chess players stand little chance against commercially available chess computers). This optimism was replaced only a few years later by the realization that the structure and functionality of our brain are of unmanageable complexity, and that the dimensionality of the state space of neural networks can quickly assume astronomical values. Today, in turn, anyone who follows the discussions among AI researchers quickly gets the impression that the discipline has entered a third phase of excitement. The basis of this new elation is the ability of ever more powerful computers to extract relevant information from large data sets and to handle increasingly complex information efficiently. The popularized keyword describing this is "big data".
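A back-of-the-envelope calculation (my own illustrative numbers, not drawn from the appeals discussed here) gives a feeling for why this complexity gets out of hand so quickly: a toy network of N binary neurons already has 2^N possible activation patterns, and a fully connected group of N neurons has on the order of N^2 connection weights.

    # Illustrative numbers only: how the state space of a neural network grows.
    for n in (10, 100, 1000):
        weights = n * (n - 1)      # directed connections among n fully connected neurons
        patterns = float(2 ** n)   # possible on/off patterns of n binary neurons
        print(f"{n:>5} neurons: {weights:>9,} weights, about {patterns:.2e} activation patterns")

Already at a few hundred neurons the number of possible patterns exceeds anything that could ever be enumerated, and the human brain contains tens of billions of neurons.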

The leading AI researchers thus urgently appeal to the public to start dealing with their research. It is, they claim, "imperative that society now begins to think about how it draws the maximum benefit from AI research". Machine learning has made "dramatic progress" in the past two decades, which is why AI is one of those technologies that will change our lives drastically. Eric Horvitz from Microsoft Research, for example, describes a computer program which, through the analysis of publicly available data, can come to the conclusion that a particular person might suffer from certain diseases in the future. As an example he points out that intelligent algorithms, by analyzing personal Twitter and Facebook messages, may identify candidates for an incipient depression – even before the persons concerned experience it themselves. And that is just the beginning, according to Horvitz. Machine learning, while potentially "extending our knowledge and providing new tools for enhancing health and wellbeing", makes it increasingly difficult for individuals to know what is known about them. Horvitz thus calls for appropriate legislation as "an important part of the legal landscape of the future that will help to preserve freedom, privacy and the greater good."
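How such an inference from public data can work is easiest to see in a deliberately schematic sketch; the posts, keywords and weights below are invented for illustration and have nothing to do with the actual systems Horvitz describes. A model that has learned weights for individual words can condense a person's public posts into a single risk score, without that person ever noticing:

    from collections import Counter

    # Hypothetical public posts of one user (invented for this example).
    posts = [
        "can't sleep again, everything feels pointless",
        "skipped lunch with friends, too tired",
        "great run this morning!",
    ]

    # Hypothetical weights a trained model might assign to individual words.
    weights = {"pointless": 0.9, "tired": 0.4, "sleep": 0.3, "great": -0.5}

    def risk_score(texts):
        """Sum the learned word weights across all of a user's posts."""
        counts = Counter(w.strip(",.!?") for t in texts for w in t.lower().split())
        return sum(weights.get(word, 0.0) * n for word, n in counts.items())

    print(round(risk_score(posts), 2))  # one number, inferred from public data alone

Real systems rely on far richer features and trained models, but the asymmetry is the same: the score exists, while the person it describes knows nothing about it.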

In the same issue, the AI pioneer Stuart Russell draws the drastic image of us humans sitting in a car that is driving toward a cliff, hoping it will run out of gas before we plunge into the abyss. He claims that AI can be as dangerous to humanity as nuclear weapons. Accordingly, as Russell points out, computer science students today learn that AI must be based on human values. Intelligent machines that interpret or apply these values incorrectly pose a great danger.

It is at first sight rather surprising that representatives of companies with a strong commercial interest in the future development of AI articulate such drastic warnings and literally beg for governmental interference and a regulatory framework. But apparently these warnings are rooted in a deep concern on the part of the researchers that policy makers overlook dramatic technological developments, do not take them seriously enough, or simply do not understand them. Similar concerns drove the genetic researchers to their appeal several months ago. AI, like genetic research, is a very complex subject which produces results that even experts in the field may no longer understand entirely. Scientific and technological progress has meanwhile developed such a rapid and complex dynamic that it exceeds not only the mental but increasingly also the ethical radar of most people, as well as the imagination of the vast majority of political decision-makers.

We should therefore welcome it, and regard it as an act of great social responsibility, when scientists address the public in this form. If we do not listen to them now, we have to ask ourselves what else it would take to develop an awareness of the unfolding drama that the scientific and technological progress of our days presents. That AI and numerous other technologies will shape our future life is certain. How exactly this will happen depends largely on ourselves. In the case of AI, it will most probably go far beyond what the Californian artist Matt McMullen recently announced: equipping sex dolls with artificial intelligence in order to make them appear more attractive to the user. What the troubled AI researchers think about this is unfortunately not known.
