Blog
The discussion about shaping technological progress using the example of artificial intelligence - On the latest proposals of the EU Commission's High Level Expert Group
It is dawning on more and more people that future technologies will likely not only change the nature around us, as they did in the past, but will soon also transform human beings themselves. The possibility of a fundamental change in our biology, our psyche, our perception, and our consciousness is already emerging. How we deal with and shape these possibilities will determine the future of humanity, as well as that of our individual freedom. This scares many people, because they believe that technological progress in all its potency is an autonomous force that simply operates without our being able to change much about it, and that we are irreversibly exposed to its developments. But are we really just passive spectators and casualties, over whom new technologies simply roll? Or can we instead shape them ourselves, possibly even for the good of all of us? To answer the latter question positively, we need at least the following: 1. knowledge; 2. the willingness and courage to get involved; 3. clear principles and goals.
The government and its political leadership would most naturally be assigned a leading role in this matter. In reality, however, political agents are usually massively overburdened by both the content and the speed of technological change. Other parts of society's leadership (intellectuals, scientists, cultural figures, journalists, churches, educators, etc.) often prove equally helpless in the face of the challenges of technological progress. In practice, developments are thus often left to the forces of the free market. There even exists a theoretical foundation for that stance: classical economics paints the picture of a market that, if only left alone, automatically creates optimal conditions (for example, prosperity for all). Thus the almost 250-year-old metaphor of the “invisible hand” still serves as a legitimizing principle for the view that a market can lead society as a whole to maximum prosperity only if the exchange of goods and services and other economic activities (in our context, technological development) can unfold unrestrictedly. However, it has long been known that this ideal economic model has little to do with how real markets present themselves. Numerous forces prevent the economic equilibrium propagated by economists from developing: the externalization of costs (violation of the polluter-pays principle), conflicts of interest and corruption, distortions of competition, information asymmetries among market participants, and, last but not least, massive cognitive biases (which have long since produced their own branch of economic research, behavioral economics). Rather, the free forces of the market follow a capitalist logic of exploitation, in which what matters to the individual agent is generating as much profit as possible. Solving social problems, on the other hand, is not part of any objective function of individual economic agents. How, then, could it be part of such a function for the market as a whole?
A touchstone for the “invisible hand” of the free market is ecology. It is becoming increasingly obvious that our economic system is failing miserably in the face of one of the most important problems of our time, the threat of climate change. External ecological costs simply do not show up in any economic target function. We therefore threaten to fail as a species on a question that could hardly be more important for our long-term survival. Another litmus test for global society will be how we shape technological progress.
But are the processes around technological progress not far too complex for us to confront them with our desire to shape them? Such fatalism could itself prove fatal. What is required instead is global coordination between interest groups, states, and power blocs, in order to avoid developments that would be disastrous for mankind as a whole and to enable those that benefit all human beings. The basis of such coordination must remain a democratic process: only a high level of social diversity and decentralized information and decision-making structures create a sufficiently high level of functional capacity, and thus the necessary insights and decision-making power, within society, politics, and the economy. Only democracy on a global level enables the coordination between interest groups, states, and power blocs that is needed to avoid developments that would lead humanity as a whole to fail.
An initiative of the EU Commission, the establishment of a High-Level Expert Group on Artificial Intelligence (AI HLEG), might show what this could look like concretely. In December 2018 the group addressed the public with a paper entitled “Ethical Foundations for a Trustworthy AI”. “Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society”, the document states at the very beginning. The experts then name the principles for a responsible development of AI technologies. In particular, the group unanimously calls for a “human-centric” approach: the development of AI is to be shaped in such a way that the individual human being, his or her dignity and freedom, are protected; that democracy, the rule of law, and civil rights are upheld; and that equality, the protection of minorities, and solidarity are promoted. The basic guidelines for any AI technology must therefore be: 1. do good; 2. do no harm; 3. preserve human agency; 4. be fair; and 5. operate transparently.
Sounds great, does it not? The concrete proposals include a wide range of technical and non-technical measures. Might we see here the first leaves of the delicate plant of a genuine social discussion on the important questions of future technology design called for above? Some critics remain skeptical, such as the well-known science journalist and television presenter Ranga Yogeshwar. For him, the Commission’s proposals merely represent an ethics “that is instrumentalized (…) and reduced to an advertising slogan”. “It becomes the fig leaf of profiteering,” Yogeshwar continues. These are extremely harsh words against the proposals drawn up by the experts, and one would expect them to be defended with equally tough arguments. Unfortunately, this is not the case. In particular, Yogeshwar centers his complaint on the fact that among the 52 members of the group there is an American company, namely Google. From this he goes on to suggest that “this push resembles a Trojan horse, with Europe on it, but at its core the non-European corporations gain access to the European AI market of the future”. The whole argument is accompanied by a disturbing (albeit accurate in its details) list of many of Google’s (and Facebook’s) misdemeanors regarding consumer protection on the European market. What Yogeshwar fails to mention, however, is that Google is the only non-European company on the list. And does it really make sense to leave the world’s leading AI company out of such an important discussion just because it is not European? By accusing the whole paper of such unfair intentions, the author has gone a little too far and can hardly avoid being accused of instrumentalization himself. Yogeshwar urges the reader to take a closer look at the composition of the committee. Indeed, we should do that.
In addition to more than 20 representatives of industrial companies (apart from Google, all of them European), the group includes 19 representatives from academic research, six consumer and employee protection organizations, and four government-related institutions. One would think that this is a sufficiently heterogeneous group to bring to the table a very broad spectrum of viewpoints and opinions on this as important as controversial topic, and that it is not just an exercise in capitalist interest enforcement under the guise of noble ethical principles.
In fact, at the crucial point where the document deals with possible critical aspects of a future AI technology, it states that there is as yet no consensus within the group. There is still heated discussion on the identification and tracking of individual users through AI algorithms, the interaction between people and AIs, social scoring systems, autonomous weapons, and possible long-term problems around an AI that could, for example, develop its own consciousness. Or must readers read this as: “Attention, as an entrepreneur committed to my shareholders, I am not allowed to enter this cesspool of ethical entanglements; in other words, I am to put a lid on it”? It is hard to imagine that such a stance would find consensus in this heterogeneous group of AI experts.
One point of criticism, which Yogeshwar also raises, concerns the principle of “explicability” of the results supplied by an AI. Does this demand for explicability, which is to be satisfied by the user giving “informed consent”, perhaps hide a backdoor through which AI developers could undermine fundamental principles of human dignity, for example by merely pretending explicability to the user in order to obtain his consent? If the user then presses the “Okay” button, the AI developer would be off the hook not only legally but also ethically, since, after all, the user has agreed to the explanations! Considering the way Google, Facebook, and others have implemented the General Data Protection Regulation (GDPR), such fears can hardly be dismissed entirely. At the same time, however, the document also refers clearly to the EU Charter for ethical guidance, in which fundamental human rights are explicitly set out. In case of doubt, it states, “it may however help to return to the principles and overarching values and rights protected by the EU Treaties and Charter”. One might wish for this to be expressed a little more clearly, but conversely, claiming that the explicability principle opens the door to future violations of our human rights by AI, or even threatens to legitimize a principle of “competition before ethics”, does seem somewhat exaggerated.
The discussion about the design of AI technology is of course too important to be guided solely by the capitalist logic of exploitation, by the return prospects of tech investors, or by the ideology of market fundamentalists. It must be conducted on the basis of broad democratic processes involving a wide spectrum of interests and opinions. The discussion about the AI HLEG’s statements is therefore to be welcomed in principle, even if its proposals are subject to some justified criticism, come with some ambiguities in their wording, and offer only a very tight time window for discussion (which might lead one to suspect that a discussion is unwanted). But it is precisely such a process of critical debate with a broad section of the population that is needed to shape the potential of future technologies.