MCH: Thank you very much, Professor Lütge, for granting me this interview. As a PhD in philosophy, computer science and economics, and as director of the Institute for Ethics in Artificial Intelligence and holder of the Peter Löscher Chair of Business Ethics at the Technical University of Munich, could you tell us, from your academic perspective, what is one of the biggest challenges we must take on as a technological society in the face of the advance and dynamism of AI?
CL: There are various challenges and opportunities that come with AI, but the most important is being able to assess those risks and opportunities in order to build an Artificial Intelligence that responds responsibly to society. This is related to the need to determine which risks we are willing to accept as a consequence of the use of any type of technology and which we will not accept.
Currently, many people are engaged in the task of identifying in more detail the limits that need to be established, because there are many opportunities for the responsible use of AI that can be embraced in favor of the well-being of societies.
One example is that, in order to address possible discrimination, limits must be established; but the challenge is complex, since the differences between societies will condition how Artificial Intelligence is used in each case.
MCH: One of the biggest concerns of liberal democratic societies regarding ICTs is that we are immersed in an era where lies have been normalized through fake news. This concern has turned into demands on, and discontent with, the big technology conglomerates as managers and administrators of social media. So, what measures do you consider most effective for the Big Data giants to implement in order to correct this problem without monopolizing the technological discourse on the Internet?
CL: Your question is very interesting and complex to answer, because in some cases it is easy to detect fake news, but in others it is more difficult to detect problematic situations on the Internet, such as the Chinese case in which debates are moderated, and to decide how to confront them from a democratic and pluralist perspective.
I think it is not possible to let technology alone make automatic decisions about what is right, because that would not work. People, with their human conscience, must remain in the loop in which the final decisions are made about what is correct and what is not.
MCH: Is it possible to train Machine Learning systems that link ethical data to Deep Learning, such as deep neural networks, so that they reach algorithmic conclusions that respect human rights, based on an ethics legitimized through history as the most appropriate for combating hate crimes, discrimination and violence?
CL: Machine Learning can operate in many different ways, and beyond that, there is broad evidence that training systems with Big Data is a key factor in their proper and smoother operation. It must be kept in mind that the algorithms themselves are capable of generating artificial information by drawing on information from human sources, with consequences that are sometimes acceptable and sometimes not.
Along those lines, for example, differences in skin color detected by facial recognition systems can be a discriminatory factor that is expressed in different ways. It is necessary, just as children must learn from infancy, to learn to distinguish what is acceptable and what is not.
MCH: Do you consider feasible an interdisciplinary and multisectoral interaction that involves not only states, academia and the media but also civil society, without the latter being relegated “to the next room”, integrating its input, for example, through popular consultation to establish new public policies on the ethical management not only of Big Data but of everything related to AI, from an approach that truly reaches the new society.com as the main user and recipient of digital marketing campaigns?
CL: Questions such as whether all citizens, as civil society, should have access to digital platforms, or whether certain limits should be placed on the use of technology, were the kind of issues discussed in the forums during the 1990s.
In this regard, it is interesting to observe the current development of the Metaverse that the big companies are talking about, which brings out the differences involved in multisectoral and interdisciplinary interaction among people. I believe we need to analyze more closely, in particular, the responsibility of individuals and governments in this new field.
In countries like Germany, where there is relatively greater progress on digital issues, this could be less controversial, but from another perspective we can see a strong debate on these issues in the media. We are witnessing the participation of broad and diverse sectors in the debate on interaction with new technologies.
MCH: Currently, the colonization of virtual space has led various ICT companies, North American, Chinese and European, to develop new ways of managing their information and communications, lowering the costs of training their Machine Learning models and using synthetic data to enable the so-called metaverses, turning these into a new business opportunity.
Therefore, from the field of the philosophy of law, what legal reasoning should be applied so as not to disregard the rights of workers and the guarantees of users on the Web, taking advantage of the technological opportunity in the labor field without undermining the achievements in human rights?
CL: This is a very important challenge that we have to pay attention to. How do we transform the vision of the disconnected world we had in the past, which has mutated into the digital world of the present?
I think it is possible to achieve this, especially if we are talking about human rights in relation to discrimination, but also in connection with workers’ rights.
It seems to me that it is not extremely difficult to project these rights onto the Metaverse, as long as all the players involved collaborate to face the challenge. However, you have to consider that it is impossible to copy reality exactly and paste it into the Metaverse, because you are not physically in that room, and therefore they are different things.
It is necessary to ensure that no one is left behind, preventing systemic discrimination, for example against women and children with disabilities, from being transferred to virtual reality.
I also believe that the major players are aware of this equation, as part of the long interdisciplinary process carried out by specialists.
MCH: You were part of the interdisciplinary committee of specialists that drafted the proposal for an ethical regulation of autonomous driving in Germany. What is the status today of the discussion on substance and content regarding the creation of new legislation in which the protection of human life prevails over the dynamism of AI and the technological race currently being waged by the world's large automotive corporations?
CL: There are different aspects involved. For example, Germany passed legislation on autonomous driving last year allowing tests with high-level automation, and this law was based, to some extent at least, on the ethical recommendations that we gave a few years ago in the committee you mention.
I would say that one stage has been completed. Not the definitive one, but a new stage in any case.
Now it is time to establish similar standards at the European level, and we are working on it. As we know, the European AI Act is under discussion in the European Parliament, and it would also apply to a certain extent to the automotive industry. The discussion anticipates impacts on how the protection of human rights is treated in autonomous driving. We must pay attention to the costs and benefits that the new standard will have.
We always have to keep in mind that we are not alone in the world and that there are other players, such as the US and China. One should avoid establishing regulations that hinder taking advantage of the opportunities presented by AI.
I think we are on a good path in that regard.