Artificial intelligence and machine learning have quickly demonstrated their unparalleled predictive ability in identifying threats and critical situations. A virtuous example comes from scientists in developing countries who use AI-based technologies to find effective solutions to health and production problems: AI helps predict epidemic outbreaks and supports farmers facing the relentless advance of desertification and its catastrophic consequences for agriculture. The other side of the coin, however, is the fallibility of these systems: although they currently represent the cutting edge of technological innovation, they remain products of human ingenuity and inherit its flaws. Though designed to be neutral and unbiased, many algorithms used in law enforcement, health care, and education have learned the prejudices of those who designed and trained them, worsening discrimination against the most vulnerable social groups. Training these algorithms, which shape and refine their operation in proportion to the information they accumulate, requires enormous amounts of data, the fuel of the new technologies. The Ethical and Societal Challenges of Machine Learning, an online conference held from November 7 to 11 by the International Centre for Theoretical Physics (ICTP) in Trieste, focused on the importance of nurturing an independent public debate on new technologies, beyond the influence of the field's big players, to achieve an ethical approach to the use of artificial intelligence and machine learning. The topics of the meeting encourage us to broaden our view of AI and ML, applying critical reasoning to the potential misuse and fragility of these systems.
In 2006 Clive Humby, an English data scientist and mathematician, coined the phrase "data is the new oil". Just as oil drove socio-economic development over the previous two centuries, big data, and the technologies that feed on these enormous amounts of information, now move global economies, providing new development and business opportunities. The list of players in this fourth industrial revolution able to mine and manage such quantities of data, thanks to their computing power, is quite short. The link between large international corporations such as Amazon, Google, and Facebook and the use of new technologies is very tight. This almost oligarchic scenario, which gives these companies strategic advantages in production and market competitiveness, raises the question of their political and decision-making influence in areas of public interest. Roberto Trotta, an astrophysicist at the International School for Advanced Studies (SISSA) in Trieste and King’s College London (KCL), one of the organizers of the aforementioned conference, stresses the need to develop a public debate that goes beyond the mere narrative of the companies benefiting from new technologies, forming a critical, informed and, as far as possible, independent opinion. According to Trotta, it is fundamental to understand how pervasive these systems are in everyday collective life, taking their implications into account without underestimating their proclivity for error. "The danger is waking up in a society that has lost the ability to be shocked. We should have the vigor to imagine a different world. It is not a scientific and technocratic inevitability, but a human choice".
The use of AI is becoming increasingly widespread within government agencies, private companies, health systems, and education, further impacting our daily lives. Another topic of discussion at the Ethical and Societal Challenges of Machine Learning conference is the trust placed in artificial intelligence, as if these systems were completely infallible and super partes. But can technology heal all the biases and prejudices of human beings, providing solutions that are completely neutral, irrevocable, and free of flaws? The short answer is no. The algorithms behind AI are the product of those who created them, who, more or less consciously, instill their own prejudices. The artificial intelligence community is paying great attention to finding and eliminating the biases that creep into these systems and reproduce stereotypes and prejudices typical of human behavior. To perform its functions, a machine learning algorithm needs to be trained. According to the principle of "garbage in, garbage out", if the data used for training contain prejudices, the algorithm will learn flawed reasoning, assimilating the preconceptions contained in the data. This is the case of an algorithm analyzed by researchers at the University of Virginia and found to be sexist: when presented with kitchen scenes, it often registered the presence of a woman even when the image depicted a man at the stove. This draws attention to another foundation of machine learning, namely that "correlation is not causation": the correlation between two facts does not imply that one necessarily causes the other. So why did the algorithm label the man in the kitchen as a woman? The explanation lies in the data used to train the neural network: the kitchen scenes shown to the algorithm contained almost exclusively women.
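The "garbage in, garbage out" mechanism can be made concrete with a deliberately toy sketch (the data, labels, and the `train` helper below are invented for illustration, not the Virginia team's actual model): a learner that simply memorizes the most frequent label for each scene context will, when the training set is skewed, confidently repeat the skew.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, for each scene context, the most frequent label seen in training."""
    counts = defaultdict(Counter)
    for context, label in examples:
        counts[context][label] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

# Hypothetical training set, skewed the way the text describes:
# kitchen scenes were overwhelmingly annotated with women.
examples = (
    [("kitchen", "woman")] * 33 + [("kitchen", "man")] * 5
  + [("garage", "man")] * 30 + [("garage", "woman")] * 4
)

model = train(examples)

# A man photographed in a kitchen is now predicted to be a woman:
# the model learned the correlation in the data, not any causal fact.
print(model["kitchen"])  # → woman
```

Real image classifiers are vastly more complex, but the failure mode is the same: the model faithfully reproduces whatever regularities, including prejudices, its training data contain.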
As a result, the algorithm learned, and normalized, that a person in a kitchen is most likely a woman. The question of AI bias extends to sensitive areas such as law and justice: in some US states, judges are allowed and even encouraged to use a risk assessment algorithm (risk indicator) that estimates, from an analysis of roughly 150 parameters, the accused’s propensity to commit new crimes. ProPublica, an independent investigative newsroom, analyzed 7,000 of these scores and reported that the algorithm systematically disadvantaged Black people. By failing to account for structural socio-economic factors that weigh heavily on American society, the system ended up correlating skin color with crime.
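How can a score "systematically disadvantage" one group even if its overall accuracy looks reasonable? One way to see it, sketched below with purely invented numbers (not ProPublica's data), is to compare false positive rates by group: the share of people who did not reoffend but were nonetheless flagged as high risk.

```python
def false_positive_rate(records, group):
    """Among people in `group` who did NOT reoffend, the share
    nevertheless scored as high risk (a ProPublica-style metric)."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Illustrative records only: 100 non-reoffenders per group.
records = (
    [{"group": "A", "reoffended": False, "high_risk": True}] * 45
  + [{"group": "A", "reoffended": False, "high_risk": False}] * 55
  + [{"group": "B", "reoffended": False, "high_risk": True}] * 23
  + [{"group": "B", "reoffended": False, "high_risk": False}] * 77
)

# Group A's innocent members are flagged nearly twice as often as group B's.
print(false_positive_rate(records, "A"))  # → 0.45
print(false_positive_rate(records, "B"))  # → 0.23
```

A single aggregate accuracy figure hides exactly this kind of asymmetry, which is why auditing error rates per group matters.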
Another of Professor Trotta’s reflections concerns the human and natural resources deployed to feed these technologies. This demand, growing in direct proportion to the ever more widespread use of AI, will have heavy negative impacts on developing countries, both socially and environmentally. Consider that the people who train machine learning systems often work for a dollar a day. At the same time, the exploitation of natural resources is a critical issue: the extraction of metals and the consumption of large amounts of energy will penalize the territories where these resources are found, in favor of the areas of the planet where the technologies are actually used. The question has now moved beyond the limits of our planet: even space has become a ground for resource extraction, contested by the same few large companies and, in the absence of regulations restricting their reach, destined to become yet another resource exploited by a limited number of players.
The case of Starlink, the satellite connection service promoted by Elon Musk and designed to provide super-fast Internet on Earth, is a striking example of how these systems can amplify disparities, economic ones included, between those who can use them and those who cannot. The subscription cost, around 100 dollars per month, is hardly sustainable for most of the world’s population, which will find itself cut off. Services such as Starlink, which aim to make Internet access even faster, will further transform the delivery of key services, including health care, exacerbating the gap between those who can afford them and those who cannot.
This socio-economic gap related to the use of AI, machine learning, and algorithms may seem entirely at odds with the everyday use we make of them: a use devoted to simplifying our lives, connecting people, and disseminating ideas. In a contemporary society headed towards an irreversible shift in decision-making driven by AI and its supporting structures, the mathematician and writer Cathy O'Neil warns that blind faith in data and an extreme algorithmic culture can do great damage to people and society if not constantly accompanied by a critical analysis of data, methods, and practices.