Dr. Miguel Luengo-Oroz is the Chief Data Scientist of UN Global Pulse. In a comment piece for the journal Nature Machine Intelligence, he discusses why solidarity should be a primary principle of AI ethics. Below, we highlight some of the ideas discussed in the article.
Solidarity is one of the fundamental values at the heart of the construction of peaceful societies and is present in more than one third of the world’s constitutions. Nonetheless, solidarity is rarely included as a principle in ethical guidelines for the development of artificial intelligence.
Solidarity as an AI principle means (1) sharing the prosperity and burdens created by AI, and (2) understanding the long-term implications of developing and deploying AI systems. Shared prosperity means implementing mechanisms to redistribute the gains in productivity to all — including those whose actions and data were used to create AI models — and creating and releasing digital public goods in the form of open AI algorithms and models. Shared burdens means, among other things, digital cooperation to face the challenges that this technology can create, from lethal autonomous weapons to deepfakes that could generate automated hate speech and instigate political instability.
Beyond trustworthy AI — that is, AI that works as it should — assessing the long-term implications of developing and deploying AI systems means understanding the risks and harms of embarking on new AI deployments and ensuring AI does not make people irrelevant. One example of such long-term thinking is the urgent need to understand the climate impact of the computing resources used to train AI models, and what real benefits we as a society gain from them.
Considering solidarity as a core principle for AI development can provide not merely a human-centric approach to AI but a humanity-centric one.
The full article is accessible here.