The AI governance challenge is vast and complex, straddling the fields of technology, law, philosophy and ethics. Autonomous intelligent systems (also known as artificial intelligence, or AI) have transformative applications in development and humanitarian aid but they also pose unique risks for human rights if used irresponsibly. Above all, these systems entail challenges for transparency and human oversight, since designers and implementers are often unable to “peer into” AI systems and understand how and why a decision was made. This so-called “black box” problem can preclude effective accountability in cases where these systems cause harm, such as when an AI system makes or supports a decision that has discriminatory impact.
As an innovator in the area of AI for humanitarian aid and development, UN Global Pulse is committed to ensuring that emerging technology supports the protection of human rights across the globe. With that goal in mind, we are pleased to have served as a Co-Champion for the UN High-level Panel on Digital Cooperation's Recommendation 3C on artificial intelligence, which calls for enhanced cooperation to "think through the design and application of […] standards and principles such as transparency and non-bias in autonomous intelligent systems in different social settings," and as a Key Stakeholder for Recommendations 3A/B on digital human rights.
In line with these engagements, we have been working continuously on AI governance policy, building on the consultations hosted together with the co-champions of Recommendations 3A/B/C and with the participation of our relaunched Expert Group on the Governance of Data and AI.
Across these projects, our work is concentrated on ensuring trustworthy, privacy-respectful and inclusive AI. In an effort to connect these goals to the current global pandemic and to bring a range of stakeholders and opinions to the table, we held a high-level event, Protecting Human Rights During the COVID-19 Crisis and Beyond: Digital Pandemic Surveillance and the Right to Privacy, during the UN General Assembly in September. Co-hosted with UN Human Rights and Access Now, and sponsored by UN Member States, the discussions drew upon existing multi-stakeholder initiatives, including the UN Secretary-General's Roadmap for Digital Cooperation, and delved into the implications of increased surveillance for human rights during and beyond the COVID-19 pandemic.
Throughout our work in 2019-2020, we have also been looking to amplify Global South participation in the development of AI governance processes, including through regional and multi-stakeholder discussions at RightsCon in Tunis, in Ghana, at the Internet Governance Forum in Berlin, and at the World Data Forum.
Across our consultations thus far, several points of consensus on how we should approach overall AI governance have emerged.
First, there is wide agreement that, while ethics should play an important role in governing AI, a human rights-based approach to AI governance is essential. The "human rights vs. ethics" debate has been a recurring focus of global AI governance discussions and came up in several of our consultations; however, it may rest on a false dichotomy. Human rights advocates are concerned about the phenomenon of "ethics-washing," whereby the makers of technology (often private companies) self-regulate through vague and unenforceable "codes of ethics." Technical experts, for their part, are often skeptical that human rights law can be adapted to the novel features, risks and harms of AI and machine learning. While both of these concerns are valid, the two approaches may actually complement, rather than undermine, each other.
For example, human rights law is useful in setting widely applicable minimum standards. Human rights law is both binding and universal, and offers a common vocabulary and set of principles that can be applied across borders and cultures to ensure AI is serving human interests and promoting the Sustainable Development Goals (SDGs). The international human rights regime also offers a set of mechanisms for accountability and enforcement, from working groups and special rapporteurs to the International Court of Justice, reducing the need to start from scratch in applying these principles to AI. But it can take time for human rights jurisprudence to develop the specificity necessary to regulate emerging technology. Moreover, states or companies may wish to elevate their ethical standards to an even higher level than that required by human rights law. In such cases, ethics can be very helpful.
Second, our consultations indicated that it is essential to be specific about terms and to identify what exactly is "new" about the broad set of technological capacities collectively known as "AI." For instance, their obscurity, scalability or ability to self-teach may distinguish them from other technologies we have regulated in the past. But it is worth noting that many of the underlying challenges are hardly new; consider, for example, the imperative of using quality, unbiased data to train AI systems. In this way, the work of Recommendations 3A/B and 3C is fundamentally intertwined. In developing AI governance, we should ask: when can traditional human rights principles, mechanisms and best practices be implemented in the context of AI, and when do we need to invent new tools or strategies?
Third, there is an emerging consensus that certain ethical principles deserve special attention. The expert meetings we co-hosted together with the Human Rights Office in Geneva in 2019 focused on three interrelated principles: transparency, accountability and non-discrimination. While these principles are not the only governance challenges associated with AI, they offer a starting point for conversations about what makes AI different from other technologies and why it poses unique challenges for human rights. Articulating these principles in the universal language of human rights may require creativity, but the underlying principles that each speaks to will surely find support in human rights jurisprudence.
Fourth, there is an urgency to these questions. While we wrangle with how to apply human rights mechanisms to AI, the technology continues to evolve rapidly. Organizations at the UN and elsewhere deploy AI for social good every day, which means new risks are constantly emerging in our work. The ongoing COVID-19 pandemic has seen the use of AI and other emerging technologies (with varying degrees of success) to facilitate contact tracing, combat misinformation, and even conduct biomedical research. To ensure these and other AI tools enable human progress and contribute to achieving the SDGs, we need to be proactive and inclusive in developing policies and accountability mechanisms that protect human rights, including those that ensure access to reliable, high-quality and unbiased data for training safe and trustworthy AI models. That will be UN Global Pulse's guiding imperative as we forge ahead in 2021.
This work is carried out thanks to the generous support of the Government of the Federal Republic of Germany represented by the Federal Ministry for Economic Cooperation and Development, the William and Flora Hewlett Foundation, and the Jain Family Institute.