At UNGP, we believe in dialogue. This is why we participated in the Conference on the State of Artificial Intelligence in Africa (COSAA) held in March 2023. The conference was hosted by the Center for Intellectual Property and Information Technology Law (CIPIT). 

As one panellist was quick to point out, Africa already has an AI ecosystem, made up of researchers, scientists, engineers, data practitioners, policymakers and business leaders from many organizations. And this ecosystem is active.

You can see this in the use of AI for sector-specific applications and solutions, and in the rising use of AI by both state and non-state actors to meet the Sustainable Development Goals. What’s more, legal practitioners are examining the ethical issues. Among the papers presented at the conference were “How Can We Regulate Artificial Intelligence in Africa in the Face of the Fourth Industrial Revolution?”, “Assessing the State of Responsible Artificial Intelligence Policies and Regulations in Africa” and “The Role of Data in AI Innovation and Research”.

All this points to efforts to advance the adoption and implementation of AI on the continent. 

Governance of AI in Africa is at a crucial stage. Discussions highlighted the need for regulations rather than strategies. For example, Mauritius and Egypt have AI strategies, but, as one speaker pointed out, no country in Africa has regulations exclusive to AI. She noted that what most countries have are regulations on data privacy and protection, not on AI usage. This gap provides an opportunity for innovation.

Thanks to the legal minds at the conference, participants came to understand the need for regulation in two ways. One is the rights-based approach to regulating AI, which looks at issues such as transparency, legality, accountability, equality, non-discrimination and public participation. The other is the risk-based approach, which classifies AI applications according to the risks they pose (see visual below).

AI risk categorization as set out by the European Union’s legal framework on AI:

The topic of responsible AI can spark lively debate, and the COSAA conference was no exception. We talked about the need to formulate both national and regional principles. Here are just some of the things participants said, to give a flavour of the discussion:

“We need to put the same energy into AI regulations as we do in building good data systems and AI solutions.”

“We need to account for our local and indigenous context.”

“We can trust but we need to verify (i.e. interrogate what we are being told).”

“We should not risk becoming consumers of AI but innovate our own.”

“Let us not copy and paste or just use templates.”

Some even thought a lot of things being considered in relation to AI “might not necessarily be necessary!”

One speaker noted the prohibitive cost of the infrastructure and capacity needed to run AI. Another said Africans were looking too much at using AI in the way it had been done elsewhere, rather than being innovative and finding their own approaches.

At the end of this awesome discussion, we considered ways of creating enabling environments for data and AI, an area in which we at UNGP Kampala have considerable experience. Our lab head, Dr. Martin Mubangizi, stressed that when fostering the growth of data and AI, it is important to align with national visions and priorities.

In the case of Uganda, this means the third National Development Plan (NDPIII), which lays out 18 programs, including one for digital transformation. A specific action under this program was to develop a strategy for the so-called “Fourth Industrial Revolution” (4IR) technologies. The National 4IR Strategy was developed by a task force of experts, including our team, and aligns with Uganda’s national priorities.

At the end of the two-day conference, participants recognized the need for enabling environments for AI. As one said: “We have all noticed the potential of AI when it comes to developing applications that empower people, for example AI that preserves indigenous languages.” 

A number of questions remained. When it comes to the state of AI in Africa, we must ask:

1) What is the definition of African AI?

2) What are we intending to protect by regulating AI?

3) Who has the power when it comes to AI? Who should have it?

4) Have we mistakenly been looking to people with no knowledge of AI to regulate it?

5) What is our vision? Do we want to follow the paths of others or find our own?

Perhaps the conclusion is that we should look at other AI regulatory frameworks, such as that of the European Union, as a foundation rather than a solution, and explore regulatory frameworks that we can build on rather than take “as is”.

In a nutshell, a number of best practices were shared, my favourite being the need to discuss these principles and what is required of us as stakeholders in the AI ecosystem; otherwise we are just working in silos. And while such engagements may raise more questions than they answer, that doesn’t matter, because dialogue is the key.

Here is a short video snippet of the Conference on the State of Artificial Intelligence in Africa 2023.

Image credit: CIPIT