Photo Credit: International Water Management Institute (IWMI)
By Leila Toplic, Head of Emerging Technologies Initiative at NetHope
Today, artificial intelligence (AI) is being adopted across sectors and industries around the world. According to the IBM Global AI Adoption Index 2022, the global AI adoption rate has grown steadily and now stands at 35%. In some industries and countries, the use of AI is practically ubiquitous.
As AI technologies evolve and advance, nonprofits and their partners are grappling with questions about how to design and use these technologies to serve society’s needs.
Much has been said about the potential of AI to help us tackle some of the biggest problems we face and to accelerate digital transformation of nonprofit organizations. The time is now to transition from talking about the potential of AI to realizing it through actual implementation and progress. To get there, we’ll need to invest in making AI more beneficial, equitable, and trustworthy.
Let’s explore this briefly.
Extreme poverty is on the rise for the first time in two decades. A record 100 million people have been forcibly displaced worldwide. This current decade we could see a 50% increase in the number of people needing international aid due to the effects of climate change, and this number could double by 2050. These are some of the challenges we’re facing today.
AI, paired with large quantities of high-quality data, can benefit society and the planet in many ways: it can reach people in need, detect and predict issues before they escalate, recommend interventions, and discover new solutions.
Global nonprofits are slowly adopting and implementing AI to help communities around the world build resilience in the face of poverty, displacement, and climate change. They’re using AI to connect smallholder farmers with climate-smart agricultural practices and relevant resources, predict natural disasters before they happen and recommend interventions that prevent loss of life and assets, connect refugees with education and work, and more. Much of that work is done in collaboration with partners such as Microsoft, IBM, Avanade, and University of Innsbruck.
The urgency and scale of the problems we’re facing call for the accelerated adoption of AI and a transition from experimentation and pilots to sustainable solutions delivering impact at scale. At the NetHope Global Summit 2022, we’ll explore practical examples of applied AI and how we can move beyond pilots.
Deploying powerful technology like AI to solve problems of (unequal) access to resources and information can be risky if we’re not intentional about who builds it, for what purposes, and with what values.
While AI is pervasive, it has not reached most of the nonprofit sector or the Global South. Most AI systems are built by companies in the Global North, focused on the use cases, data sets, and values of the Global North. This means those systems may not work well for use cases and contexts in the Global South and could further exacerbate inequality.
This power imbalance is caused by unequal access to information, knowledge, opportunities, and resources, which in turn reinforces existing structural inequality and bias.
Creating AI that benefits all requires revisiting current approaches to how AI systems get designed, deployed, and used through the lens of equitable participation and empowerment.
Specifically, we need to shift the focus from access to AI and situational inclusion in AI projects as the end goal, to equitable AI centered on the active participation of, and ownership by, underrepresented communities.
How do we do that? Here are three recommendations that came out of a recent conversation with NetHope’s AI Working Group Members (global NGOs) and our partners (Microsoft, IBM, Avanade, UNICEF, and University of Innsbruck):
At the NetHope Global Summit 2022, we’ll discuss practical steps we can take to achieve equitable AI. We’re also launching a new project focused on Gender Equitable AI; join us to hear more, including how you can contribute.
The power of AI to serve people is undeniable, but so is its ability to cause harm at an unprecedented scale. Widespread adoption of AI comes with several risks and challenges, including unintended consequences due to bias in the data and in the teams creating AI, overestimation of the accuracy of AI systems, and intentional misuse of AI (e.g., for surveillance).
AI is not magic: it makes mistakes, just like humans do. But while AI reflects our own human imperfections (e.g., our biases), it needs to be better than our imperfect human systems, because AI systems can operate at massive scale, which means the impact of their errors can be significant.
As a result of numerous cases of AI systems discriminating and causing harm, trust in AI is at an all-time low. According to Accenture’s 2022 Tech Vision research, only 35% of global consumers trust how AI is being implemented by organizations, and 77% of respondents think that organizations must be held accountable for their misuse of AI.
Trust is key to AI adoption.
AI systems and data are reflections of those who collected the data and those who designed the technology; they reflect and reinforce both personal and institutional biases. Achieving trustworthy AI requires improving trust both in technology and through technology, using technological as well as non-technological approaches, such as:
More on the topics of trust and trustworthy AI - and technology more broadly - will be explored in several sessions at the NetHope Global Summit 2022.
In closing, as you get started with AI, consider the following:
Register for NetHope Global Summit here.