
What’s next for AI in the nonprofit sector?

At NetHope, we regularly convene conversations at the intersection of Artificial Intelligence (AI) and the work of our Members (65 global NGOs). Recently we brought together an expert panel with representatives from UNICEF, Microsoft, IBM, Avanade, and University of Innsbruck to talk about what’s next for AI. This blog post incorporates key insights from that discussion, plus other conversations and research we’ve been conducting over the past few months. It’s meant to be a preview of the discussions we’ll be having at the upcoming NetHope Global Summit 2022. Make sure to check out the list of relevant sessions at the end of this blog post.

October 5, 2022

Photo Credit: International Water Management Institute (IWMI)


By Leila Toplic, Head of Emerging Technologies Initiative at NetHope

Today, artificial intelligence (AI) is being adopted across sectors and industries around the world. According to the IBM Global AI Adoption Index 2022, the global AI adoption rate has grown steadily and now stands at 35%. In some industries and countries, the use of AI is practically ubiquitous.

As AI technologies evolve and advance, nonprofits and their partners are grappling with questions about how to design and use these technologies to serve society’s needs.

Much has been said about the potential of AI to help us tackle some of the biggest problems we face and to accelerate the digital transformation of nonprofit organizations. It’s time to move from talking about the potential of AI to realizing it through actual implementation. To get there, we’ll need to invest in making AI more beneficial, equitable, and trustworthy.

Let’s explore this briefly.

Beneficial AI

Extreme poverty is on the rise for the first time in two decades. A record 100 million people have been forcibly displaced worldwide. This decade could see a 50% increase in the number of people needing international aid due to the effects of climate change, and that number could double by 2050. These are some of the challenges we’re facing today.

AI, along with large quantities of high-quality data, can benefit society and the planet in many ways: it can reach people in need, detect and predict issues before they escalate, recommend interventions, and help discover new solutions.

Global nonprofits are slowly adopting and implementing AI to help communities around the world build resilience in the face of poverty, displacement, and climate change. They’re using AI to connect smallholder farmers with climate-smart agricultural practices and relevant resources, predict natural disasters before they happen and recommend interventions that prevent loss of life and assets, connect refugees with education and work, and more. Much of that work is done in collaboration with partners such as Microsoft, IBM, Avanade, and University of Innsbruck.

The urgency and scale of the problems we’re facing call for the accelerated adoption of AI and the transition from experimentation and pilots to sustainable solutions delivering impact at scale. At the NetHope Global Summit 2022, we’ll explore various practical examples of applied AI and how we move beyond pilots.

Equitable AI

Deploying powerful technology like AI to solve problems of (unequal) access to resources and information can be risky if we’re not intentional about who builds it, for what purposes, and with what values.

While AI is pervasive, it has not reached most of the nonprofit sector or the Global South. Most AI systems are built by companies in the Global North, focused on use cases, data sets, and values from a Global North perspective. This means those systems may not work well for the use cases and contexts of the Global South and could further exacerbate inequality.

This power imbalance is caused by unequal access to information, knowledge, opportunities, and resources which in turn reinforces existing structural inequality and biases.

Creating AI that benefits all requires revisiting current approaches to how AI systems get designed, deployed, and used through the lens of equitable participation and empowerment.

Specifically, we need to shift the focus from access to AI and situational inclusion in AI projects as the end goal to equitable AI that is centered on the active participation of, and ownership by, underrepresented communities.

How do we do that? Here are three recommendations that came out of a recent conversation with NetHope’s AI Working Group Members (global NGOs) and our partners (Microsoft, IBM, Avanade, UNICEF, and University of Innsbruck):

  • Put AI technologies into the hands of underrepresented communities so they can apply them to the problems they care about - for example, addressing systemic racism or gender inequality. Low-code and no-code tools can help: they take existing capabilities and make them much more broadly available by abstracting away the codebase and replacing it with a graphical user interface (GUI). Gartner predicts that by 2025, 70% of new applications developed by organizations will use low-code or no-code technologies - up from less than 25% in 2020.
  • For AI to extend beyond access to equitable participation in the design and deployment of AI systems, it’s critical to address knowledge asymmetries by investing in underrepresented communities as skilled designers, informed users, and policymakers.
  • AI is nothing without data, so equitable AI requires equitable access to large amounts of high-quality, representative data. To unlock the value of data, we need to address issues such as data silos and the tension between collecting and sharing data on one hand and keeping information private on the other. Approaches like differential privacy add carefully calibrated statistical noise to data so that it can be shared while protecting individual privacy. The Digital Public Goods Alliance - a globally distributed, multi-stakeholder alliance focused on digital cooperation for a more sustainable, equitable world - is working on tools such as trusted data sharing systems.
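To make the differential-privacy idea concrete, here is a minimal illustrative sketch in Python of the classic Laplace mechanism, which adds calibrated noise to a count before it is shared. The function names and the choice of epsilon are hypothetical examples, not taken from any specific NetHope tool or dataset:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=1.0):
    """Count matching records, then add Laplace noise.

    A counting query changes by at most 1 when one person's record
    is added or removed (sensitivity 1), so noise with scale
    1/epsilon yields epsilon-differential privacy for the count.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

A smaller epsilon means more noise and stronger privacy; the released count stays close to the true value in expectation, but no individual record can be confidently inferred from it.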

At the NetHope Global Summit 2022, we’ll talk about different practical steps we can take to achieve equitable AI. We’re launching a new project focused on Gender Equitable AI - join us to hear more, including how you can contribute.

Trustworthy AI

The power of AI to serve people is undeniable, but so is AI's ability to cause adverse impacts at an unprecedented scale. Widespread adoption of AI comes with several risks and challenges including unintended consequences due to bias in the data and in the teams creating AI, overestimating the accuracy of AI systems, and intentional misuse of AI (e.g., for surveillance).

AI is not magic - it makes mistakes, just like humans do. While AI reflects our own human imperfection (e.g. our biases), it needs to be better than our imperfect human systems: AI systems can operate at a massive scale, so the impact of their errors can be significant.

As a result of numerous cases of AI systems discriminating and causing harm, trust in AI is at an all-time low. According to Accenture’s 2022 Tech Vision research, only 35% of global consumers trust how AI is being implemented by organizations, and 77% of respondents think that organizations must be held accountable for their misuse of AI.

Trust is key to AI adoption.

AI systems and data are reflections of those who collected the data and those who designed the technology. They reflect and reinforce both personal and institutional biases. Achieving trustworthy AI requires improving trust both in technology and through technology, using technological and non-technological approaches such as:

  • Transparent, clear, and equitable communication about when AI is being used, potential harms related to the use of data and AI, and the paths for remedy and redress when harm has occurred. This communication needs to be appropriately designed for use in the most vulnerable communities and fragile contexts.
  • Continuous monitoring of AI systems’ behavior over time, with retraining as needed. This means not just pre-deployment model validation but continued testing post-deployment, with fall-back options in case models stop performing accurately.
  • Educating AI users to understand how and why AI makes decisions and how to deal with probabilistic outputs.
  • Building technology systems that are less biased than the current human systems and processes to close the inequity gaps and advance human rights.
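The monitoring-with-fall-back pattern above can be sketched in a few lines of Python. This is an illustrative example under assumed names and thresholds - it is not an implementation from any particular NetHope Member's deployment:

```python
from collections import deque

class MonitoredModel:
    """Wrap a model with rolling-accuracy monitoring and a fall-back.

    `model` and `fallback` are any callables mapping an input to a
    prediction; the window size and accuracy threshold here are
    hypothetical and would be tuned per use case.
    """

    def __init__(self, model, fallback, window=100, min_accuracy=0.8):
        self.model = model
        self.fallback = fallback
        self.results = deque(maxlen=window)  # recent correct/incorrect flags
        self.min_accuracy = min_accuracy

    def record_outcome(self, was_correct: bool):
        # Called once ground truth for a past prediction becomes known.
        self.results.append(was_correct)

    def healthy(self) -> bool:
        # Trust the model until enough labeled outcomes accumulate.
        if len(self.results) < 20:
            return True
        return sum(self.results) / len(self.results) >= self.min_accuracy

    def predict(self, x):
        # Route to the fall-back whenever monitored accuracy degrades.
        return self.model(x) if self.healthy() else self.fallback(x)
```

The key design point is that post-deployment monitoring is continuous: outcomes keep flowing into the rolling window, so the system can both degrade to the fall-back when accuracy drops and recover once the model (or a retrained replacement) performs well again.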

More on the topics of trust and trustworthy AI - and technology more broadly - will be explored in several sessions at the NetHope Global Summit 2022.

In closing, as you get started with AI consider the following:

  1. Starting small, with projects that can deliver immediate impact.
  2. Setting the right objectives for AI adoption. Ask yourself: What pain points need to be addressed? Is AI the best solution for this problem? Who may be impacted by AI?
  3. Focusing on the use cases that augment, not replace, people.
  4. Building transparent AI solutions where the outcomes are explainable.
  5. Bringing in expertise through partnerships or leveraging open-source initiatives like Call for Code.

Join the conversation at the NetHope Global Summit 2022. Here are some of the sessions we’re excited about:

  • Data – The Fuel for Artificial Intelligence
  • Beyond Pilots: Sustainable Implementation of AI/ML in the NGO World
  • Empowered Participation of Women and Girls in AI ‘for Good’ Initiatives (First Look)
  • Advancing Human Rights with Technology
  • Digital Innovation for the Coming Decade
  • Anticipatory Action for Climate Resilience
  • Digitally Enabled Climate Advisory Services

Register for NetHope Global Summit here.

