
NetHope launches new toolkit to scale AI Ethics conversations among nonprofits

The materials you need to host your own AI Ethics Workshop, focused on the ethical considerations related to the principle of Fairness.

December 14, 2020

By Leila Toplic, Lead for Emerging Technologies Initiative at NetHope, and Nora Lindstrom, Global Lead for Digital Development at Plan International


Today, NetHope released the first installment of the AI Ethics for Nonprofits toolkit! This toolkit will help nonprofit organizations that are exploring, developing, or using Artificial Intelligence/Machine Learning (AI/ML) in humanitarian and international development contexts learn how to optimize for fairness in AI/ML systems and mitigate the risk that bias poses to the individuals and communities we support.

NetHope’s AI Working Group started work on this new resource 11 months ago, partnering with USAID and MIT D-Lab to produce a set of materials, workshops, and discussions that build capacity in the social impact sector to design, deploy, and use AI responsibly and ethically. The launch of this brand-new toolkit is the result of months of development and testing in both broad-reach webinars and targeted virtual workshops with nonprofit representatives from all over the world, and we are excited to share it with you.

Why nonprofits need to talk about AI ethics

Across the world today, over 600 million youth, a third of youth worldwide, are not in education, employment, or training. Marginalized youth, especially girls, often struggle to compete for jobs: they may lack formal training and work experience, and their talents and employable attributes often go unrecognized.

Nonprofit organizations typically rely on Community Development Facilitators and similar roles to help youth understand and ‘formalize’ their skills, and then link them to opportunities. Unfortunately, the demand for this support vastly exceeds the number of Community Development Facilitators.

In 2018, NetHope Member Plan International decided to develop an AI system – a chatbot – to bridge the gap and reach youth at scale, providing consistent, quality support for all youth equally.

While piloting the project in the Philippines, however, Plan International quickly learned that introducing AI into its programming also codified the biases of the chatbot’s users (i.e., youth) into the process. Moreover, a lack of team diversity, biased data, and a lack of intentional inclusion throughout the development process led the system to reinforce gender inequalities in the labor market by giving users gender-stereotypical advice on skills, jobs, and careers. Instead of increasing opportunities for girls in particular, it decreased them.

Plan International’s experience is just one example of how, even with the best intentions, we in the nonprofit sector can create biased AI systems and use AI in ways that cause unintentional harm.

Yet this is not inevitable. And, as Plan International also learned, however pervasive and complex bias may be, it can and should be addressed in order to achieve fair outcomes for those affected by AI systems.

AI Ethics for Nonprofits

In order to guide the design and use of AI toward optimal public benefit, we in the nonprofit sector have a responsibility to understand the risks of the technology solutions we’re creating and using, to learn to anticipate their consequences, and to take deliberate steps to address those risks. This is why NetHope’s AI Working Group has put together a set of AI Ethics resources for nonprofit organizations.

In this first installment of the AI Ethics for Nonprofits toolkit, we focus on achieving Fairness (just and equitable treatment across individuals and/or groups) and avoiding Bias (systematically favoring one group over another based on categories or attributes such as gender, race, age, or education level). Today, bias is one of the most recurrent harms that automated technologies can learn to reproduce, maintain, and scale, particularly technologies developed and used to assist in the delivery of critical services (e.g., to determine who receives food, healthcare, or education).
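To make “systematically favoring one group” concrete, here is a minimal, hypothetical sketch (our illustration, not part of the toolkit) of one common way to quantify such bias: comparing the rate at which a system produces favorable outcomes for each group, often called the demographic parity difference.

```python
# Minimal sketch: quantifying bias as a gap in favorable-outcome rates
# across groups (demographic parity). All data here is hypothetical.

def favorable_rate(predictions, groups, group):
    """Share of members of `group` who received the favorable outcome (1)."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical chatbot recommendations: 1 = referred to a high-paying field.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["boy", "girl", "boy", "boy", "girl", "girl", "boy", "girl"]

gap = (favorable_rate(predictions, groups, "boy")
       - favorable_rate(predictions, groups, "girl"))
print(f"Demographic parity difference: {gap:.2f}")  # 1.00 here: boys always favored
```

A difference of zero would mean both groups receive the favorable outcome at the same rate; in practice, teams set a tolerance and investigate any gap beyond it.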

As decision making and recommendations become increasingly automated, it is critical that we are intentional about producing fairer outcomes and proactive in mitigating the harmful effects of bias. Nonprofits play an important role in ensuring just and equitable outcomes for the individuals and communities we support. This is why the first installment of the toolkit focuses on building the nonprofit sector’s capacity to apply the ethical considerations related to the principle of fairness and to take deliberate steps, throughout the development and implementation of AI systems, to mitigate the risk of algorithmic bias to the end user.

The toolkit

This first installment of the AI Ethics for Nonprofits toolkit provides the materials you need to host your own AI Ethics Workshop, focused on the ethical considerations related to the principle of Fairness, including:

  • Facilitator’s Guide
  • Workshop deck
  • Supporting materials

This toolkit is intended to serve as a resource for technical, program, and operational staff at nonprofit organizations who are exploring or already developing and using AI/ML in humanitarian and international development contexts. With this toolkit, we want to encourage and support you to become a champion for AI ethics in your organization and programs. The toolkit is designed to help you learn some of the fundamentals of AI ethics and then immediately practice applying ethical considerations related to the principle of Fairness in the context of several humanitarian and international development use cases. It includes:

  • AI ethics primer with the key concepts of ethics literacy.
  • Key considerations related to the principle of Fairness in AI/ML projects.
  • An overview of how to design and develop a Machine Learning project with Fairness in mind. We’ve provided a step-by-step process that explores Fairness across all stages of an ML project, from problem definition and data collection to model creation, implementation, and maintenance (a minimal illustration of one stage-specific check appears after this list).
  • A real-world case study from the nonprofit sector that highlights risks, issues, and a path to achieving ethical AI/ML solutions.
  • A set of use cases from the humanitarian and international development sector to practice applying ethical considerations related to the principle of Fairness. We’ve also included some of the practical things you can do to mitigate risks.
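As an illustration of what a stage-specific check might look like at the model-creation stage, here is a hedged sketch of our own (the column names "gender", "label", and "pred" are assumptions for illustration; the toolkit defines its own process) that disaggregates a standard evaluation metric by a protected attribute instead of reporting a single aggregate score:

```python
# Sketch of a model-creation-stage fairness check: disaggregating evaluation
# by a protected attribute rather than reporting one aggregate metric.
# Record fields ("gender", "label", "pred") are illustrative assumptions.

from collections import defaultdict

def per_group_rates(rows):
    """True-positive rate per group: of those who should receive the favorable
    outcome, what share actually receive it? Large gaps flag possible bias."""
    hits, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        if row["label"] == 1:                # truly deserves favorable outcome
            positives[row["gender"]] += 1
            if row["pred"] == 1:             # model granted it
                hits[row["gender"]] += 1
    return {g: hits[g] / positives[g] for g in positives}

# Hypothetical evaluation records
rows = [
    {"gender": "F", "label": 1, "pred": 0},
    {"gender": "F", "label": 1, "pred": 1},
    {"gender": "M", "label": 1, "pred": 1},
    {"gender": "M", "label": 1, "pred": 1},
]
print(per_group_rates(rows))  # {'F': 0.5, 'M': 1.0} -> a gap worth investigating
```

Reporting metrics per group in this simple way is one practical means of surfacing the kind of gender-stereotypical skew Plan International encountered, before a system reaches users.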

We want your feedback

As you start using this toolkit to guide conversations and projects in your organization, we invite you to share your feedback on the toolkit materials, and provide ideas for future installments, including other important AI ethics topics, questions, and use cases that are relevant for the nonprofit sector. Please email your ideas and suggestions to leila.toplic@nethope.org.

Development of the AI Ethics for Nonprofits toolkit was led by Leila Toplic and Nora Lindstrom from NetHope’s AI Working Group, in close collaboration with Amy Paul from USAID and Amit Gandhi and Kendra Leith from MIT D-Lab. We are grateful to USAID for the financial support provided for the development of the materials included in the toolkit. The toolkit also benefited from input from representatives of NetHope’s AI Working Group and the NGO sector, especially Steve Hellen (Catholic Relief Services), Bo Percival (Humanitarian OpenStreetMap Team), and Paola Elefante (Plan International).
