By Leila Toplic, NetHope Emerging Technologies Lead
To create AI solutions that are ethical, sustainable, and beneficial for all, we need nonprofit organizations to get involved in their design and implementation. NGOs, such as the 59 global NGOs that are NetHope members, have much to contribute, including deep knowledge of the problems that need to be solved (poverty, refugee crises, education, infectious disease outbreaks, human trafficking) and of the cultural, economic, and political contexts in which those problems exist. Nonprofits can help ensure that the most marginalized are included in the design and use of new solutions, and are aware of how those solutions might affect them and their communities.
Over the past few months, NetHope’s AI Working Group, USAID, MIT D-Lab, and NetHope member Plan International have partnered to deliver a set of AI Ethics webinars to equip the nonprofit sector with the information it needs to implement AI responsibly and ethically.
In the first three webinars in this series, we focused on how to design ethical AI solutions and programs:
- In PART I, we provided an introduction to ethics and an example from the nonprofit sector of what can go wrong with AI and how to address the issues—specifically the issue of bias.
- In PART II, we talked about some of the values that should guide responsible use of AI in practice, along with specific considerations related to fairness. We identified risks that might be encountered when implementing ML/AI tools and outlined several broad approaches to mitigating those risks and engaging in responsible development and use of AI.
- In PART III, we explored how ethical considerations—specifically, the principle of Fairness—arise in the process of developing machine learning models, and how technical choices made during model development affect the fairness of their use.
Here are the four key takeaways from my conversations with Amy Paul (USAID), Nora Lindstrom (Plan International), Amit Gandhi (MIT D-Lab), and Maria Hycinth Umaran (Plan International).
1. Data is not neutral. AI is a data-driven technology—AI systems require massive amounts of data. So when we talk about ethical technology solutions in the context of AI, we need to give proper attention to data: it can be used to inform and create valuable solutions that people need, but it can also be used in discriminatory and exploitative ways. Data is never neutral, particularly historical data, because the world has never been equal and neither have data collection practices.
2. Bias is pervasive, even if unintentional. In Part I, we talked about a few of the most recent and recurrent ethical issues in this space: intentional harms, infringement on rights and values, and unfair outcomes such as discrimination and prejudice stemming from bias embedded in AI systems. In the same webinar, we went through an example from Plan International showing how AI systems can learn to reproduce existing societal discrimination and propagate bias. What that showed is that even with the best intentions, we can create biased solutions. While AI systems can learn to reproduce, maintain, and scale existing societal discrimination and propagate social inequalities, they could also be deliberately designed to advance equality: by acknowledging the bias inherent in any technology solution, we can account for it and steer systems toward more equitable outcomes.
3. Fairness is complex. We talk about "fair" as though we all agree on what it is, but it is not something everyone defines the same way. As discussed in Part II of the series, fairness is not one thing—it requires diverse perspectives to discuss and determine the best approach in context. No single step guarantees fairness, and it is not always obvious how to apply the principle in a real-life situation. For example, you might need to balance the need for privacy (e.g. not collecting any personal data) against fairness (which might require you to understand better who your solution is serving, or failing to serve). In this instance, prioritizing privacy might introduce bias, because you can no longer measure whom your solution underserves. As designers, we will often need to engage in a collective process of deliberation and decide what is most important in a given context.
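To make the privacy/fairness tension concrete, here is a minimal sketch (plain Python, with hypothetical data—not code from the webinars) of one common fairness check, the demographic parity difference: the gap in positive-outcome rates between two groups. Note that even this simple check requires the very group attribute a strict "collect no personal data" stance would forbid.

```python
# Minimal sketch of a demographic parity check.
# The data below is hypothetical; in practice, predictions would come from
# a model and group labels from (privacy-sensitive) demographic data.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-outcome rates between two groups.

    0.0 means both groups receive positive outcomes at the same rate;
    larger values indicate one group is favored over the other.
    """
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model decisions (1 = approved, 0 = denied), split by group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 0.75 approval rate
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 = 0.375 approval rate

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several (sometimes mutually incompatible) fairness criteria; which one is appropriate is exactly the kind of contextual judgment the webinar series discusses.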
4. Intentionality is imperative. We need to look for bias and take deliberate steps to optimize for fairness—lack of intentionality will almost certainly not result in fair outcomes given how much bias (intentional or not) is already embedded in the data we have and the systems in which we work. In practice, responsible innovation involves a number of intentional steps:
- Forming a diverse development team that ensures equal representation.
- Deciding what problems to focus on solving (e.g. infectious disease outbreaks, refugee crises, climate change).
- Determining what values to embed in the solution.
- Evaluating what a solution might enable, what it might make no longer possible, and whom it might empower or disempower.
- Working collaboratively, including with marginalized communities, to identify how those values should be embedded in practice and what steps to take at each stage of the project to check whether and how those values are upheld. Responsible innovation needs to be intentionally inclusive.
We hope you’ll take time to review the webinars, learn about the key AI ethics concepts, and apply them in your work. Our next step is to host a set of virtual workshops focused on practical application of the concepts and frameworks covered in the webinars. Workshop participants will have the opportunity to explore the questions surrounding the principle of fairness in the context of several use cases from the humanitarian and international development sector, including Education, Health, Agriculture, Workforce, and Humanitarian Response. Register here if you’d like to hear about future workshops.
The AI Ethics webinar series was envisioned and produced by Leila Toplic (NetHope), Amy Paul (USAID), Nora Lindstrom (Plan International), Amit Gandhi (MIT D-Lab), and Kendra Leith (MIT D-Lab).