Local mappers use OpenStreetMap to contribute data and create community-powered maps.
By Leila Toplic, Head of Emerging Technologies Initiative, NetHope, and Bo Percival, Director of Technology and Innovation, Humanitarian OpenStreetMap Team (HOT)
Our world is facing ever more intense and protracted humanitarian crises, and as a result, the pressure to bring the best, most appropriate tools to bear and to innovate through technology is constant. In the discussions at the NetHope 20th Anniversary Summit last month, speakers offered insights into the challenges and opportunities of technology across sectors. They also offered practical guidance on how nonprofits and their partners can approach designing and using powerful technologies such as AI in ways that benefit society and the planet. Here are four actions nonprofits can take today to innovate responsibly:
Build a responsible culture, with the systems to support it.
Recent years have seen increasing interest in technology ethics, leading to a proliferation of ethical principles and frameworks from a wide range of institutions and companies, all intended to guide the responsible development and use of technology. Earlier this year, PwC analyzed 200+ principles from over 90 organizations and identified 10 core AI ethics principles. Establishing principles, however, is just the starting point. Concrete practices that translate principles into action are crucial for developing responsible and humane solutions to the problems individuals and communities face. These include processes for considering ethical issues when constructing AI systems and relevant training for the developers who build them. At the NetHope Summit, Dr. Nicolas Jaccard from Orbis emphasized that “there is no shortcut for responsible innovation. It’s a set of processes and practices that are applied to everything that the organization does.” Here are some examples and resources that anyone can use to build a responsible culture:
- Resource. NetHope has developed two toolkits (AI Suitability toolkit, AI Ethics toolkit) and a training program for building capacity in the nonprofit sector to responsibly develop and use AI. At the Summit, Lance Pierce, CEO of NetHope, called for “extra diligence around the tools that allow us to help more people in more places.” To exercise that diligence, we must develop the capacity to design and use technology responsibly. Gartner predicts that “by 2023, all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.”
- Example. Orbis, a NetHope member, has developed a number of practices to guide responsible innovation across its products and programs. These include: setting practical approaches to assess data for bias (a minimal sketch of such a check follows this list); building active countermeasures to minimize the risk of harm to users; involving local communities and experts and ensuring that they have a voice throughout project design and implementation; prioritizing human-in-the-loop approaches (e.g., clinicians are involved in the eye-screening procedure); and planning for sustainability from the start by creating pathways for the solution to be used and maintained locally.
- Resource. As part of the Responsible Use of Technology project, of which NetHope is a member, the World Economic Forum has documented practical resources for organizations to operationalize ethics in their design and use of technology, and has published two reports on the practices of technology companies (Microsoft, IBM). Kay Firth-Butterfield, Head of AI at the World Economic Forum, highlighted that we “should not be using any tools unless they conform to responsible AI practice.”
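To make the bias-assessment practice above concrete, here is a minimal sketch in Python of the kind of pre-training check a team might run on its data. The column names, the made-up data, and the 80% disparity threshold are illustrative assumptions, not Orbis’s actual method:

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Summarize how each group is represented in the data and how often
    it receives the positive label."""
    return df.groupby(group_col).agg(
        share_of_rows=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )

def flag_disparate_rates(report: pd.DataFrame, threshold: float = 0.8) -> list:
    """Flag groups whose positive-label rate falls below `threshold` times
    the highest group's rate (the informal '80% rule' heuristic)."""
    best = report["positive_rate"].max()
    return [group for group, rate in report["positive_rate"].items()
            if best > 0 and rate / best < threshold]

# Illustrative usage with made-up eye-screening referral data:
df = pd.DataFrame({
    "region":   ["urban", "urban", "urban", "rural", "rural", "rural"],
    "referred": [1, 1, 1, 1, 0, 0],
})
report = representation_report(df, group_col="region", label_col="referred")
print(report)
print("Groups needing review:", flag_disparate_rates(report))
```

A check like this is only a starting point: any disparity it surfaces still needs to be interpreted together with domain experts and the affected communities.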
Be intentional about responsible design and use.
Gartner estimates that 85% of all algorithms will deliver erroneous outcomes by 2022 because of bias in the data and in the teams creating them. At the NetHope Summit, Brad Smith, President of Microsoft, said, “technology is part of the solution for every good cause that exists in the world, but technology has also become a problem.” While the risk of AI/ML reinforcing existing social biases is real, we also have an opportunity to intentionally build technology systems that are less biased than the human systems and processes they replace. Here are a few things you can do to get started. First, design technology systems for well-defined problems and clear outcomes, with adequate resources and the meaningful participation of individuals and communities. This will not only help you achieve the impact you envision; it can also limit the likelihood of unintended consequences and surface risks and issues early enough to mitigate them. Second, engage in multi-stakeholder collaborations for greater, timelier, and more sustainable impact. At the Summit, Jan Egeland, Secretary General of the Norwegian Refugee Council, said, “We need partnerships like never before. We cannot meet these challenges alone.” Third, ask questions such as:
- Is the use of technology in your context solving a relevant problem? Organizations might be tempted to use AI-enabled technology for reasons that are not directly related to the problem that needs to be solved (e.g., to appeal to donors), or in contexts where it is not actionable (e.g., lack of connectivity, low digital literacy, language or cultural barriers, the political situation). It’s important to determine how the application of technology adds value, e.g., timelier interventions, broader reach, and better outcomes.
- Do you have the resources and management buy-in to develop, implement, monitor, and sustain new AI systems? According to Gartner, only half of AI projects make it from pilot into production, and those that do take an average of nine months to get there. It’s our responsibility to think about the sustainability of the solution from the start and to have a plan for monitoring its performance, fixing issues, and maintaining the impact.
- Do you have, or can you get, access to the required data, preferably in digital format? If clean, formatted, representative data is not available, technology like AI/ML will fail to deliver value (see the data-readiness sketch after this list).
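As a rough illustration of the data-readiness question above, the following sketch shows a few basic checks a team might run before committing to an AI/ML project. The thresholds, field names, and sample data are assumptions for illustration only, not standards:

```python
import pandas as pd

def data_readiness_checks(df: pd.DataFrame, required_cols: list,
                          min_rows: int = 1000, max_missing: float = 0.10) -> dict:
    """Run a handful of basic readiness checks on a candidate dataset.
    Thresholds are illustrative and should be set per project."""
    present = [c for c in required_cols if c in df.columns]
    # Highest share of missing values across the required columns:
    missing_share = df[present].isna().mean().max() if present else 1.0
    return {
        "has_required_columns": len(present) == len(required_cols),
        "enough_rows": len(df) >= min_rows,
        "acceptable_missingness": missing_share <= max_missing,
        "no_duplicate_records": not df.duplicated().any(),
    }

# Illustrative usage with a tiny made-up dataset:
df = pd.DataFrame({"age": [34, None, 51], "outcome": [1, 0, 1]})
print(data_readiness_checks(df, required_cols=["age", "outcome"], min_rows=100))
```

If checks like these fail, that is a signal to invest in data collection and cleaning first, rather than proceeding with the AI project as planned.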
Prioritize sustainable and participatory accountability.
Centering the needs of vulnerable populations means a number of things beyond meeting their most pressing needs. It means adopting a human-in-the-loop approach in the design and use of automated technologies, where, for example, those using the AI systems (e.g., clinicians) feel empowered to override them; a minimal sketch of this pattern follows the list below. It also means operationalizing accountability throughout the project lifecycle, from problem definition and design to implementation and sustainability. To operationalize accountability, consider:
- Scanning for unintended consequences, and not developing systems that may cause harm or amplify existing structural inequities and biases. The key to realizing the full value of AI responsibly is applying it to the right problems and contexts, and having realistic expectations of what the technology can achieve.
- Developing an accountability framework that enables you to act on the feedback from those affected by the issues, monitor performance of technology systems, and redress possible harms when things go wrong.
- Operationalizing participatory accountability across the whole project lifecycle. Participatory accountability should begin at the design stage and be an intentional, integrated part of any responsible innovation. It also needs to be a key aspect of any agile feedback process, to make sure it isn’t a “one-and-done” exercise.
- Including a diverse set of perspectives in the co-creation process. Prioritize the active participation of impacted communities, not just consultation with them. This means working closely with those who will be most impacted to (1) understand what they need and don’t need, and how they feel they will be impacted, and (2) create the common ground from which shared goals can be achieved.
- Having a clear path that connects insights to action by developing an adoption methodology to support integration of new solutions across the organization and its programs.
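To make the human-in-the-loop idea above concrete, here is a minimal sketch of a review-queue pattern in which low-confidence predictions are routed to a human (e.g., a clinician) who can override the model. The interface, the 0.9 confidence threshold, and the stub functions are illustrative assumptions, not any organization’s actual system:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Decision:
    prediction: str
    confidence: float
    decided_by: str  # "model" or "human"

def screen_case(features: Dict,
                model: Callable[[Dict], Tuple[str, float]],
                human_review: Callable[[Dict, str, float], str],
                confidence_threshold: float = 0.9) -> Decision:
    """Route low-confidence model outputs to a human reviewer, who can
    accept or override the model's suggestion."""
    prediction, confidence = model(features)
    if confidence < confidence_threshold:
        final = human_review(features, prediction, confidence)
        return Decision(final, confidence, decided_by="human")
    return Decision(prediction, confidence, decided_by="model")

# Illustrative usage with stub model and reviewer:
stub_model = lambda f: ("refer", 0.72)        # hypothetical model output
clinician = lambda f, p, c: "do_not_refer"    # the clinician overrides the model
print(screen_case({"visual_acuity": 0.4}, stub_model, clinician))
```

Recording who made each decision (the decided_by field) also supports the accountability framework described above, since it makes overrides auditable.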
Participate in shaping the future of digital.
While technology’s impact on our society and the planet is growing, governance and policy still lag behind the rapid deployment of technologies like AI across all contexts: humanitarian, public, and commercial. We in the nonprofit sector are in a unique position to guide and shape the governance and policy that inform innovation. And when we do, we must bring in the voices of the individuals and groups that are typically underrepresented and marginalized. At the NetHope Summit, we had the opportunity to hear about the first-ever global guidelines on the ethics of the development and deployment of artificial intelligence: UNESCO’s Recommendation on the Ethics of AI. The Recommendation defines a common set of values and principles to guide the development of responsible AI globally. Irakli Khodeli from UNESCO highlighted that the Recommendation includes strong language discouraging uses of AI that infringe on or abuse people’s rights: “in particular, AI systems should not be used for social scoring or mass surveillance purposes.” It also emphasizes the impact of AI on climate change and gender, and promotes the participation of “local and indigenous communities throughout the lifecycle of AI systems.” The Recommendation was recently adopted by 193 UNESCO member states, and we at NetHope’s AI Working Group look forward to collaborating with UNESCO on operationalizing the policies it sets forth.
In closing ... As nonprofit organizations and the world around them become increasingly digitally enabled and even automated, knowing how to design, use, and govern technologies such as AI/ML in a responsible, humane, and impactful way is needed now more than ever. And, as Lance Pierce, CEO of NetHope, put it, NetHope will continue to be the place where “the all-critical questions of governance and ethical use can be asked, and where the member community can come together to help shape the standards for use so that in advancing the spread of connected data and digital technology we are at the same time tackling inequality and ensuring protections for basic human dignity and rights.”