NetHope's new Emerging Technologies Working Group: Focus on AI and blockchain

January 22, 2019

By Leila Toplic, Lead for NetHope’s Emerging Technologies Working Group

Today’s news is filled with stories about Artificial Intelligence (AI) and blockchain. These technologies promise massive benefits to nearly every type of organization, including those focused on driving social impact.

The AI for Good workshop on human-centered, ethical AI, held during NetHope's Global Summit in Dublin in November 2018.

While there are concerns about the readiness of these technologies for our sector, and challenges related to data, infrastructure, and ethics, staying on the sidelines of the development of powerful technologies like AI and blockchain would mean the nonprofit sector's important voice goes unrepresented. By actively exploring and testing these technologies in our contexts and programs, nonprofits will not only gain new tools to tackle some of the most pressing problems we're tasked with solving, but will also ensure that the development and evolution of those technologies happens with our input. After months of consultations with NetHope members, tech partners, academic institutions, and others actively involved in AI and Machine Learning (ML) and blockchain, it became clear that a formalized, continuous approach to sharing and learning together, as well as collaborating on new programs and solutions, is necessary.

This is why NetHope is setting up a new Emerging Technologies Working Group, with an initial focus on two technologies: AI and blockchain.

This Working Group, spearheaded by NetHope members, will take a sector-wide approach to integrating AI and blockchain into humanitarian, development, and conservancy work. Initially, the Working Group will focus on three areas identified as immediate needs at the NetHope Global Summit:

  • Education – capacity needs will be addressed via workshops, webinars, conferences, and the NetHope Solutions Center.
  • Programs – the Working Group will facilitate collective impact collaborations focused on leveraging these technologies to solve problems across humanitarian, development, and conservancy contexts.
  • Toolkits & standards – the whole sector will benefit from toolkits grounded in the learnings from our collective work, with a focus on scaling the most promising programs, processes, and methodologies.

You too can play a role in this Working Group. The key to a successful Working Group is the involvement of individuals and organizations actively working on AI/ML and blockchain technologies, as well as NGOs that know the problems that need to be solved, the contexts that need to be planned for, and the barriers that need to be overcome to make ethical and widespread adoption possible.

NetHope members can register here for the Working Group. We also encourage other thought leaders in the space – technologists, humanitarian agencies, academic institutions, philanthropic organizations – to contact us to become involved.

For those interested in reading more about the insights that informed the formation of the Emerging Technologies Working Group, I’ve put together a brief recap of the AI for Good sessions from the recent NetHope Global Summit. We'll provide highlights from Blockchain for Good discussions in the next post.


AI for Good plenary session at the 2018 NetHope Global Summit, featuring tech partners Microsoft, Google, and IBM, as well as funder USAID and NetHope member CRS.

In November 2018, I hosted a set of conversations on the topic of AI for Good at the NetHope Global Summit in Dublin. I was joined by experts from the NGO community, including NetHope members like CRS, NRC, Oxfam, and Plan International, as well as Amnesty International and War Child, and by NetHope supporters like Microsoft, Google, IBM, University College Dublin, and the University of Michigan.

Our sessions set out to explore the significance of AI/ML for humanitarian, development, and conservancy contexts. We talked about both barriers and opportunities, the roles of the NGOs and technologists, and the necessary conditions for AI/ML to benefit all.

While we’re still in the early days of AI for Good, many in the NetHope community believe it is important to engage in the discussions now. The reasons are many:

  • Technology advances are now real. While the idea of AI is as old as the first modern computer architectures (the 1950s), in the last 10 years the combination of three things (lots of data, greater and cheaper computing capacity, and better machine learning algorithms) has brought AI out of research labs and into our everyday lives.
  • Problems need to be solved. There is a whole set of problems across the humanitarian, development, and conservancy space that AI, along with other tools, can help us solve; refugee education, natural disaster response and recovery, disease outbreaks, and hunger are just a few areas that could benefit.
  • Solutions need to benefit all. As AI becomes embedded all around us, we in the humanitarian sector have both the opportunity and the responsibility to activate our own expertise. We know the problems that need to be solved, the issues that need to be addressed (including discrimination, bias, and lack of inclusion), and the humanitarian and development principles that need to be considered and incorporated in the development and implementation of AI-powered solutions.

What are some examples of applied AI in our sector?

We're in the early days of applying AI in the social impact sector. In the AI for Good sessions at the Summit, we discussed several examples, including:

  • Enabling broad access to online educational content by underserved populations like millions of refugee youth in the Middle East
  • Predicting food insecurity in Malawi
  • Detecting online hate-speech content for removal
  • Digital credit scoring and agricultural input loans
  • Early warning systems for earthquakes in Mexico
  • Identifying Zika virus reservoirs in the Americas

Many of these and other examples of applied AI in our sector are at the proof-of-concept (POC) stage (i.e., not yet delivering major benefits on a sustained basis) and are focused on improving existing programs and processes rather than creating solutions that would not be possible without AI. As applications of AI/ML in international development move from POCs to broader adoption and scaled impact, we will learn more about what AI/ML can do: what needs it's suited for, what risks need to be prevented or mitigated, and what dependencies need to be accounted for.

What are some of the challenges that prevent adoption and effective use of AI in our sector?

AI/ML is new to our sector and complex. Barriers to adopting AI/ML remain high in comparison to other technologies such as mobile apps. Moreover, how we define, design, implement, scale, and maintain technology solutions has an ethical impact on people's lives. Nonprofits have an obligation to adequately understand what these technologies can do and what their implications are for their work and for the populations they're supporting.

Three challenges were identified at the AI for Good workshop at the Summit:

  • Lack of knowledge about what AI/ML can do today, how to evaluate whether AI/ML is the right solution for a given problem, and how to frame problems for AI/ML.
  • Lack of technical expertise and resources to develop AI/ML solutions. NGOs don't have AI/ML technical resources in-house and don't always know how to access expertise outside of the sector.
  • Lack of “good” processes for developing, scaling, and sustaining AI/ML applications. This includes processes for ensuring ethical development and implementation of AI-enabled solutions.

Tech companies that attended the Summit sessions said they are looking to understand what types of problems humanitarian organizations are working to solve and to support them with expertise and resources. This can be accomplished through trusted coordinating bodies (like the Emerging Technologies Working Group) where both humanitarian agencies and tech experts can share expertise, work, and learnings.

What are the necessary conditions for AI to benefit all?

AI-for-development shares challenges with other technology-for-development (ICT4D) efforts, including data, sustainability, inclusion, funding, and oversight, so the necessary conditions must include addressing those challenges and applying existing frameworks and principles such as the Principles for Digital Development.

But there are two other important conditions that need to be considered given how nascent AI/ML is in our sector: Resources and Knowledge.

Resources are necessary for NGOs to explore and incorporate AI into their programming. Funding could come in the form of grants, employee volunteering, or product donations (e.g., Azure credits). Here are a few examples of organizations that have already established AI for Good programs and are actively looking for ways to engage with the NGO community so that resources and expertise can be applied in an effective and sustainable way.

  • Microsoft: In September 2018, Microsoft launched its AI for Humanitarian Action initiative, a combination of financial grants, partnerships, and technology investments, in addition to technical expertise, with a focus on disaster recovery, addressing the needs of children, protecting displaced people, and promoting human rights.
  • Google: In October 2018, Google issued an open call to organizations around the world to submit ideas for how they could use AI to help address societal challenges. Proposals are due this month (January 22, 2019), and selected organizations will receive support from Google's AI experts, Google.org grant funding from a $25 million pool, credits and consulting from Google Cloud, and more.
  • IBM: IBM's Science for Social Good initiative partners IBM Research scientists and engineers with academic fellows and subject matter experts at NGOs to tackle a range of societal challenges. Over the past three years, IBM has carried out 25 initiatives (chosen out of 200+ put forward by various NGOs) in which more than 120 IBM scientists have applied their expertise in AI, cloud computing, deep science, and more to collaboratively solve challenging problems put forth by these NGOs.
  • USAID has funded numerous proposals, for example, through its Grand Challenges or through the Development Innovation Ventures open innovation initiative, leveraging machine learning in a variety of sectors.

Knowledge of AI/ML needs to be transferred from a few experts at tech companies and research institutions to many, including humanitarian staff and affected communities, so they can become active participants in and creators of solutions, and better informed about how AI-powered products and services might impact their work and daily lives. That can happen through training or through the co-creation of AI-enabled solutions, which would in turn inform how technologies might evolve in order to meet a diverse set of needs across a diverse set of contexts.

What should the nonprofit sector do today?

Though it is early for our sector, the time for learning about and incorporating AI into our work is now. We need to shift the mindset from “It’s too soon,” to “We need to get started now.”

Here are a few examples of how to get started that were highlighted at the Summit:

  • Focus on the problems that can be solved today with the capabilities that are working well and can complement existing programs, such as computer vision, natural language processing, and sound and video processing. With those capabilities we can tackle all sorts of problems, according to a recent study from McKinsey (see the sketch after this list for a simple illustration of one such capability).
  • Incubate programs focused on those problems, collaborate within our sector and outside of it, and share. Resources and innovations (processes, technologies) should be reused. Reuse could come in multiple forms, including sharing a “reference design” for a chatbot, an example of which I mentioned in a recent blog post, or sharing processes for taking an AI-enabled program from POC to scale.
  • Learn about AI, including what questions you should ask and what risks to pay attention to, such as bias in data, algorithms, and implementations. As noted above, how we define, design, implement, scale, and maintain technology solutions has an ethical impact on people's lives.
  • Get our data “in order.” This includes getting high-quality data that is representative of the populations we're supporting, and understanding the data's context: for example, in Tanzania, multiple people may share one SIM card, and in Ghana, it's common practice to shut phones off altogether when you're not expecting a call.
  • Work with technologists to influence how AI gets developed, applied, and used. While nonprofits need to learn about AI, technologists need to be aware of the societal problems that need to be solved, contexts in which they exist, and needs of the populations that are typically not prioritized for emerging technologies.
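
To make the first point above concrete, here is a minimal sketch in Python (using scikit-learn) of the kind of text-classification baseline that sits behind capabilities like the hate-speech detection example mentioned earlier. It is an illustrative assumption, not a NetHope or partner implementation: the messages, labels, and model choice are made up for the example.

```python
# Minimal illustrative sketch: a baseline text classifier of the kind that
# underpins capabilities like flagging harmful content for human review.
# The messages, labels, and model choice below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples; a real program would need far more data,
# representative of the languages and communities it serves.
texts = [
    "you people do not belong here",        # harmful (flag)
    "the clinic opens at 9am on Monday",    # benign
    "go back to where you came from",       # harmful (flag)
    "does anyone know where to register?",  # benign
]
labels = [1, 0, 1, 0]  # 1 = flag for human review, 0 = leave as-is

# TF-IDF features plus logistic regression: a simple, well-understood baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new message. Whether and when to escalate to a human reviewer is a
# programme and ethics decision, not just a technical threshold.
score = model.predict_proba(["a new incoming message to review"])[0][1]
print(f"probability the message needs review: {score:.2f}")
```

Even a baseline this simple raises the questions discussed above about bias and representative data, which is why the "learn about AI" and "get our data in order" points go hand in hand with any technical work.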

RESOURCES:

Leila Toplic (second from right) is joined at the NetHope Global Summit's AI for Good session by Kentaro Toyama, Nora Lindstrom (left), and Anna Bacciarelli.

Speakers who participated in AI for Good discussions at the NetHope Global Summit:

  • Aubra Anthony leads a “think-tank” style team at USAID's Center for Digital Development that researches the social, cultural, and practical implications of integrating emerging technologies in international development and humanitarian programming, and offers guidance on the responsible use of technologies like AI.
  • Anna Bacciarelli leads AI research and policy for Amnesty International globally, largely documenting and exploring human rights risks and harms, focusing on discrimination and privacy infringements.
  • Cyrill Glockner is a Principal Program Manager in Microsoft's Business AI group, focusing on autonomous systems development.
  • Steve Hellen facilitates the use of technology in Catholic Relief Services' field programs, including new technologies like AI.
  • Brigitte Hoyer Gosselink leads Google.org's work to leverage emerging technology for social good, including Google's own products and expertise. In her work, she supports social sector organizations with funding, Googlers, and products.
  • Nora Lindstrom is the Global Lead for Digital Development at Plan International. She explores the opportunities and threats technology raises for gender (in)equality.
  • Moninder Singh is a Research Staff Member at IBM Research AI and has been involved with the Social Good program at IBM Research since its inception three years ago. During that time, Moninder has worked with several nonprofit organizations to apply AI in the nonprofit sector.
  • Kristin Tolle is an 18-year veteran of Microsoft who has spent most of that time in Microsoft Research. Kristin teaches Machine Learning at the University of Washington and is now a Principal Data Scientist and AI Strategist in the Technology for Social Good team at Microsoft.
  • Kentaro Toyama is a faculty member at the University of Michigan researching digital technology's interactions with international development. Kentaro has a PhD in computer vision (AI), and his AI research contributed to Microsoft's Kinect system.