by Greg Lipstein
Finding ways to make big data useful to humanitarian decision makers is one of the great challenges and opportunities of the network age.
UN Office for the Coordination of Humanitarian Affairs (OCHA)
The best minds of my generation are thinking about how to make people click ads… That sucks.
Jeff Hammerbacher, Former Data Manager, Facebook
When social sector organizations think about data, the conversation often begins and ends with measuring impact.
This is an important question that needs data, but there are so many more ways to use data to drive impact. An increasing number of mission-driven organizations are now in a position to use the advances in data technology that are transforming other industries. These powerful tools fundamentally alter how organizations operate, not just how they measure what they do. The truth is: There is more to data-for-good than impact measurement.
For example:
- Where can our organization’s resources have the most impact?
- What are our best opportunities for developing better services or programs?
- Which of our team’s work processes could be more automated?
Across sectors there is a lot of talk about the promise of machine learning, big data, predictive analytics, and artificial intelligence. What is all this about? Why is it happening now? And how might you make it useful for the purposes you care about?
This is a quick introduction intended for the curious manager or analyst* in an impact organization, who is increasingly hearing these terms and is looking for an explanation of what’s going on and how to think about making use of it. Our intent is that this perspective will help you notice valuable opportunities in the organizations that matter to you.
Note: One of the best ways to get a sense of what’s happening is through examples. We include a number of case studies that focus on applications addressing social challenges, because that is what we work on at DrivenData. As you’ll see, these are just instances of broader uses of machine learning, and they can easily extend to applications in other sectors.
Let’s start with what machine learning is.
The idea: A shift in how computers deliver value for humans
Phase I: From the dawn of computers, people programmed rules for computers to follow.
Computers are very good at doing what they are told to do, over and over again, very quickly and without making mistakes. That’s basically all they do (with some memory to store things along the way).
Rules say what should be done: do a, then b; if x, do y. Much of computer-based progress over the past 75 years has come from layering (“abstracting”) longer sequences of rules into shorter ones, over and over again, like booting up Excel with the click of a button. We can now build a new website with advanced functionality in minutes, whereas doing the same 30 years ago would have taken years with less to show for it.
Phase II: With machine learning, computers program themselves by learning from data.
What does that mean? Here’s the classic example:
These are cats and dogs. Humans are very good at looking at a picture of a cat or a dog and telling which it is, by looking at the whole image.
Machines have traditionally had a hard time with this. Remember, computers follow rules that you give them. It’s hard for humans to write out all the rules for what makes a cat different from a dog in a picture (especially when some are wearing glasses!), even though we know it by sight. Not only would we need to give the computer the rules (something about ears or noses, maybe), but we would also have to tell it how to recognize a nose in an image, in any position. This would be incredibly tedious to write out, and is practically impossible.
But computers are now very good at looking at an image and telling if it’s a cat or dog. How?
If you give a computer enough examples of cat and dog images, labeled with which animal it is, the computer can learn the rules on its own. In other words, the computer makes statistical associations between the properties of cat images that make them cat-like and the properties of dog images that make them dog-like. It does this at the pixel level, combining nearby pixels where doing so helps. This takes a lot of memory and a lot of processing power, but we now have enough of both, at a low enough cost, to make it possible.
Once the computer has been “trained” on the labeled images, you can feed it a new image without a label and it can tell what animal it contains. And it can provide a level of confidence for how certain it is, given all the examples it’s seen before. Not only that, but it can look through thousands of images and categorize them all correctly in a matter of seconds.
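The train-then-predict loop described above can be sketched in a few lines. This is a toy illustration, not an image model: the two numeric features (ear pointiness, snout length) and their values are invented stand-ins for what a real system would learn from raw pixels.

```python
# A minimal sketch of supervised learning: fit on labeled examples,
# then predict a label and a confidence for an unseen example.
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: [ear_pointiness, snout_length], with labels.
X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
y_train = ["cat", "cat", "dog", "dog"]

model = LogisticRegression().fit(X_train, y_train)

# A new, unlabeled example: the model returns both a label and a confidence.
new_animal = [[0.85, 0.25]]
label = model.predict(new_animal)[0]
confidence = model.predict_proba(new_animal).max()
print(label, round(confidence, 2))
```

The same two calls, `predict` and `predict_proba`, are what let a trained model race through thousands of new examples and flag the ones it is least sure about.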
Of course, this isn’t super useful unless you are making captions for your very large pet photo album. But this simple process has been used to do things that are changing the way we live.
Putting it into practice: From cats to cancer
The question is: how do we make this idea useful?
Here’s the paradigm we just saw: use examples to work out a set of rules, then use those rules to make inferences when encountering something new. If that sounds familiar, that’s because humans do that all the time. We have all sorts of patterns in our heads for how the world works, often learned from our experiences. We don’t always work out the rules consciously (cats vs dogs), though sometimes we do (alligators vs crocodiles).
Compared with humans, computers have the distinct advantage of doing this at a much larger scale and much more quickly than we can, and the distinct disadvantage of only being able to use information that has been captured and provided in a machine-readable way. As increasingly more data is created and captured, the advantage dominates the disadvantage in many instances that matter for organizations. Let’s consider a few that we have encountered in our work.
Use 1: Free up human attention through automation
Case: Smart school budgeting
Education Resource Strategies helps school districts organize their resources and understand how their spending compares with others. With little consistency in reporting standards across districts, the process of saying whether a budget line item was for transportation, teachers’ salaries, facilities, or any of over 100 other categories took months of an analyst’s time for each project. Using examples of labeled budgets from the past, algorithms can predict the right spending categories for new budgets automatically and report their confidence in each result, freeing up months of staff time for interpreting the comparisons and helping schools implement better practices.
What’s happening here? Humans are already following implicit rules when doing some parts of their work. For example, the abbreviation “TCHR” probably means teacher, especially if it’s accompanied by “1st grade”. By looking at examples, an algorithm can discover rules like this—and, importantly, even ones that are more complicated than we can wrap our heads around—and apply them to new cases.
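A sketch of how this kind of labeling rule gets learned from text: the line items and category names below are invented for illustration, not ERS’s actual data or model, but the pattern of vectorizing text and fitting a classifier is the standard one.

```python
# A hedged sketch of learning spending categories from labeled budget
# line items, then predicting a category (with confidence) for a new one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples of labeled line items.
line_items = [
    "TCHR SALARY 1ST GRADE",
    "SUBSTITUTE TCHR WAGES",
    "SCHOOL BUS FUEL",
    "BUS DRIVER OVERTIME",
    "BLDG MAINTENANCE SUPPLIES",
    "HVAC REPAIR MAIN BLDG",
]
categories = [
    "teacher salaries", "teacher salaries",
    "transportation", "transportation",
    "facilities", "facilities",
]

# Turn text into word-frequency features, then fit a classifier on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(line_items, categories)

# Predict a category and a confidence score for an unseen line item.
new_item = ["1ST GRADE TCHR STIPEND"]
predicted = model.predict(new_item)[0]
confidence = model.predict_proba(new_item).max()
```

Notice that no one wrote a rule saying “TCHR means teacher”; the model picked up the association from the labeled examples, which is exactly the kind of implicit rule described above.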
A few other use cases we’ve seen in our work:
- Lung cancer detection: Cancer-fighting engineers use thousands of examples of CT scans previously labeled by clinical teams to programmatically flag concerning nodules from early screens, prioritize follow-up for those who need it, and streamline reporting for radiologists.
- Wildlife research and conservation: Anthropologists and conservationists use examples of labeled camera trap footage to build tools that autonomously identify which frames contain wildlife and which species appear.
- Safe aging: Researchers of safe aging at home use examples of activity logs and sensor readings from wearables to model the physical safety of seniors living independently at home, watching for dangerous scenarios like falls.
Cases like this, where we automate or semi-automate a process that used to take a lot of human time, have a number of benefits:
- They make use of manual efforts from the past, where humans have provided the types of outputs that machines can learn from.
- Machines take care of more and more routine cases and call attention to uncertain cases for human review.
- Computers may notice patterns that humans do not, and bring that learning to bear on new cases.
- More broadly, machines and humans each spend more time where they are most useful.
Human time and resources are freed up for important work like responding to emergencies, interpreting wildlife behavior, and communicating with patients.
Use 2: Illuminate strategic insights for planning and product design
Case: Distributing water from clouds
Dar Si Hmad (DSH) harnesses clouds to help alleviate water shortages in southwest Morocco. Typically, women and girls spend up to four hours a day collecting poor quality water from wells and carrying it back to their communities. DSH’s fog collection nets trap water coming over nearby mountains and distribute it to local communities. Using several years of data about weather patterns and resulting yield from the nets, machine learning models can provide a better picture of what factors drive water output and how to position new nets for the greatest impact.
In this case, there are a lot of possible relationships among factors (location, altitude, seasonality, humidity, etc.) that might matter to what we care about (water yield). There is a lot of weather data, and it’s hard to know which of these factors will help make better strategic decisions about where to place new nets.
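One common way models surface which factors matter is by fitting the data and then inspecting feature importances. The sketch below uses entirely synthetic data: the factor names and the formula generating the yield are assumptions for illustration, not DSH’s actual measurements.

```python
# A minimal sketch of using a model to surface which factors relate
# most strongly to an outcome like water yield. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
humidity = rng.uniform(0.2, 1.0, n)
wind_speed = rng.uniform(0.0, 10.0, n)
altitude = rng.uniform(400, 1200, n)

# Pretend yield depends mostly on humidity, a little on wind, not on altitude.
water_yield = 50 * humidity + 2 * wind_speed + rng.normal(0, 1, n)

X = np.column_stack([humidity, wind_speed, altitude])
model = RandomForestRegressor(random_state=0).fit(X, water_yield)

# The model recovers the relative influence of each factor from the data.
for name, score in zip(["humidity", "wind_speed", "altitude"],
                       model.feature_importances_):
    print(f"{name}: {score:.2f}")
```

The point is not the specific model but the workflow: fit on past examples, then ask the model which inputs actually drove the outcome, and let that inform where to place the next net.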
If you want to change the future, it helps to get the best possible vision of what that future looks like. By identifying factors that trend together from many different examples in the past, we can make better inferences about the relationships that will shape future events and behaviors, from purchasing a recommended product to being readmitted to a hospital. Where are the biggest gaps or pain points in current offerings? What are common experiences or segments of users that act similarly? How do trends in what we see help us manage the future rather than the past?
A few other use cases we’ve seen in our work:
- Financial inclusion: Designers use millions of examples of mobile money transactions to inform new interventions which increase use of mobile money in Tanzania, providing access to critical financial tools that have not been available to large portions of the population.
- Reproductive health: Healthcare providers use examples of the care pathways that patients take from one visit to the next to understand and support the role of specific early services in promoting better health outcomes.
- Remote sensing for smallholder farmers: Development organizations are starting to use examples of satellite imagery and farm production records to sustainably infer information about what farmers are growing, so that farmers can access stabilizing financial services like loans for agricultural inputs or crop insurance.
Use 3: Target services for greater impact
Case: Restaurant safety
The City of Boston regularly inspects every restaurant to monitor food safety for public health. A handful of health inspectors are tasked with assessing risks across the city and catching hygiene violations before they spread. Meanwhile, each year millions of people read and write Yelp reviews about experiences at these same restaurants. By comparing public hygiene records with the Yelp reviews left in the weeks leading up to the inspection, algorithms were developed to predict the incidence and severity of new health risks based on recent reviews. The city has put the top algorithm to the test and is finding 25% more health violations with the same number of inspections as before.
We should all be in favor of our governments using limited resources as effectively as possible. In this case, Boston wants to catch health risks before they pose a danger to citizens. In other cases, a school might want to offer tutors to kids at risk of falling behind or match online lessons with the students who are most likely to be interested and prepared. Past examples (data) provide us with clues as to where there is a need or opportunity based on information we have access to for new cases.
This use of machine learning is about personalization and prioritization, and we see it all over the place. Google uses examples of online behavior to match search results and ads with searchers likely to find them relevant and useful. Netflix uses examples of viewing behavior to help you discover new entertainment through its recommendation engine (and even tailor cover art). Around 75% of the content people watch on Netflix comes from these recommendations.
A few other use cases we’ve seen:
- Public benefits: Benefits platforms can use examples of public assistance and eligibility rules to match available public services with those who are likely to qualify and engage.
- Development in higher education: Advancement teams use examples of donations connected with available social media data to predict where large gifts are more likely to come from and prioritize outreach.
- School programs: K-12 educators are starting to use examples of past student performance and digital learning interactions to identify which students are at greatest risk of dropping out or falling behind, then target interventions earlier.
These are just a few areas where machine learning has the potential to help humans better understand the inner workings of big challenges and apply these learnings at scale to improve people’s lives.
The road ahead
Computers can store, process, and learn from far more examples than people can, so it’s clear why organizations are investing so much in machine learning. On top of that, our machines can continue to update what they have learned as more examples come in. And they can act on this intelligence quickly, cheaply, and at scale. Increasingly, the organizations that do not take advantage of these approaches will be left behind.
This does not mean that humans are obsolete—not even close. Tasks that rely on complex interpretation and judgment, and the management of relationships and thoughtful communication, are still done far more effectively by humans. Moreover, humans still need to define the machine learning approaches that computers take: scope the problems, curate the data, design the technical approach, and decide on the right way to use the output.
There is still a long way to go in making many machine learning applications useful and user-friendly in real-world settings. There are tradeoffs between how well algorithms work, the amount of data and computational power they need, and our ability to interpret how they are making decisions. And there are many implications of bringing algorithms into decision-making that humans need to own, like perpetuating historical bias or enabling the spread of misleading or hateful content. These will be big topics in machine learning in the year ahead.
Challenges like these will involve technologists, managers, and civil society working together to ensure that the benefits of these approaches far outweigh the risks. As the field evolves, so will our understanding of what it means to use its capabilities wisely.
Ultimately the potential for machine learning to serve humanity and nature is immense. We need more people who understand how to channel this power for the causes they value most.
DrivenData is a social enterprise that brings the power of data science to organizations tackling the world’s biggest challenges. DrivenData works directly with mission-driven organizations to harness data for greater intelligence and impact, and also runs online competitions where data scientists from around the world build algorithms for social good.
* McKinsey has estimated that “By 2018, the United States alone could face a shortage of...1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.”
Banner image of the Earth is courtesy of NASA.