Allison Gauss

Why We Need Data Ethics to Do Good

Doctors and the medical community live by the motto "first, do no harm," which keeps the welfare of the patient top of mind. Undoubtedly, many nonprofit professionals and humanitarians try to act similarly for the people they serve, but as the industry pushes forward, it has entered a new frontier in which judging what does harm becomes much harder: big data.

Data is king in the modern business world, and more than ever, social impact organizations are striving to collect, analyze, and strategically act on data. Information and communications technology (ICT), which encompasses telecommunications and other tools used to gather, transmit, and store data, is one of the most important assets in this endeavor. While the smart use of data is important for making decisions around donor communications, program execution, and more, today's data technology also has the potential to worsen social problems and even create new ones.

The Problem With Data

We don’t have a framework for data responsibility.

Nathaniel Raymond
Harvard Humanitarian Initiative

Science and technology have served humanity well. From the discovery of penicillin to the invention of the smartphone, we tend to see new developments in technology as inherently good and helpful. According to Nathaniel Raymond, director of the Harvard Humanitarian Initiative's Signal Program on Human Security and Technology, this assumption can blind us to the complex and dangerous consequences of technology used to collect, analyze, and share data.

In his talk on ICT and data ethics at the Collaborative + Classy Awards, he explained that our historic views on personal privacy are now at odds with our ability to collect data and use it for good. Good intentions can prevent us from seeing how some forms of monitoring and data collection violate the rights of the very people they are meant to help. "The stuff we do in Africa might win you a Nobel Peace Prize. Do it in the United States, you might get arrested by the FDA," Raymond told the audience of researchers, nonprofit professionals, and entrepreneurs.

One simple example Raymond pointed to was the work of a colleague in Europe who collected data on a string of assaults at LGBT nightclubs. Academics compiled this data and published maps showing where the attacks were happening. The intention was to inform citizens and encourage safety, but instead, "within 24 to 48 hours, the level of attacks against LGBT people in this city went through the roof." Data intended to protect vulnerable populations was instead used to victimize them. The increase in attacks might have been avoided if there had been a system of data ethics in place that researchers and the social sector could refer to for guidance.

How We Integrate New Concepts

In some ways, our current predicament follows a longstanding cycle by which humans adopt and integrate new ideas and technology, Raymond said in his talk. From the discovery of fire to aviation, we generally progress through four steps.

  1. Initial Adoption – We discover the new concept or technology and begin to see how we can use it.
  2. Friction and Adaptation – Early users run into the technology's negative consequences and adapt their use to avoid these detrimental effects.
  3. Negotiation – Society assesses the value of the new technology along with the risks it carries, and negotiates regulations and best practices to make it safer.
  4. Integration and Acceptance – Society accepts the technology as normal, and it is widely used.

Information and communications technology is currently in the friction and adaptation stage, he said. We have begun to use and test the limits of these modern tools, but we are now running into unintended consequences. From each misstep we can learn and adjust how we use ICT, but because these tools are already widely used, early mistakes can endanger people even when the intention is to help. To prevent undue harm and allow humanity to fully integrate ICT, we need to reach agreements about how it can be used.

That's where the negotiation stage comes in: rules are set and a framework is created that allows people to use data responsibly. In his talk, Raymond used the example of the automobile to illustrate how friction can lead us to develop and implement a system of rules that helps society use a new technology safely. Because driving too fast causes accidents, and those accidents cause deaths and injuries, we adopted speed limits and seatbelt laws. While the laws governing cars and drivers continue to be tweaked, by now we have negotiated a fairly comprehensive system dictating how cars can be used. In a similar fashion, to harness the power of ICT and big data without causing harm, many different sectors must collaborate to create a framework of data ethics.

Who’s in Charge of Data Ethics?

One obstacle to creating a system of regulations and principles for data ethics is that none of the social sector, academia, or government has yet laid claim to the task. "Right now," Raymond said in an interview, "the way I would describe it is that we're all standing at the revolving door saying 'after you,' 'after you,' 'after you.'" Although government may be best suited to enforce a system of data ethics, the academic and humanitarian communities are at the forefront of applying ICT. Intentionally or not, none of these actors has taken responsibility for this daunting endeavor.

"I think, frankly, it will start with the funders," said Raymond. He suggests that when funders see the need to invest in data ethics and people in the field listen to the feedback of affected populations, a framework can be built.

In the absence of such a framework, Raymond has implored social impact organizations to halt work that could have dire, if unintended, consequences. "If you can't answer certain harm questions, don't do it. You're experimenting, sometimes, on the most vulnerable people in the world at the worst moment of their life."


Nonprofits that never explore new initiatives and possibilities do little to push the sector forward, but it can be just as harmful, if not more so, to charge ahead without considering how a new technology or program will affect stakeholders. As Raymond demonstrated in his talk at the Collaborative + Classy Awards, we must not assume that ICT or other technologies are inherently good or safe. Instead, they should be treated as tools to be evaluated, tested, and regulated for the safety and security of all.

