Using Data for More than Operations

While at Stanford to talk about “ethical data” I had a chance to read through the latest issue of the Stanford Social Innovation Review within the walls where it is published.  One particular article, Using Data for Action and Impact by Jim Fruchterman, caught my eye.  Jim lays out an argument for using data to streamline operations and strengthen monitoring and evaluation within non-profit organizations.  This hit one of my pet peeves, so I’m motivated to write a short response arguing for a more expansive approach to thinking about non-profits’ use of data.

This idea that data is confined to operational efficiency is a missed opportunity for organizations working in the social good sector.  When giving talks and running workshops with non-profits I often argue for three potential uses of data – improving operations, spreading the message, and bringing people together.  Jim, whose work at Benetech I respect greatly, misses an opportunity here to broaden the business case to include the latter two.

Data presents non-profits with an opportunity to engage the people they serve in an empowering and capacity-building way, reinforcing their efforts to improve conditions on whatever issue they work on.  Jim’s “data supply chain” presents data as a product of the organization’s work, to be passed up the funding ladder for consumption at each level.  This extractive model needs to be rethought (as Catherine D’Ignazio and I have argued).  The data collected by non-profits can be used to bring the audiences they serve together to collaboratively improve their programs and outcomes.  Think, for example, about the Riders for Health organization he discusses: what might happen if they brought drivers together to analyze the data about their routes and distances?  I wonder about the impact of empowering the drivers to analyze the data themselves and take ownership of the conclusions.

Skeptical that you could bring people with low data literacy together to analyze data and find a story in it?  That is precisely the problem I’ve been working on through my Data Mural work.  We have a process, scaffolded by many hands-on activities, that leads a collaborative group through analyzing some data to find a story they want to tell, designing a visual to tell that data-driven story, and painting it as a mural.  We’ve done this with people around the world, and picking the process apart has left us with a growing toolkit of activities that others are now using too.

Still skeptical that you can bring people together around data in rural, uneducated settings?  My colleague Anushka Shah recently shared with me the amazing work of Praxis India.  They’ve brought people together in various settings to analyze data in sophisticated ways, made approachable because they rely on physical mappings to represent the data.

Charting crop production and rainfall trends over time.
Yes, that looks like a radar chart to me too.

These examples illustrate that the social good non-profits can deliver with data is not limited to operational efficiency.  We need to highlight these types of examples to move away from a story about data and monitoring, towards one about data and empowerment.  In particular, thought leaders like SSIR and Jim Fruchterman should push for a broader set of examples of how data can be used in line with the social good mission of non-profits around the world.

Cross-posted to the civic.mit.edu blog.

Practicing Data Science Responsibly

I recently gave a short talk at a Data Science event put on by Deloitte here in Boston.  Here’s a short write up of my talk.

Data science and big data driven decisions are already baked into business culture across many fields.  The technology and applications are far ahead of our reflections about intent, appropriateness, and responsibility.  I want to focus on that last word, responsibility, which I steal from my friends in the humanitarian field.  What are our responsibilities when it comes to practicing data science?  Here are a few examples of why this matters, and my recommendations for what to do about it.


People Think Algorithms are Neutral

I’d be surprised if you hadn’t heard about the recent flare-up over Facebook’s trending news feed.  After breaking on Gizmodo it has been covered widely.  I don’t want to debate whether this is a “responsible” example or not.  I do want to focus on what it reveals about the public’s perception of data science and technology.  People got upset because they assumed the feed was produced by a neutral algorithm, and the person who spoke with Gizmodo said it was biased (against conservative news outlets).  The general public thinks algorithms are neutral, and this is a big problem.


Algorithms are artifacts of the cultural and social contexts of their creators and the world in which they operate.  Using geographic data about population in the Boston area?  Good luck separating that from the long history of redlining that created a racially segregated distribution of ownership.  To be responsible we have to acknowledge and own that fact.  Algorithms and data are not neutral third parties that operate outside of our world’s built-in assumptions and history.

Some Troubling Examples

Let’s flesh this out a bit more with some examples.  First I look to Joy Buolamwini, a student colleague of mine in the Civic Media group at the MIT Media Lab.  Joy is starting to write about “InCoding” – documenting the history of biases baked into the technologies around us, and proposing interventions to remedy them.  One example is facial recognition software, which has consistently been trained on white male faces, to the point where she literally has to don a white mask to have the software recognize her.  This is just the tip of the iceberg in computer science, which has a long history of leaving out entire potential populations of users.


Another example is a classic one from Latanya Sweeney at Harvard.  In 2013 she discovered a racial bias trained into the operation of Google’s AdWords platform.  When she searched for names that are more commonly given to African Americans (like her own), the system popped up ads asking if the user wanted to do background checks or look for criminal records.  This is an example of an algorithm reflecting the built-in biases of the population using it, who believed that these names were more likely to be associated with criminal activity.

My third example comes from an open data release by the New York City taxi authority.  They anonymized and then released a huge set of data about cab rides in the city.  Some enterprising researchers realized that the taxi medallion ids had been poorly anonymized, and were able to de-anonymize the dataset.  From there, Anthony Tockar was able to find strikingly juicy personal details about riders and their destinations.
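As an aside, the reason that de-anonymization was so easy is instructive.  The medallion identifiers were reportedly hashed, but the space of possible medallion numbers is tiny, so anyone can rebuild the mapping by hashing every candidate.  Here’s a minimal sketch of that idea in Python (the exact medallion format and hash function are assumptions for illustration, not the researchers’ actual code):

```python
import hashlib
import itertools
import string

# Illustration only: assume medallion numbers follow a simple pattern like
# "5X55" (digit, letter, digit, digit), which gives only 10*26*10*10 = 26,000
# possibilities. Hashing such a small space provides almost no anonymity.
def md5_hex(value):
    return hashlib.md5(value.encode("utf-8")).hexdigest()

def candidate_medallions():
    for d1, letter, d2, d3 in itertools.product(
        string.digits, string.ascii_uppercase, string.digits, string.digits
    ):
        yield f"{d1}{letter}{d2}{d3}"

# Precompute a reverse lookup table: hash -> original medallion number.
rainbow_table = {md5_hex(m): m for m in candidate_medallions()}

def deanonymize(hashed_medallion):
    return rainbow_table.get(hashed_medallion, "unknown")

# A "hashed" medallion from a released dataset can be reversed instantly.
print(deanonymize(md5_hex("5X55")))  # -> 5X55
```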

A Pattern of Responsibility

Taking a step back from these three examples, I see a useful pattern for thinking about what it means to practice data science with responsibility.  You need to be responsible in your data collection, data impacts, and data use.  I’ll explain each of those ideas.


Being responsible in your data collection means acknowledging the assumptions and biases baked into your data and your analysis.  Too often these get thrown away while assessing the comparative performance of the various models a data scientist has trained.  Some examples where this has failed?  Joy’s InCoding example is one of course, as is the classic Facebook “social contagion” study.  A more troubling one is the poor methodology used by the US NSA’s SkyNet program.

Being responsible in your data impacts means thinking about how your work will operate in the social context of its publication and use.  Will the models you trained come with a disclaimer identifying the populations you weren’t able to get data from?  What are the secondary impacts that you can mitigate now, before they come back to bite you?  The discriminatory behavior of the Google AdWords results I mentioned earlier is one example.  Another is the dynamic pricing used by the Princeton Review, which disproportionately affected Asian Americans.  A third is the racially correlated pattern in where Amazon offers same-day delivery (particularly in Boston).

Being responsible in your data use means thinking about how others could capture and use your data for their purposes, perhaps out of line with your goals and comfort zone.  The de-anonymization of NYC taxi records I mentioned already is one example of this.  Another is the recent harvesting and release of OKCupid dating profiles by researchers who considered it “public” data.

Leadership and Guidelines

The problem here is that we have little leadership and few guidelines for how to address these issues in responsible ways.  I have yet to find a handbook for the field that scaffolds how to think about these concerns.  As I’ve said, the technology is far ahead of our collective reflection on it.  However, that doesn’t mean there aren’t smart people thinking about this.


In 2014 the White House brought together a team to create their report on Big Data: Seizing Opportunities, Preserving Values.  The title itself reveals their acknowledgement of the threat some of these approaches pose to the public good.  Their recommendations include a number of things:

  • extending the consumer bill of rights
  • passing stronger data breach legislation
  • protecting student centered data
  • identifying discrimination
  • revising the Electronic Communications Privacy Act

Legislation isn’t strong in this area yet (at least here in the US), but be aware that it is coming down the pike.  Your organization needs to be proactive here, not reactive.

Just two weeks ago, the Council on Big Data, Ethics and Society released their “Perspectives” report.  This amazing group of individuals was brought together by a federal NSF grant to create the report.  Their recommendations span policy, pedagogy, network building, and areas for future work.  They include things like:

  • new ethics review standards
  • data-aware grant making
  • case studies & curricula
  • spaces to talk about this
  • standards for data-sharing

These two reports are great reading to prime yourself on the latest high-level thinking coming out of more official US bodies.

So What Should We Do?

I’d synthesize all this into four recommendations for a business audience.


Define and maintain your organization’s values.  Data science work shouldn’t operate in a vacuum.  Your organizational goals, ethics, and values should apply to that work as well.  Go back to your shared principles to decide what “responsible” data science means for you.

Do algorithmic QA (quality assurance).  In software development, the QA team is separate from the developers, and can often translate between the languages of technical development and customer needs.  This model can serve data science work well.  Algorithmic QA can discover some of the pitfalls that the creators of models might not.

Set up internal and external review boards.  It can be incredibly useful to have a central place where decisions are made about what data science work is responsible and what isn’t for your organization.  We discussed models for this at a recent Stanford event I was part of.

Innovate with others in your field to create norms.  This stuff is very new, and we are all trying to figure it out together.  Create spaces to meet and discuss your approaches to this with others in your industry.  Innovate together to stay ahead of regulation and legislation.

These four recommendations capture the fundamentals of how I think businesses need to be responding to the push to do data science in responsible ways.

This post is cross-posted to the civic.mit.edu website.

Ethical Data Review Processes Workshop at Stanford

The Digital Civil Society Lab at Stanford recently hosted a small gathering of people to dig into emerging processes for ethical data review.  This post is a write up of the publicly shareable discussions there.

Introduction

Lucy Bernholz opened the day by talking about “digital civil society” as an independent space for civil society in the digital sphere.  She is specifically concerned with how we govern the digital sphere against a background of democratic theory.  We need to use, manage, and govern these resources in ways that are expansive and supportive of an independent civil society.  This requires new governance and review structures for digital data.

This prompted the question: what is “something like an IRB and not an IRB”?  The folks in the room brought corporate, community, and university examples, encompassing both ethical codes and the processes for judging adherence to them.  With this in mind, do non-profits need to change in the digital age?  What are the key structures and governance mechanisms through which they can manage private resources for public good?

Short Talks

Lucy introduced a number of people to give short talks about their projects in this space.

Lassana Magassa (Diverse Voices Project at UW)

Lassana introduced us all to the Diverse Voices Project, “an exploratory method for including diverse voices in policy development for emerging technologies”.  His motivation lies in the fact that tech policy is generally driven by mainstream interests, and that policy makers are reactive.

They plan and convene “Diverse Voices Panels”, full of people who live an experience, institutions that support them, and people somehow connected to them.  In a panel on disability this could include people who are disabled themselves, law and medical professionals, and family members.  These panels produce whitepapers that document issues and then make recommendations.  They’ve tackled everything from ethics and big data, to extreme poverty, to driverless cars.  They focus on what technology impacts can be for diverse audiences.  One challenge they face is finding and compensating panel experts.  Another is figuring out how to prepare a dense, technical document for the community to read.

Lassana talks about knowledge generation as the key driver, building awareness of diversity and of the impacts of technologies on various (typically overlooked) subpopulations.

Eric Gordon (Engagement Lab at Emerson College)

Eric (via Skype) walked us through the ongoing development of the Engagement Lab’s Community IRB project.  The goal they started with was to figure out what a Community IRB is (public health examples exist).  It turned out they ran into a bigger problem – transforming relationships between academia and community in the context of digital data.  There is more and more pressure to use data in more ways.

He tells us that in the Boston area, those who represent poorer folks in the city are asked for access to those populations all the time.  They talked to over 20 organizations about the issues they face in these partnerships, focusing on investigating the need for a new model for the relationships.  One key finding was that nobody knows what an IRB is; the broader language used to talk about them (“research”, “data”) is also problematic.

They ran into a few common issues worth highlighting.  First, there weren’t clear principles for assuring value for those who give up their data.  In addition, the clarity of the research ask was often weak.  There was an all-too-common lack of follow-through, and the semester-driven calendar is a huge point of conflict.  An underlying issue was that organizations hold all this data, but the outside researcher is the expert empowered to analyze it.  This creates anxiety in the community organizations.

They talked through IRBs, MOUs, and other models.  It turns out people wanted someone to facilitate between organizations and researchers, so in the end what they need is not a document but a technique for maintaining relationships – something like a platform to match research and community needs.

Molly Jackman & Lauri Kanerva (Facebook)

Molly and Lauri work on policy and internal research management at Facebook.  They shared a draft of the internal research review process used at Facebook, but asked it not be shared publicly because it is still under revision.  They covered how they do privacy trainings, research proposals, reviews, and approvals for internal and externally collaborative research.

Nicolas de Cordes (Orange)

Nicolas shared the process behind Orange’s Data for Development projects, like their Ivory Coast and Senegal cellphone data challenges.  The process was highly collaborative with the local telecommunications ministries of each country.  Those conversations produced approvals, along with key themes and questions to work on within each country.  This required a lot of education of the various ministries about what could be done with cellphone call metadata.

For the second challenge, Orange set up internal and external review panels to handle the submissions.  The internal review panels included Orange managers not related to the project.  The external review panel tried to be a balanced set of people.  They built a shared set of criteria by reviewing submissions from the first project in the Ivory Coast.

Nicolas talks about these two projects as one-offs, with scaling being a large problem.  In addition, getting the review panels to come to a shared agreement on ethics was (not surprisingly) difficult.

Breakouts

After some lunch and collaborative brainstorming about the inspirations in the short talks, we broke out into smaller groups to have more free form discussions about topics we were excited about.  These included:

  • an international ethical data review service
  • the idea of minimum viable data
  • how to build capacity in small NGOs to do this
  • a people’s review board
  • how bioethics debates can be a resource

I facilitated the conversation about building small NGO capacity.

Building Small NGO Capacity for Ethical Data Review

Six of us were particularly interested in how to help small NGOs learn to ask these ethics questions about data.  Resources exist, but they aren’t written accessibly enough for this audience to consume.  The privacy field especially has a lot of practice, but only some of its approaches are transferable.  The language around privacy is all too hard for “regular people” to understand.  However, its approach to “data minimization” might have some utility.

We talked about how to help people avoid extractive data collection, and the fact that it is disempowering.  The non-profit folks in the group reminded us all that you have to think about the funder’s role in the evidence they are asking for, and how they help frame questions.

Someone mentioned that the law can be the easiest part of this, because it is so well defined (for good or bad); many countries have well-established laws on the fundamental privacy rights of individuals.  I proposed participatory activities to learn these things, like a group activity to try to re-identify “anonymized” data collected from the group.  Another participant mentioned DJ Patil’s approach to building a data culture.
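To make that re-identification activity concrete, here is a minimal sketch of the kind of exercise I have in mind (all of the names and fields are invented for illustration): link an “anonymized” survey back to a public roster using quasi-identifiers like zip code, birth year, and gender.

```python
# Hypothetical workshop exercise: any survey row whose combination of
# quasi-identifiers (zip, birth_year, gender) is unique in a public roster
# can be re-identified, even though no names were released.
anonymized_survey = [
    {"zip": "02139", "birth_year": 1985, "gender": "F", "answer": "sensitive thing"},
    {"zip": "02144", "birth_year": 1990, "gender": "M", "answer": "private thing"},
]

public_roster = [
    {"name": "Alice", "zip": "02139", "birth_year": 1985, "gender": "F"},
    {"name": "Bob",   "zip": "02144", "birth_year": 1990, "gender": "M"},
]

def quasi_id(record):
    """The combination of attributes used to link the two datasets."""
    return (record["zip"], record["birth_year"], record["gender"])

roster_index = {quasi_id(person): person["name"] for person in public_roster}

for row in anonymized_survey:
    who = roster_index.get(quasi_id(row), "not re-identified")
    print(who, "->", row["answer"])
```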

Our key points to share back with the larger group were that:

  • privacy has inspirations, but it’s not enough
  • communications formats are critical (language, etc); hands-on, concrete, actionable stuff is best
  • you have to build this stuff into the culture of the org

DataPop White Paper: Beyond Data Literacy

The Data-Pop Alliance recently released a “working draft” of a white paper I co-authored: Beyond Data Literacy: Reinventing Community Engagement and Empowerment in the Age of Data.  The paper is a collaboration with folks there and at Internews, and it attempts to put the nascent term “data literacy” in historical context and to project forward to future uses and the role of data in culture and community.  Data-Pop published some of the presentation on their blog.


The paper begins with some history, focusing on the anthropologist Claude Lévi-Strauss and his ideas about literacy being used as a weapon by those in power to ensure an educated working populace.  We move into an argument that “literacy in the age of data” is a better framing for asking questions than “data literacy”.  As I talk about often, we focus on how data should serve the purpose of greater social inclusion.  This requires attention to the words we use to talk about this stuff (i.e. “information” or “data”?).  This is all built on a definition of data literacy that includes the “desire and ability to constructively engage in society through and about data”.

If you’re interested in some academic reading about the history and potential of this type of work, give it a read!  It will be especially relevant to those trying to craft policies or programs that support building people’s capacity to work with data to create change.

Big Data’s Empowerment Problem

Catherine D’Ignazio and I just presented a paper titled “Approaches to Big Data Literacy” at the 2015 Bloomberg Data for Good Exchange.  This is a write-up of the talk we gave to summarize the paper.

When we talk about data science for good, collaborating with organizations that work for the social good, we immediately enter into a conversation about empowerment.  How can data science help these organizations empower their constituencies and create change in the world?  Catherine and I are educators, and strongly believe learning is about empowerment, so this area naturally appeals to us!  That’s why we wrote this paper for the Bloomberg Data for Good Exchange.

Data Literacy

We’ve been thinking and working a lot on data literacy, and how to help folks build their capacity to work with information to create social change.  We define “data literacy” as the ability to read, work with, analyze, and argue with data.  So how do we help build data literacy in creative and fun ways?  One example is the activity we do around text analysis.  We introduce folks to a simple word-counting website and give them lyrics from popular musicians to analyze.  Over the course of half an hour folks poke at the data, looking for stories comparing word usage between artists.  Then they sketch a visual to share a story.
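If you want to try the core of that exercise without the website, a few lines of Python reproduce it (the lyrics string here is just a placeholder for whatever text you paste in):

```python
from collections import Counter
import re

# Placeholder for a pasted set of song lyrics.
lyrics = """
hello darkness my old friend
I've come to talk with you again
"""

# Lowercase the text, split it into words, and count how often each appears.
words = re.findall(r"[a-z']+", lyrics.lower())
counts = Counter(words)

# The most frequent words become the raw material for a story.
for word, count in counts.most_common(10):
    print(word, count)
```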

Photos of stories created by students showing the artist that talks about themselves the most, and the overlap in lyrics between Paul Simon and Kanye West.

Another example is my Data Murals work – where we help a community group find a story in their data, collaboratively design a visual to tell that story, and paint it as a community mural.

The Data Mural created by youth from Groundwork Somerville.

This stuff is fun, and makes learning to work with data accessible.  We focus on working with technical and non-technical audiences.  The technical folks have a lot to learn about how to use data to effect change, while the non-technical folks want to build their skills to use data in support of their mission.

Empowerment

However, this work has been focused on small datasets… when we think about “big data literacy” we see some gaps in our definition and our work.  Here are four empowerment problems we see in big data, each tied to a part of our definition of data literacy:

  • lack of transparency: you can’t read the data if you don’t even know it exists
  • extractive collection: you can’t work with data if it isn’t available
  • technological complexity: you can’t analyze data unless you can overcome the technical challenges of big data
  • control of impact: you can’t argue for change with data unless you can effect that change

With these problems in mind, we decided we needed an expanded definition of “big data literacy”. This includes:

  • identifying when and where data is being collected
  • understanding the algorithmic manipulations
  • weighing the real and potential ethical impacts
Some extensions to our definition of data literacy, to support an idea of “Big Data Literacy”.

So how do we work on building this type of big data literacy?  First off we look to Freire for inspiration.  We could go on for hours about his approach to building literacy in Brazil, but want to focus on his “Popular Education”.  That approach was about using literacy for both education and emancipation.  The second piece matters when you are doing data for good; it isn’t just about acquiring technical skills!

Ideas

We want to work with you on how to address this empowerment problem, and have a few ideas of our own that we want to try out.  The paper has seven of these sketched out, but here are three examples.

Idea #1: Participatory Algorithmic Simulations

We want to create examples of participatory simulations for how algorithms function.  Imagine a linear search being demonstrated by lining people up and going from left to right searching for someone named “Anita”.  This would build on the rich tradition of moving your body to mimic and understand how a system functions (called “body syntonicity”).  Participatory algorithmic simulations would focus on understanding algorithmic manipulations.
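For reference, the algorithm that the lining-up activity acts out fits in a few lines of code; the participatory version makes each step of this loop physical:

```python
# Linear search: walk down the line from left to right until we find the target.
def linear_search(people, target):
    for position, name in enumerate(people):
        # In the participatory simulation, this comparison is a person
        # stepping forward and saying their name out loud.
        if name == target:
            return position
    return -1  # the target isn't in the line

line = ["Rahul", "Mei", "Anita", "Jose", "Dana"]
print(linear_search(line, "Anita"))  # -> 2
```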

Idea #2: Data Journals

Data can be seen as the traces of the interactions between you and the world around you.  With this definition in mind, in our classes we ask students to keep a journal of every piece of data they create during a 24-hour period (see some examples).  This activity targets identifying when and where data is being collected.  We then facilitate a discussion about these journals, asking students which ones creep them out the most, which leads to a great chance to weigh the real and potential ethical implications.

Idea #3: Reverse Engineering Algorithms

We’ve seen a bunch of great work recently on reverse engineering algorithms – trying to understand why Amazon suggests certain products to you, or why you only see certain information in your Facebook feed.  We think there are ways to bring this research to the personal level by designing experiments individuals can run to speculate about how these algorithms work.  Building on Henry Jenkins’s idea of “Civic Imagination”, we could ask people to design how they would want the algorithms to work, and perhaps develop descriptive visual explanations of their own ideas.

Get Involved!

We think each of these three can help build big data literacy and try to address big data’s empowerment problem.  Read the paper for some other ideas.  Do you have other ideas or experiences we can learn from?  We’ll be working on some of these and look forward to collaborating!

People-Centered Approaches

I recently re-read the report on Big Data, Communities and Ethical Resilience: A Framework for Action from the Rockefeller Foundation’s 2013 Bellagio/PopTech Fellows.  Though kind of academic, it is well worth your time (when you feel like getting heady about this big data stuff).  I particularly enjoyed and wanted to share this paragraph, because it is written more eloquently than I’m able to:

Of primary importance is to focus on people-centered, community-driven approaches. The discourse of big data and community resilience often excludes local participation by less powerful or technically literate populations. As a result, external experts may reduce complex social problems like community resilience to terms that are suited to technological solutions. This crowds out local knowledge, participation and agency, which undermines trust, social connectedness and resilience. Clear public policy and corporate governance frameworks are needed to foster a generative and inclusive environment that is conducive to local communities participating in their own data projects.

I strongly agree with this.