An Introduction to the Data Experience Diagnostic

Marc Hebert
Designing Human Services
Nov 5, 2021

By “data experience,” I mean the user experience (UX) of data, information, stories, insights, dashboards and other artifacts of evidence or subjective truth.

Data experience concerns anyone who uses data and information to make decisions. One inroad to understanding it is measures of success. How people experience data about impact or performance can influence our behavior in varied ways. A negative influence can occur when evaluating success focuses more on the metrics than on the intended outcomes, for instance when determining the performance and impact of a person, project, service or policy.

Here is a Sketchplanation based on insights from Charles Goodhart and Marilyn Strathern about how the measures of success can change the results.

“Goodhart’s Law: When a measure becomes a target it ceases to be a good measure.” If “Number of nails made” is the success metric, then the output may be “1000’s of tiny nails.” If “Weight of nails made” is the new success metric, then people may be incentivized to make “A few giant nails.” Two stick figures appear twice in the image, near each pile of nails. One seems to be a supervisor, with a tie and a clipboard, pulling their hair out in frustration.

For a more relevant example, consider a mandate requiring paperwork to be processed within 30 days. The measure of success can orient people toward meeting the 30-day timeframe rather than expediting the process. Exploring how mandated metrics shape people’s behavior offers a more holistic understanding of the problem and of how to improve the process.

Policy and program metrics can also center on the number of people served without sufficient attention on the quality of that experience or whether people benefitted from the service or product.

The business, philanthropic, educational and nonprofit sectors struggle with performance metrics in similar ways.

In light of this, ask yourself, “Are mandated metrics helping me to design a better service or product? Are they working against me? Do they have any role at all? How would others on my team/project respond to these questions?”

Unpacking the power that measures of success have on human behavior is nothing new. People have been researching this for decades, and experiencing it even longer. I’ve shared this with many managers who have acknowledged that their teams may be incentivized towards the wrong thing or might be confused as to what indicator of success is most important. It’s complicated stuff. I don’t have all the answers, but I’ve found it’s uncommon to research how measures of success (and data, more broadly) are experienced within organizations or by external partners or the people ultimately being served.

Co-creating these measures, or co-evaluating their effectiveness, especially with frontline teams and “end users,” also seems to happen infrequently. Why is that?

Many of us work in environments where responding to urgencies occupies much of our attention. It’s hard to prioritize other things when there are so many pressing needs. Our colleagues and bosses can feel engulfed by fires. Problem-prevention research, or even a discussion about the UX of success metrics, may seem out of touch within a fire-containment work culture.

Given this context, we need an ethical and effective way to frame our ask for this sort of conversation and research.

The diagnostic may help by trying to answer three questions:

Why do measures of success shape people’s behaviors in my workplace (including my own)?

What may be missing from our current practice of understanding the performance and impact of my team/organization?

What could we do better with others’ help?

The diagnostic is an invitation for teams to ask: “To what degree…

  • Are organizational values a part of a decision-tree when creating success metrics?
  • Are bias, discrimination and trauma used as a lens to examine performance metrics, dashboards, and evaluative practices?
  • Is there shared knowledge about data or its use in the organization?
  • Is there recognition and support for the different ways frontline teams and managers may be experiencing the success metrics?
  • Are the success metrics for end users aligned with, or complementary to, those for people working internally or on back-end systems?
  • Is enough time spent on gathering and analyzing necessary data instead of available data?
  • Is there genuine agreement on the intended results (and how to measure and recognize them) for a project, product, service or policy?
  • Are people experiencing the “Spiral of Mistrust,” where frontline teams feel metrics are meaningful only to managers and those with more formal power? In response, these teams treat the metrics as a checkbox in order to return to the real work. If the “checkboxing” is discovered, the response is to create more metrics to control the frontline teams, and the downward spiral continues.
  • Is there a reflective practice to understand how history / what’s happened before is shaping today’s approach to success metrics?
  • Is there uncertainty about how to have the above conversations, or shame about what they may reveal?

The “diagnostic” part of this tool involves identifying which of these obstacles you’re experiencing and to what degree, adding what’s missing, unpacking assumptions, prioritizing them, exploring what could be done differently, and then prototyping, testing and learning how the changes are going.

OK, so how to do all of this?

Here is a template to get you started. The diagnostic includes a simple implementation plan.

Let me know how it goes or reach out for help (marc.hebert@sfgov.org).

Acknowledgments: I’m grateful for feedback on the diagnostic from the rest of the Innovation Office team, the Rapid Research Evaluation and Appraisal Lab, and the federal Service Design in Government group. (Last updated 8 June 2022)

Anthropologist | Director, Innovation Office, San Francisco Human Services Agency