Your feedback can change what we ask in this survey. Please help! Details below.

Research Questions

How much harm results from the security technologies we rely on backfiring against us?

We started this research because we know far less about the scale of harm we collectively suffer from security technologies backfiring against us than we do about security technologies failing to protect us from others (breaches). For example, we know more about the prevalence with which people's email or social accounts are broken into than the prevalence with which people are permanently locked out of these accounts.

Backfires are rarely reported or measured because, unlike breaches, there is no legislation requiring their reporting. Further, most backfires impact one user at a time, at a scale too small to be newsworthy, and users often blame themselves. Yet security technologies can cause as much harm when they backfire as when they are breached; they can prevent us from accessing our accounts and services at times of critical need, or they can lock us out permanently. The companies that build and deploy security technologies rarely suffer consequences when their products backfire. These companies are reluctant to measure backfires, and are even more reluctant to report their measurements, as doing so would call attention to their products' potential for harm, leading fewer people to use them. The potential for security technology to backfire will not be addressed unless outsiders draw attention to it, or compel companies to report incidents of backfiring.

How much harm results from all the technology we rely on backfiring against us?

As our research team set out to gauge the prevalence with which security technologies backfire, and the scale of the collective harm that results, we encountered a challenge: we lacked a baseline for the prevalence and harm we collectively suffer from the ways in which all the technology we have come to rely on can fail us. And so, we endeavored to also measure how often technology, loosely defined, fails us, is used against us, or otherwise causes harm.

Research Approach

We surveyed participants via Prolific to ask about 16 topics: 13 harm events they may or may not have suffered, as well as three harms that need not be attributed to a single specific event.

A bar chart summarizing the percent of participants who had experienced each harm scenario, broken down by severity.
The percent of pilot participants who reported experiencing events causing harms (the 13 bars on the left) or technology-related harms that they did not need to associate with specific events (the 3 bars on the right). Losses due to failures of security measures to protect participants from attack (left bar of each pair) are paired against harms caused by the security measures themselves (right bar). Each bar is broken down into colors by the severity of harm each participant reported on a Likert scale.
Figure 1

We asked about both breach events and backfire events for the security protecting participants'…

We also asked about three events that were not specifically security related and did not have breach/backfire pairings:

We also asked about the three harms independently of the events that caused them:

Some of those topics, such as the one about mental health[1], were inspired by reading the answers to free-response questions in earlier pilot surveys. Figure 1 shows the fraction of our pilot participants who responded to these topics by reporting that they had experienced these harmful events (1-13) and harms (14-16), broken down by the severity of the harm experienced. Not surprisingly, more participants reported experiencing harms that need not be attributed to a specific event than reported specific events that may have been the cause of harms.
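For readers who want to reproduce this kind of per-topic breakdown from the shared pilot data, the sketch below shows one way to compute it in Python. The column names (participant_id, topic, severity) and the long-format layout are illustrative assumptions, not the actual schema of our exported data or the code in our shared analysis tools.

```python
import pandas as pd

# Minimal sketch: for each topic, compute the percent of participants who
# reported the harm, broken down by the severity they chose on the Likert
# scale. The column names and toy data below are illustrative assumptions.
responses = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3, 3],
    "topic": ["account lockout", "mental health"] * 3,
    # None means the participant was shown the topic but did not report the harm.
    "severity": [None, 5, 2, 7, None, 6],
})

# Only participants who were shown a topic count toward its denominator.
shown = responses.groupby("topic")["participant_id"].nunique()

# Count reports at each severity level, then convert to percent of those shown.
reported = (
    responses.dropna(subset=["severity"])
    .groupby(["topic", "severity"])["participant_id"]
    .nunique()
)
percent_by_severity = (reported.div(shown, level="topic") * 100).round(1)
print(percent_by_severity)
```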

One might wonder whether participants would have attributed the same significance to these technologies, events, and harms had we not prompted them with questions about them and asked them to rate them on a 7-point scale. This is why, prior to introducing these events and harms, we began our survey by asking participants simply to describe, without prompting, "the three most harmful technology-related harms or losses" they had experienced. We left it to the participant to infer the scope of what we considered technology and what harms we might want to know about. When we later asked about each of the 13 events and 3 harms specific to our survey, we asked participants whether they had included the event or harm as part of the three worst harms they had described unprompted, or whether, in retrospect, they should have included it.

You can read the three most-harmful events each pilot participant described along with the topics they associated with each.

You can also see the fraction of events and harms that participants connected to one of the three experiences they described, illustrated in Figure 2. That figure reveals, for example, that all but one of the pilot participants who reported mental-health harms connected this concern back to one of the three experiences they initially described in free responses.

A bar chart summarizing the percent of participants who had experienced each harm scenario.
The percent of pilot participants who reported experiencing events causing harms (the 13 bars on the left) or technology-related harms that they did not need to associate with specific events (the 3 bars on the right). Losses due to failures of security measures to protect participants from attack (left bar of each pair) are paired against harms caused by the security measures themselves (right bar). Each bar is broken down into colors based on whether the participant connected the experience/harm to one of the three worst experiences they described at the start of the study ("original"), whether they said they should have included the experience/harm as one of their three worst ("revised"), or whether it did not warrant a position in their top three ("not worst"). Participants who had not suffered the experience/harm are broken down into those who believe the harm could or could not happen to them.
Figure 2

We Need Your Help!

Help us improve our survey before we run our full study. Unlike traditional peer review, which arrives only after participants have already taken the time and effort to participate and researchers have paid them, your feedback can help us revise our survey before we share it with participants.

We believe science should be more open, transparent, and collaborative. That's why we've shared the history of our pilot studies, all of our data analysis tools and blogging tools[2], and why we're sharing our work before submitting it for publication so that we can get community feedback.[3]

We have shared the advertisement for our survey, the full contents of the survey, and the full set of reports and graphs from our most recent pilot.

You can share your feedback however you like. If you want to send us private feedback, you can email us at team@uharm.org. You can provide public feedback as part of a discussion by commenting on our Fediverse/Mastodon threads, or by @-ing us in your own threads. You can also publish your feedback anywhere else on the web and send us a link.


  1. The prevalence of mental-health harms reported was so high we re-checked our data. Note that following the first ten participants of this pilot, we discovered that participants were being randomly assigned only two of the three harms. We fixed this for the remaining participants and corrected the percentages of those who reported each harm to exclude those who weren't given the opportunity to report that harm (a minimal sketch of this adjustment appears after these notes). We had already made this adjustment when we noticed just how prevalent mental-health concerns were among our (relatively small) pilot group. Of 29 participants, 26 were presented the question about mental health and 22 connected it to one of their three worst experiences with technology. ↩︎

  2. That includes the code that turns exported Qualtrics surveys (.qsf files) into the beautiful HTML you see here. ↩︎

  3. Too many researchers are afraid to seek feedback and collaboration out of fear of getting "scooped". As an alternative to secrecy, we would like to promote the approach of sharing ideas early, often, and in public view. ↩︎
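Below is a minimal sketch of the denominator adjustment described in note 1. The pilot's mental-health counts (29, 26, 22) come from that note; the variable names and the comparison to a naive rate are illustrative assumptions, not our actual analysis code.

```python
# Minimal sketch of the denominator adjustment from note 1: participants who
# were never shown a harm question are excluded from that harm's denominator.
# The counts below are the pilot's mental-health figures; everything else is
# illustrative.

total_participants = 29    # everyone who completed the pilot
shown_question = 26        # participants actually presented the mental-health question
connected_to_worst = 22    # of those shown, participants who tied it to a top-three harm

# A naive rate divides by everyone, understating prevalence because the first
# ten participants were randomly shown only two of the three harms.
naive_percent = 100 * connected_to_worst / total_participants      # ~75.9%

# The corrected rate counts only participants who had the opportunity to respond.
corrected_percent = 100 * connected_to_worst / shown_question      # ~84.6%

print(f"naive: {naive_percent:.1f}%, corrected: {corrected_percent:.1f}%")
```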