Research Questions
How much harm results from the security technologies we rely on backfiring against us?
We started this research because we know far less about the scale of harm we collectively suffer from security technologies backfiring against us than we do about security technologies failing to protect us from others (breaches). For example, we know more about how often people's email or social media accounts are broken into than about how often people are permanently locked out of them.
Backfires are rarely reported or measured because, unlike breaches, there is no legislation requiring their reporting. Further, most backfires impact one user at a time, at a scale too small to be newsworthy, and users often blame themselves. Yet, security technologies can cause as much harm when they backfire as when they are breached; they can prevent us from accessing accounts and services at times of critical need, or they can lock us out permanently. The companies that build and deploy security technologies rarely suffer consequences when their products backfire. These companies are reluctant to measure backfires, and are even more reluctant to report their measurements, as doing so would call attention to their products' potential for harm, leading fewer people to use them. The potential for security technology to backfire will not be addressed unless outsiders draw attention to it, or compel companies to report incidents of backfiring.
How much harm results from all the technology we rely on backfiring against us?
As our research team set out to gauge the prevalence with which security technologies backfire, and the scale of the collective harm that results, we encountered a challenge: we lacked a baseline for the prevalence and harm we collectively suffer from the ways in which all the technology we have come to rely on can fail us. And so, we also endeavored to measure how often technology, loosely defined, fails us, is used against us, or otherwise causes harm.
Research Approach
We surveyed participants via Prolific, asking about 16 topics: 13 harm events they may or may not have suffered, as well as three harms that need not be attributed to a single specific event. (All 16 are summarized in the code sketch after the lists below.)
We asked about both breach events and backfire events for the security protecting participants'…
- (1/2) devices,
- (3/4) email accounts,
- (5/6) social media accounts,
- (7/8) financial accounts, and
- (9/10) stored passwords (password manager accounts).
We also asked about three events that were not specifically security related and did not have breach/backfire pairings:
- (11) losing data when replacing a device (e.g., upgrading a phone),
- (12) having technology fail or behave in a way other than what was promised/expected, and
- (13) having technology used to abuse, harass, or embarrass the participant.
We also asked about the three harms independently of any events that may have caused them:
- (14) permanently losing photos,
- (15) permanently losing emails, and
- (16) experiencing mental distress or health issues connected with the participant's use of technology.
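For readers who prefer code, here is a minimal sketch (in Python; the names are illustrative, not our survey's actual variable names) of how the 16 topics decompose:

```python
# Topics 1-10: a (breach, backfire) event pair for the security
# protecting each of five domains.
SECURITY_DOMAINS = [
    "devices",
    "email accounts",
    "social media accounts",
    "financial accounts",
    "stored passwords (password manager accounts)",
]
EVENT_PAIRS = {domain: ("breach", "backfire") for domain in SECURITY_DOMAINS}

# Topics 11-13: events without a breach/backfire pairing.
UNPAIRED_EVENTS = [
    "lost data when replacing a device",
    "technology failed or behaved other than promised/expected",
    "technology used to abuse, harass, or embarrass",
]

# Topics 14-16: harms asked about independently of any causing event.
STANDALONE_HARMS = [
    "permanently lost photos",
    "permanently lost emails",
    "mental distress or health issues connected to technology use",
]

assert 2 * len(SECURITY_DOMAINS) + len(UNPAIRED_EVENTS) + len(STANDALONE_HARMS) == 16
```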
Some of those topics, such as the one about mental health[1], were inspired by answers to free-response questions in earlier pilot surveys. Figure 1 shows the fraction of our pilot participants who reported experiencing each of these harmful events (1-13) and harms (14-16), broken down by the severity of the harm experienced. Not surprisingly, more participants reported experiencing harms that need not be attributed to a specific event than reported specific events that may have caused those harms.
One might wonder whether participants would have attributed the same significance to these technologies, events, and harms had we not prompted them with questions about them and asked them to rate them on a 7-point scale. This is why, prior to introducing these events and harms, we began our survey by asking participants simply to describe, without prompting, "the three most harmful technology-related harms or losses" they had experienced. We left it to the participant to infer the scope of what we considered technology and what harms we might want to know about. When we later asked about each of the 13 events and 3 harms specific to our survey, we asked participants whether they had included the event or harm among the three worst harms they had described unprompted, or whether, in retrospect, they should have included it.
You can read the three most-harmful events each pilot participant described along with the topics they associated with each.
You can also see, in Figure 2, the fraction of events and harms that participants connected to one of the three experiences they described. That figure reveals, for example, that all but one of the pilot participants who reported mental-health harms connected this concern back to one of the three experiences they described in their free responses.
We Need Your Help!
The suggestions you provide now can improve our survey before we next share it with participants.[2]
We believe science should be more open, transparent, and collaborative. That's why we're publicly sharing our work for community feedback without concern for whether this will make the work less attractive to traditional scientific publications.[3] We've shared the advertisement for our survey and the full contents of the survey, the full set of reports and graphs from our most-recent pilot, past results and notes from our earlier pilot studies, and all of our data analysis tools and blogging tools[4].
Please share your suggestions, and help us demonstrate the benefits of open science, using the medium of your choice. You can share publicly by commenting on our Mastodon thread, or start your own Fediverse thread and tag us (@fredheiding@infosec.exchange @eglassman@hci.social @v0max@infosec.exchange @MildlyAggrievedScientist@mastodon.social). If you prefer, publish your feedback anywhere on the public web and send us a link. If you want to send us private feedback, you can email us at team@uharm.org.
The prevalence of mental-health harms reported was so high that we re-checked our data. Note that following the first ten participants of this pilot, we discovered that each participant was being randomly assigned only two of the three harms. We fixed this for the remaining participants, and corrected the percentages of those who reported each harm to remove those who weren't given the opportunity to report that harm. We had already made this adjustment before noticing just how prevalent mental-health concerns were among our (relatively small) pilot group. Of 29 participants, 26 were presented the question about mental health and 22 connected it to one of their three worst experiences with technology. ↩︎
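In code, the correction amounts to changing the denominator from all participants to those who were actually shown the question. A minimal sketch (Python), using the mental-health numbers above:

```python
# Prevalence adjustment: count only participants who were shown a harm
# question in that harm's denominator. Numbers are the mental-health
# figures from this pilot.
participants_total = 29       # all pilot participants
shown_question = 26           # three participants never saw this harm
reported_harm = 22            # connected it to one of their three worst harms

naive_rate = reported_harm / participants_total   # ~76%: unfairly low
adjusted_rate = reported_harm / shown_question    # ~85%: the corrected figure

print(f"naive: {naive_rate:.0%}, adjusted: {adjusted_rate:.0%}")
```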
Traditional peer review comes too late to improve human-subjects experiments. By the time it arrives, participants have already provided their time and effort, researchers have spent their participant budget, and key members of the team have often graduated or otherwise changed institutions/jobs. The only thing to be done with the feedback is to update the write-up to reflect what researchers wish they had done had their peers' suggestions arrived earlier. ↩︎
Researchers are often afraid to seek out feedback and collaboration for fear of being "scooped", opting instead to keep their work secret. Modern publishing permits us to protect the provenance of our ideas by sharing them early, often, and in public view for automatic archival. ↩︎
That includes the code that turns exported Qualtrics surveys (`.qsf` files) into the beautiful HTML you see here. ↩︎
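A `.qsf` export is just JSON, so tooling like this can start small. A minimal sketch (not our actual code) that, assuming Qualtrics' standard export layout, pulls the question texts out of a survey file:

```python
import json

def question_texts(qsf_path: str) -> list[str]:
    """Return the text of every question in a Qualtrics .qsf export."""
    with open(qsf_path, encoding="utf-8") as f:
        survey = json.load(f)
    # Questions are "SurveyElements" entries whose "Element" field is "SQ";
    # each carries its text in the element's "Payload".
    return [
        element["Payload"]["QuestionText"]
        for element in survey["SurveyElements"]
        if element.get("Element") == "SQ" and element.get("Payload")
    ]

if __name__ == "__main__":
    for text in question_texts("survey.qsf"):  # hypothetical file name
        print(text)
```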