By Tom Fleischman for the Cornell Chronicle
It’s a common occurrence: Your phone or computer’s operating system runs an automatic update, and all of a sudden things look a little different.
Most of us understand that it happens occasionally, and it’s no big deal. But for people who’ve experienced digital stalking or harassment at the hands of a current or former intimate partner, these seemingly innocuous changes can be terrifying.
That and other types of computing-related retraumatization can be lessened or avoided in a few low- or no-cost ways, said Nicola Dell, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech, and in the Cornell Ann S. Bowers College of Computing and Information Science.
She and colleague Tom Ristenpart, associate professor of computer science at Cornell Tech and in Cornell Bowers CIS, led a research group focused on “trauma-informed computing” – an approach that acknowledges trauma’s impact and seeks to make technology safer for all users, not just those who’ve experienced trauma.
Janet X. Chen, doctoral student in the field of information science, is co-lead author of “Trauma-Informed Computing: Towards Safer Technology Experiences for All,” which the research group presented at CHI ’22: Conference on Human Factors in Computing Systems, held April 29-May 5 in New Orleans. The other lead authors are Allison McDonald and Yixin Zou, doctoral students from the University of Michigan.
Dell and her colleagues define trauma-informed computing as “an ongoing commitment to improving the design, development, deployment and support of digital technologies by: explicitly acknowledging trauma and its impact; recognizing that digital technologies can both cause and exacerbate trauma; and actively seeking out ways to avoid technology-related trauma and retraumatization.”
Several of the paper’s co-authors have worked with communities affected by trauma, including survivors of intimate partner violence (IPV).
“Over time, we noticed that there were a lot of survivors who were really just freaked out by technology,” Dell said. “They were having responses to what you or I might consider mundane technology things – a website crashing, a software update or their email changing because Google updated something – that would really cause a disproportionate response in how they were reacting to it.
“And often, they would assume that it meant that they had been hacked, or that they were being abused,” she said. “We started to realize that what they were describing, and many of the reactions we were seeing, correlated very well with well-known trauma or stress reactions – things like hypervigilance, numbness or hopelessness.”
The group’s framework consists of six principles for the design, development, deployment and evaluation of computing systems, adapted from the Substance Abuse and Mental Health Services Administration (SAMHSA): safety, trust, collaboration, peer support, enablement (empowerment) and intersectionality (relating to cultural, historical and gender issues).
The paper – which illustrates trauma in computing via three fictional vignettes, based on publicly available accounts as well as the authors’ experiences – explores the application of these principles in the areas of user-experience research and design; security and privacy; artificial intelligence and machine learning; and organizational culture in tech companies.
“We know from our work with IPV survivors that many of these advocacy organizations, social work organizations, hospitals and schools have really worked to incorporate trauma-informed approaches,” Dell said. “For us, it was bringing this idea to the computing community to say, ‘What would it take to make your products and technologies more trauma-informed?’”
One approach, Dell said, could be to let users manage a list of potential triggers for their trauma.
“Everyone knows that Facebook is going to show you ads,” she said, “but maybe you can just say, ‘Don’t show me ads about baby products, because I just experienced pregnancy loss.’ Allowing people some control over what they see, and explaining why you don’t want to see a certain thing, could help enable and empower people.”
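To make that concrete, here is a minimal sketch in Python of how a user-managed blocked-topics list might filter ad selection. The names and structures (Ad, AdPreferences, select_ads) are hypothetical illustrations for this article, not Facebook’s or any ad platform’s actual API.

```python
# Hypothetical sketch of a user-managed "sensitive topics" filter for ad
# selection. All names and structures are illustrative, not a real platform API.
from dataclasses import dataclass, field


@dataclass
class Ad:
    advertiser: str
    topics: set[str]  # topic labels assigned by the ad platform


@dataclass
class AdPreferences:
    # Topics the user has asked not to see; no explanation is required of them.
    blocked_topics: set[str] = field(default_factory=set)

    def block(self, topic: str) -> None:
        self.blocked_topics.add(topic.lower())

    def allows(self, ad: Ad) -> bool:
        # Reject any ad tagged with a topic the user has blocked.
        return not (self.blocked_topics & {t.lower() for t in ad.topics})


def select_ads(candidates: list[Ad], prefs: AdPreferences) -> list[Ad]:
    """Filter candidate ads against the user's blocked-topic list."""
    return [ad for ad in candidates if prefs.allows(ad)]


# Example: a user who has experienced pregnancy loss blocks baby-product ads.
prefs = AdPreferences()
prefs.block("baby products")

candidates = [
    Ad("StrollerCo", {"baby products", "parenting"}),
    Ad("TrailGear", {"outdoor recreation"}),
]
print([ad.advertiser for ad in select_ads(candidates, prefs)])  # ['TrailGear']
```

The design point is simply that the filter stays under the user’s control: a topic can be blocked at any time, and the user never has to justify the choice for it to take effect.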
The authors offered 22 suggestions for making computing safer for all users, among them: conducting user studies in a safe, secure location; providing clear information when software updates are pending, with options for whether and when to install; creating content policies with input from impacted communities; and providing training and resources to help tech workers better interact with trauma survivors.
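One of those suggestions, clear notice of pending updates with user control over timing, also lends itself to a short sketch. The following Python is a hypothetical illustration of an update notice that describes what will change and leaves the install decision to the user; it is not modeled on any real operating system’s update mechanism.

```python
# Hypothetical sketch of a trauma-informed update notice: tell the user what
# will change and let them choose whether and when to install. Illustrative
# only; not drawn from any real OS updater.
from dataclasses import dataclass
from enum import Enum


class UpdateChoice(Enum):
    INSTALL_NOW = "install now"
    SCHEDULE = "schedule for later"
    SKIP = "skip this version"


@dataclass
class PendingUpdate:
    version: str
    visual_changes: list[str]   # UI changes the user will notice
    other_changes: list[str]    # fixes that don't alter the interface


def describe(update: PendingUpdate) -> str:
    """Build a plain-language notice instead of updating silently."""
    lines = [f"Update {update.version} is ready. Nothing installs until you choose."]
    if update.visual_changes:
        lines.append("After installing, you will notice:")
        lines += [f"  - {change}" for change in update.visual_changes]
    if update.other_changes:
        lines.append("Behind the scenes:")
        lines += [f"  - {change}" for change in update.other_changes]
    lines.append("Options: " + ", ".join(choice.value for choice in UpdateChoice))
    return "\n".join(lines)


update = PendingUpdate(
    version="2.4.0",
    visual_changes=["The settings icon moves to the top-right corner"],
    other_changes=["Security fixes"],
)
print(describe(update))
```

Announcing visible interface changes before they happen addresses exactly the scenario the article opens with: a screen that looks different overnight, with no warning and no explanation.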
One thing the researchers urge tech companies not to do: seek out people and ask them questions about their traumatic experience. That can cause needless retraumatization, they said.
Getting buy-in from the tech community “definitely could be a challenge,” Dell said, but some simple steps are achievable.
“We’ve talked quite a bit to various technology companies and have generally received a very enthusiastic response,” she said. “I think they’re very interested in trying to do some of these things. Certainly we would hope that technology companies don’t want to be traumatizing or retraumatizing people.”
Other collaborators include doctoral student Emily Tseng; Florian Schaub, assistant professor of information science at Michigan; and Kevin Roundy and Acar Tamersoy of the NortonLifeLock Research Group.
This research was supported by the National Science Foundation, Google and the Defense Advanced Research Projects Agency.
This story originally appeared in the Cornell Chronicle.