Open Access
Bringing the Security Analyst into the Loop: From Human‐Computer Interaction to Human‐Computer Collaboration
Author(s) -
Rogers, Liz
Publication year - 2019
Publication title -
Ethnographic Praxis in Industry Conference Proceedings
Language(s) - English
Resource type - Journals
eISSN - 1559-8918
pISSN - 1559-890X
DOI - 10.1111/1559-8918.2019.01289
Subject(s) - IBM, computer science, visualization, human–computer interaction, world wide web, artificial intelligence, materials science, nanotechnology
This case study examines how one Artificial Intelligence (AI) security software team decided to abandon a core feature of its product – an interactive Knowledge Graph visualization that prospective buyers deemed “cool,” “impressive,” and “complex” – in favor of one that its users, security analysts, found easier to use and interpret. Guided by the results of ethnographic and user research, the QRadar Advisor with Watson team created a new knowledge graph (KG) visualization aligned with how security analysts actually investigate potential security threats, rather than one merely evocative of AI and “the way that the internet works.” This new feature will be released by IBM in Q1 2020 and has been adopted as a component in IBM's open‐source design system. In addition, it is currently under review at IBM as a patent application submission. The commitment of IBM and the team to replace a foundational AI component with one that better aligns with the mental models and practices of its users represents a victory for users and user‐centered design alike. It took designers and software engineers working with security analysts and leaders to create a KG representation that is valued for more than its role as “eye candy.” This case study thus speaks to the power of ethnographic research to embolden product teams in their development of AI applications. Dominant expressions of AI that reinforce the image of AI as autonomous “black box” systems can be resisted, and alternatives proposed that align with users' mental models. Product teams can create new experiences that recognize the co‐dependency of AI software and users, and, in so doing, pave the way for designing more collaborative partnerships between AI software and humans.
