Avoiding harm and continuing to seek and clarify informed consent are vitally important. Harm mitigation takes many forms and can involve nearly every part of an organization—especially when harms are not immediately noticed, or their scope is large.
Data-related harms usually fall into one of two categories: unintended disclosure of raw data (such as photos of a user or their credit-card information) or improper decisions made based on data about a user. These decisions can be made by humans (such as a decision on whether or not to prescribe medication), by hybrid human-machine processes (such as a loan decision influenced by a credit report), or by machines alone (such as automatic re-routing of vehicles based on traffic data). Strategies to mitigate and respond to such harm depend on the types of decisions being made.
While pre-release design is critical to meeting the “do no harm” expectation, designing with the ability to adapt post-release is equally critical. For example, a social network in which users directly offer up data about themselves (whether for public or private consumption) would likely launch with privacy controls available from day one. However, the system’s owners may find that users are unaware of the available privacy controls and introduce a feature whereby users are informed or reminded of the available settings. In such a case, users should be able to modify permissions on their past shared data retroactively—i.e., any change a user makes to their privacy settings should affect not only future shared data but anything they have previously shared. In this way, a system that was not initially designed to allow fully informed consent could be adjusted to allow revocation of consent over time. However, such a capability requires the system’s designers to have planned for adaptation and future changes. And, given the interdependence of various software features, if a breach or unintended effect occurs, plans should include how data can be removed from the entire data supply chain—not just one company’s servers.
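One way to make retroactive permission changes possible is to resolve visibility against the owner's current settings at read time, rather than copying a permission onto each item when it is shared. The following sketch illustrates that design choice; the class and field names are hypothetical, not drawn from any particular system:

```python
from dataclasses import dataclass, field
from enum import Enum


class Visibility(Enum):
    PUBLIC = "public"
    FRIENDS = "friends"
    PRIVATE = "private"


@dataclass
class User:
    name: str
    # Single source of truth: visibility is read from here at access time,
    # never copied onto individual posts, so a change applies retroactively.
    visibility: Visibility = Visibility.PUBLIC
    friends: set = field(default_factory=set)


@dataclass
class Post:
    author: User
    text: str  # note: no per-post permission snapshot


def can_view(viewer: User, post: Post) -> bool:
    """Resolve access against the author's *current* privacy settings."""
    if viewer is post.author:
        return True
    setting = post.author.visibility
    if setting is Visibility.PUBLIC:
        return True
    if setting is Visibility.FRIENDS:
        return viewer.name in post.author.friends
    return False  # PRIVATE


alice = User("alice")
bob = User("bob")
old_post = Post(alice, "shared while settings were public")
assert can_view(bob, old_post)      # visible under the old setting
alice.visibility = Visibility.PRIVATE
assert not can_view(bob, old_post)  # the change covers past data too
```

Had each `Post` stored its own copy of the permission at creation time, the later settings change would have left previously shared data exposed—exactly the failure mode the paragraph above warns against.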
One practice for mitigating the risks associated with data usage is coordination between stakeholders in webs of shared computing resources. Such collective coordination and alignment on key standards are often discussed under the heading of “federation” in the IT context. Standards for ethical and uniform treatment of user data should be added alongside existing agreements on uptime, general security measures, and performance.²⁰ Federated identity management (and, as part of that management, single-sign-on tools) is a subset of these broader discussions of federation. Identity management is another critical component of managing data ethics: it ensures that stakeholders accessing customer data—and customers themselves—can be verified as authorized to access that data.
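In a federated setup of the kind described above, a relying service typically verifies a signed token from a shared identity provider before releasing customer data. A minimal sketch of that flow follows, using a simplified JWT-like token; the secret, scope names, and function names are illustrative assumptions, not part of any standard API:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key shared between the identity provider and relying services.
SECRET = b"shared-idp-secret"


def issue_token(subject: str, scopes: list, ttl: int = 3600) -> str:
    """Identity-provider side: sign a claim set (simplified JWT-like token)."""
    payload = json.dumps(
        {"sub": subject, "scopes": scopes, "exp": time.time() + ttl}
    ).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())


def verify_token(token: str, required_scope: str) -> bool:
    """Relying-service side: check signature, expiry, and scope before
    granting access to customer data."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(payload)
    return claims["exp"] > time.time() and required_scope in claims["scopes"]


token = issue_token("customer-42", ["read:profile"])
assert verify_token(token, "read:profile")       # authorized scope
assert not verify_token(token, "read:billing")   # scope not granted
```

Real deployments would use an established standard such as OAuth 2.0 or SAML rather than hand-rolled tokens, but the principle is the same: access to customer data is gated on a verifiable, scoped assertion of identity agreed upon across the federation.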
As part of an investigation into a 2015 shooting incident in San Bernardino, California, the US Federal Bureau of Investigation (FBI) filed a court request for Apple’s assistance in creating software to bypass security protocols on an iPhone owned by one of the perpetrators. Apple’s public letter to its customers explaining its decision to challenge that request provides a glimpse into the complexity and potential risks involved. How society addresses matters such as these will be central to shaping 21st-century ethics and law. Apple chose a very intentional, bold, and values-driven stance.²¹ Perhaps most relevant to the issues of informed consent and the intent to do no harm was Apple’s choice not only to make a statement about its position and intention, but also to explain in layman’s terms how the current security features function, how the proposed software would potentially subvert them, and what the future risks of having such software in existence would be, should the company comply with the FBI’s request. In the words of Tim Cook, Apple’s CEO, “This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.”