Chapter 4 | Data Ethics Guidebook

Mechanisms for Consent

Informed consent is especially challenging in the digital age. Data moves rapidly from server to server, transformed by algorithms and intermingled with other data, which means that the intentions and realities of every organization and individual that touches that data must be understood before fully informed consent can be given. That understanding comes about in one of two ways: through data-fluent, savvy users with access to transparency throughout the data supply chain; or through proxies of trust, where a trusted third party, approach, or standard stands in for trust in every entity along the data supply chain. For the latter category, there are multiple approaches to establishing trusted data use:

Trusted party

Facebook, Google, other single-sign-on and API aggregators, consortia, or other trusted groups assure users that private data passes only through systems that meet certain standards of protection.

Vetting approach

Apple’s App Store is an example of an approach in which users trust new services (provided by apps, in this case) because a familiar party has vetted them; its review process has, for example, caught data leaks introduced across many apps by spurious analytics tools for developers.

Kit (trusted core) approach

Software Development Kits (SDKs) are sets of common tools, often hosted by a single party or consortium, that empower app developers to create value without having to invent their own frameworks and tools. Apple’s HealthKit and the related ResearchKit, for example, manage health data on users’ behalf. If users trust HealthKit’s standard for privacy, they may be more likely to use apps that abide by the privacy standards set out by such an ecosystem.
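To make that mediation concrete, a minimal Swift sketch of requesting HealthKit access might look like the following. It assumes an iOS app with the HealthKit capability and the required usage descriptions in its Info.plist; the step-count type is purely illustrative. The point is that the kit, not the app, presents the consent sheet and withholds any data the user declines to share.

    import HealthKit

    let healthStore = HKHealthStore()

    // Illustrative: ask to read step counts only, and to write nothing.
    let stepType = HKObjectType.quantityType(forIdentifier: .stepCount)!

    healthStore.requestAuthorization(toShare: nil, read: [stepType]) { granted, error in
        // HealthKit shows Apple's standard permission sheet; the app never
        // handles the user's raw choices, it simply receives (or doesn't
        // receive) data according to what the user allowed.
        if granted {
            // Safe to run queries for the requested types here.
        }
    }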

Industry standards and compliance

Trust can be established through industry standards and/or governmental compliance, like health codes, security clearances, or other commit-and-audit-by-third-party strategies. These allow organizations to opt in to standards that are sufficient to meet government regulations.

Embedded trust / Trusted technologies

Technology-based strategies (such as blockchain) assure users that their data is protected not by where it is stored but by the mechanisms that encrypt it and that record and prove that exchanges have occurred. In-hardware protection, such as specific forms of encryption or embedded SIM cards, can also be considered a point of embedded trust.
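As a rough illustration of the “record and prove” idea, the Swift sketch below chains each exchange record to the hash of the record before it, so any later tampering breaks every subsequent hash and is detectable by anyone holding the chain. The types and field names are hypothetical, and this is deliberately far simpler than a real blockchain.

    import CryptoKit
    import Foundation

    struct ExchangeRecord {
        let payload: String        // description of the data exchange
        let previousHash: String   // hash of the prior record ("" for the first)

        var hash: String {
            let digest = SHA256.hash(data: Data((previousHash + payload).utf8))
            return digest.map { String(format: "%02x", $0) }.joined()
        }
    }

    // Each record commits to the one before it; altering any past exchange
    // changes its hash and invalidates everything that follows.
    let first = ExchangeRecord(payload: "consent granted: location data", previousHash: "")
    let second = ExchangeRecord(payload: "data shared with analytics partner",
                                previousHash: first.hash)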

Communicating and verifying intent

Users must be aware of the intended uses of the data they furnish to an app or service in order to provide informed consent. This communication is not complete if users don’t actually know or understand what data already exists or could be gathered by their devices, such as location data on a mobile device or audio from a microphone embedded in a smart television remote they didn’t realize was “voice-capable.” Once users understand both what data is being gathered about them and how an application or service intends to use that data, informed consent can be established. However, it’s critical to ensure that the communication generated to achieve consent is created using language that truly informs rather than obfuscates.

From obfuscation to clarity: Decoding the EULA

End-User License Agreements may use technical language that renders them insufficient to achieve truly informed consent (as discussed earlier). However, several models exist for translating the complex technical concepts of user consent agreements into accessible language for the public.

Example

The Privacy Nutrition Label

One such model is the “Privacy Nutrition Label” developed by a joint team from Carnegie Mellon University and Microsoft.¹³ Their project cataloged the types of information required on safety and nutritional labels and approached the problem of informed consent and data fluency as a design problem. The result of their work is a comparison chart that distinguishes opt-in data from that which is automatically collected, states how various types of data would be used, and includes a glossary of terms for further clarification.¹⁴
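A hypothetical data model hints at the kind of structure such a label summarizes; the field names below are illustrative and are not drawn from the CMU/Microsoft work itself.

    // Illustrative only: rows of a privacy "nutrition label."
    struct PrivacyLabelEntry {
        enum Collection { case optIn, automatic }   // opt-in vs. automatically collected

        let dataType: String        // e.g. "location", "contact information"
        let collection: Collection
        let uses: [String]          // e.g. ["provide the service", "advertising"]
        let sharedWith: [String]    // third parties, if any
    }

    let label: [PrivacyLabelEntry] = [
        PrivacyLabelEntry(dataType: "location", collection: .automatic,
                          uses: ["provide the service"], sharedWith: []),
        PrivacyLabelEntry(dataType: "email address", collection: .optIn,
                          uses: ["account management", "marketing"],
                          sharedWith: ["email service provider"]),
    ]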

Example

The Usable Privacy project

Carnegie Mellon’s CyLab Usable Privacy and Security Laboratory has developed an online platform that allows users to decode complex privacy agreements for themselves.¹⁵ Their website lets users search by URL or website name, then pulls the privacy agreement language from the source site. The agreement is then presented to the user in its original form, along with a sidebar that categorizes each statement by function or topic and offers common-language explanations clause by clause.¹⁶ An added benefit of this platform is that the common-language phrases are visually linked to the “legalese” in the original: the translated text is highlighted when the user mouses over the corresponding explanation. This allows users to gradually become familiar with the practical meaning of the more complex privacy notice language, encouraging and enabling them to read and translate data-use agreements in the future without needing a translation tool. In this way, this particular solution acts as a vehicle to further data fluency while preparing the user to self-educate in the future.

These solutions point to an important consideration: not all users understand information through the same media. Some people need visual examples and graphical representations, others prefer concise statements that focus on consequences and impact, and still others want all the details. As we consider what informed consent requires in terms of communication, education, and understanding, adult learning styles become a design consideration that legal departments and traditional UX developers cannot be left to solve on their own.

Managing consent over time

The best implementations of informed consent manage consent not just at the beginning of using a service, but over time, especially at critical junctures.

Privacy checkups: Adjusting and maintaining consent

One example of actively managed consent over time is Apple’s Location Services function within iOS. It prompts users not just when an app first asks for the user’s location but also when an app has been using the phone’s location in the background for an extended period, confirming whether the user intended to let that app continue accessing their location. This ensures that the user initially consents to a specific app’s request (rather than simply granting access to all apps) and that they can continue to enjoy the benefits of allowing location access or revoke that consent if they change their mind.
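A minimal Swift sketch shows where the system-owned prompt sits in this flow. Core Location is Apple’s actual framework, but the surrounding class and its behavior are illustrative; in particular, the later “still want to allow this?” reminders are generated by iOS itself, not by app code.

    import CoreLocation

    final class LocationConsentExample: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        func start() {
            manager.delegate = self
            // Triggers the initial, per-app system prompt; the purpose string
            // shown to the user comes from the app's Info.plist.
            manager.requestWhenInUseAuthorization()
        }

        // Called whenever the user grants, restricts, or later revokes consent,
        // for example from Settings or from one of iOS's periodic reminders.
        func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
            switch manager.authorizationStatus {
            case .authorizedWhenInUse, .authorizedAlways:
                manager.startUpdatingLocation()
            default:
                manager.stopUpdatingLocation()
            }
        }
    }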

In another example of actively managed consent over time, Google’s security check-up helps Google users understand how the data they disclose is used in delivering Google products and services.¹⁷ The six-step process, which is periodically offered to users but can also be accessed at any time on request, walks the user through the different permissions that may or may not have been granted when the user last agreed to Google’s terms and conditions. Users can modify these permissions by restricting data use, pausing data collection, or updating basic information such as telephone numbers. For example, a user who does not want to be served targeted ads can turn off targeting entirely or adjust the ad topics that Google has deemed relevant to them. As terms of service and end-user license agreements are updated, reviewing this information allows users to reconfirm that their expectations around data use are being met and to modify permissions if they are not.

Proactive vs. Prescriptive End-user Agreements

This proactive approach stands in stark contrast to the early, and often lampooned, iTunes EULA (see R. Sikoryak’s “The Unabridged Graphic Adaptation [of] iTunes Terms and Conditions”), which many users treat as a small annoyance to scroll past and accept without reading in order to access their music.²² Like most EULAs, its dense legalese makes it difficult for users to determine the significance of any changes they are asked to “review” and “accept.”

As users increasingly realize the importance and value of securing their data and sharing it selectively and with intent, brands that take responsibility for educating their users about data security and use have an opportunity to build trust and loyalty among their customers. By going beyond liability-limiting processes that exist as a technicality in the user flow and instead taking proactive measures (as Apple does by reminding users that apps are using location data), companies can establish themselves as industry leaders in ethical data use.